path | concatenated_notebook
---|---
python/learn/matplotlib/tutorials_jupyter/introductory/.ipynb_checkpoints/lifecycle-checkpoint.ipynb | ###Markdown
The Lifecycle of a Plot

This tutorial aims to show the beginning, middle, and end of a single visualization using Matplotlib. We'll begin with some raw data and end by saving a figure of a customized visualization. Along the way we'll try to highlight some neat features and best practices using Matplotlib.

.. currentmodule:: matplotlib

Note: This tutorial is based off of `this excellent blog post `_ by Chris Moffitt. It was transformed into this tutorial by Chris Holdgraf.

A note on the Object-Oriented API vs Pyplot
===========================================

Matplotlib has two interfaces. The first is an object-oriented (OO) interface. In this case, we utilize an instance of :class:`axes.Axes` in order to render visualizations on an instance of :class:`figure.Figure`.

The second is based on MATLAB and uses a state-based interface. This is encapsulated in the :mod:`pyplot` module. See the :doc:`pyplot tutorials` for a more in-depth look at the pyplot interface.

Most of the terms are straightforward but the main thing to remember is that:

* The Figure is the final image that may contain 1 or more Axes.
* The Axes represent an individual plot (don't confuse this with the word "axis", which refers to the x/y axis of a plot).

We call methods that do the plotting directly from the Axes, which gives us much more flexibility and power in customizing our plot.

Note: In general, try to use the object-oriented interface over the pyplot interface.

Our data
========

We'll use the data from the post from which this tutorial was derived. It contains sales information for a number of companies.
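For orientation, here is a minimal sketch (not part of the original tutorial) showing the same tiny plot written with each interface; the data and titles are made up for illustration. The rest of this tutorial sticks to the object-oriented style.

```python
import matplotlib.pyplot as plt

# pyplot (state-based) interface: calls act on an implicit "current" figure and axes
plt.figure()
plt.plot([1, 2, 3], [1, 4, 9])
plt.title('pyplot interface')

# object-oriented interface: we hold explicit Figure and Axes objects and call methods on them
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title('object-oriented interface')
```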
###Code
# sphinx_gallery_thumbnail_number = 10
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
data = {'Barton LLC': 109438.50,
'Frami, Hills and Schmidt': 103569.59,
'Fritsch, Russel and Anderson': 112214.71,
'Jerde-Hilpert': 112591.43,
'Keeling LLC': 100934.30,
'Koepp Ltd': 103660.54,
'Kulas Inc': 137351.96,
'Trantow-Barrows': 123381.38,
'White-Trantow': 135841.99,
'Will LLC': 104437.60}
group_data = list(data.values())
group_names = list(data.keys())
group_mean = np.mean(group_data)
###Output
_____no_output_____
###Markdown
Getting started
===============

This data is naturally visualized as a barplot, with one bar per group. To do this with the object-oriented approach, we'll first generate an instance of :class:`figure.Figure` and :class:`axes.Axes`. The Figure is like a canvas, and the Axes is a part of that canvas on which we will make a particular visualization.

Note: Figures can have multiple axes on them. For information on how to do this, see the :doc:`Tight Layout tutorial `.
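As a quick illustration of that note (not from the original tutorial), `plt.subplots` can lay out several Axes on one Figure; the grid shape below is arbitrary:

```python
# One Figure containing a 2x2 grid of Axes; each Axes is plotted on independently.
fig, axs = plt.subplots(2, 2)
axs[0, 0].plot([1, 2, 3], [1, 4, 9])
axs[1, 1].barh(['a', 'b'], [3, 5])
```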
###Code
fig, ax = plt.subplots()
###Output
_____no_output_____
###Markdown
Now that we have an Axes instance, we can plot on top of it.
###Code
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
###Output
_____no_output_____
###Markdown
Controlling the style
=====================

There are many styles available in Matplotlib in order to let you tailor your visualization to your needs. To see a list of styles, we can use :mod:`pyplot.style`.
###Code
print(plt.style.available)
###Output
_____no_output_____
###Markdown
You can activate a style with the following:
###Code
plt.style.use('fivethirtyeight')
###Output
_____no_output_____
###Markdown
Now let's remake the above plot to see how it looks:
###Code
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
###Output
_____no_output_____
###Markdown
The style controls many things, such as color, linewidths, backgrounds, etc.

Customizing the plot
====================

Now we've got a plot with the general look that we want, so let's fine-tune it so that it's ready for print. First let's rotate the labels on the x-axis so that they show up more clearly. We can gain access to these labels with the :meth:`axes.Axes.get_xticklabels` method:
###Code
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
###Output
_____no_output_____
###Markdown
If we'd like to set the property of many items at once, it's useful to use the :func:`pyplot.setp` function. This will take a list (or many lists) of Matplotlib objects, and attempt to set some style element of each one.
###Code
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
###Output
_____no_output_____
###Markdown
It looks like this cut off some of the labels on the bottom. We can tell Matplotlib to automatically make room for elements in the figures that we create. To do this we'll set the ``autolayout`` value of our rcParams. For more information on controlling the style, layout, and other features of plots with rcParams, see :doc:`/tutorials/introductory/customizing`.
###Code
plt.rcParams.update({'figure.autolayout': True})
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
###Output
_____no_output_____
###Markdown
Next, we'll add labels to the plot. To do this with the OO interface, we can use the :meth:`axes.Axes.set` method to set properties of this Axes object.
###Code
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
###Output
_____no_output_____
###Markdown
We can also adjust the size of this plot using the :func:`pyplot.subplots` function. We can do this with the ``figsize`` kwarg.

Note: While indexing in NumPy follows the form (row, column), the ``figsize`` kwarg follows the form (width, height). This follows conventions in visualization, which unfortunately are different from those of linear algebra.
###Code
fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
###Output
_____no_output_____
###Markdown
For labels, we can specify custom formatting guidelines in the form of functions by using the :class:`ticker.FuncFormatter` class. Below we'll define a function that takes an integer as input, and returns a string as an output.
###Code
def currency(x, pos):
"""The two args are the value and tick position"""
if x >= 1e6:
s = '${:1.1f}M'.format(x*1e-6)
else:
s = '${:1.0f}K'.format(x*1e-3)
return s
formatter = FuncFormatter(currency)
###Output
_____no_output_____
###Markdown
We can then apply this formatter to the labels on our plot. To do this, we'll use the ``xaxis`` attribute of our axis. This lets you perform actions on a specific axis on our plot.
###Code
fig, ax = plt.subplots(figsize=(6, 8))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
ax.xaxis.set_major_formatter(formatter)
###Output
_____no_output_____
###Markdown
Combining multiple visualizations
=================================

It is possible to draw multiple plot elements on the same instance of :class:`axes.Axes`. To do this we simply need to call another one of the plot methods on that Axes object.
###Code
fig, ax = plt.subplots(figsize=(8, 8))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
# Add a vertical line, here we set the style in the function call
ax.axvline(group_mean, ls='--', color='r')
# Annotate new companies
for group in [3, 5, 8]:
ax.text(145000, group, "New Company", fontsize=10,
verticalalignment="center")
# Now we'll move our title up since it's getting a little cramped
ax.title.set(y=1.05)
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
ax.xaxis.set_major_formatter(formatter)
ax.set_xticks([0, 25e3, 50e3, 75e3, 100e3, 125e3])
fig.subplots_adjust(right=.1)
plt.show()
###Output
_____no_output_____
###Markdown
Saving our plot
===============

Now that we're happy with the outcome of our plot, we want to save it to disk. There are many file formats we can save to in Matplotlib. To see a list of available options, use:
###Code
print(fig.canvas.get_supported_filetypes())
###Output
_____no_output_____
###Markdown
We can then use the :meth:`figure.Figure.savefig` method in order to save the figure to disk. Note that there are several useful flags we'll show below:

* ``transparent=True`` makes the background of the saved figure transparent if the format supports it.
* ``dpi=80`` controls the resolution (dots per inch) of the output.
* ``bbox_inches="tight"`` fits the bounds of the figure to our plot.
###Code
# Uncomment this line to save the figure.
# fig.savefig('sales.png', transparent=False, dpi=80, bbox_inches="tight")
###Output
_____no_output_____ |
counterfactualms/experiments/plotting/interactive_plots.ipynb | ###Markdown
Plotting
###Code
# model_name, plot_gen_intervention_range and interactive_plot are assumed to be
# defined/imported in earlier cells of the original notebook
idx = 560
interventions = [
{'age': 40.},
{'ventricle_volume': 80000.},
{'lesion_volume': 1e-5},
{'edss': 1e-5}
]
plot_gen_intervention_range(model_name, interventions, idx, normalise_all=True, num_samples=32)
###Output
_____no_output_____
###Markdown
Interactive Plotting

The difference is the test image minus the original: red shows higher intensity in the test image relative to the original, and blue shows lower intensity in the test image relative to the original.
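`interactive_plot` is a helper defined elsewhere in this project; as a rough sketch (not the project's implementation), a red/blue difference map of this kind can be rendered with a diverging colormap, using placeholder arrays here:

```python
import numpy as np
import matplotlib.pyplot as plt

original = np.random.rand(64, 64)   # placeholder for the original image
test_img = np.random.rand(64, 64)   # placeholder for the counterfactual/test image

diff = test_img - original
lim = np.abs(diff).max()
plt.imshow(diff, cmap='seismic', vmin=-lim, vmax=lim)  # red = higher, blue = lower intensity
plt.colorbar()
plt.show()
```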
###Code
interactive_plot(model_name);
###Output
_____no_output_____ |
i18n/locales/ja/ch-ex/Solutions/Exercise for 2.4.ipynb | ###Markdown
Solution: Basic synthesis of single qubit gates

1

Show that the Hadamard gate can be written in the following two forms

$$H = \frac{X+Z}{\sqrt{2}} \equiv \exp\left(i \frac{\pi}{2} \, \frac{X+Z}{\sqrt{2}}\right)$$

Here $\equiv$ is used to denote that the equality is valid up to a global phase, and hence that the resulting gates are physically equivalent.

Hint: it might even be easiest to prove that $e^{i\frac{\pi}{2} M} \equiv M$ for any matrix whose eigenvalues are all $\pm 1$, and that such matrices uniquely satisfy $M^2=I$.

2

The Hadamard can be constructed from `rx` and `rz` operations as

$$ R_x(\theta) = e^{i\frac{\theta}{2} X}, ~~~ R_z(\theta) = e^{i\frac{\theta}{2} Z},\\ H \equiv \lim_{n\rightarrow\infty} \left( ~R_x\left(\frac{\theta}{n}\right) ~~R_z \left(\frac{\theta}{n}\right) ~\right)^n$$

for some suitably chosen $\theta$. When implemented for finite $n$, the resulting gate will be an approximation to the Hadamard whose error decreases with $n$.

The following shows an example of this implemented with Qiskit (with the global phase ignored). In the original exercise the value of $\theta$ was chosen incorrectly; in this solution the correct value is used.

* Determine the correct value of $\theta$.
* Show that the error (when using the correct value of $\theta$) decreases quadratically with $n$.
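Part 1 is a pen-and-paper exercise with no code cell in this solution; as a quick numerical sanity check (not part of the original notebook, and assuming NumPy and SciPy are available), both identities can be verified directly:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

M = (X + Z) / np.sqrt(2)
print(np.allclose(M, H))          # H equals (X+Z)/sqrt(2) exactly
U = expm(1j * np.pi / 2 * M)      # exp(i*pi/2*M) = cos(pi/2)*I + i*sin(pi/2)*M = i*M
print(np.allclose(U, 1j * H))     # i.e. H up to a global phase of i
```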
###Code
# imports assumed from earlier cells of the original notebook (pre-1.0 Qiskit API)
import numpy as np
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, Aer, execute
from qiskit.visualization import plot_histogram

qr = QuantumRegister(1)
cr = ClassicalRegister(1)
error = {}
for n in range(1,11):
# Create a blank circuit
qc = QuantumCircuit(qr,cr)
# Implement an approximate Hadamard
theta = np.pi/np.sqrt(2) # here we correctly choose theta=pi/sqrt(2)
for j in range(n):
qc.rx(theta/n,qr[0])
qc.rz(theta/n,qr[0])
# We need to measure how good the above approximation is. Here's a simple way to do this.
# Step 1: Use a real hadamard to cancel the above approximation.
# For a good approximation, the qubit will return to state 0. For a bad one, it will end up as some superposition.
qc.h(qr[0])
# Step 2: Run the circuit, and see how many times we get the outcome 1.
# Since it should return 0 with certainty, the fraction of 1s is a measure of the error.
qc.measure(qr,cr)
shots = 20000
job = execute(qc, Aer.get_backend('qasm_simulator'),shots=shots)
try:
error[n] = (job.result().get_counts()['1']/shots)
except:
pass
plot_histogram(error)
# The linear nature of error^(-1/2) shows that the error has a quadratic decay.
inverse_square_of_error = {}
for n in error:
inverse_square_of_error[n] = (error[n])**(-1/2)
plot_histogram(inverse_square_of_error)
###Output
_____no_output_____
###Markdown
3

An improved version of the approximation can be found from

$$H \equiv \lim_{n\rightarrow\infty} \left( ~ R_z \left(\frac{\theta}{2n}\right)~~ R_x\left(\frac{\theta}{n}\right) ~~ R_z \left(\frac{\theta}{2n}\right) ~\right)^n .$$

Implement this, and investigate the scaling of the error.
###Code
qr = QuantumRegister(1)
cr = ClassicalRegister(1)
error = {}
for n in range(1,11):
# Create a blank circuit
qc = QuantumCircuit(qr,cr)
# Implement an approximate Hadamard
theta = np.pi/np.sqrt(2) # here we correctly use theta=pi/sqrt(2)
for j in range(n):
qc.rz(theta/(2*n),qr[0])
qc.rx(theta/n,qr[0])
qc.rz(theta/(2*n),qr[0])
# We need to measure how good the above approximation is. Here's a simple way to do this.
# Step 1: Use a real hadamard to cancel the above approximation.
# For a good approximation, the qubit will return to state 0. For a bad one, it will end up as some superposition.
qc.h(qr[0])
# Step 2: Run the circuit, and see how many times we get the outcome 1.
# Since it should return 0 with certainty, the fraction of 1s is a measure of the error.
qc.measure(qr,cr)
shots = 100000
job = execute(qc, Aer.get_backend('qasm_simulator'),shots=shots)
try:
error[n] = (job.result().get_counts()['1']/shots)
except:
pass
plot_histogram(error)
# The linear nature of error^(-1/3) shows that the error has a cubic decay.
# Note: this needs loads of shots to get a good result.
inverse_cube_of_error = {}
for n in error:
error[n]
inverse_cube_of_error[n] = (error[n])**(-1/3)
plot_histogram(inverse_cube_of_error)
###Output
_____no_output_____ |
lesson4-14/Part5-Inference-and-validation.ipynb | ###Markdown
Inference and Validation

Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.

As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:

```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```

The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
###Code
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Here I'll create a model like normal, using the same one from my solution for part 4.
###Code
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
###Output
_____no_output_____
###Markdown
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
###Code
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
###Output
torch.Size([64, 10])
###Markdown
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
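As an aside (not in the original notebook), the top-5 error rate mentioned above can be sketched with the same method by asking for the five highest probabilities and checking whether the true label appears among them:

```python
# Sketch of top-5 accuracy on this batch (the complement of the top-5 error rate).
top_p5, top5_class = ps.topk(5, dim=1)
in_top5 = (top5_class == labels.view(-1, 1)).any(dim=1)
print(f'Top-5 accuracy: {in_top5.float().mean().item()*100:.1f}%')
```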
###Code
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
###Output
tensor([[ 1],
[ 1],
[ 1],
[ 1],
[ 1],
[ 1],
[ 1],
[ 1],
[ 1],
[ 1]])
###Markdown
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.

If we do

```python
equals = top_class == labels
```

`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
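As a small illustration of that broadcasting pitfall (a sketch, not part of the original notebook):

```python
# Mismatched shapes broadcast to (64, 64); matching shapes give the per-example result.
print((top_class == labels).shape)                          # torch.Size([64, 64])
print((top_class == labels.view(*top_class.shape)).shape)   # torch.Size([64, 1])
```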
###Code
equals = top_class == labels.view(*top_class.shape)
###Output
_____no_output_____
###Markdown
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error

```
RuntimeError: mean is not implemented for type torch.ByteTensor
```

This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
###Code
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
###Output
Accuracy: 7.8125%
###Markdown
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:

```python
# turn off gradients
with torch.no_grad():
    # validation pass here
    for images, labels in testloader:
        ...
```

>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
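If you get stuck, one possible shape for that validation pass, meant to slot into the `else:` block of the training cell below, is sketched here (a sketch, not the course's official solution):

```python
test_loss = 0
accuracy = 0
with torch.no_grad():
    for images, labels in testloader:
        log_ps = model(images)
        test_loss += criterion(log_ps, labels).item()
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor)).item()

train_losses.append(running_loss / len(trainloader))
test_losses.append(test_loss / len(testloader))
print(f'Accuracy: {accuracy / len(testloader) * 100:.2f}%')
```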
###Code
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
## TODO: Implement the validation pass and print out the validation accuracy
print(f'Accuracy: {accuracy.item()*100}%')
###Output
_____no_output_____
###Markdown
Overfitting

If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.

The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.

The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.

```python
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```

During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.

```python
# turn off gradients
with torch.no_grad():

    # set model to evaluation mode
    model.eval()

    # validation pass here
    for images, labels in testloader:
        ...

# set model back to train mode
model.train()
```

> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
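If you get stuck, one possible epoch loop is sketched below (a sketch, not the official solution), assuming a dropout `Classifier`, `criterion`, and `optimizer` have been defined as in the earlier cells:

```python
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    accuracy = 0
    with torch.no_grad():
        model.eval()                  # dropout off for validation
        for images, labels in testloader:
            ps = torch.exp(model(images))
            top_p, top_class = ps.topk(1, dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
    model.train()                     # dropout back on for the next epoch

    print(f'Epoch {e+1}: validation accuracy {accuracy / len(testloader) * 100:.2f}%')
```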
###Code
## TODO: Define your model with dropout added
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
###Output
_____no_output_____
###Markdown
Inference

Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
###Code
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
###Output
_____no_output_____ |
source/notebooks/endpoint_demo.ipynb | ###Markdown
In this notebook you can get a quick preview of the outcome you will see when you complete the full notebook for this solution. Here we use a pre-trained XGBoost model to make predictions for our test dataset and evaluate its accuracy. You can select Run->Run All from the menu to run all cells in Studio (or Cell->Run All in a SageMaker Notebook Instance).
###Code
import sys
sys.path.append('./src/')
from package import config
###Output
_____no_output_____
###Markdown
Read in the data
###Code
import boto3
from zipfile import ZipFile
s3 = boto3.resource('s3')
object = s3.Object(f"{config.SOLUTIONS_S3_BUCKET}-{config.AWS_REGION}",f"{config.SOLUTION_NAME}/data/creditcardfraud.zip")
object.download_file("creditcardfraud.zip")
with ZipFile('creditcardfraud.zip', 'r') as zf:
zf.extractall()
###Output
_____no_output_____
###Markdown
Split into train/test
###Code
import numpy as np
import pandas as pd
data = pd.read_csv('creditcard.csv', delimiter=',')
feature_columns = data.columns[:-1]
label_column = data.columns[-1]
features = data[feature_columns].values.astype('float32')
labels = (data[label_column].values).astype('float32')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
features, labels, test_size=0.1, random_state=42)
###Output
_____no_output_____
###Markdown
Set up a predictor, using the demo endpoint, and a pre-trained model
###Code
from sagemaker.predictor import csv_serializer, RealTimePredictor
xgb_predictor = RealTimePredictor(endpoint="{}-demo".format(config.SOLUTION_PREFIX),
serializer=csv_serializer,
deserializer=None,
content_type='text/csv')
# Because we have a large test set, we call predict on smaller batches
def predict(current_predictor, data, rows=500):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = ''
for array in split_array:
predictions = ','.join([predictions, current_predictor.predict(array).decode('utf-8')])
return np.fromstring(predictions[1:], sep=',')
###Output
_____no_output_____
###Markdown
Make predictions and evaluate accuracy
###Code
raw_preds = predict(xgb_predictor, X_test)
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score
# scikit-learn expects 0/1 predictions, so we threshold our raw predictions
y_preds = np.where(raw_preds > 0.5, 1, 0)
print("Balanced accuracy = {}".format(balanced_accuracy_score(y_test, y_preds)))
print("Cohen's Kappa = {}".format(cohen_kappa_score(y_test, y_preds)))
###Output
_____no_output_____ |
core/collection/ignored_done/00_02_seed_followers.ipynb | ###Markdown
Daily Seed Followers

Get the followers from all the seed accounts.
###Code
# required imports to access api_db, misc, misc.CONFIG, ...
import sys
sys.path = ['.', '..', '../..'] + sys.path
from collection import *
###Output
_____no_output_____
###Markdown
Conditional Execution

Each file needs to verify if it should be executed or not based on the configurations (for some files this is not optional, but all should have this section, even if it is tautological). Example:

```python
if not misc.CONFIG["collection"]["execute_this_script"]: exit()
```
###Code
# Conditional execution
if not misc.CONFIG["collection"]["seed"]["followers"]: exit()
###Output
_____no_output_____
###Markdown
Driver code

Iterate over every seed user, iterate over all their followers, and insert them into the database.
###Code
seed = api_db.col_users.find({"depth":0}, {"depth": 1, "political": 1, "news": 1}, no_cursor_timeout=True)
for user in seed:
print("getting followers for:%s" % user, end="", flush=True)
for follower_ids in paged_followers(user["_id"]):
        # add new users and count one per appearance (incrementing a different counter depending on
        # whether the seed account is political or news); this is used later to exclude users that
        # are most likely not Portuguese (from Portugal)
custom_counter = "follows_political" if "political" in user else "follows_news"
upsert_user_ids_inc_custom_depth(follower_ids, custom_counter, user["depth"] + 1)
print("Done")
print("DONE")
###Output
_____no_output_____ |
python_basics.ipynb | ###Markdown
###Code
import random
[random.randint(0, 100) for i in range(5)]
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
boston_dataset = datasets.load_boston()
X_full = boston_dataset.data
Y = boston_dataset.target
print (X_full.shape)
print (Y.shape)
print(boston_dataset.DESCR)
selector = SelectKBest(f_regression, k=1)
selector.fit(X_full, Y)
X = X_full[:, selector.get_support()]
print (X.shape)
###Output
(506, 1)
###Markdown
Python Basics

Reading This Page

This page incorporates text information, python code, and the output of the python code. This section is the text information, the next section is the python code, and the section following the python code is the output of the python code.
###Code
# This is the python code section
print( "This is the output of the python code." )
###Output
This is the output of the python code.
###Markdown
Printing

The most common way to output information to the user is through the `print` command. Each `print` command will output a single line of text.

To print text you specify, put the text between quotation marks.

`print("This is some text.")`

You can print multiple things on one line if you separate them with commas.

`print("Item 1 is:", "Printing in Python")`
###Code
print("This is some text.")
print() # print a blank line
print("Item 1 is:", "Printing in Python")
###Output
This is some text.
Item 1 is: Printing in Python
###Markdown
Escape Characters

There are some special characters that have special ways of printing them out in a print statement.

|Escape Character|Result|
|:---:|:---|
|\t|tab|
|\n|newline (enter key)|
|\"|"|
|\'|'|
|\\|\|
###Code
print("tab\ttab\ttab.")
print("This is line 1.\nThis is line 2.")
print("\"This is a quotation.\"")
print("\'A quotation using single quotes.\'")
print("Printing a \\.")
###Output
tab tab tab.
This is line 1.
This is line 2.
"This is a quotation."
'A quotation using single quotes.'
Printing a \.
###Markdown
Comments

You can put comments in a python program that will be ignored by python. These are useful for adding notes about what the program or a particular line is doing. You can also use them to temporarily turn off pieces of code that you don't want to delete.

There are two types of comments: single line comments and block comments.

Single line comments start with the `#` symbol and continue until the end of the line. Everything from the `#` through the end of the line is ignored by python; you can put anything you want in the comment.

Block comments start with a line containing only `"""` and continue until another line containing only `"""`. Everything between those two lines is ignored by python.
###Code
# This is a single line comment.
num = 5 # This is also a single line comment
"""
This is a block comment.
All of this is ignored by python.
"""
###Output
_____no_output_____
###Markdown
Variables

Variables are used to store values. Variables should start with a lower case letter, cannot be a reserved word (like int, or float), cannot have spaces, can have numbers, but cannot start with a number. Variable names should be descriptive.

**Good Variable Names**
`name` `date` `first_name` `student3`

**Bad Variable Names**
`1num` `first name` `int` `float`

**Acceptable Variable Names** - these are valid Python, but are not the "python way".
`Name` `firstName` `FirstName`

Data Types

Variables can hold a number of data types. These are the most common ones you will see.

|Data Type|What It Holds|
|:---:|:---|
|`int`|Integers. These are positive or negative whole numbers (no decimals).|
|`float`|Floating point numbers. These are numbers that have a decimal.|
|`str`|String. This is text (a string of characters).|
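As a quick aside (not on the original page), you can convert between these data types with the built-in `int()`, `float()`, and `str()` functions:

```python
print( int("5") + 2 )      # the string "5" becomes the integer 5
print( float(3) )          # the integer 3 becomes the float 3.0
print( str(42) + "!" )     # the integer 42 becomes the string "42"
```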
###Code
integer_var = 5 # store an integer value
print( integer_var ) # print the contents of the variable
print( "integer_var =", integer_var ) # print a better message
float_var = 5.22 # store an integer value
print( float_var ) # print the contents of the variable
print( "float_var =", float_var) # print a better message
string_var = "\"This is some text.\"" # store an integer value
print( string_var ) # print the contents of the variable
print( "string_var =", string_var ) # print a better message
###Output
5
integer_var = 5
5.22
float_var = 5.22
"This is some text."
string_var = "This is some text."
###Markdown
Math Operators

The following are the common math operators you will use.

|Operator|Effect|
|:---:|:---|
|+|Addition|
|-|Subtraction|
|\*|Multiplication|
|/|Division|
|//|Floor division (also integer division). Does division and drops the decimal.|
|%|Modulus. Does division and returns the remainder.|
|\*\*|Exponential. Example: 5 ** 2 is 5 to the power of 2.|
###Code
print( "5 + 2 =", 5 + 2 ) # addition
print( "5 - 2 =", 5 - 2 ) # subtraction
print( "5 * 2 =", 5 * 2 ) # multiplication
print( "5 / 2 =", 5 / 2 ) # division
print( "5 // 2 =", 5 // 2 ) # floor division
print( "5 % 2 =", 5 % 2 ) # modulus (or mod)
print( "5 ** 2 =", 5 ** 2 ) # exponential
###Output
5 + 2 = 7
5 - 2 = 3
5 * 2 = 10
5 / 2 = 2.5
5 // 2 = 2
5 % 2 = 1
5 ** 2 = 25
###Markdown
Extra Practice With Modulus (%)

Modulus is often the mathematical operator that gives students the most trouble, though it is quite useful. All modulus does is return the remainder after division happens. Think way back to when you were first learning long division. If you were asked to divide 5 by 2 you would not have said 2.5 like you would now. Back then you would have reported your answer as a whole number "2" and a remainder "1". Modulus (%) ignores the whole number portion of the answer and gives you the remainder. One thing this is useful for is to find if one number divides into another number evenly. Some examples are below:
###Code
print( "5 % 2 =", 5 % 2 ) # 5/2 == 2 with a remainder of 1, so 5 % 2 == 1
print( "4 % 2 =", 4 % 2 ) # 4/2 == 2 with a remainder of 0. 2 goes into 4 evenly.
print( "30 % 10 =", 30 % 10 ) # 30/10 == 3 with a remainder of 0. 10 goes into 30 evenly.
print( "15 % 10 =", 15 % 10 ) # 15/10 == 1 with a remainder of 5, so 15 % 10 == 5
# How could we find if a number is even or odd?
# Test to see if it divides evenly by 2.
print( "6 % 2 =", 6 % 2 ) # remainder is zero so it is an even number
print( "7 % 2 =", 7 % 2 ) # remainder is not zero so it is an odd number
###Output
5 % 2 = 1
4 % 2 = 0
30 % 10 = 0
15 % 10 = 5
6 % 2 = 0
7 % 2 = 1
###Markdown
Shortcut Operators

Shortcut operators are just what they sound like: they are a shortcut. They are an optional way of shortening common operations.

|Operator|Effect|
|:---:|:---|
|`num += 5`|`num = num + 5`|
|`num -= 5`|`num = num - 5`|
|`num *= 5`|`num = num * 5`|
|`num /= 5`|`num = num / 5`|
###Code
num = 10
print( num )
num += 5
print( num )
num -= 5
print( num )
num *= 5
print( num )
num /= 5
print( num )
###Output
10
15
10
50
10.0
###Markdown
User Input

`input` accepts user input and stores it as a string. The string you send to `input` will be what python prints to prompt the user for input, then it will wait until the user enters some input. **Important: user input is always stored as a string.**
###Code
input("What is your name? ") # gets input from the user but doesn't store it.
# not useful.
print() # print a blank line
name = input("What is your name? ") # stores the user input in the variable "name"
print( name ) # print what the user entered.
print( "Hello,", name ) # print a nicer message
###Output
What is your name? Mr. Avis
What is your name? Mr. Avis
Mr. Avis
Hello, Mr. Avis
###Markdown
You can convert the strings that `input` gives you by telling python to treat the string as a different data type (like `int` or `float`). If python cannot convert the string to the other data type, then there will be an error.

You can always do this data type conversion; these examples are specific to receiving user input as `int` or `float` instead of strings.
###Code
name = input( "What is your name? " ) # store user input as a string
age = int( input( "How old are you? " ) ) # store user input as an int
decimal = float( input( "What is your favorite decimal? " ) ) # store user input as a float
print( name, "is", age, "years old. Your favorite decimal is", decimal )
###Output
What is your name? Mr. Avis
How old are you? 41
What is your favorite decimal? 42.42
Mr. Avis is 41 years old. Your favorite decimal is 42.42
###Markdown
Python basics

The `print()` function and string literals

If this is your first time looking at Python code, the first thing that you might notice is that it is very easy to understand. For example, to print something to the screen, it's just:
###Code
print('Hola Astronomy Club IITK!')
###Output
Hola Astronomy Club IITK!
###Markdown
(Well, sneaking in that little Spanish "hello"... ahem.)

This is a Python statement, consisting of the built-in command `print` and a string surrounded by single quotes. Double quotes are fine inside a string:
###Code
print('She said, "Hola, Astronomy Club IITK!"')
###Output
She said, "Hola, Astronomy Club IITK!"
###Markdown
But if you want single quotes inside your string, you had better delimit it with double quotes:
###Code
print("She said, 'Hola, Astronomy Club IITK!'")
###Output
She said, 'Hola, Astronomy Club IITK!'
###Markdown
If you need both single quotes and double quotes, you can use backslashes to escape characters.
###Code
print('He cried, "Go Corona! Corona Go! Isn\'t that what everyone want?"')
###Output
He cried, "Go Corona! Corona Go! Isn't that what everyone want?"
###Markdown
If you need a string that contains newlines, use triple quotes (`'''`) or triple double quotes (`"""`). (P.S. Enjoy this Shakespearean work, Julius Caesar.)
###Code
print("""Cowards die many times before their deaths;
The valiant never taste of death but once.
Of all the wonders that I yet have heard,
It seems to me most strange that men should fear;
Seeing that death, a necessary end,
Will come when it will come.""")
###Output
Cowards die many times before their deaths;
The valiant never taste of death but once.
Of all the wonders that I yet have heard,
It seems to me most strange that men should fear;
Seeing that death, a necessary end,
Will come when it will come.
###Markdown
Let's say that you need to print a few different things on the same line. Just separate them with commas, as in:
###Code
project = 'Computational Astrophysics'
print("Welcome to ", project)
###Output
Welcome to Computational Astrophysics
###Markdown
Oops. I'm getting ahead of myself: you've now seen your first variable assignment in Python. Strings can be concatenated by adding them:
###Code
'abc' + 'def'
###Output
_____no_output_____
###Markdown
Or repeated by multiplying them:
###Code
'abcdef' * 2
###Output
_____no_output_____
###Markdown
Numeric and boolean literals

Python's numeric types include integers and both real and complex floating point numbers:
###Code
a = 30 # an integer
b = 0xDEADBEEF # an integer in hexadecimal
c = 3.14159 # a floating point number
d = 5.1e10 # scientific notation
e = 2.5 + 5.3j # a complex number
hungry = True # boolean literal
need_coffee = False # another boolean literal
###Output
_____no_output_____
###Markdown
By the way, all of the text on a given line after the trailing hash sign (`#`) is a comment, ignored by Python.

The arithmetic operators in Python are similar to C, C++, Java, and so on. There is addition (and subtraction):
###Code
a + c
###Output
_____no_output_____
###Markdown
Multiplication:
###Code
a * e
###Output
_____no_output_____
###Markdown
Division:
###Code
a / c
###Output
_____no_output_____
###Markdown
***Important note***: unlike C, C++, Java, etc., ***division of integers gives you floats***:
###Code
7 / 3
###Output
_____no_output_____
###Markdown
If you want integer division, then use the double-slash `//` operator:
###Code
a = 7
b = 3
7 // 3
###Output
_____no_output_____
###Markdown
The `%` sign is the remainder operator:
###Code
32 % 26
###Output
_____no_output_____
###Markdown
Exponentiation is accomplished with the `**` operator:
###Code
print(5 ** 3, 9**-0.5)
###Output
125 0.3333333333333333
###Markdown
Tuples

A tuple is a sequence of values. It's just about the handiest thing since integers. A tuple is immutable: once you have created it, you cannot add items to it, remove items from it, or change items. Tuples are very handy for storing short sequences of related values or returning multiple values from a function. This is what tuples look like:
###Code
some_tuple = ('a', 'b', 'c')
another_tuple = ('caffeine', 6.674e-11, 3.14, 2.718)
nested_tuple = (5, 4, 3, 2, ('a', 'b'), 'c')
###Output
_____no_output_____
###Markdown
Once you have made a tuple, you might want to retrieve a value from it. You index a tuple with square brackets, ***starting from zero***:
###Code
some_tuple[0]
some_tuple[1]
###Output
_____no_output_____
###Markdown
You can access whole ranges of values using ***slice notation***:
###Code
nested_tuple[1:4]
###Output
_____no_output_____
###Markdown
Or, to count backward from the end of the tuple, use a ***negative index***:
###Code
another_tuple[-1]
another_tuple[-2]
###Output
_____no_output_____
###Markdown
Strings can be treated just like tuples of individual charaters:
###Code
project = 'Computational Astrophysics'
print(project[3:6])
###Output
put
###Markdown
Lists

What if you want a container like a tuple but to which you can add or remove items or alter existing items? That's a list. The syntax is almost the same, except that you create a list using square brackets `[]` instead of round ones `()`:
###Code
your_list = ['foo', 'bar', 'bat', 'baz']
my_list = ['xyzzy', 1, 3, 5, 7]
###Output
_____no_output_____
###Markdown
But you can change elements:
###Code
my_list[1] = 2
print(my_list)
###Output
['xyzzy', 2, 3, 5, 7]
###Markdown
Or append elements to an existing list:
###Code
my_list.append(11)
print(my_list)
###Output
['xyzzy', 2, 3, 5, 7, 11]
###Markdown
Or delete elements:
###Code
del my_list[0]
print(my_list)
###Output
[2, 3, 5, 7, 11]
###Markdown
Sets

Sometimes you need a collection of items where order doesn't necessarily matter, but each item is guaranteed to be unique. That's a set, created just like a list or tuple but with curly braces `{}`:
###Code
a = {5, 6, 'foo', 7, 7, 8}
print(a)
###Output
{5, 6, 7, 8, 'foo'}
###Markdown
You can add items to a set:
###Code
a.add(3)
print(a)
###Output
{3, 5, 6, 7, 8, 'foo'}
###Markdown
Or take them away:
###Code
a.remove(3)
print(a)
###Output
{5, 6, 7, 8, 'foo'}
###Markdown
You also have set-theoretic intersections with the `&` operator:
###Code
{1, 2, 3, 4, 5, 6} & {3, 4}
###Output
_____no_output_____
###Markdown
And union with the `|` operator:
###Code
{1, 2, 3, 4, 5, 6} | {6, 7}
###Output
_____no_output_____
###Markdown
And set difference with the `-` operator:
###Code
{1, 2, 3, 4, 5, 6} - {3, 4}
###Output
_____no_output_____
###Markdown
Dictionaries

Sometimes, you want a collection that is like a list, but whose indices are strings or other Python values. That's a dictionary. Dictionaries are handy for any type of database-like operation, or for storing mappings from one set of values to another. You create a dictionary by enclosing a list of key-value pairs in curly braces:
###Code
my_grb = {'name': 'GRB 130702A', 'redshift': 0.145, 'ra': (14, 29, 14.78), 'dec': (15, 46, 26.4)}
my_grb
###Output
_____no_output_____
###Markdown
You can index items in dictionaries with square braces `[]`, similar to tuples or lists:
###Code
my_grb['dec']
###Output
_____no_output_____
###Markdown
or add items to them:
###Code
my_grb['url'] = 'http://gcn.gsfc.nasa.gov/other/130702A.gcn3'
my_grb
###Output
_____no_output_____
###Markdown
or delete items from them:
###Code
del my_grb['url']
my_grb
###Output
_____no_output_____
###Markdown
Dictionary keys can be any **immutable** kind of Python object: tuples, strings, integers, and floats are all fine. Values in a dictionary can be **any Python value at all**, including lists or other dictionaries:
###Code
{
'foods': ['chicken', 'veggie burger', 'banana'],
'cheeses': {'muenster', 'gouda', 'camembert', 'mozarella'},
(5.5, 2): 42,
'plugh': 'bat'
}
###Output
_____no_output_____
###Markdown
The `None` object

Sometimes you need to represent the absence of a value, for instance, if you have a gap in a dataset. You might be tempted to use some special value like `-1` or `99` for this purpose, but **don't**! Use the built-in object `None`.
###Code
a = None
###Output
_____no_output_____
###Markdown
Conditionals

In Python, control flow statements such as conditionals and loops have blocks indicated with indentation. Any number of spaces or tabs is fine, as long as you are consistent within a block. Common choices include four spaces, two spaces, or a tab.

You can use the `if`...`elif`...`else` statement to have different bits of code run depending on the truth or falsehood of boolean expressions. For example:
###Code
a = 5
if a < 3:
print("i'm in the 'if' block")
    message = 'a is less than 3'
elif a == 3:
print("i'm in the 'elif' block")
    message = 'a is 3'
else:
print("i'm in the 'else' block")
message = 'a is greater than 3'
print(message)
###Output
i'm in the 'else' block
a is greater than 3
###Markdown
You can chain together inequalities just like in mathematical notation:
###Code
if 0 < a <= 5:
print('a is greater than 0 but less than or equal to 5')
###Output
a is greater than 0 but less than or equal to 5
###Markdown
You can also combine comparison operators with the boolean `and`, `or`, and `not` operators:
###Code
if a < 6 or a > 8:
print('yahoo!')
if a < 6 and a % 2 == 1:
print('a is an odd number less than 6!')
if not a == 5: # same as a != 5
print('a is not 5')
###Output
_____no_output_____
###Markdown
The comparison operator `is` tests whether two Python values are not only equal, but represent the same object. Since there is only one `None` object, the `is` operator is particularly useful for detecting `None`.
###Code
food = None
if food is None:
print('No, thanks')
else:
print('Here is your', food)
###Output
No, thanks
###Markdown
Likewise, there is an `is not` operator:
###Code
if food is not None:
print('Yum!')
###Output
_____no_output_____
###Markdown
The `in` and `not in` operators are handy for testing for membership in a string, set, or dictionary:
###Code
if 3 in {1, 2, 3, 4, 5}:
print('indeed it is')
if 'i' not in 'team':
print('there is no "i" in "team"')
###Output
there is no "i" in "team"
###Markdown
When referring to a dictionary, the `in` operator tests if the item is among the **keys** of the dictionary.
###Code
d = {'foo': 3, 'bar': 5, 'bat': 9}
if 'foo' in d:
print('the key "foo" is in the dictionary')
###Output
the key "foo" is in the dictionary
###Markdown
The `for` and `while` loops

In Python, there are just two types of loops: `for` and `while`. `for` loops are useful for repeating a set of statements for each item in a collection (tuple, set, list, dictionary, or string). `while` loops are not as common, but can be used to repeat a set of statements until a boolean expression becomes false.
###Code
for i in [0, 1, 2, 3]:
print(i**2)
###Output
0
1
4
9
###Markdown
The built-in function `range`, which returns a list of numbers, is often handy here:
###Code
for i in range(4):
print(i**2)
###Output
0
1
4
9
###Markdown
Or you can have the range start from a nonzero value:
###Code
for i in range(-2, 4):
print(i**2)
###Output
4
1
0
1
4
9
###Markdown
You can iterate over the keys and values in a dictionary with `.items()`:
###Code
for key, val in d.items():
print(key, '...', val**3)
###Output
foo ... 27
bar ... 125
bat ... 729
###Markdown
The syntax of the `while` loop is similar to the `if` statement:
###Code
a = 1
while a < 5:
a = a * 2
print(a)
###Output
2
4
8
###Markdown
List comprehensions

Sometimes you need a loop to create one list from another. List comprehensions make this very terse. For example, the following `for` loop:
###Code
a = []
for i in range(5):
a.append(i * 10)
###Output
_____no_output_____
###Markdown
is equivalent to this list comprehension:
###Code
a = [i * 10 for i in range(5)]
###Output
_____no_output_____
###Markdown
You can even incorporate conditionals into a list comprehension. The following:
###Code
a = []
for i in range(5):
if i % 2 == 0:
# i is even
a.append(i * 10)
###Output
_____no_output_____
###Markdown
can be written as:
###Code
a = [i * 10 for i in range(5) if i % 2 == 0]
a
###Output
_____no_output_____
###Markdown
Conditional expressions

Conditional expressions are a closely related shorthand. The following:
###Code
if 6/2 == 3:
a = 'foo'
else:
a = 'bar'
###Output
_____no_output_____
###Markdown
is equivalent to:
###Code
a = 'foo' if 6/2 == 3 else 'bar'
###Output
_____no_output_____
###Markdown
Functions

Functions are created with the `def` statement. A function may either have or not have a `return` statement to send back a return value.
###Code
def square(n):
return n * n
a = square(3)
print(a)
###Output
9
###Markdown
If you want to return multiple values from a function, return a tuple. Parentheses around the tuple are optional.
###Code
def powers(n):
return n**2, n**3
print(powers(3))
###Output
(9, 27)
###Markdown
If a function returns multiple values, you can automatically unpack them into multiple variables:
###Code
square, cube = powers(3)
print(square)
###Output
9
###Markdown
If you pass a mutable value such as a list to a function, then **the function may modify that value**. For example, you might implement the Fibonacci sequence like this:
###Code
def fibonacci(seed, n):
while len(seed) < n:
seed.append(seed[-1] + seed[-2])
# Note: no return statement
seed = [1, 1]
fibonacci(seed, 10)
print(seed)
###Output
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
###Markdown
You can also give a function's arguments default values, such as:
###Code
def fibonacci(seed, n=6):
while len(seed) < n:
seed.append(seed[-1] + seed[-2])
# Note: no return statement
seed = [1, 1]
fibonacci(seed)
print(seed)
###Output
[1, 1, 2, 3, 5, 8]
###Markdown
If a function has a large number of arguments, it may be easier to read if you pass the arguments by keyword, as in:
###Code
seq = [1, 1]
fibonacci(seed=seq, n=4)
###Output
_____no_output_____
###Markdown
IV. The Python standard library

Python comes with an extensive **[standard library](http://docs.python.org/2/library/index.html)** consisting of individual **modules** that you can opt to use with the `import` statement. For example:
###Code
import math
math.sqrt(3)
from math import pi
pi
###Output
_____no_output_____
###Markdown
Some particularly useful parts of the Python standard library are:

* [`random`](https://docs.python.org/3/library/random.html): random number generators
* [`pickle`](https://docs.python.org/3/library/pickle.html): read/write Python objects into files
* [`sqlite3`](https://docs.python.org/3/library/sqlite3.html): SQLite database access
* [`os`](https://docs.python.org/3/library/os.html): operating system services
* [`os.path`](https://docs.python.org/3/library/os.path.html): file path manipulation
* [`subprocess`](https://docs.python.org/3/library/subprocess.html): launch external processes
* [`email`](https://docs.python.org/3/library/email.html): compose, parse, receive, or send e-mail
* [`pdb`](https://docs.python.org/3/library/pdb.html): built-in debugger
* [`re`](https://docs.python.org/3/library/re.html): regular expressions
* [`http`](https://docs.python.org/3/library/http.html): built-in lightweight web client and server
* [`optparse`](https://docs.python.org/3/library/optparse.html): build pretty command-line interfaces
* [`itertools`](https://docs.python.org/3/library/itertools.html): exotic looping constructs
* [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html): parallel processing

Just visit them and go through them at your own pace.
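For a quick taste of a few of these (a sketch, not from the original notebook):

```python
import random
import os.path
import re
import itertools

print(random.uniform(0, 1))                        # a random float between 0 and 1
print(os.path.join('data', 'run1', 'image.fits'))  # build a file path portably
print(re.findall(r'\d+', 'GRB 130702A'))           # pull the digits out with a regular expression
print(list(itertools.combinations('abc', 2)))      # every 2-element combination
```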
###Code
import random
import pickle
import sqlite3
import os
import os.path
import subprocess
import email
import pdb
import re
import http
import optparse
import itertools
import multiprocessing
###Output
_____no_output_____
###Markdown
Error handling

It can be important for your code to be able to handle error conditions. For example, let's say that you are implementing a sinc function:
###Code
def sinc(x):
return math.sin(x) / x
print(sinc(0))
###Output
_____no_output_____
###Markdown
Oops! We know that by definition $\mathrm{sinc}(0) = 1$, so we should catch this error:
###Code
def sinc(x):
try:
result = math.sin(x) / x
except ZeroDivisionError:
result = 1
return result
print(sinc(0))
###Output
1
###Markdown
Reading and writing files

The built-in `open` function opens a file and returns a `file` object that you can use to read or write data. Here's an example of writing data to a file:
###Code
myfile = open('myfile.txt', 'w') # open file for writing
myfile.write("red 1\n")
myfile.write("green 2\n")
myfile.write("blue 3\n")
myfile.close()
###Output
_____no_output_____
###Markdown
And here is reading it:
###Code
d = {} # create empty dictionary
for line in open('myfile.txt', 'r'): # open file for reading
color, num = line.split() # break apart line by whitespace
num = int(num) # convert num to integer
d[color] = num
print(d)
###Output
{'red': 1, 'green': 2, 'blue': 3}
###Markdown
Python Basics

Summary of work:

In this work we introduce Python and some of the basic data types. There are several resources available to study Python, including the following online tutorials.

- [Python Documentation](https://docs.python.org/3/tutorial/)
- [Tutorialspoint](https://www.tutorialspoint.com/python/python_overview.htm)
###Code
#This is the first comment in this python program.
###Output
_____no_output_____
###Markdown
Python 2 vs 3
###Code
# In Python 2, 7/3 produces 2: integer division discards the fractional part.
7/3
#print "Hello World" #is acceptable in Python 2
print ("Hello World") # in Python 3, print must be followed by ()
###Output
Hello World
###Markdown
Python as a Calculator
###Code
7 + 9
6+ 7*8
(50 - 5*6) / 4
11 / 3 # classic division returns a float
14// 3 # floor division discards the fractional part
3 ** 2 # 3 squared
10%2 # reminder
13%3
###Output
_____no_output_____
###Markdown
Order of Operations

Python follows the **PEMDAS** convention of mathematics.
###Code
2*(3-1)
(1+1)**(5-2)
2**1+1 #is not 4
3*1**3 # is not 27
6+4/2 # is not 5
###Output
_____no_output_____
###Markdown
Variable Names and Key Words

Variable names can be arbitrarily long. They may contain both letters and numbers, but they have to begin with a letter. The underscore character ( _ ) can be used. We cannot use special characters like @, $, &, etc. while defining variables.
###Code
#Illegal variables
12my_variable, more@
#Legal variables
my_var = 12
my_second_variable_123 = 89
width = 40
length = 5
Area = width * length
Area
###Output
_____no_output_____
###Markdown
Reserved Keywords for Python

and, del, from, not, while, as, elif, global, or, with, assert, else, if, pass, yield, break, except, import, print, class, exec, raise, continue, finally, is, return, def, for, lambda, try, in

(Note: `print` and `exec` were keywords in Python 2; in Python 3 they are built-in functions, not reserved words.)

Basic Python Variable Types

There are several variable types in Python 3. Here is a list of the most common types:

- int (Integer/Long)
- float
- complex
- bool (Boolean)
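As a quick check (not in the original notes), the built-in `keyword` module lists the reserved words for the Python version you are actually running:

```python
import keyword

print(keyword.kwlist)                 # the reserved words for this Python version
print(keyword.iskeyword('lambda'))    # True: 'lambda' cannot be used as a variable name
print(keyword.iskeyword('print'))     # False in Python 3: print is a function, not a keyword
```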
###Code
type(13)
type(10.5)
type(5j )
complex(2,3)
z = 2 - 3j
z.real
z.imag
a = True
type(a)
#help(str)
###Output
_____no_output_____
###Markdown
Additional Data Types

- str -> String
- list -> Ordered Array
- tuple -> Ordered Immutable Array
- dict -> Unordered list of keys and values

String
###Code
first_name = 'Hum Nath' #or first name = "Hum Nath"
last_name = 'Bhandari'
name = first_name + ' '+ last_name
name
type(name)
name[0]
name[:]
name[:-7]
name[-1]
name[1:3]
name[-9:-2]
name + name
name*3
firstname = 'Peter W. Smith'.split(' ')[0] # [0] selects the first element of the list
middlename = 'Peter W. Smith'.split(' ')[1] # [1] selects the second element of the list
lastname = 'Peter W. Smith'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(middlename)
print(lastname)
###Output
Peter
W.
Smith
###Markdown
String formatting
###Code
firstname = "Josh"
lastname = "Brooks"
formatted_name = "%s, %s" % (lastname, firstname)
#formatted_name = "%s, %s." % (last_name, first_name[0])
print(formatted_name)
print("pi ≈ %.2f" % 3.14159)
"Speed Limit "+ 65
"Speed Limit "+ str(65)
###Output
_____no_output_____
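The examples above use classic %-style formatting. Since Python 3.6, f-strings offer a more readable alternative; here is a minimal sketch reusing the same illustrative names:
```python
firstname = "Josh"
lastname = "Brooks"

# The expression inside {} is evaluated and inserted into the string
formatted_name = f"{lastname}, {firstname}"
print(formatted_name)          # Brooks, Josh

# Format specifiers work the same way as with %-formatting
print(f"pi ≈ {3.14159:.2f}")   # pi ≈ 3.14

# Numbers are converted automatically, no str() needed inside the braces
print(f"Speed Limit {65}")
```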
###Markdown
Lists
###Code
grade = [ 80, 70, 50]
x = [1, 'RWU', 2, 'Bristol']
type(x)
len(x)
x[1]
x[3]
y = [2, 'Math', 5, "Data Science"]
x + y # it concatenates the two lists
x = [1, 'RWU', 2, 'Bristol']
x.extend([23,78])
x
a = []
type(a)
a.append(30)
a
a.append("grade")
a
a.append(80)
a
a[0] = 70
a
1 in [1,2,3] # check for list membership
0 in [1,5,9]
#Sorting
x = [ 5,3, 1, 8]
y = sorted(x)
x
y
x.sort()
x
x
###Output
_____no_output_____
###Markdown
Tuple
###Code
my_tuple = (1,2,3)
my_tuple[0]
my_tuple[0] = 30 # raises TypeError: tuples are immutable
my_tuple + (3,4,7,6)
###Output
_____no_output_____
###Markdown
Dictionary
###Code
my_dict = {"name":"John", "grade":90, "subject": "Data Science"}
my_dict
type(my_dict)
my_dict["name"]
my_dict.items()
my_dict.keys()
my_dict.values()
for counter in my_dict:
print(my_dict[counter])
###Output
John
90
Data Science
###Markdown
Functions
###Code
def find_sum(x,y):
return x+y
find_sum(20,10)
def fahrenheit(T):
return (9/5)*T + 32
def celsius(T):
return (5/9)*(T-32)
celsius(32)
fahrenheit(0)
add = lambda x, y : x + y
add(4,8)
fahrenheit = lambda x: (9/5)*x + 32
celsius = lambda x: (5/9)*(x-32)
###Output
_____no_output_____
###Markdown
Math Library
###Code
import math
###Output
_____no_output_____
###Markdown
Basic Math Functions in Python
- fabs(x)
- ceil(x)
- floor(x)
- exp(x)
- log(x) natural logarithm
- log10(x)
- max(x1, x2, ...)
- pow(x, y)
- round(x [,n]) round to n digits from the decimal point
- sqrt(x)
- sin(x)
- cos(x)
- tan(x)
###Code
abs(-4.8)
math.fabs(-4.8)
math.ceil(-4.6)
math.log(8)
math.pow(2,3)
round(3.413894, 3)
math.sqrt(9)
n=5
k=1
math.factorial(n) / (math.factorial(k) * math.factorial(n-k))
math.sin(math.pi/2)
math.tan(math.pi/4)
###Output
_____no_output_____
###Markdown
Random Numbers
###Code
import random
a = random.random( )
a
b = random.uniform(1, 7)
b
c = random.randint(1, 10)
c
items = [1, 2, 3, 4, 5, 6, 7,8,9]
random.shuffle(items)
items
random.sample([1, 2, 3, 4, 5,6,7,8], 4)
def binomilal_coef(n,k):
"""
    This function returns the binomial coefficient
Parameters:
===========
n, k int
return n!/(k!*(n-k)!)
"""
value = math.factorial(n)/(math.factorial(k)*math.factorial(n-k))
return value
binomilal_coef(52,2)
###Output
_____no_output_____
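The draws above change on every run. If you need reproducible results (for example, when testing), you can seed the generator first; a minimal sketch:
```python
import random

random.seed(42)               # fix the internal state of the generator
print(random.random())        # same value on every run
print(random.randint(1, 10))  # same value on every run

random.seed(42)               # re-seeding repeats the same sequence
print(random.random())
```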
###Markdown
Conditional Execution
###Code
x = 10;
if x > 0:
print("x is positive")
grade = 50;
if grade >= 60:
print("You passed this class")
else:
print("Sorry you are failed! Please repeate this course in the next semester!")
###Output
Sorry you are failed! Please repeate this course in the next semester!
###Markdown
Chained Conditionals
###Code
x = 2; y = 8
if x < y:
print("x is less than y")
elif(x > y):
print("x is greater than y")
else:
print("x and y are equal")
grade = 66
if grade >= 90:
print("A")
elif grade >= 80:
print("B")
elif grade >= 70:
print("C")
elif grade >=60:
print("D")
else:
print("Sorry you are failled in this class")
###Output
D
###Markdown
Nested Conditionals
###Code
if x == y:
print("x and y are equal")
else:
if x< y:
print ("x is less than y")
else:
print("x is greater than y")
###Output
x is less than y
###Markdown
Loops in Python
###Code
for counter in [1,2,3,4,5,6]:
print(counter)
###Output
1
2
3
4
5
6
###Markdown
range() function: range(stop) or range(start, stop[, step])
###Code
for counter in range(6):
print(counter)
for counter in range(1,6):
print(counter)
for counter in range(1,6,2):
print(counter)
###Output
1
3
5
###Markdown
While Loop
###Code
counter = 0
while counter < 6:
print(counter)
counter += 1
###Output
0
1
2
3
4
5
###Markdown
Taking User Input in Python
###Code
name = input("Tell me your name ")
print("Hello " + name + "!")
age = input("How old are you? ")
print("You are " + age + " years old, " + name + "!")
###Output
_____no_output_____
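Note that `input()` always returns a string, which is why `age` can be concatenated with other strings above. To do arithmetic with the answer you have to convert it yourself; a minimal sketch (the prompt text is illustrative):
```python
age_text = input("How old are you? ")  # always a string, e.g. "21"
age = int(age_text)                    # convert to an integer (raises ValueError if it is not a number)
print("Next year you will be", age + 1, "years old.")
```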
###Markdown
Python: basic featureshttps://www.python.org/
###Code
print("Hello, World!")
a = 5
b = 2
a + b
1 + a * b
a ** b
# different in python 3: a//b
# for same behaviour run: from __future__ import division
a / b
a / float(b)
a % b
min(a, b)
a == b
a != b
a += 3
a
# Python Lists
a = [1, "hello", 5.5]
a
len(a)
a[2]
a.append("how are you?")
a
for x in a:
print(x)
for i, x in enumerate(a):
print("element {}: {}".format(i, x))
a[0] = 10
a
# Python Tuples:
b = (-1, "bye", 'c')
b
b[-1]
b[0] = 10  # raises TypeError: tuples are immutable
b
x, y = b
x
y
# Python Dictionaries (Keys, values)
a = {"name":"Mary", "age":23, "sign":"capricorn"}
a
a[1]  # raises KeyError: dictionaries are indexed by key, not by position
a["job"] = "student"
a
# Python Functions
def f(a, b=4, c=5):
if a > 2 and b < 10:
return a
elif c == 5:
return b
else:
return a + b + c
f(4)
f(4, 11)
f(4, c=6, b=11)
###Output
_____no_output_____
###Markdown
NumPy: multi-dimensional arrays and scientific computinghttps://www.numpy.org/
###Code
import numpy as np
a = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16])
a
a.ndim
a.shape
a[2]
a[2:]
a[:4]
a[2:7]
a[2:7:2]
a[-1]
a[::-1]
a[[0, 4, 5]]
b = a > 3
b
a[b]
a = np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
a
a.ndim
a.shape
a[1, 2]
a[0]
a[:, 1:3]
a.T
a + 10
a ** 2
a * [10, 20, 30, 40]
np.sin(a)
np.mean(a)
a.mean(axis=1)
np.max(a)
np.max(a, axis=1)
np.arange(10)
np.linspace(2, 4, 5)
np.zeros((2, 3))
np.full((2, 3), 2.5)
###Output
_____no_output_____
###Markdown
matplotlib: plottinghttps://matplotlib.org/
###Code
import matplotlib.pyplot as plt
#%matplotlib notebook
%matplotlib inline
x = np.linspace(-5, 5, 50)
y = np.sin(x)
y2 = y ** 2
y3 = -x / 5
plt.figure()
plt.plot(x, y, label='sin')
plt.plot(x, y2, '.', label='$\sin^{2}$')
plt.plot(x, y3, linewidth=3)
plt.annotate('example text', xy=(0.5, -0.75))
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.title("Example plot")
plt.legend()
plt.show()
fig, ax = plt.subplots(2, sharex=True)
ax[0].plot(x, y)
ax[1].plot(x, y2)
ax[1].set_ylabel('y axis')
plt.show()
y, x = np.mgrid[0:20, 0:30]
z = (x - 4)**2+ y**2
plt.figure()
plt.pcolormesh(x, y, z, shading='auto')
plt.show()
###Output
_____no_output_____
###Markdown
SciPy: extra modules for scientific computationhttps://www.scipy.org/
###Code
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
def f(x, a, b, c):
return a * np.exp(-b * x) + c
n = 60
x = np.linspace(0, 5, n)
y = f(x, 5, 2, 0.5) + 2 * np.random.rand(n)
popt, pcov = curve_fit(f, x, y)
perr = np.sqrt(np.diag(pcov))
y_fit = f(x, *popt)
msd = np.sum((y - y_fit) ** 2) / n
pnames = ['a', 'b', 'c']
results = ''
for name, value, error in zip(pnames, popt, perr):
results += '{} = {:.2f}$\pm${:.2f}\n'.format(name, value, error)
results += 'MSD = {:.2f}'.format(msd)
plt.plot(x, y, '.', label='data')
plt.plot(x, y_fit, label='fit: $ae^{-bx} + c$')
plt.annotate(results, xy=(0.7, 0.55), xycoords='axes fraction')
plt.legend()
plt.show()
%run langmuir_fit.py
###Output
_____no_output_____
###Markdown
Python Basics* Wenchang Yang ([email protected])* Department of Geosciences, Princeton University* Sep 30, 2019, Junior Colloquium Start Jupyter Notebook: Run a Terminal (or iTerm), and in the Terminal type: jupyter notebook Using Jupyter Notebook: You use Jupyter to create an iPython Notebook. The notebook contains a series of "cells". Each cell can be either Code (that is, Python) or Markdown (that is, fancy text). You can set the cell type from the menu on the tool bar. To execute a cell, that is, run it, select the cell and press Shift-Enter. Or select the cell and click the run button. Python Comments
###Code
# Comments are intended to help a person (including yourself) read your code.
# Comments start with a "#".
x = 1 # A comment can also follow a Python statement.
###Output
_____no_output_____
###Markdown
Indentation is part of Python Syntax Try to run the cell below:
###Code
# Indentation matters
a = 1
b = 2
###Output
_____no_output_____
###Markdown
Variables: A variable is a named place in computer memory into which you put a value or values. You make up the name, preferably something meaningful. Start with a letter, then letters and/or numbers and underscores. Upper/lower case matters. Examples of variable names: filename1 largestValue number_of_students i I Numbers: Python can act as a simple calculator: you type an expression at it and it will write the value. Expression syntax is straightforward: the operators `+`, `-`, `*` and `/` work just like in most other languages (for example, Pascal or C); parentheses (()) can be used for grouping. For example:
###Code
(50 - 5*6) / 4
###Output
_____no_output_____
###Markdown
With Python, it is possible to use the ** operator to calculate powers
###Code
2 ** 7 # 2 to the power of 7
###Output
_____no_output_____
###Markdown
The equal sign (`=`) is used to assign a value to a variable:
###Code
width = 20
height = 5
area = width * height
###Output
_____no_output_____
###Markdown
You can use the `print` function to show the result.
###Code
print(area)
###Output
100
###Markdown
StringsBesides numbers, Python can also manipulate strings, which can be expressed in several ways. They can be enclosed in single quotes (`'...'`) or double quotes (`"..."`) with the same result.
###Code
print('hello world') # single quotes
print("doesn't") # ...or use double quotes instead
###Output
hello world
doesn't
###Markdown
Strings can be concatenated with `+`:
###Code
'Life is short, ' + 'I use Python.'
###Output
_____no_output_____
###Markdown
Strings can be indexed (subscripted), with the first character having index **0**.
###Code
word = 'Python'
word[0] # character in position 0
word[5] # character in position 5
###Output
_____no_output_____
###Markdown
In addition to indexing, slicing is also supported. While indexing is used to obtain individual characters, slicing allows you to obtain substring:
###Code
word[0:2] # characters from position 0 (included) to 2 (excluded)
###Output
_____no_output_____
###Markdown
Note that the start is always **included**, and the end always **excluded.** ListsA list is an ordered series of objects.Lists are indexed by integers, **starting from 0.**
###Code
# Here is a list of square numbers
squares = [1, 4, 9, 16, 25]
print(squares)
###Output
[1, 4, 9, 16, 25]
###Markdown
Like strings, lists can be indexed and sliced:
###Code
squares[0] # indexing returns the item
squares[1:3] # slicing returns a new list
###Output
_____no_output_____
###Markdown
Lists also support operations like concatenation:
###Code
squares + [36, 49, 64, 81, 100]
###Output
_____no_output_____
###Markdown
Unlike strings, which are immutable, lists are a mutable type, i.e. it is possible to change their content:
###Code
squares.append(30)
print(squares)
squares[5] = 36
print(squares)
###Output
[1, 4, 9, 16, 25, 36]
###Markdown
`if` statements
###Code
a = 3
b = 5
if a > b:
print('a > b')
else:
print('a <= b')
###Output
a <= b
###Markdown
`for` loops The syntax of a for loop is:```pythonfor variable in iterable: do something ```
###Code
for i in squares:
print(i)
###Output
1
4
9
16
25
36
###Markdown
TuplesA tuple is like a list, but immutable (cannot be changed in place).
###Code
t = (2019, 'September', 'Monday')
print(t[0])
print(t[1:3])
# Immutable
t[0] = "Can't do this"
t.append("Can't do this either")
###Output
_____no_output_____
###Markdown
Dictionaries- A dictionary is a collection of key/value pairs.- It is mutable (like a list)
###Code
# In this example, the keys are 'birthYear', 'color' and 'nickname'
# and the values are 1746, 'Orange and Black', and 'Tigers'
pu = {'birthYear': 1746, 'color': 'Orange and Black', 'nickname': 'Tigers'}
print(pu)
# Values can be accessed by keys
print(pu['nickname'])
# Mutable. Values can be modified or added.
pu['oldName'] = 'College of New Jersey'
print(pu)
# Iterating over a dictionary
for k,v in pu.items():
print(k, ':', v)
###Output
birthYear : 1746
color : Orange and Black
nickname : Tigers
oldName : College of New Jersey
###Markdown
FunctionsA function is a block of code that you define for later use (potentially multiple times).
###Code
# define the function
def hello_world():
    print('Hello World!')
# call the function
hello_world()
###Output
Hello World!
###Markdown
Functions can return values.
###Code
# Functions can return values
def get_school_name():
return 'Princeton University'
print( get_school_name() )
###Output
Princeton University
###Markdown
Function argumentsYou can pass objects into a function as "arguments". There are two kinds of arguments: positional and keyword.
###Code
def do_minus(a, b):
return a - b
# Positional arguments.
do_minus(10, 5)
# Keyword arguments.
do_minus(b=5, a=10)
###Output
_____no_output_____
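Besides positional and keyword arguments, parameters can also have default values, which are used when the caller omits them. A minimal sketch (the function name `do_power` is illustrative):
```python
def do_power(base, exponent=2):
    # exponent defaults to 2 when not supplied
    return base ** exponent

print(do_power(3))                   # 9  -> uses the default exponent
print(do_power(3, 3))                # 27 -> positional override
print(do_power(exponent=4, base=2))  # 16 -> keyword arguments in any order
```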
###Markdown
ModulesPython itself only provides some fundamental functionalities. Others are encapsulated into different modules, either built-in or external. You need to `import` these modules before you can use them. Related modules are often grouped into a package.
###Code
import math
print(math.pi)
# Calculate the area of a circle with radius 3
r = 3.0
a = math.pi * r ** 2
print(a)
help(math)
###Output
Help on module math:
NAME
math
MODULE REFERENCE
https://docs.python.org/3.7/library/math
The following documentation is automatically generated from the Python
source files. It may be incomplete, incorrect or include features that
are considered implementation detail and may vary between Python
implementations. When in doubt, consult the module reference at the
location listed above.
DESCRIPTION
This module is always available. It provides access to the
mathematical functions defined by the C standard.
FUNCTIONS
acos(x, /)
Return the arc cosine (measured in radians) of x.
acosh(x, /)
Return the inverse hyperbolic cosine of x.
asin(x, /)
Return the arc sine (measured in radians) of x.
asinh(x, /)
Return the inverse hyperbolic sine of x.
atan(x, /)
Return the arc tangent (measured in radians) of x.
atan2(y, x, /)
Return the arc tangent (measured in radians) of y/x.
Unlike atan(y/x), the signs of both x and y are considered.
atanh(x, /)
Return the inverse hyperbolic tangent of x.
ceil(x, /)
Return the ceiling of x as an Integral.
This is the smallest integer >= x.
copysign(x, y, /)
Return a float with the magnitude (absolute value) of x but the sign of y.
On platforms that support signed zeros, copysign(1.0, -0.0)
returns -1.0.
cos(x, /)
Return the cosine of x (measured in radians).
cosh(x, /)
Return the hyperbolic cosine of x.
degrees(x, /)
Convert angle x from radians to degrees.
erf(x, /)
Error function at x.
erfc(x, /)
Complementary error function at x.
exp(x, /)
Return e raised to the power of x.
expm1(x, /)
Return exp(x)-1.
This function avoids the loss of precision involved in the direct evaluation of exp(x)-1 for small x.
fabs(x, /)
Return the absolute value of the float x.
factorial(x, /)
Find x!.
Raise a ValueError if x is negative or non-integral.
floor(x, /)
Return the floor of x as an Integral.
This is the largest integer <= x.
fmod(x, y, /)
Return fmod(x, y), according to platform C.
x % y may differ.
frexp(x, /)
Return the mantissa and exponent of x, as pair (m, e).
m is a float and e is an int, such that x = m * 2.**e.
If x is 0, m and e are both 0. Else 0.5 <= abs(m) < 1.0.
fsum(seq, /)
Return an accurate floating point sum of values in the iterable seq.
Assumes IEEE-754 floating point arithmetic.
gamma(x, /)
Gamma function at x.
gcd(x, y, /)
greatest common divisor of x and y
hypot(x, y, /)
Return the Euclidean distance, sqrt(x*x + y*y).
isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
Determine whether two floating point numbers are close in value.
rel_tol
maximum difference for being considered "close", relative to the
magnitude of the input values
abs_tol
maximum difference for being considered "close", regardless of the
magnitude of the input values
Return True if a is close in value to b, and False otherwise.
For the values to be considered close, the difference between them
must be smaller than at least one of the tolerances.
-inf, inf and NaN behave similarly to the IEEE 754 Standard. That
is, NaN is not close to anything, even itself. inf and -inf are
only close to themselves.
isfinite(x, /)
Return True if x is neither an infinity nor a NaN, and False otherwise.
isinf(x, /)
Return True if x is a positive or negative infinity, and False otherwise.
isnan(x, /)
Return True if x is a NaN (not a number), and False otherwise.
ldexp(x, i, /)
Return x * (2**i).
This is essentially the inverse of frexp().
lgamma(x, /)
Natural logarithm of absolute value of Gamma function at x.
log(...)
log(x, [base=math.e])
Return the logarithm of x to the given base.
If the base not specified, returns the natural logarithm (base e) of x.
log10(x, /)
Return the base 10 logarithm of x.
log1p(x, /)
Return the natural logarithm of 1+x (base e).
The result is computed in a way which is accurate for x near zero.
log2(x, /)
Return the base 2 logarithm of x.
modf(x, /)
Return the fractional and integer parts of x.
Both results carry the sign of x and are floats.
pow(x, y, /)
Return x**y (x to the power of y).
radians(x, /)
Convert angle x from degrees to radians.
remainder(x, y, /)
Difference between x and the closest integer multiple of y.
Return x - n*y where n*y is the closest integer multiple of y.
In the case where x is exactly halfway between two multiples of
y, the nearest even value of n is used. The result is always exact.
sin(x, /)
Return the sine of x (measured in radians).
sinh(x, /)
Return the hyperbolic sine of x.
sqrt(x, /)
Return the square root of x.
tan(x, /)
Return the tangent of x (measured in radians).
tanh(x, /)
Return the hyperbolic tangent of x.
trunc(x, /)
Truncates the Real x to the nearest Integral toward 0.
Uses the __trunc__ magic method.
DATA
e = 2.718281828459045
inf = inf
nan = nan
pi = 3.141592653589793
tau = 6.283185307179586
FILE
/Users/wenchang/miniconda3/envs/junior/lib/python3.7/lib-dynload/math.cpython-37m-darwin.so
###Markdown
Python for Numerical Methods, Prof. Pedro Peixoto, Mar 2022. Main references: [1] https://hal.inria.fr/inria-00564007/en [2] https://scipy-lectures.org/advanced/optimizing/index.html#optimization-workflow [3] https://scipy-lectures.org/advanced/advanced_numpy/index.html#cache-effects
###Code
import numpy as np
import time
###Output
_____no_output_____
###Markdown
Example 1 - Why should I care about how I write my code in Python? Alternating harmonic series - approximates ln(2) ![Screenshot%20from%202022-03-14%2015-46-07.png](attachment:Screenshot%20from%202022-03-14%2015-46-07.png)
###Code
n = 40000000
# The naive approach - alternating harmonic series - approximates ln(2)
start_time = time.time()
sum = 0.0
for k in range(1,n):
    sum += ((-1)**(k+1))/(k)
tempo1 = time.time() - start_time
print("Sum:", sum, " Exact value:", np.log(2.0), "Error:", sum - np.log(2.0))
print("--- %s seconds ---" % tempo1)
# Can we do better?
start_time = time.time()
a = np.arange(1,n)
b = - np.where(a%2, -a, a)
c = 1/ b
sum = np.sum(c)
tempo2 = time.time() - start_time
print("Sum:", sum, " Exact value:", np.log(2.0), "Error:", sum - np.log(2.0))
print("--- %s seconds ---" % tempo2)
ganho = tempo1/ tempo2
print("Speedup: ", ganho, " times faster")
###Output
Speedup: 19.00976126047581 times faster
###Markdown
_This example is meant to make it clear that naive implementations in Python can lead to absurd processing times! Avoid "loops" in Python over large ranges (vectors and matrices) -> use NumPy and vectorized arithmetic!_ **What is the trick?**- The operations in the first case are **interpreted** at every iteration- The operations in the second case execute pre-**compiled** C code. The functions "arange", "where", "/", "sum" are implemented in C and pre-compiled. By pre-compiling these functions, the resulting executables have optimized (machine-dependent) routines for vectorized operations. Example 2 - OK, now I am worried, what next? Let's look at some simple ideas with NumPy: - _In place_ operations are a good idea
###Code
a = np.zeros(n)
%time a = 0.0*a
%time a *= 0.0
###Output
CPU times: user 21.1 ms, sys: 40 ms, total: 61.1 ms
Wall time: 60.4 ms
CPU times: user 30.7 ms, sys: 0 ns, total: 30.7 ms
Wall time: 30 ms
###Markdown
- Always use vectorized operations (in NumPy, these operations are implemented in C)
###Code
n = 10000000
# Explicit loop
start_time = time.time()
a = np.arange(1,n)
b = np.zeros_like(a)
for i in range(len(a)):
    b[i] = 3*a[i]
tempo = time.time() - start_time
print("Explicit loop      : %s seconds" % tempo)
# Implicit loop (list comprehension)
start_time = time.time()
a = np.arange(1,n)
b = [3*x for x in a]
tempo = time.time() - start_time
print("Implicit loop      : %s seconds" % tempo)
# Vectorization
start_time = time.time()
a = np.arange(1,n)
b = 3*a
tempo = time.time() - start_time
print("Using vectorization: %s seconds" % tempo)
###Output
Using vectorization: 0.3909482955932617 seconds
###Markdown
- Watch out for memory usage too!
###Code
# Matrices - the naive way
n = 100
R = np.empty((2*n,2*n,2*n))
start_time = time.time()
for i in range(-n, n):
    for j in range(-n, n):
        for k in range(-n, n):
            R[i+n, j+n, k+n] = np.sqrt(i*i + j*j + k*k)
tempo = time.time() - start_time
print("With loops : %s seconds " % tempo, " Auxiliary memory: ", 0.0, "MB")
# Alternatives - vectorization!
n = 100
# Using more memory
start_time = time.time()
# build full cubes with the values of i, j, k
i, j, k = np.mgrid[-n:n, -n:n, -n:n]
R1 = np.sqrt(i**2 + j**2 + k**2)
tempo = time.time() - start_time
print("With auxiliary grids : %s seconds " % tempo, " Auxiliary memory: ", 3*i.nbytes/1024/1024, "MB")
# Check that the results match
print(" Check:", np.max(np.max(np.max(np.abs(R1-R)))))
# Using less memory
start_time = time.time()
# build vectors with the values of i, j, k
# Construct the row vector: from -100 to +100
i = np.arange(-n, n).reshape(2*n, 1, 1)
# Construct the column vector
j = np.reshape(i, (1, 2*n, 1))
# Construct the depth vector
k = np.reshape(i, (1, 1, 2*n))
# Alternative to create the 3 vectors at once
#i, j, k = np.ogrid[-n:n, -n:n, -n:n]
R2 = np.sqrt(i**2 + j**2 + k**2)
tempo = time.time() - start_time
print("With auxiliary vectors : %s seconds " % tempo, " Auxiliary memory: ", 3*i.nbytes/1024/1024, "MB")
print()
# Check that the results match
print(" Check:", np.max(np.max(np.max(np.abs(R1-R)))), np.max(np.max(np.max(np.abs(R2-R)))), np.max(np.max(np.max(np.abs(R1-R2)))))
###Output
With auxiliary vectors : 0.029290199279785156 seconds   Auxiliary memory:  0.00457763671875 MB
Check: 0.0 0.0 0.0
###Markdown
_The difference in processing time and memory can be huge!_ Example 3 - Now I am curious and want to understand better. Shall we talk about cache and stride? **Cache** Volatile memory (it holds temporary data) with fast access to the processor. ![Screenshot%20from%202022-03-11%2018-19-16.png](attachment:Screenshot%20from%202022-03-11%2018-19-16.png) Intel Core i7 Processor Architecture Layout with Simultaneous Multi-threading (SMT) https://fm.csl.sri.com/LAW09/2011/law2011-paper-bradetich.pdf **The challenge** Every time a number is moved from RAM to the processor to do a computation, the processor does not receive just that number, but everything in memory around that number (a block) that fits in the cache! _Idealized example:_ Case 1: Memory holds the numbers: 3, 7, 8, 23, 54, 37, 77, 40, 45. We want to compute 3 + 40, and our cache has 2 blocks, each of which can hold only 2 numbers at a time. How does the computer do it? 1) The processor requests the number 3 from main memory and stores (3, 7) in cache block 1. 2) The processor requests the number 40 from main memory and stores (40, 45) in cache block 2. 3) The two blocks carry the data to the processor registers and, depending on the processor, it computes vectorially (3, 7) + (40, 45) and returns (43, 52); the value 43 is saved back to RAM at the requested address. 4) Now, if we want to compute 8 + 45, the processor evicts cache block 1 and fills it with (8, 23) to do the new computation, reusing block 2 already in the cache. Case 2: Memory holds the numbers (note the swap of 7 and 8): 3, 8, 7, 23, 54, 37, 77, 40, 45. We want to compute 3 + 40 with the same 2-block cache. How does the computer do it? 1) The processor requests the number 3 from main memory and stores (3, 8) in cache block 1. 2) The processor requests the number 40 from main memory and stores (40, 45) in cache block 2. 3) The two blocks carry the data to the processor registers and, depending on the processor, it computes vectorially (3, 8) + (40, 45) and returns (43, 53); the value 43 is saved back to RAM at the requested address. 4) Now, if we want to compute 8 + 45, the processor notices that the relevant data is already in cache and the result of the operation is reused, returning 53 to memory (if the processor is vectorized). Or, for a non-vectorized processor, it already has the data to do the computation quickly, without having to go back to RAM! **Memory alignment** Computations should preferably be done in the order in which the data is laid out in memory! When the processor has to keep requesting new data from RAM, we say there are many __cache misses__. The real case is a bit more complicated, but the concept still holds. More details at https://courses.cs.washington.edu/courses/cse378/09wi/lectures/lec15.pdf Example 3a: how does Python store a matrix in memory? Does it matter?
###Code
# Matrix
import numpy as np
n = 20000
c = np.ones((n, 2*n))
linhas, colunas = c.shape  # rows, columns
print("Matrix: ", c.shape, "\n", c)
print()
# Sum, for each column, over the row values first
# Naive code
#s = 0
#for j in range(colunas):
#    for i in range(linhas):
#        s = s + c[i,j]
# Vectorized code
%time s = np.sum(c.sum(axis=0)) # axis=0 means it sums over the rows for each column first!
print("Column-by-column sum:", s)
print()
# Sum, for each row, over the column values first
# Naive code
#s = 0
#for i in range(linhas):
#    for j in range(colunas):
#        s = s + c[i,j]
# Vectorized code
%time s = np.sum(c.sum(axis=1)) # axis=1 means it sums over the columns first
print("Row-by-row sum:      ", s)
print()
###Output
Matrix:  (20000, 40000)
[[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
...
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]]
CPU times: user 427 ms, sys: 0 ns, total: 427 ms
Wall time: 427 ms
Column-by-column sum: 800000000.0
CPU times: user 323 ms, sys: 0 ns, total: 323 ms
Wall time: 323 ms
Row-by-row sum:       800000000.0
###Markdown
Since it is faster when it sums the columns of each row first, it must be storing the data in row order!! This is called "C"-style order (the default in Python/NumPy) - Row-major order. C order: ------------------- ------------------- ------------------- ------------------- ------------------- It is possible to ask Python to store it another way, Fortran-style order - Column-major order. Fortran order: | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | More info at: https://scipy-lectures.org/advanced/advanced_numpy/index.html#cache-effects
###Code
# Repeating the previous example with Fortran order
# Matrix
import numpy as np
n = 10000
c = np.ones((n, 2*n), order='F')
linhas, colunas = c.shape  # rows, columns
print("Matrix: ", c.shape, "\n", c)
print()
# Sum, for each column, over the row values first
# Naive code
#s = 0
#for j in range(colunas):
#    for i in range(linhas):
#        s = s + c[i,j]
# Vectorized code
%time s = np.sum(c.sum(axis=0)) # axis=0 means it sums over the rows for each column first!
print("Column-by-column sum:", s)
print()
# Sum, for each row, over the column values first
# Naive code
#s = 0
#for i in range(linhas):
#    for j in range(colunas):
#        s = s + c[i,j]
# Vectorized code
%time s = np.sum(c.sum(axis=1)) # axis=1 means it sums over the columns first
print("Row-by-row sum:      ", s)
print()
###Output
Matrix:  (10000, 20000)
[[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
...
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]]
CPU times: user 104 ms, sys: 11 µs, total: 104 ms
Wall time: 104 ms
Column-by-column sum: 200000000.0
CPU times: user 115 ms, sys: 0 ns, total: 115 ms
Wall time: 114 ms
Row-by-row sum:       200000000.0
###Markdown
Note that now it is faster reading column by column! Example 3b: the concept of stride and its relation to cache misses. Strides: the number of bytes to jump in memory to reach the next element. Example: a 10x10 matrix with strides (10, 1) means that: - to reach the next row you have to jump 10 bytes - to reach the next column, you just look at the next byte. So this is strongly connected to the ordering concept seen above.
###Code
# Each integer here is forced to occupy 2 bytes in memory (16 bits). 1 byte = 8 bits
x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=np.int16, order='C')
print("C order: ", x.strides)
x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=np.int16, order='F')
print("F order: ", x.strides)
###Output
_____no_output_____
###Markdown
Some basic Python concepts for beginners This is a very quick overview of some basic concepts of the Python programming language. More detailed tutorials and information can be found here - https://wiki.python.org/moin/BeginnersGuide/NonProgrammers Remember, programming is not an easy task for anyone at first. But given practice, it becomes easier. Comments: Comments can be used to explain code and make it more human-friendly. They start with the '#' key. These lines of code are ignored by Python.
###Code
# This is a comment
print("Hello, World!")
print("Hello, World!") # This is a comment
###Output
_____no_output_____
###Markdown
They can also be used to control execution and test code.
###Code
# print("Hello, World!")
print("Cheers!")
###Output
_____no_output_____
###Markdown
Variables: Variables are containers for storing data values. Variables are created by assigning values and should be named sensibly and uniquely. Be careful to avoid choosing names that are already being used in your current Python session to identify new variables. Don't name variables with Python keywords, as these have a particular meaning for doing things in your program. See [Python KeyWords](https://docs.python.org/3.8/reference/lexical_analysis.html#:~:text=False%20%20%20%20%20%20await%20%20%20%20%20%20else,if%20%20%20%20%20%20%20%20%20or%20%20%20%20%20%20%20%20%20yield) Strings are a collection of characters.
###Code
# Initialise a variable named string_var with value
string_var = "this is a string"
# Print value of string_var to console
print(string_var)
# Get the data type of string_var
type(string_var)
###Output
_____no_output_____
###Markdown
Integers are whole numbers.
###Code
# Initialise a variable named int_var with value
int_var = 1
# Print value of int_var to console
print(int_var)
# Get the data type of int_var
type(int_var)
###Output
_____no_output_____
###Markdown
Floats are decimal numbers.
###Code
# Initialise a variable named float_var with value
float_var = 1.0
# Print value of float_var to console
print(float_var)
# Get the data type of float_var
type(float_var)
###Output
_____no_output_____
###Markdown
Booleans represent two values - True or False.
###Code
# Initialise a variable named bool_var with value
bool_var = False
# Print value of bool_var to console
print(bool_var)
# Get the data type of bool_var
type(bool_var)
###Output
_____no_output_____
###Markdown
Operators Operators are used to perform operations on variables and values. Arithmetic Arithmetic operators are used with numeric values to perform common mathematical operations.
###Code
# Addition
4 + 2
# Subtraction
4 - 2
# Division
4 / 2
# Multiplication
4 * 2
# Exponentiation
4 ** 2
# Modulus (returns the remainder of Euclidean division)
9 % 4
# Floor division (returns the floor of the division)
4 // 2.1
###Output
_____no_output_____
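A short sketch of how operator precedence affects expressions like the ones above: exponentiation binds tighter than multiplication and division, which bind tighter than addition and subtraction, and parentheses override everything.
```python
# Without parentheses, ** is applied before *, and * before +
print(2 + 3 * 4 ** 2)      # 50, evaluated as 2 + (3 * (4 ** 2))

# Parentheses change the grouping
print((2 + 3) * 4 ** 2)    # 80
print(((2 + 3) * 4) ** 2)  # 400
```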
###Markdown
When performing arithmetic operations it is key to know the order in which the operators are applied (a short sketch is given above). See examples [here](https://thehelloworldprogram.com/python/python-operators-order-precedence/) Comparison Operators Comparison operators are used to compare two values.
###Code
# Equal
1 == 2
# Not equal
1 != 2
# Greater than
1 > 2
# Less than
1 < 2
# Greater than or equal to
1 >= 2
# Less than or equal to
1 <= 2
###Output
_____no_output_____
###Markdown
Logical Operators Logical operators are used to combine conditional statements.
###Code
# and (Returns True if both statements are true)
1 < 2 and 1 < 3
# or (Returns True if one statement is true)
1 < 2 or 1 < 0
# not (Returns False if the result is true)
not(1 < 2 and 1 < 3)
###Output
_____no_output_____
###Markdown
Python data types Lists We use lists to store multiple items within a single variable.
###Code
# Initialise 1 dimensional list
list_1D = [1, 2, 3, 4, 5]
# Print the length of the list
print(len(list_1D))
# Get the data type of the list variable
type(list_1D)
###Output
_____no_output_____
###Markdown
List elements can be accessed by passing an index value. In Python list indices start at 0.
###Code
# Get element at index 1
print(list_1D[1])
# Access whole list with ':'
print(list_1D[:])
# Get list elements between index positions '2:4'
print(list_1D[2:4])
# Initialise 2 dimensional list
list_2D = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
# Print the length of the list
print(len(list_2D))
# Access whole list with ':'
print(list_2D[:])
# Get element at index 1
print(list_2D[1])
# Get list elements between index positions '2:4' from element at index 1
print(list_2D[1][2:4])
###Output
_____no_output_____
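The cells above focus on lists. For completeness, here is a minimal sketch of the other three built-in containers mentioned just below (tuple, set, and dictionary); the variable names are illustrative.
```python
# Tuple: ordered and immutable
point = (3, 4)
print(point[0])            # 3

# Set: unordered collection of unique values (duplicates are dropped)
colours = {"red", "green", "red", "blue"}
print(colours)             # {'red', 'green', 'blue'} in some order

# Dictionary: key/value pairs
capitals = {"France": "Paris", "Japan": "Tokyo"}
print(capitals["Japan"])   # Tokyo
```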
###Markdown
There are 4 Python data structures used to store collections of data: one is the list; the other 3 are the tuple, set, and dictionary, all with different qualities and use cases (a short sketch of these is given above). More examples [here](https://www.programiz.com/python-programming/variables-datatypes) Flow control statements For loops A for loop is used for iterating over a sequence (such as a list, tuple, dictionary, set or string). With a for loop we can execute a set of statements, once for each item in a sequence.
###Code
# For value in list 1D
for value in list_1D:
# If value is even
if (value % 2) == 0:
# Print "value is Even" to console
print(value, "is Even")
# Else print "value is Odd" to console
else:
print(value, "is Odd")
###Output
_____no_output_____
###Markdown
While Loop With a while loop we can execute a set of statements as long as a condition is true.
###Code
# initialise loop counter
i = 0
# while i is less that the length of list_1D
while i < len(list_1D):
# print value at the ith index of list_1D
print(list_1D[i])
# increment i by 1
i += 1
# Print "Done!" when loop is finished (condition is no longer True)
print("Done!")
###Output
_____no_output_____
###Markdown
Break Statement With a break statement we can stop the loop even if the while condition is true.
###Code
# Initialise loop counter
i = 0
# Initialise stop value
stop_at = 2
# While i is less that the length of list_1D
while i < len(list_1D):
# Print value at the ith index of list_1D
print(list_1D[i])
# If value at the ith index of list_1D is equal to stop value - stop loop
if list_1D[i] == stop_at:
break
# Increment i by 1
i += 1
###Output
_____no_output_____
###Markdown
Exception handling: Python manages exceptions or errors with built-in tools; one is try/except. This type of handling is included in some of the DEA notebooks.
###Code
# say we want to print a variable that does not exist
print (a)
# if you run the above you get an error saying the name 'a' is not defined.
# the way to overcome that is with the code below: Python catches the error and jumps to the 'except' branch, so your program does not stop
try:
print (a)
except:
print ("variable 'a' does not exist")
###Output
_____no_output_____
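The cell above uses a bare `except:`, which silently catches every kind of error. In practice it is usually better to catch only the exception you expect; a minimal sketch using the same undefined variable `a`:
```python
try:
    print(a)                      # 'a' is not defined here, so this raises NameError
except NameError as err:
    print("variable 'a' does not exist:", err)
```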
###Markdown
Functions A function is a block of code which only runs when it is called.
###Code
# Define function called hello()
def hello():
    # Add docstring to describe functionality
"""Function to print Hello World!"""
# Print "Hello world!" to console
print("Hello world!")
# Call hello() function
hello()
###Output
_____no_output_____
###Markdown
You can pass data, known as parameters, into a function.
###Code
# Define function called multiply()
def multiply(x, y): # This function takes the parameters x and y as input
    # Add docstring to describe functionality
    """Function to multiply two numbers"""
    # Multiply parameters x and y and assign to variable result
result = x * y
# Return result
return result
###Output
_____no_output_____
###Markdown
A function can return data as a result.
###Code
# Call multiply() function with parameters
multiply(3, 5)
###Output
_____no_output_____
###Markdown
Python practice
###Code
print("Cici")
print("Lingzhi")
###Output
Lingzhi
###Markdown
starting
###Code
# shift + enter to run
("hello world")
n = 5 # assign the value 5 to the variable n
print(n)
type(n)
# variables can also hold strings
n = "python"
type(n) # checking the type of the variable, e.g. str, int, ...
###Output
_____no_output_____
###Markdown
condition statement
###Code
n =10
if n >7:
print("number is greater than 7")
print("i am in first block")
elif n <7:
print("number is less than 7")
else:
print ("the number is 7")
###Output
_____no_output_____
###Markdown
while loop
###Code
i = 0
while i <= 10:
    print(i)
    i += 1  # for printing the numbers from 0 to 10
i = 0
while i <= 10:
    print(i, end=",")  # the "end" argument specifies how the output should end
    i += 1             # here it will print on a single line
###Output
0,1,2,3,4,5,6,7,8,9,10,
###Markdown
for loop
###Code
for i in range(10): # for loop is used
print(i)
for i in range(10,21): #to print range of numbers
print(i)
for i in range(10,23): # we give the range of i as between 10 and 23, and the second number (23) is excluded
    print(i, end=" ")  # the "end" argument specifies how the output should end
                       # here it will print on a single line
for i in range(10,24,2): # we give a step size of 2, e.g. (start, end, stepsize)
    print(i)             # now it prints every other number: 10, 12, 14, ..., 22
###Output
10
12
14
16
18
20
22
###Markdown
Functions in python
###Code
def say_hello(name): #def is used to define a function , here "say_hello "is the function name
print("Hello "+ name )
say_hello ("Akhil")
def calculate(a,b): # calculate is the function name here
sum = a+b
diff = a-b
multi = a*b
return(sum,diff,multi) # return or print multiple things using function
calculate(5,3)
calculate(2,2)
###Output
4 0 4
###Markdown
Data structures: 1. List 2. Tuple 3. Dictionary 4. Set (sets are shown in a short sketch at the end of this section) **list** [ ] 1. we can change the elements 2. square brackets are used
###Code
li = [4,6,2,9,8,7] # example of a list
type(li)
print(li[0]) # extracting the first element (index 0) in the given list
print(li[5]) # extracting the element at index 5
print(li[0]) # extracting the zeroth element again
print(li[-1]) # accessing the element from the negative direction
print(li[-2])
###Output
7
8
###Markdown
*list slicing* 1. we can extract some elements from the list by providing the constraints
###Code
# li[ start_index : end_index + 1 ]
li[0 : 3] # we give the starting index and the end index, which is excluded
li[:] # No starting and end index so it will give all the elements
li[0]= 50 # we can change the element of the list
li # Here the first element has been changed
li2 = ["Hello",False,2.3,-1,[1,2,3,5]] # list can contain heterogenous elements
li2
###Output
_____no_output_____
###Markdown
Tuples 1. ( ) is used 2. Read-only 3. Elements are not changeable 4. we use tuples to declare constants
###Code
tup = (5,4,2,3,8) # creating a Tuple
type(tup)
tup[0]
tup[4]
len(tup) # the len function gives the number of elements
3,5 # this is also a tuple (the parentheses are optional)
###Output
_____no_output_____
###Markdown
Dictionary
###Code
canteen_menu = {
"samosa":10,
"juice": 20,
"pizza":50,
}
canteen_menu["pizza"]
canteen_menu['juice']
type(canteen_menu)
canteen_menu["apple"] = 25 # Adding new element
canteen_menu
###Output
_____no_output_____
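Sets were listed at the start of this section but not shown above; here is a minimal sketch (the variable names are illustrative). Sets use { }, keep only unique elements, and support operations like union and intersection.
```python
marks = {80, 70, 50, 70}       # the duplicate 70 is stored only once
print(marks)                   # e.g. {80, 50, 70} (sets are unordered)

marks.add(90)                  # add an element
print(marks)

a = {1, 2, 3}
b = {3, 4, 5}
print(a | b)                   # union        -> {1, 2, 3, 4, 5}
print(a & b)                   # intersection -> {3}
```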
###Markdown
Python Basics*Prepared by:* **Jude Michael Teves** Introduction to Python: Python is a general-purpose and high-level object-oriented, interpreted, and interactive programming language. It is consistently ranked among the top programming languages in the world. In Stackoverflow's 2020 survey, out of **57,378** respondents, Python ranked 4th in the most popular programming languages category. Python was created and released in 1991 by Guido Van Rossum. The language was designed with readability and simplicity in mind--the syntax heavily uses English words. And I think those two aspects are beautifully encapsulated in the following line in the Zen of Python. > There should be one-- and preferably only one --obvious way to do it The Zen of Python is a collection of guiding principles when coding in Python.**The Zen of Python by Tim Peters**
```
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
```
You could also display this by running the following line of code:
```python
import this
```
The principles above might be quite a mouthful for the uninitiated, so let's take it apart and just focus on a few important lines. I find Myk Ogbinar's summary of this in his introduction to Python notebook to be a good one:---**Beautiful is better than ugly.** Python programmers recognize that good code can actually be beautiful. If you come up with a particularly elegant or efficient way to solve a problem, especially a difficult problem, other Python programmers will respect your work and may even call it beautiful. There is beauty in high-level technical work.**Explicit is better than implicit.** It is better to be clear about what you are doing, than come up with some shorter way to do something that is difficult to understand.**Simple is better than complex. Complex is better than complicated.** Keep your code simple whenever possible, but recognize that we sometimes take on really difficult problems for which there are no easy solutions. In those cases, accept the complexity but avoid complication.**Readability counts.** There are very few interesting and useful programs these days that are written and maintained entirely by one person. Write your code in a way that others can read it as easily as possible, and in a way that you will be able to read and understand it 6 months from now. This includes writing good comments in your code.**There should be one-- and preferably only one --obvious way to do it.** There are many ways to solve most problems that come up in programming. However, most problems have a standard, well-established approach. Save complexity for when it is needed, and solve problems in the most straightforward way possible.**Now is better than never.** No one ever writes perfect code.
If you have an idea you want to implement, write some code that works. Release it, let it be used by others, and then steadily improve it.
###Code
print('Hello World!')
name = input("What is your name? ") # input gets the input from the user
print("Hello, " + name + "! Welcome to the world of Python.") # print prints the text on the screen
###Output
What is your name? Jude
###Markdown
Reserved KeywordsYou cannot use the following as variable names in Python as they reserved already:| | | | | ||--------|---------|----------|---------|-----||and |del |from |not |while||as | elif |global |or |with ||assert | else | if |pass |yield||break | except | import |print||class | exec | in |raise||continue| finally | is |return||def | for | lambda |try| VariablesA variable holds a value and we can change its content whenever we want. The data type can also change. The name of the variable should start with an alphabetical character or `_`, but the latter, by convention, is used for hidden or dummy variables.
###Code
var = 'jude'
print(var)
var = 100
print(var)
###Output
100
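As mentioned above, `_` is conventionally used for hidden or throwaway values; a minimal sketch of that convention:
```python
# The loop variable is not needed, so by convention it is named _
for _ in range(3):
    print("hello")

# Ignore parts of an unpacked value that you do not care about
_, y = (10, 20)
print(y)   # 20
```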
###Markdown
Data Types StringsA string is simply a sequence of characters. Sentences, paragraphs, or phrases that are encapsulated by `"` (double quotes) or `'` (single quote) are strings.
###Code
text = "hello there" # double quotes
text = 'i am hungry' # single quote
###Output
_____no_output_____
###Markdown
String methodsHere are some neat methods we can use for any string object to make our lives easier:
###Code
text.capitalize()
text.upper()
'I AM ANGRY?'.lower()
' heyy '.strip()
'jude michael'.title()
'jude michael'.index('m') # returns the index of the first instance of m
###Output
_____no_output_____
###Markdown
Numeric TypesThere are numeric data types in Python: `int`, `float`, and `complex`. 99% of the time, we will only be dealing with `int` and `float`, so let's focus on those. `int` type is used for integers and `float` is used for floating-point values (those with decimals). Operators Arithmetic OperationsWe can perform the following arithmetic operations in Python.
###Code
print(1 + 2)
print(1 - 2)
print(1 * 2) # multiplication
print(1 / 2) # division
print(3 // 2) # floor division
print(3 ** 2)
print(3 % 2) # modulus -- remainder
###Output
3
-1
2
0.5
1
9
1
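Tying back to the numeric types above: in Python 3, `/` always returns a `float` (see the 0.5 in the output), and you can convert between `int` and `float` explicitly. A small sketch:
```python
print(type(1 / 2))      # <class 'float'>, even though both operands are ints
print(int(3.9))         # 3  -> truncates toward zero
print(float(7))         # 7.0
print(type(3 // 2))     # <class 'int'>, floor division of two ints stays an int
```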
###Markdown
Relational Operatorsaka Comparison Operators. These are used to compare and identify relationships between operands.
###Code
a, b = 10, 10
print(a == b) # Equal to
print(a != b) # Not equal to
print(a > b) # Greater than
print(a < b) # Less than
print(a >= b) # Greater than or equal to
print(a <= b) # Less than or equal to
###Output
True
False
False
False
True
True
###Markdown
Logical Operators**AND ( and )** If both the operands are true, the condition becomes true.**OR ( or )** If any of the two operands are true, the condition becomes true.**NOT (not)** Reverses the logical state of the operand. If true, it will become false and vice-versa.
###Code
c, d = True, False
print(c and d)
print(c or d)
print(f'{c} to {not c}')
###Output
False
True
True to False
###Markdown
Data StructuresPython has the following data structures:|Type|Description|Example||---|---|---||list|Ordered collection of values|[1, 'abc', 3, 1]||set|Unordered collection of unique values|{1, 'abc', 3}||tuple|Immutable Ordered collection|(1, 'abc', 3)||dict|Unordered key. value pairs|{'abc': 1, 'def': 2}| Lists and TuplesIn Python, both lists and tuples can contain values of any data type. The only difference is that tuples are immutable--we cannot do append, insert, and delete.
###Code
var_list = ['jude', 'python', 'datascience']
var_tuple = ('jude', 'python', 'datascience')
print(var_list)
print(var_tuple)
###Output
['jude', 'python', 'datascience']
('jude', 'python', 'datascience')
###Markdown
Indexing: accessing an elementRemember that Python is 0-index, which means that the counter starts at 0.
###Code
print(var_list[0])
print(var_tuple[1])
print(var_list[-1]) # we use -1 to access the last element in our data structure
###Output
datascience
###Markdown
Slicing: accessing multiple elements concurrentlyNote that slicing is not inclusive on the right side of the range, which means the last number is excluded.
###Code
print(var_tuple[:2]) # access indices first element up until the 2nd index (3rd element)
print(var_tuple[-2:]) # access second last element up until the last
###Output
('jude', 'python')
('python', 'datascience')
###Markdown
Sorting values
###Code
var_sorted = sorted(var_list)
print(var_sorted)
var_sorted = sorted(var_list, reverse=True)
print(var_sorted)
###Output
['datascience', 'jude', 'python']
['python', 'jude', 'datascience']
###Markdown
Updating an elementWe can only do this for list as tuple is immutable.
###Code
var_list_copy = list(var_list)
print(var_list_copy)
var_list_copy[0] = 'michael'
print(var_list_copy)
###Output
['jude', 'python', 'datascience']
['michael', 'python', 'datascience']
###Markdown
Deleting an element
###Code
var_list_copy = list(var_list)
print(var_list_copy)
del var_list_copy[0]
print(var_list_copy)
del var_list_copy[-1]
print(var_list_copy)
###Output
['jude', 'python', 'datascience']
['python', 'datascience']
['python']
###Markdown
Adding an element
###Code
var_list_copy = list(var_list)
print(var_list_copy)
var_list_copy = var_list_copy + ['sports', 'games']
print(var_list_copy)
###Output
['jude', 'python', 'datascience']
['jude', 'python', 'datascience', 'sports', 'games']
###Markdown
DictionaryDictionary (`dict`) is a container of key-value pairs. Similar to lists, dicts are mutable and can contain mixed types. In addition, dicts are unordered.
###Code
var_dict = {'key': 'value',
'python': 100,
'programming': 9000.00}
var_dict
###Output
_____no_output_____
###Markdown
Indexing: accessing an element
###Code
print(var_dict['python']) # we use the key to access the value
###Output
100
###Markdown
Updating an element
###Code
var_dict_copy = dict(var_dict) # we can do this to create a copy of the dict or through the following line
# var_dict_copy = var_dict.copy()
var_dict_copy['python'] = 'jude'
print(var_dict_copy)
###Output
{'key': 'value', 'python': 'jude', 'programming': 9000.0}
###Markdown
Deleting an element
###Code
var_dict_copy = var_dict.copy()
del var_dict_copy['programming']
print(var_dict_copy)
###Output
{'key': 'value', 'python': 100}
###Markdown
Adding an element
###Code
var_dict_copy = var_dict.copy()
var_dict_copy['game'] = ['Genshin Impact', 'Witcher 3', 'DotA']
print(var_dict_copy)
###Output
{'key': 'value', 'python': 100, 'programming': 9000.0, 'game': ['Genshin Impact', 'Witcher 3', 'DotA']}
###Markdown
Extras To get just the keys or values.
###Code
var_dict.keys()
var_dict.values()
###Output
_____no_output_____
###Markdown
Control FlowIn programming, there are times when we want to run a specific section of code until it satisfies a specific condition. We use control flows for that, and like all programming languages, Python has the following commands for controlling the flow of program:- Conditional Statements: if, elif, else- Loop Statements: for, while- Loop Control Statements: break, continue, pass Conditional StatementsThis is your usual if-else in other programming languages. What happens here is:- the code checks if the statement in the `if` part is true. If it is, execute the code under this block and end the whole if-else block.- if it is not true, it checks if the `elif` part is true. If it is, execute the code under this block and end the whole if-else block. `elif` is the equivalent of `else if` in other languages. `elif`s are the conditional statements in between `if` and `else`.- if none of the conditions passed applied to all the `if` and `elif`s, it will execute the code under the `else` block.
###Code
result = 4
if result == 1:
print("Hello there!")
elif result <= 3:
print("Getting there!")
else:
print("Aww! better luck next time!")
###Output
Aww! better luck next time!
###Markdown
Loop StatementsWhen we want to run something multiple times, we use a loop. Python has 2 types of loops: `for` and `while` loops.
###Code
for i in [0,1,2]:
print(f"{i}")
###Output
0
1
2
###Markdown
A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. To help you visualize this, you may use this Python Tutor web app.
###Code
i = 2
while i >= 0: # execute while this is true
print(f"{i}")
i -= 1
###Output
2
1
0
###Markdown
Loop Control StatementsWe use loop control statements to change the execution of loop from its intended sequence. BreakThis immediately stops the loop.
###Code
for i in range(1, 10):
if i == 3:
print('Condition satisfied')
break
print(i) # What would happen if this is placed before if condition?
###Output
1
2
Condition satisfied
###Markdown
ContinueContinue statement immediately stops the current iteration of the loop and proceeds to the next iteration of the loop.
###Code
for i in range(1, 5):
if i == 3:
print('Condition satisfied')
continue
print("whatever.. I won't get printed anyways.")
print(i)
###Output
1
2
Condition satisfied
4
###Markdown
PassPerforms null operation--does nothing. It is generally used as a temporary placeholder for an unimplemented logic.
###Code
for i in range(1, 5):
if i == 3:
print('Condition satisfied')
pass
print(i)
###Output
1
2
Condition satisfied
3
4
###Markdown
Functions: Imagine the following code below:
```python
kg = 10
lbs = kg*2.2
print(f'{kg} kg = {lbs} lbs')
# ...some code...
kg = 87
lbs = kg*2.2
print(f'{kg} kg = {lbs} lbs')
```
Notice that multiple lines are duplicated. When developing big software, expect to be reusing many pieces of code. That's where functions come in handy. A function lets us define a block of code once and execute it whenever we call it. We use functions for the following reasons: - it allows us to reuse blocks of code - it makes the code more readable Anatomy of a function Defining a function: We define a function by using the `def` keyword followed by the name of the function, then the parameters or arguments enclosed in parentheses, and lastly, a colon `:`. The code block within every function should be indented. The function is also expected to return something back to the caller. To demonstrate, we'll turn the code above into one that uses functions.
###Code
def kg_to_lbs(kg):
'''
Converts kilogram to pounds.
'''
lbs = kg * 2.2
return lbs
kg = 10
print(f'{kg} kg = {kg_to_lbs(kg)} lbs')
# some code
kg = 87
print(f'{kg} kg = {kg_to_lbs(kg)} lbs')
###Output
10 kg = 22.0 lbs
87 kg = 191.4 lbs
###Markdown
Part 1
This file is a compilation of different material and is therefore a mix of Danish and English. The goal is to give an introduction to the most basic elements of Python, so that you are equipped to program your LEGO Mindstorms robot.
The first program: Traditionally, the first program you write in a new language is called "Hello, World!" because all it does is display the words "Hello, World!".
###Code
print('Hello, World!')
###Output
Hello, World!
###Markdown
Exercise 1: Try to make Python print out your name in the cell below. When you have written the code, press shift-enter to run the cell. Arithmetic operators: Addition, subtraction, multiplication and division:
###Code
40+2
43-1
6*7
84/2
###Output
_____no_output_____
###Markdown
We'll get back to why the result ends in ".0". Exercise 2: Calculate how many paws 3 cats and 4 dogs have (the answer is 28). Exponentiation (raising a number to a power), e.g. $6^2+6$:
###Code
6**2 + 6
###Output
_____no_output_____
###Markdown
Exercise 3In the cell below, please calculate $0.5 x^2 + 2 x + 4$ for x = 2. Check that you get 10 as result. NB!In many programming languages, the "^" operator is used to raise a number to a power.E.g. in C, $6^2+6$ is written as "6^2+6".However, in Python, "^" is the bitwise operator XOR. Values and typesA value is one of the basic things a program works with, like a letter or a number. Some values we have seen so far are 2, 42.0, and "Hello, World!".These values belong to different types: 2 is an integer, 42.0 is a floating-point number, and 'Hello, World!' is a string, so-called because the letters it contains are strung together.We can use the `type` command to find out the type of a variable. Some examples are given below:
###Code
type(2)
type(42.0)
type('Hello, World!')
###Output
_____no_output_____
###Markdown
Notice that the following is also a string, because the number is put in single-quotation marks. Thereby we tell Python to treat it as a string:
###Code
type('42.0')
###Output
_____no_output_____
###Markdown
NB! Do not use commas in large numbers: the comma has a special meaning in Python as an element separator.
###Code
1,000,000
type((1, 0, 0))
###Output
_____no_output_____
###Markdown
Since we provided three elements, Python interpreted our input as a tuple, which is a container of elements. Don't worry about this for now; we'll learn about tuples later.
Formal and natural languages: Natural languages are the languages people speak, such as English, Spanish, and French. They were not designed by people (although people try to impose some order on them); they evolved naturally. Formal languages are languages that are designed by people for specific applications. For example: - Mathematical notation is a formal language that is particularly good at denoting relationships among numbers and symbols. - Chemical notation is a formal language for representing the structure of molecules. And most importantly: programming languages are formal languages that have been designed to express computations. Natural languages have loose syntax; minor typos or grammatical errors are usually not critical. - E.g.: "This is @ well-structured Engli$h sentence with invalid t*kens in it". Formal languages have strict syntax, and the smallest deviation from the rules causes an error. Python does not understand what you meant to say, only what you actually wrote.
Assignment statements: First we can define a string variable:
###Code
message = 'And now for something completely different'
print(message)
###Output
And now for something completely different
###Markdown
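Related to the earlier note about commas in large numbers: if you want a readable large integer literal, Python allows underscores as digit separators (a small sketch; the variable name is just an example):

```python
million = 1_000_000    # the underscores are ignored by Python and only aid readability
print(million)         # 1000000
print(type(million))   # <class 'int'>
```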
We can also define number variables, either integers or floating-point numbers.
###Code
n = 17
n
pi = 3.1415926535897932
pi
###Output
_____no_output_____
###Markdown
Variable names: - In Python, the names of variables, functions, and objects cannot have a digit as the first character; only numbers themselves start with a digit. - It is conventional to use only lower-case letters for names, and the underscore '_' to combine words, for example 'car_speed_kmh'. - Reserved words cannot be used as variable names: `and`, `as`, `assert`, `async`, `await`, `break`, `class`, `continue`, `def`, `del`, `elif`, `else`, `except`, `False`, `finally`, `for`, `from`, `global`, `if`, `import`, `in`, `is`, `lambda`, `None`, `nonlocal`, `not`, `or`, `pass`, `raise`, `return`, `True`, `try`, `while`, `with` and `yield`. (We will see the meaning of these later.)
Exercise 5: Create a string variable named "name" that contains your name.
Expressions and statements: An expression is a combination of values, variables, and operators. A value all by itself is considered an expression, and so is a variable, so the following are all legal expressions:
###Code
42
n
n+25
###Output
_____no_output_____
###Markdown
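As a side note, the list of reserved words given above can also be printed from Python itself using the built-in `keyword` module; a small sketch:

```python
import keyword

# Print the words that cannot be used as variable names.
print(keyword.kwlist)
```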
A statement is a unit of code that has an effect, like creating a variable or displaying a value.Example of two statements:
###Code
n=17
print(n)
###Output
17
###Markdown
The typed text is executed by the Python interpreter, line by line. We can also use script mode. Script mode: If using Python as a calculator, we may write two statements:
###Code
miles = 26.2
miles * 1.61
###Output
_____no_output_____
###Markdown
- The first line is an assignment, which has no output. - The second line is an expression, which outputs the result. A script is a collection of statements. If we collect the two statements above in the same cell (or in the same file in Spyder), they now make up a script.
###Code
miles = 26.2
miles * 1.61
###Output
_____no_output_____
###Markdown
Order of operations: - As in many other programming languages, there are certain rules for evaluating expressions. - Arithmetic operators are evaluated in the following order, from highest to lowest precedence: - Parentheses: `()` - Power: `**` - Sign: `+x, -x` - Multiplication, division, modulo: `*, /, %` - Addition and subtraction: `+, -`. - Operators on the same level group from left to right. (A short illustration follows after the string examples below.)
Exercise 6: According to the rules above, what is the correct interpretation of $b-a \cdot x^2$? Only one of the choices below is correct. 1. $(b-a)x^2$? 2. $b-(a(x^2))$? 3. $(b-ax)^2$? 4. $b-(ax)^2$? Discuss in your group which choice is correct and make sure you understand why.
String operations: We can use the `+` and `*` operators on strings:
###Code
'in' + 'put'
'same'*2
'same' + 'same'
###Output
_____no_output_____
###Markdown
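Here is the short illustration of operator precedence promised above; each line evaluates a small expression where the order of operations matters (a sketch):

```python
print(2 + 3 * 4)     # 14: * is evaluated before +
print((2 + 3) * 4)   # 20: parentheses are evaluated first
print(2 * 3 ** 2)    # 18: ** is evaluated before *
print(-3 ** 2)       # -9: ** binds tighter than the unary minus
print((-3) ** 2)     # 9
```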
Exercise 7: Make a small script in the cell below that writes the following string: `--- Headline ---`. You are only allowed to use the dash (`-`) twice.
Comments in Python code: - It is possible to insert comments to explain the idea behind the code. - They should help both you and other programmers understand the code. - In Python, everything after `#` on a line is considered a comment.
###Code
minute = 2
# compute the percentage of the hour that has elapsed
percentage = (minute * 100) / 60
percentage = (minute * 100) / 60 # percentage of an hour
###Output
_____no_output_____
###Markdown
Besides being used for explaining code, the `#` can also be used to comment out lines of code, e.g. if you are trying out different ideas. Exercise 8: In the following code, try commenting out different lines by placing a `#` at the beginning of a line to disable those lines.
###Code
a = 2
a = 3
a = 4
print(a)
###Output
4
###Markdown
Printing strings with variablesIn Python there are different ways to print a string in which the values of variables are inserted. The most recent method is to use what is called f-strings. An example of how it works is given below:
###Code
age = 21
name = 'Bob'
print(f'My name is {name} and I am {age} years old.')
###Output
My name is Bob and I am 21 years old.
###Markdown
As you can see, the two variables `age` and `name` are inserted in the places indicated by the variable names surrounded by curly braces. Notice the `f` before the start of the string; this tells Python to treat the string specially and insert the values of the variables. In fact, anything you put inside the curly braces is treated as an expression. This means that you can do arithmetic operations or function calls inside the curly braces, e.g.:
###Code
name = 'Bobby'
print(f'Your name is {len(name)} characters long.')
###Output
Your name is 5 characters long.
###Markdown
Exercise 9: First define a variable that contains a number. Now, use the f-string method to print out the number and the square of the number, where you calculate the squared number directly in the curly braces. The output should be like this: `The number is 3, the squared number is 9.`
Part 2
What is a function? A function is a collection of commands that performs a task. The function is identified by a name, which you can use to *call*, i.e. execute, the function. Some examples of functions built into Python. Check the type:
###Code
type('Hej')
###Output
_____no_output_____
###Markdown
Convert a string containing a number to an integer:
###Code
int('32')
###Output
_____no_output_____
###Markdown
Why doesn't the following work?
###Code
int('Hej')
###Output
_____no_output_____
###Markdown
Convert a floating-point number to an integer. Notice what happens to the decimals.
###Code
int(4.5678)
###Output
_____no_output_____
###Markdown
Convert a string containing a number to a floating-point number:
###Code
float('2.7182')
###Output
_____no_output_____
###Markdown
Convert a number to a string:
###Code
str(32)
###Output
_____no_output_____
###Markdown
Exercise 1: Extend the code below with the appropriate functions from above so that you can compute what $7 \cdot a$ is.
###Code
a = '2.5'
###Output
_____no_output_____
###Markdown
Examples of functions from Python's standard library. As an example, let us look at the `math` module. - A collection that provides a selection of common mathematical functions. - A *module* in Python is a collection of functions and other definitions, gathered in one file.
###Code
import math
math
###Output
_____no_output_____
###Markdown
To access functions from a module, 'dot' notation (a period) is used. See in the examples below how we access the `log10()` and `sin()` functions from the `math` module:
###Code
signal_power = 10
noise_power = 1
ratio = signal_power / noise_power
decibels = 10 * math.log10(ratio)
decibels
radians = 0.7
height = math.sin(radians)
height
###Output
_____no_output_____
###Markdown
Exercise 2: Make a script below that computes the area of a circle from a radius stored in the variable `r`. Hint: use the constant `pi` from the `math` module via dot notation.
Our own functions: We have seen a few examples of functions built into Python. Even more interesting is that we can also define our own functions. A first example:
###Code
def print_haiku():
print('Tyst falder løvet')
print('Efteråret tilstunder')
print('Regnens sagte "dryp"')
###Output
_____no_output_____
###Markdown
- A function definition always starts with the keyword `def`. - Right after comes the function's name, which may consist of letters, digits and underscores, but must not start with a digit. - After the name come parentheses (possibly with arguments; here we have none), and the line ends with a colon. - The actual functionality of the function is written on the following lines. Note the indentation; it is an essential detail in Python. Python uses the indentation alone to determine what belongs to the function and what is not part of it. When a function is defined, a function *object* is created.
###Code
print(print_haiku)
type(print_haiku)
###Output
_____no_output_____
###Markdown
We call such a function the same way as the built-in functions in Python: the function's name followed by parentheses:
###Code
print_haiku()
###Output
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
###Markdown
Once a function has been defined, it can, for example, be used inside another function:
###Code
def gentag_haiku():
print_haiku()
print_haiku()
###Output
_____no_output_____
###Markdown
Let's try calling it:
###Code
gentag_haiku()
###Output
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
###Markdown
We can put our functions together so that they form a module:
###Code
def print_haiku():
print('Tyst falder løvet')
print('Efteråret tilstunder')
print('Regnens sagte "dryp"')
def gentag_haiku():
print_haiku()
print_haiku()
gentag_haiku()
###Output
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
###Markdown
Notice how the last line, which calls `gentag_haiku()`, is not indented, since it is not part of the function.
Exercise 4: Try inserting the following command on line 9: `print('Haiku-digt om efterår:')`, and see what difference it makes in the output whether the line is indented or not. Make sure you understand what causes the difference.
###Code
def print_haiku():
print('Tyst falder løvet')
print('Efteråret tilstunder')
print('Regnens sagte "dryp"')
def gentag_haiku():
print_haiku()
print_haiku()
gentag_haiku()
###Output
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
###Markdown
Parameters and arguments: Some functions require **arguments**, e.g. `math.sin`, which only works if it is given a number as input. - The arguments are the "input" we give to the function when we call it. - Inside the function, the arguments are assigned to **parameters**. - The parameters are internal variables in the function, in which the function stores the arguments while it executes. In the example below we have defined the function to take one argument, which we assign to a parameter called `bjarne`.
###Code
def print_to_gange(bjarne):
print(bjarne)
print(bjarne)
###Output
_____no_output_____
###Markdown
Let's try using it by giving an argument to the function:
###Code
print_to_gange('selleri')
###Output
selleri
selleri
###Markdown
We can use *composition* for arguments to functions by inserting an expression as the argument:
###Code
print_to_gange('rødbede' + 'saft')
sigurd = 'Skærslibervisen'
print_to_gange(sigurd)
###Output
Skærslibervisen
Skærslibervisen
###Markdown
Notice that it does not matter what we call the variable (`sigurd`) that is passed as the argument to the function (`print_to_gange`). Inside the function, the argument's value will be assigned to the parameter `bjarne` regardless.
Exercise 5: Define a function named `summering` that takes two arguments and prints their sum. Test the function with the command `summering(2,4)`, which should print `6`.
Variables and parameters in functions are local: Inside a function definition like the one below, we have three so-called *local variables*. The fact that they are local means that they only exist inside the function. Let us first define a function and try to call it:
###Code
def kat_to_gange(nummer1, nummer2):
kat = nummer1 + nummer2
print_to_gange(kat)
linje1 = 'dyt '
linje2 = 'båt.'
kat_to_gange(linje1, linje2)
###Output
dyt båt.
dyt båt.
###Markdown
Let's say we want to print the value of the parameter `kat`:
###Code
print(kat)
###Output
_____no_output_____
###Markdown
Here Python gives us an error message saying that `'kat'` is not defined. The error is that we are trying to access the parameter/variable outside the function, which is not possible.
Exercise 6: Try inserting the command `print(kat)` on a new line after line 2 in the code below:
###Code
def kat_to_gange(nummer1, nummer2):
kat = nummer1 + nummer2
print_to_gange(kat)
linje1 = 'dyt '
linje2 = 'båt.'
kat_to_gange(linje1, linje2)
###Output
dyt båt.
dyt båt.
###Markdown
Now the error should be gone, and `'dyt båt.'` is printed three times instead of only two.
Fruitful functions and functions without a return value: Among the functions we have seen, some return a value while others do not. - The functions that **return** a value we call *fruitful functions*. - Functions that do **not return** any value we call *void functions*. A fruitful function:
###Code
math.sqrt(5)
###Output
_____no_output_____
###Markdown
A function without a return value:
###Code
print_haiku()
###Output
Tyst falder løvet
Efteråret tilstunder
Regnens sagte "dryp"
###Markdown
But wait, what was that? Both functions looked as if they returned something. So what was the difference?
###Code
x = math.sqrt(5)
print(x)
y = print_haiku()
print(y)
###Output
None
###Markdown
The difference is that `math.sqrt(5)` does not print anything to the screen; only when we actively print `x` with the print command do we get output on the screen. The haiku poem, on the other hand, is written to the screen the moment we call `print_haiku()`, and `y` has the value `None`, which means it has not been assigned any value.
Why bother with functions? - When we define a function, we get the opportunity to group a collection of functionality under one name (**abstraction**), which we can then use instead of all those lines of code wherever the functionality is needed. This makes the program easier to read and debug. - By dividing a long program into several functions, you can debug the program one function at a time (a principle also known as *unit testing*) and then combine them into a correctly working program. - Well-designed functions are often useful in several different programs. Once you have written and debugged a function in one place, you can reuse it in other programs. - Functions can help make a program smaller by eliminating repeated code. If the functionality later needs to be changed, you only have to do it in one place.
Exercise 7: In the code example below there is a repetition of functionality in lines 3-5 and 10-12. Eliminate the repeated code by making a function that is responsible for printing the headline box. The function must take one argument, which is the text to appear in the headline. Write the improved version in the empty cell below the code example.
###Code
laengde = 15
print('*'*(laengde+4))
print('* ' + ' Ingredienser ' + ' *')
print('*'*(laengde+4))
print('Gær, vand og mel')
print('')
print('*'*(laengde+4))
print('* ' + ' Fremgangsmåde ' + ' *')
print('*'*(laengde+4))
print('Opløs gær i vandet og tilsæt melet. Ælt og bag i ovnen.')
###Output
*******************
* Ingredienser *
*******************
Gær, vand og mel
*******************
* Fremgangsmåde *
*******************
Opløs gær i vandet og tilsæt melet. Ælt og bag i ovnen.
###Markdown
Docstrings: When you make your own functions, it is good practice to write a so-called *docstring* that explains the purpose of the function.
Block comments: Above we learned to make comments on a single line using `#`. In Python we can also make comments that span several lines. A block comment both starts and ends with three quotation marks `"""`. An example:
###Code
print('før')
"""
Dette er en blok-kommentar som består af flere linjer.
Anden linje.
Tredje linje.
"""
print('efter')
###Output
før
efter
###Markdown
Docstring: A docstring is a block comment that starts on the very first line of a function. If the block comment is placed elsewhere in the code, it is not a docstring.
###Code
def afstand(x1,y1,x2,y2):
"""
Udregner den kartesiske afstand mellem punkterne (x1,y1) og (x2,y2)
"""
print(((x1-x2)**2 + (y1-y2)**2)**(1/2))
afstand(1,1,2,2)
###Output
1.4142135623730951
###Markdown
Besides being a useful description of the function, docstrings have several other handy properties, such as automated testing and automatic inclusion in Python's help. We will return to the automated tests in later lessons, but below we can see how to use Python's built-in `help` function to find out what the function does and what the arguments represent.
###Code
help(afstand)
###Output
Help on function afstand in module __main__:
afstand(x1, y1, x2, y2)
Udregner den kartesiske afstand mellem punkterne (x1,y1) og (x2,y2)
###Markdown
Exercise 8: Make a function that computes the area of a circle from a given radius, and write a suitable docstring. Check that you can retrieve the description using the `help` function.
###Code
# Define the function here
# Call help here
###Output
_____no_output_____
###Markdown
For loops: We typically use a `for` loop when we want the same code executed a number of times or for several elements in a list. Let's start with a simple example:
###Code
for i in range(4):
print('gør noget')
###Output
gør noget
gør noget
gør noget
gør noget
###Markdown
Here we can see that the command on line 2 has been executed 4 times. We can control the number of repetitions by changing the number in `range(4)`. If we change it to, say, `range(2)`, we get two repetitions instead. Besides repeating a block of code, a `for` loop can also give us the index of the current iteration through the counter variable, which we have called `i` here. Say we want to print the numbers 1-10; we might at first think it should be done like this:
###Code
for i in range(10):
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
However, here we get the numbers 0-9. The reason is that Python by definition always counts from 0 (like many other programming languages, though not all). We will return to this when we work with lists. To get the numbers 1-10 printed, we can easily change our program by changing line 2 to `print(i+1)`.
Exercise 9: Use a `for` loop to print the first 20 numbers in the 7 times table.
The `range` function: Above we have only given a single argument to the `range` function. However, it can take more arguments to control its behaviour. If we write `help(range)` we can see how to use it. Below are the first lines from the help:
###Code
Help on class range in module builtins:
class range(object)
| range(stop) -> range object
| range(start, stop[, step]) -> range object
|
| Return an object that produces a sequence of integers from start (inclusive)
| to stop (exclusive) by step. range(i, j) produces i, i+1, i+2, ..., j-1.
| start defaults to 0, and stop is omitted! range(4) produces 0, 1, 2, 3.
| These are exactly the valid indices for a list of 4 elements.
| When step is given, it specifies the increment (or decrement).
###Output
_____no_output_____
###Markdown
If we give 2 arguments, they are interpreted as start and stop respectively, where stop is not included.
Exercise 10: Insert the right arguments to the `range` function below so that the code prints the numbers 1-5.
###Code
for i in range():
print(i)
###Output
_____no_output_____
###Markdown
If we give 3 arguments, they are interpreted as start, stop, and step size. That means we can print the 7 times table up to 100 as follows:
###Code
for i in range(7,100,7):
print(i)
###Output
7
14
21
28
35
42
49
56
63
70
77
84
91
98
###Markdown
`for` loops and lists: Finally, `for` loops can also be used to work on lists. We will return to lists later, but it is worth showing how this works. Assume we have a list of temperature measurements in degrees Celsius that we would like printed in degrees Fahrenheit instead. The formula for this conversion is: $$f = (c \cdot 9/5) + 32$$ Below we first define a list `C` containing some fictitious measurements, and then we use a `for` loop to go through the elements and, for each, convert to Fahrenheit and print a suitable text.
###Code
C = [21.6, 22.3, 23.1, 22.8, 21.9]
for c in C:
f = (c*9/5) + 32
print(f'{c} grader C er {f} grader F')
###Output
21.6 grader C er 70.88 grader F
22.3 grader C er 72.14 grader F
23.1 grader C er 73.58 grader F
22.8 grader C er 73.04 grader F
21.9 grader C er 71.42 grader F
###Markdown
Notice how `c` is assigned a new value from the list `C` in each iteration.
Nested `for` loops: Often you work with problems where one `for` loop is not enough, and in those cases you use nested loops, i.e. one `for` loop appears inside another. Here is an example where the counter variable from the outer loop is used as the start value for `range` in the inner loop.
###Code
for i in range(5):
line = ''
for j in range(i,5):
line = line + ' ' + str(j)
print(line)
###Output
0 1 2 3 4
1 2 3 4
2 3 4
3 4
4
###Markdown
Part 3: Conditionals and recursion. The primary focus is the `if` statement. Let's first introduce two new operators: floor division and modulus. Division vs. floor division: normal division is `/`. Example:
###Code
minutes = 105
hours = minutes/60
hours
###Output
_____no_output_____
###Markdown
Floor division is `//`. Example:
###Code
minutes = 105
hours = minutes//60
hours
###Output
_____no_output_____
###Markdown
What happened? Floor division keeps only the integer part of the division by rounding down, equivalent to $\lfloor \frac{minutes}{60} \rfloor$:
###Code
minutes = 105
hours = minutes/60
import math
hours = math.floor(hours)
hours
###Output
_____no_output_____
###Markdown
If we want to get the remainder, we can either subtract the integer part and convert to minutes:
###Code
remainder = minutes - hours * 60
remainder
###Output
_____no_output_____
###Markdown
Alternatively, we can use the modulus operator `%`. Example:
###Code
remainder = minutes % 60
remainder
###Output
_____no_output_____
###Markdown
Modulus is very useful in many situations, for example you can check whether one number is divisible by another:
###Code
119%13
119//13
9*13
###Output
_____no_output_____
###Markdown
There is a non-zero remainder after doing division, meaning that $119$ is not divisible by $13$.
###Code
119%17
###Output
_____no_output_____
###Markdown
The remainder is zero, meaning that $119$ is indeed divisible by $17$, specifically $7\cdot17=119$. Exercise 1: Calculate how many hours and minutes are in 427 minutes. Try the following two approaches: 1. Use only floor division `//` and basic arithmetic operators (+,-,*,/). 2. Use only modulus `%` and basic arithmetic operators. In both cases, print out a suitable text explaining the result, e.g. "X minutes is Y hours and Z minutes."
Boolean expressions: These are expressions that are either `True` or `False`. Examples:
###Code
5 == 5
5 == 6
###Output
_____no_output_____
###Markdown
`True` and `False` are special values - not strings (notice that there is no `"` or `'`). Specifically, they are of type `bool`, which is a data type with two possible values, `True` or `False`.
###Code
type(True)
type(False)
###Output
_____no_output_____
###Markdown
`==` means equals and is used to test for equality. (Notice that it is different from the assignment equal sign `=` we have used so far.)It is one of the relational operators. The others are: - `x!=y`: $x$ is not equal to $y$- `x>y`: $x$ is greater than $y$- `x<y`: $x$ is less than $y$- `x>=y`: $x$ is greater than or equal to $y$- `x<=y`: $x$ is less than or equal to $y$ Exercise 2Check which of the following expressions are true:1. $21/7 \geq \pi$ 2. $\frac{116}{27} < \frac{13}{3}$3. The length of your full name is larger than your age Logical operators `and`, `or`, and `not`Same meaning as in English. Let's assume that we require the weight of an item to be between 1 and 5 kg.
###Code
weight = 1
weight >= 1 and weight < 5
###Output
_____no_output_____
###Markdown
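The example above only uses `and`; a small hedged sketch showing `or` and `not` as well, with the same 1-5 kg weight requirement:

```python
weight = 7
# or: True if at least one of the two conditions is True
print(weight < 1 or weight >= 5)    # True, so the weight is outside the allowed range
# not: negates a boolean expression
print(not (1 <= weight < 5))        # True, the same check written with not
```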
Python accepts any non-zero value as `True`:
###Code
17 and True
###Output
_____no_output_____
###Markdown
It works, but avoid confusing code. Conditional execution Very often we need to make the program flow depend on some conditions. An example for video streaming service could be:- For age under 18 years, show safe content- For age 18 or above, show all content
###Code
age = 10
if age < 18:
print('Safe content only')
if age >= 18:
print('All content')
###Output
Safe content only
###Markdown
However, it is unnecessary to evaluate the second condition if the first is true, since they are mutually exclusive. Tip:There is no limit on the number of statements that can appear in the body, but there has tobe at least one. Occasionally, it is useful to have a body with no statements (usually as aplace keeper for code you haven’t written yet). In that case, you can use the `pass` statement,which does nothing.
###Code
if age < 0:
pass
###Output
_____no_output_____
###Markdown
Exercise 3Let the variable `fuel` denote the amount of remaining fuel in the fuel tank of a car. Implement a conditional code below that prints out a warning that fuel is low when the value of `fuel` is lower than 5 liters. Try assigning different values (4,5, and 6) to `fuel` to test that it works as intended. Only `fuel = 4` should produce a warning.
###Code
fuel = 6
# Implement conditional code below
###Output
_____no_output_____
###Markdown
Alternative execution Another way to implement the age restriction is using the `else` keyword:
###Code
age = 18
if age < 18:
print('Safe content only')
else:
print('All content')
###Output
All content
###Markdown
In this case, if we have to change the age limit in the condition, we only need to change it in one place. In the earlier code that uses two `if`-statements, we must remember to change the number in both places. Chained conditionals: If we have more than two possibilities, we can use `elif` (else if) to create branches, where only the first branch whose condition is `True` is executed:
###Code
age = 10
if age < 8:
print("Kids' content only")
elif age < 18:
print('Safe content only')
else:
print('All content')
###Output
Safe content only
###Markdown
Conditions are checked in order, i.e. top to bottom. When a condition is `True` Python evaluates the code inside and exits the branch. Exercise 4Copy paste your solution from exercise 3 into the cell below and extend it to give an additional critical warning when the fuel level is 1 liter or below. Use the `elif` statement to achieve this. Test you code with `fuel` values 0.5, 1, 4, 5, and 6. Only values 0.5 and 1 should give the critical warning and only the value 4 should give the low fuel warning.Notice the importance of the ordering of conditions. Try swapping the conditions for low fuel warning and critical warning and see which result you get. (This may also help you to get the desired behavior if it did not work in the first try.) Nested conditionalsWe can also put conditions in another conditional branch:
###Code
age = 3
paid = True
if paid:
if age < 8:
print("Kids' content only")
elif age < 18:
print('Safe content only')
else:
print('All content')
else:
print('No subscription')
###Output
Kids' content only
###Markdown
In this case, the conditions on `age` will only be considered if the subscription has been paid.Try changing the value of `paid` (`True` or `False`) and give different values for `age` in the code above to achieve the different possible outputs. Double conditionals Double conditional for a variable in the same statement is possible in Python:
###Code
age = 10
if 8 < age < 18:
print("Safe content only")
###Output
Safe content only
###Markdown
Exercise 5Re-implement the fuel tank example using a double conditional for low fuel warning in the 1 to 5 liter interval and an `elif`-statement for the critical level. Keyboard inputIt is possible to get input from a user with the `input()` function. Try running the following cells:
###Code
name = input('What is your name?\n')
print(name)
age_str = input('What is your age?\n')
age = int(age_str)
type(age)
###Output
_____no_output_____
###Markdown
Exercise 7Make a program that asks the user for a name and age. If the age is below 40 years, the program shall print out the name with the suffix `Jr.` and if the age is 40 or above, it should print out the name with the suffix `Sr.`. An example for a 50 year old user is: `John Doe, Sr.`.
###Code
name = 'John'
###Output
_____no_output_____
###Markdown
Fruitful functionsThe functions we have made so far have printed things to the screen or done computations, but they have not returned values. Such functions are called `void` functions.So-called fruitful functions return a result to the calling function.Example of a function to calculate area of circle:
###Code
def area(radius):
a = math.pi * radius**2
return a
ar = area(3)
ar
###Output
_____no_output_____
###Markdown
Notice how we store the returned value from calling the `area`-function in the variable `ar`. Return statementsReturn statements can appear multiple times in code, e.g. with conditionals:
###Code
def absolute_value(x):
if x < 0:
return -x
if x > 0:
return x
###Output
_____no_output_____
###Markdown
Is this function correct? What if $x = 0$? Boolean functionsA function can also return a boolean (`True` or `False`), example:
###Code
def is_divisible(x, y):
return x % y == 0
is_divisible(6, 4)
is_divisible(6, 3)
###Output
_____no_output_____
###Markdown
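Before moving on, one possible answer to the question above about `absolute_value(0)`: the version shown returns `None` for 0 because neither `return` is reached; a hedged fix is to let one branch cover zero as well:

```python
def absolute_value(x):
    if x < 0:
        return -x
    else:              # covers both x > 0 and x == 0
        return x

print(absolute_value(0))    # 0 instead of None
print(absolute_value(-5))   # 5
```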
Now you can program anything! We have only covered a small subset of Python, but you might be interested to know that this subset is a complete programming language, which means that anything that can be computed can be expressed in this language. **Any program ever written could be rewritten using only the language features you have learned so far** (actually, you would need a few commands to control devices like the mouse, disks, etc., but that's all).
Additional things in Python that are nice to know. Iteration: *The ability to run a block of statements repeatedly.* We saw earlier how this can be done through recursion, but it is not always easy to get an overview. Two common ways to implement iteration are: - the `while` loop - the `for` loop. We looked at the `for` loop last time. Today we will present the `while` loop and study their differences.
Reassignment: Before we study iteration, it is worth studying reassignment. It is possible to make multiple assignments to the same variable:
###Code
x=1
x
x=2
x
###Output
_____no_output_____
###Markdown
In the second assignment `x=2`, we overwrite the initial value of `x`. NoteRemember that Python uses `=` for assignment and `==` for testing equality. This is different from math, where `=` has both meanings.Example:If we assign the value of `a` to `b`, then the variables will be equal.
###Code
a=5
b=a
a==b
###Output
_____no_output_____
###Markdown
But if we change the value of `a`, they will no longer be equal.
###Code
a=3
a==b
###Output
_____no_output_____
###Markdown
The assignment `b=a` from above actually copies the value of `a` into `b`. Updating variablesA common kind of reassignment is an update where the new value of a variable depends on the old value.For example incrementing a value:
###Code
x = 1 # First we need to define variable
x = x + 1
print(x)
###Output
2
###Markdown
Common variable updates are:- increment
###Code
x = x + 1
x += 1 # Short form of increment.
###Output
_____no_output_____
###Markdown
- decrement
###Code
x = x - 1
x -= 1 # Short form of decrement.
###Output
_____no_output_____
###Markdown
Exercise 9: Verify that the short-hand versions give the same result as the long-form versions.
The `while` statement: *One method to obtain iteration or repetition of a code block.* Below is a `while`-based version of the `countdown` function that we studied earlier, where it was implemented using recursion:
###Code
def countdown(n):
while n > 0:
print(n)
n -= 1
print('Blastoff!')
countdown(3)
###Output
3
2
1
Blastoff!
###Markdown
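For comparison, here is a sketch of a recursive version of `countdown` (the recursive variant referenced above is not shown in this excerpt, so this is only an illustration):

```python
def countdown_recursive(n):
    if n <= 0:
        print('Blastoff!')
    else:
        print(n)
        countdown_recursive(n - 1)   # the function calls itself with a smaller n

countdown_recursive(3)   # prints 3, 2, 1, Blastoff!
```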
Elements of a while loop: - Test the boolean value of the *condition*. - If it is false, exit the loop and continue with the next statement (here, `after_a_while()`). - If the condition is true, execute the body of the loop once (here that is statement 1, statement 2, ...). Note: The body of the while loop is everything that is indented after the `:`.
###Code
while <condition>:
<statement 1>
<statement 2>
...
after_a_while()
###Output
_____no_output_____
###Markdown
Loop terminationFor the `countdown` function, it is easy to see that the while loop will terminate at some point and not loop infinitely as `n` is counted down, towards the stop condition.It is not always easy to tell if a while loop will always terminate, e.g.:
###Code
def sequence(n):
while n != 1:
print(n)
if n % 2 == 0: # n is even
n=n/2
else:
n = n*3 + 1 # n is odd
###Output
_____no_output_____
###Markdown
Q: Will the loop terminate? Let's check:
###Code
sequence(128)
###Output
128
64.0
32.0
16.0
8.0
4.0
2.0
###Markdown
From the experiment, it seems that the program will eventually reach a number that is a power of two and then repeatedly divide by two to finally reach 1 and terminate. Actually, no one has been able to prove that it always terminates, or to find a case where it does not; it is an open mathematical problem known as the Collatz conjecture.
The `break` statement: Sometimes it is not possible to know when to end a loop until somewhere in the loop body. In that case it is possible to stop the loop using the `break` statement. For example, suppose you want to take input from the user until they type done. You could write:
###Code
while True:
line = input('> ')
if line == 'done':
break
print(line)
print('Done!')
###Output
_____no_output_____
###Markdown
Using `break` is a common way of writing `while` loops, as you can express the stop condition affirmatively ("stop when this happens") rather than negatively ("keep going until that happens"). However, be careful not to put `break` statements in multiple places, since the code can become very difficult to read and understand. Exercise 10: Try out the program above and see how it only exits if the text typed on a line is `done`.
Loop use case: square roots. Newton's method allows us to compute the square root of a number `a` by repeatedly applying the update $y = \frac{x+a/x}{2}$, where $x$ is the current estimate. Let's try with numbers:
###Code
a = 4
x = 3
y = (x + a/x) / 2
print(y)
###Output
2.1666666666666665
###Markdown
This is close, but if we use this estimate as the new initial guess we get closer:
###Code
x = y
y = (x + a/x) / 2
print(y)
###Output
2.0064102564102564
###Markdown
After a few more updates we get `y == 2.0` and we can stop. We can formulate this approach using a `while` statement:
###Code
a = 4
x = 3
while True:
print(x)
y = (x + a/x) / 2
if y == x:
break
x = y
print(y)
###Output
3
2.1666666666666665
2.0064102564102564
2.0000102400262145
2.0000000000262146
2.0
2.0
###Markdown
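Why the update rule used above converges to $\sqrt{a}$ (a short derivation, assuming the standard Newton iteration for root finding applied to $f(x) = x^2 - a$):

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - a}{2x_n} = \frac{x_n + a/x_n}{2}$$

So each pass of the loop performs one Newton step towards the positive root of $x^2 = a$, which is why the printed values approach 2.0 for $a = 4$.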
It is generally not a good idea to test for equality of floats `y == x`.Rather, one should define a maximum tolerated error (`epsilon`) and check if the difference between the variables is smaller than this:
###Code
if abs(y-x) < epsilon:
break
###Output
_____no_output_____
###Markdown
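The reason `==` is unreliable for floats is rounding error in binary floating point; a small illustrative sketch (the tolerance value is just an example):

```python
print(0.1 + 0.2 == 0.3)   # False: the sum is actually 0.30000000000000004
print(0.1 + 0.2)

epsilon = 1e-9            # example tolerance
print(abs((0.1 + 0.2) - 0.3) < epsilon)   # True: compare with a tolerance instead
```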
Exercise 11: Extend the example of Newton's method above to test for equality using the maximum tolerated error instead of `==`.
The `for` statement: In lecture 2 you were introduced to the `for` loop, and we looked at how to repeat commands and how to use the `range` function to generate a counter or index variable, as in the following examples:
###Code
for i in range(4):
print('Hello!')
for i in range(3):
print(i)
###Output
0
1
2
###Markdown
Remember that in Python we count from 0 and the upper limit is not included.How can we print the numbers 1, 2, and 3 instead?
###Code
for i in range(1,4):
print(i)
###Output
1
2
3
###Markdown
Another use of `for` loops is to iterate over a list of items. While we have not looked at lists in detail, it suffices to know that a list is written as comma-separated items surrounded by square brackets, e.g. `[a, b, c]`, where a, b, and c are variables or values. In the following example we implement the same functionality first using a `while`-loop and then using a `for`-loop. Example: let's print out the numbers of the following Fibonacci sequence that are divisible by 2:
###Code
fib_list = [1, 2, 3, 5, 8, 13, 21, 34]
###Output
_____no_output_____
###Markdown
`while`-loop version:
###Code
i = 0
while i < len(fib_list):
n = fib_list[i]
if n%2==0:
print(n,'is divisible by 2.')
i += 1
###Output
2 is divisible by 2.
8 is divisible by 2.
34 is divisible by 2.
###Markdown
`for`-loop version:
###Code
for n in fib_list:
if n%2==0:
print(n,'is divisible by 2.')
###Output
2 is divisible by 2.
8 is divisible by 2.
34 is divisible by 2.
###Markdown
Notice how compact the `for`-loop version is compared to the `while`-loop. The `for`-loop is tailored for iterating over lists, whereas the `while`-loop is a more general construction that is more flexible and can be used in situations where the `for`-loop is too rigid. However, often both a `while`-loop and a `for`-loop can get the job done. Experience will help you make the best choice in a given situation.
Part 4
Lists: Previously we have worked with simple data types that can only hold one value: `int` (integers), `float` (floating-point numbers), `bool` (True/False) and `str` (text strings, which can however contain many characters). A list, on the other hand, is a data type that can contain several values. The most direct way to create a list is to write its contents explicitly, using the following syntax with square brackets. Example with a list of integers:
###Code
[10, 20, 30, 40]
###Output
_____no_output_____
###Markdown
Example with a list of strings:
###Code
['crunchy frog', 'ram bladder', 'lark vomit']
###Output
_____no_output_____
###Markdown
Lists can also be assigned to a variable:
###Code
liste = [1, 2, 3]
print(liste)
###Output
[1, 2, 3]
###Markdown
Lists are flexible enough to contain elements of different types. You can even combine different types in the same list.
Exercise 1: Make a list that contains one value of each of the four simple data types mentioned above. Store the list in a variable named `enAfHver`.
Nested lists: A list inside a list is called *nested*.
###Code
indlejret = [[1, 2], [10, 11]]
print(indlejret)
###Output
[[1, 2], [10, 11]]
###Markdown
How many elements are there in the list above? We can use the `len` command to find out:
###Code
len(indlejret)
###Output
_____no_output_____
###Markdown
If you thought there were 4 elements in the list, you are probably puzzled by this result. Let's look at what is in the two positions of the list. Here we use square brackets to index into the list:
###Code
print(indlejret[0])
print(indlejret[1])
###Output
[1, 2]
[10, 11]
###Markdown
As we can see here, it is correct that there are two elements in the list. The two inner lists each have two elements. We can access the elements of the nested lists by adding another set of square brackets, like this:
###Code
print(indlejret[1][0])
###Output
10
###Markdown
Exercise 2: Use double indexing as in the example above to print the 2 and the 11 from the list `indlejret`.
Lists are mutable: This means that the contents of an existing list can be changed. Changing an element in a list can be done like this:
###Code
numbers = [42, 123]
print(numbers)
numbers[1] = 5 # Here we change the value of the element at index 1.
print(numbers)
###Output
[42, 123]
[42, 5]
###Markdown
Traversing a list: We can easily traverse the elements of a list using a `for` loop:
###Code
cheeses = ['cheddar', 'Gouda', 'Danbo']
for cheese in cheeses:
print(cheese)
###Output
cheddar
Gouda
Danbo
###Markdown
Notice how the variable `cheese` is assigned the next value in the list `cheeses` in each pass. If you need to change the list inside a `for` loop, you also need an index for the individual elements, via the `range` function:
###Code
for i in range(len(numbers)):
numbers[i] = numbers[i] * 2
numbers
###Output
_____no_output_____
###Markdown
A loop over an empty list never runs:
###Code
for x in []:
print('This never happens.')
###Output
_____no_output_____
###Markdown
Exercise 3: Make a `for` loop that prints every other element of the list `a`.
###Code
a = [12, 23, 34, 45, 56, 67, 78, 89, 90]
###Output
_____no_output_____
###Markdown
Operations on lists: A couple of the arithmetic operators also work on lists and have the following effect: - Addition (`+`) concatenates (joins) lists:
###Code
a = [1, 2, 3]
b = [4, 5, 6]
c = a + b
c
###Output
_____no_output_____
###Markdown
- Multiplication (`*`) by an integer repeats a list:
###Code
[0] * 4
[1, 2, 3] * 3
###Output
_____no_output_____
###Markdown
Exercise 4: Use addition and multiplication of the lists `x = [1]` and `y = [2]` to form the following list `z = [1, 1, 1, 2, 2, 1, 1, 1, 2, 2]`:
###Code
x = [1]
y = [2]
z = # Fill in here
z
###Output
_____no_output_____
###Markdown
Slices of lists: You can extract several elements from a list at once by making a "slice". The result of such an operation is a new list containing the indexed elements. Let us first define a list:
###Code
t = ['a', 'b', 'c', 'd', 'e', 'f']
###Output
_____no_output_____
###Markdown
We can ask for the elements with index $1$ to $3$:
###Code
t[1:3]
###Output
_____no_output_____
###Markdown
Note that, just as with the `range` function, the value at the end index is not included in the result. In Python, `a:b` (where $a$ and $b$ are integers) should be understood as from $a$ to $b$, including $a$ but excluding $b$. In mathematical notation we can write $[a,b[$ or $[a,b)$. We can also ask for all elements from the start up to index 4 with the following command:
###Code
t[:4]
###Output
_____no_output_____
###Markdown
Likewise, we can ask for all elements from index 3 to the end:
###Code
t[3:]
###Output
_____no_output_____
###Markdown
Finally, we can get all the elements out by just writing `:`:
###Code
t[:]
###Output
_____no_output_____
###Markdown
If we do not know the length of a list but want, say, the last 2 values, we can use negative indexing like this:
###Code
t[-2:]
###Output
_____no_output_____
###Markdown
Several elements of a list can be changed at once via a slice expression:
###Code
t = ['a', 'b', 'c', 'd', 'e', 'f']
t[1:3] = ['x', 'y']
t
###Output
_____no_output_____
###Markdown
Exercise 5: 1. Assume we have a dataset $x$ with some noise at the beginning and at the end. Make a slice of $x$ that contains the values from $x$ except for the first and last 3 elements.
###Code
x = [3.1, 2.4, 2.3, 2.5, 2.7, 2.4, 2.5, 2.8, 2.1, 2.7, 2.9]
# make the slice below
###Output
_____no_output_____
how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb | ###Markdown
Enabling App Insights for Services in Production. With this notebook, you can learn how to enable App Insights for standard service monitoring; in addition, we provide examples of doing custom logging within the scoring file of a model. What does Application Insights monitor? It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process? If you want to enable generic App Insights for a service, run:
```python
aks_service = Webservice(ws, "aks-w-dc2")
aks_service.update(enable_app_insights=True)
```
where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, you will follow the standard deployment process for AKS and you will: 1. Update the scoring file. 2. Update the AKS configuration. 3. Build a new image and deploy it. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register Model. Register an existing trained model, add a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements.* Here is an example: a. In your init function add:
```python
print("model initialized" + time.strftime("%H:%M:%S"))
```
b. In your run function add:
```python
print("Prediction created" + time.strftime("%H:%M:%S"))
```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create your new Image
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it:
```python
%%time
resource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'
create_name = 'myaks4'
attach_config = AksCompute.attach_configuration(resource_id=resource_id)
aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config)
# Wait for the operation to complete
aks_target.wait_for_provisioning(True)
```
a. *Activate App Insights through updating AKS Webservice configuration.* In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights: 1. Go to the [Azure Portal](https://portal.azure.com/). 2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type. 3. Click on the AppInsights resource. You'll see a high-level dashboard with information on requests, server response time and availability. 4. Click on the top banner "Analytics". 5. In the "Schema" section select "traces" and run your query. 6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
image.delete()
model.delete()
###Output
_____no_output_____
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
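###Markdown
(Optional) Instead of calling the service through the SDK, you can also exercise the REST endpoint directly. The cell below is a minimal sketch, assuming key-based authentication is enabled on the AKS service (the default) and that the `requests` package is available in the notebook environment; it simply re-sends the same payload used above.
###Code
import requests
# Retrieve the authentication keys for the AKS service (key auth is the AKS default).
primary_key, secondary_key = aks_service.get_keys()
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + primary_key}
# POST the same JSON payload that was passed to aks_service.run() above.
response = requests.post(aks_service.scoring_uri, data=test_sample, headers=headers)
print(response.status_code)
print(response.json())
###Output
_____no_output_____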
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
image.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
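###Markdown
Every call to `Model.register` under the same name creates a new version. As a quick sanity check, the sketch below lists the versions of this model currently registered in the workspace; it only prints metadata and changes nothing.
###Code
# List all registered versions of the model and their tags.
for m in Model.list(ws, name="sklearn_regression_model.pkl"):
    print(m.name, "version:", m.version, "tags:", m.tags)
###Output
_____no_output_____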
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
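###Markdown
(Optional) Before wrapping `score.py` in a web service, you can smoke-test it in the local Python session. This is only a sketch: it assumes `sklearn_regression_model.pkl` is in the current working directory (so `AZUREML_MODEL_DIR` can point at `.`) and that the local environment has compatible `numpy`/`scikit-learn` versions.
###Code
import os
import json
# Point AZUREML_MODEL_DIR at the current directory so init() can locate the .pkl file.
os.environ["AZUREML_MODEL_DIR"] = "."
import score  # the file written by the %%writefile cell above
score.init()
sample = json.dumps({"data": [[1, 28, 13, 45, 54, 6, 57, 8, 8, 10]]})
print(score.run(sample))
###Output
_____no_output_____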
###Markdown
5. *Create myenv.yml file*Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.20.3'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
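###Markdown
Once the ACI service is healthy, you can inspect its REST endpoint and the auto-generated Swagger document. This is just a convenience check; the Swagger URI may be `None` if no schema was provided.
###Code
# The scoring URI is the endpoint clients POST to; the Swagger URI describes the request schema.
print("Scoring URI:", aci_service.scoring_uri)
print("Swagger URI:", aci_service.swagger_uri)
###Output
_____no_output_____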
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.core.compute_target import ComputeTargetException
aks_name = "my-aks-insights"
creating_compute = False
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS compute target {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS compute target {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
creating_compute = True
%%time
if creating_compute and aks_target.provisioning_state != "Succeeded":
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS cluster provisioning failed. Error: ", aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
if creating_compute:
aks_target.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml")
###Output
_____no_output_____
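###Markdown
(Optional) For faster iteration on the scoring script, the same model and inference configuration can be deployed to a local Docker container before going to ACI or AKS. This is a sketch only and assumes Docker is installed and running on the machine executing the notebook; the service name and port are arbitrary.
###Code
from azureml.core.webservice import LocalWebservice
# Deploy to a local Docker container on an arbitrary free port (requires Docker).
local_config = LocalWebservice.deploy_configuration(port=8890)
local_service = Model.deploy(ws, "local-appinsights-test", [model], inference_config, local_config)
local_service.wait_for_deployment(show_output=True)
print(local_service.state)
# Remove the local container when you are done experimenting.
local_service.delete()
###Output
_____no_output_____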
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
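###Markdown
If a service does not reach the "Healthy" state, the container logs usually explain why (a missing package, an error in the scoring script, and so on). The sketch below fetches the ACI service logs; `get_logs()` works the same way for AKS services.
###Code
# Fetch the container logs for troubleshooting and print only the tail to keep output short.
logs = aci_service.get_logs()
print(logs[-2000:])
###Output
_____no_output_____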
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target = aks_target)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.20.3'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
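###Markdown
It can be useful to inspect the generated specification before deploying, to confirm that the pinned `scikit-learn` version and the `azureml-defaults` pip package made it into the file. The cell below only prints the string that was written to `myenv.yml`.
###Code
# Show the conda specification exactly as it was serialized to myenv.yml.
print(myenv.serialize_to_string())
###Output
_____no_output_____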
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
###Code
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.core.compute_target import ComputeTargetException
aks_name = "my-aks-insights"
creating_compute = False
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS compute target {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS compute target {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
creating_compute = True
%%time
if creating_compute and aks_target.provisioning_state != "Succeeded":
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
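###Markdown
The default provisioning configuration lets the service pick the cluster size for you. If you need more control, the configuration accepts parameters such as the VM size and node count. The values below are an example only, not something this notebook requires; whatever you choose must still meet the Azure Machine Learning minimum of 12 total vCPUs for a production cluster.
###Code
# Example only: a custom cluster of three Standard_D3_v2 nodes (3 x 4 = 12 vCPUs).
custom_prov_config = AksCompute.provisioning_configuration(vm_size="Standard_D3_v2",
                                                           agent_count=3)
# Pass custom_prov_config instead of prov_config to ComputeTarget.create() to use it.
###Output
_____no_output_____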
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS cluster provisioning failed. Error: ", aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
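###Markdown
Besides toggling App Insights, `update()` can also roll out a new model version or scoring configuration to the running AKS service without deleting and recreating it. The sketch below is illustrative; the `models` and `inference_config` parameters are available in recent versions of the SDK.
###Code
# Example: push an updated model and scoring configuration to the existing service.
aks_service.update(models=[model], inference_config=inference_config)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
###Output
_____no_output_____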
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
if creating_compute:
aks_target.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.20.3'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
from azureml.exceptions import ComputeTargetException
aks_name = "my-aks"
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS cluster {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS cluster {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
%%time
if aks_target.provisioning_state != "Succeeded":
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
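###Markdown
The clean-up above removes the services and the model but leaves the AKS cluster running, which continues to incur cost. If the cluster was created only for this notebook you can delete it as well; skip this step if the cluster is shared with other workloads.
###Code
# Only delete the compute target if it is not shared with other deployments.
aks_target.delete()
###Output
_____no_output_____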
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.20.3'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
from azureml.exceptions import ComputeTargetException
aks_name = "my-aks"
creating_compute = False
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS cluster {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS cluster {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
creating_compute = True
%%time
if creating_compute:
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
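###Markdown
Before deleting anything, it can help to see which services are currently deployed in the workspace so that you only remove the ones created by this notebook. The sketch below just lists service names and states.
###Code
from azureml.core.webservice import Webservice
# Enumerate every web service in the workspace with its current state.
for svc in Webservice.list(ws):
    print(svc.name, "-", svc.state)
###Output
_____no_output_____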
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring, and we provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit the [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, adding a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target = aks_target)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
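###Markdown
For AKS, the endpoint is key-authenticated by default. Here is a minimal sketch (reusing test_sample from the cell above and assuming the requests package is installed) of calling the scoring URI with one of the service keys.
###Code
import requests

# Retrieve the primary/secondary authentication keys for the AKS web service
# and pass the primary key as a bearer token.
primary_key, secondary_key = aks_service.get_keys()
headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer ' + primary_key}
response = requests.post(aks_service.scoring_uri, data=test_sample, headers=headers)
print(response.status_code)
print(response.text)
###Output
_____no_output_____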
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output = True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
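###Markdown
Optionally, you can confirm the registration by listing the versions stored under this model name in the workspace (a minimal sketch).
###Code
# List all registered versions of the model to verify the registration above.
for m in Model.list(ws, name="sklearn_regression_model.pkl"):
    print(m.name, m.version, m.tags)
###Output
_____no_output_____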
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml")
###Output
_____no_output_____
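###Markdown
The runtime/conda_file form of InferenceConfig used above is the older pattern; the Environment-based form used elsewhere in this notebook is the equivalent. A minimal sketch of the same configuration expressed with an Environment object:
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig

# Build an Environment from the same myenv.yml and attach it to the inference configuration.
env = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config_env = InferenceConfig(entry_script="score.py", environment=env)
###Output
_____no_output_____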
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target = aks_target)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output = True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
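###Markdown
Before deploying, the scoring script can be smoke-tested locally. The sketch below is illustrative only: it assumes sklearn_regression_model.pkl sits in the current working directory (as it does for the registration step) and that the local Python environment can import sklearn.externals.joblib.
###Code
import os
import json

# Point AZUREML_MODEL_DIR at the current directory so score.init() finds the local model file.
os.environ['AZUREML_MODEL_DIR'] = '.'
import score

score.init()
print(score.run(json.dumps({'data': [[1, 28, 13, 45, 54, 6, 57, 8, 8, 10]]})))
###Output
_____no_output_____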
###Markdown
5. *Create myenv.yml file*Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
from azureml.exceptions import ComputeTargetException
aks_name = "my-aks"
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS cluster {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS cluster {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
%%time
if aks_target.provisioning_state != "Succeeded":
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
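###Markdown
The custom print statements from the scoring script end up in App Insights traces, but they can also be read directly from the service's container logs, which is often the quickest check (a minimal sketch).
###Code
# Fetch and print the recent container logs for the deployed AKS web service.
print(aks_service.get_logs())
###Output
_____no_output_____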
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Build a new image and deploy it. 1. Import your dependencies
###Code
from azureml.core import Workspace, Run
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import Webservice, AksWebservice
from azureml.core.image import Image
from azureml.core.model import Model
import azureml.core
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspaceFollow Notebook 00 instructions to do this.
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("saving input data" + time.strftime("%H:%M:%S"))print ("saving prediction data" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
from azureml.monitoring import ModelDataCollector
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
global inputs_dc, prediction_dc
# this setup will help us save our inputs under the "inputs" path in our Azure Blob
inputs_dc = ModelDataCollector(model_name="sklearn_regression_model", identifier="inputs", feature_names=["feat1", "feat2"])
    # this setup will help us save our predictions under the "predictions" path in our Azure Blob
prediction_dc = ModelDataCollector("sklearn_regression_model", identifier="predictions", feature_names=["prediction1", "prediction2"])
# note you can pass in multiple rows for scoring
def run(raw_data):
global inputs_dc, prediction_dc
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
#Print statement for appinsights custom traces:
print ("saving input data" + time.strftime("%H:%M:%S"))
#this call is saving our input data into our blob
inputs_dc.collect(data)
#this call is saving our prediction data into our blob
prediction_dc.collect(result)
#Print statement for appinsights custom traces:
print ("saving prediction data" + time.strftime("%H:%M:%S"))
# you can return any data type as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create your new Image
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
###Output
_____no_output_____
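###Markdown
If the image build fails, the build log is the first place to look. A minimal sketch (assuming the image object above was created) that prints the build log location:
###Code
# The URI of the Docker build log for the container image created above.
print(image.image_build_log_uri)
###Output
_____no_output_____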
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so (Notebook 11)
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test2'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
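###Markdown
Since this notebook's scoring script also uses ModelDataCollector, you can optionally turn on model data collection in the same web service configuration. The sketch below defines an alternative configuration (aks_config_with_collection is an illustrative name) rather than replacing the one used in the next step.
###Code
# Alternative AKS configuration: App Insights plus model data collection to blob storage.
aks_config_with_collection = AksWebservice.deploy_configuration(enable_app_insights=True,
                                                                collect_model_data=True)
###Output
_____no_output_____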
###Markdown
b. Deploy your service
###Code
%%time
aks_service_name ='aks-w-dc3'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
import json
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
prediction = aks_service.run(input_data=test_sample)
print(prediction)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Build a new image and deploy it. 1. Import your dependencies
###Code
from azureml.core import Workspace, Run
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import Webservice, AksWebservice
from azureml.core.image import Image
from azureml.core.model import Model
import azureml.core
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create your new Image
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
import json
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service")
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed.")
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
import json
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service")
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
image.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Build a new image and deploy it. 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create your new Image
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service")
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state== "Succeeded":
aks_service_name ='aks-w-dc5'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed.")
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service")
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
image.delete()
model.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to the standard production deployment process?If you want to enable generic App Insights for a service, run:```pythonaks_service= Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics". If you want to log custom traces, follow the standard deployment process for AKS and:1. Update the scoring file.2. Update the AKS configuration.3. Deploy the model with this new configuration. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png) 1. Import your dependencies
###Code
import azureml.core
import json
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model and add a description and tags.
###Code
from azureml.core import Model
model = Model.register(model_path="sklearn_regression_model.pkl", # This points to a local file.
model_name="sklearn_regression_model.pkl", # This is the name the model is registered as.
tags={'area': "diabetes", 'type': "regression"},
description="Ridge regression model to predict diabetes",
workspace=ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import os
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy==1.19.5','scikit-learn==0.22.1'],
pip_packages=['azureml-defaults'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create Inference Configuration
###Code
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
###Output
_____no_output_____
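###Markdown
Optionally, the environment can be registered in the workspace so later deployments reuse the same dependency set instead of rebuilding it from the YAML file (a minimal sketch).
###Code
# Register the environment under the workspace; subsequent deployments can fetch it by name.
registered_env = myenv.register(workspace=ws)
print(registered_env.name, registered_env.version)
###Output
_____no_output_____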
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'area': "diabetes", 'type': "regression"},
description="Predict diabetes using regression model",
enable_app_insights=True)
aci_service_name = "aci-service-appinsights"
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.state)
if aci_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aci_service.run(test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
###Code
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.core.compute_target import ComputeTargetException
aks_name = "my-aks-insights"
creating_compute = False
try:
aks_target = ComputeTarget(ws, aks_name)
print("Using existing AKS compute target {}.".format(aks_name))
except ComputeTargetException:
print("Creating a new AKS compute target {}.".format(aks_name))
# Use the default configuration (can also provide parameters to customize).
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
name=aks_name,
provisioning_configuration=prov_config)
creating_compute = True
%%time
if creating_compute and aks_target.provisioning_state != "Succeeded":
aks_target.wait_for_completion(show_output=True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python%%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
# Set the web service configuration.
aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
aks_service_name = "aks-service-appinsights"
aks_service = Model.deploy(ws,
aks_service_name,
[model],
inference_config,
aks_deployment_config,
deployment_target=aks_target,
overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
else:
raise ValueError("AKS cluster provisioning failed. Error: ", aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
if aks_service.state == "Healthy":
test_sample = json.dumps({
"data": [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]
})
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
aks_service.wait_for_deployment(show_output=True)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
model.delete()
if creating_compute:
aks_target.delete()
###Output
_____no_output_____
###Markdown
Enabling App Insights for Services in ProductionWith this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within the scoring file of a model. What does Application Insights monitor?It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview) What is different compared to standard production deployment process?If you want to enable generic App Insights for a service run:```pythonaks_service = Webservice(ws, "aks-w-dc2")aks_service.update(enable_app_insights=True)```Where "aks-w-dc2" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select "Enable AppInsights diagnostics"If you want to log custom traces, you will follow the standard deployment process for AKS and you will:1. Update scoring file.2. Update aks configuration.3. Build new image and deploy it. 1. Import your dependencies
###Code
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import AksWebservice
import azureml.core
import json
print(azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
2. Set up your configuration and create a workspace
###Code
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
###Output
_____no_output_____
###Markdown
3. Register ModelRegister an existing trained model, add description and tags.
###Code
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
4. *Update your scoring file with custom print statements*Here is an example: a. In your init function add:```pythonprint ("model initialized" + time.strftime("%H:%M:%S"))``` b. In your run function add:```pythonprint ("Prediction created" + time.strftime("%H:%M:%S"))```
###Code
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
import time
def init():
global model
#Print statement for appinsights custom traces:
print ("model initialized" + time.strftime("%H:%M:%S"))
# note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
print ("Prediction created" + time.strftime("%H:%M:%S"))
# you can return any datatype as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
print (error + time.strftime("%H:%M:%S"))
return error
###Output
_____no_output_____
###Markdown
5. *Create myenv.yml file*
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
6. Create your new Image
###Code
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
###Output
_____no_output_____
###Markdown
Deploy to ACI (Optional)
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "diabetes", 'type': "regression"},
description = 'Predict diabetes using regression model',
enable_app_insights = True)
from azureml.core.webservice import Webservice
aci_service_name = 'my-aci-service-4'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aci_service.state == "Healthy":
prediction = aci_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aci_service.error)
###Output
_____no_output_____
###Markdown
7. Deploy to AKS service Create AKS compute if you haven't done so.
###Code
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-test3'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
###Output
_____no_output_____
###Markdown
If you already have a cluster you can attach the service to it: ```python %%timeresource_id = '/subscriptions//resourcegroups//providers/Microsoft.ContainerService/managedClusters/'create_name= 'myaks4'attach_config = AksCompute.attach_configuration(resource_id=resource_id)aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) Wait for the operation to completeaks_target.wait_for_provisioning(True)``` a. *Activate App Insights through updating AKS Webservice configuration*In order to enable App Insights in your service you will need to update your AKS configuration file:
###Code
#Set the web service configuration
aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
###Output
_____no_output_____
###Markdown
b. Deploy your service
###Code
if aks_target.provisioning_state == "Succeeded":
    aks_service_name = 'aks-w-dc5'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target
)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
else:
raise ValueError("AKS provisioning failed. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
8. Test your service
###Code
%%time
test_sample = json.dumps({'data': [
[1,28,13,45,54,6,57,8,8,10],
[101,9,8,37,6,45,4,3,2,41]
]})
test_sample = bytes(test_sample,encoding='utf8')
if aks_service.state == "Healthy":
prediction = aks_service.run(input_data=test_sample)
print(prediction)
else:
raise ValueError("Service deployment isn't healthy, can't call the service. Error: ", aks_service.error)
###Output
_____no_output_____
###Markdown
9. See your service telemetry in App Insights1. Go to the [Azure Portal](https://portal.azure.com/)2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type3. Click on the AppInsights resource. You'll see a high-level dashboard with information on Requests, Server response time and availability.4. Click on the top banner "Analytics"5. In the "Schema" section select "traces" and run your query.6. Voila! All your custom traces should be there. Disable App Insights
###Code
aks_service.update(enable_app_insights=False)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%%time
aks_service.delete()
aci_service.delete()
image.delete()
model.delete()
###Output
_____no_output_____ |
ea-2022-04-ndvi-automation.ipynb | ###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel ** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. ** DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting(HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables.* Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and , where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Mitch Thompson** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise **PSEUDOCODE*** Get sorted list of landsat tif files needed for NDVI for a single scene (bands 4-5)* Open and crop the bands to sites/site-name/vector/site-name-crop.shp* Restrict the Landsat 8 values to the 'valid range" of 0 to 10000* Stack (concat) the bands (optional for NDVI calc)* Open QA layer & crop * Generate cloud mask* Calculate mean NDVI * Generate DataFrame w/ mean NDVI* Grab site name and date from filename (e.g. file_name[0:4] for site_name)* Format date using DateTime * Add or rename columns* Index DF on the date* Output to csv
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import geopandas as gpd
import rioxarray as rxr
import xarray as xr
import earthpy as et
import warnings
from glob import glob
from matplotlib.dates import DateFormatter
sns.set(font_scale=1.5, style='whitegrid', context='notebook')
et.data.get_data('ndvi-automation')
data_path = os.path.join(et.io.HOME, 'earth-analytics', 'data')
if os.path.exists(data_path):
os.chdir(data_path)
else:
os.makedirs(data_path)
print('The new directory is created!')
os.chdir(data_path)
print('Current working directory is set to: ', os.getcwd())
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
def open_clean_bands(band_path, valid_range=None):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
band_path : string
A path to the array to be opened
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
arr : xarray DataArray
An xarray DataArray with values that should be masked set to 1 for True (Boolean)
"""
band = rxr.open_rasterio(band_path, masked=True).squeeze()
if valid_range:
mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~xr.where(mask, True, False))
return band
def mask_crop_ndvi(all_bands, crop_bound, pixel_qa, vals):
"""Compute normalized difference vegetation index (NDVI) from given landsat bands. Crop the NDVI layer and the pixel qa layer to the boundary as specified by a given crop_bound file.
Parameters
-----------
all_bands : list
A list containing the xarray objects for landsat bands 4 and 5
crop_bound: geopandas GeoDataFrame
A geopandas dataframe to be used to crop the raster data using rasterio mask().
pixel_qa: xarray DataArray
An xarray DataArray with pixel qa values that have not yet been turned into a mask (0s and 1s)
vals: list
A list of values needed to create the cloud mask
Returns
-----------
ndvi_crop : Xarray Dataset
A cropped and masked xarray object containing NDVI values
"""
crop_json = crop_bound.geometry
# Clip pixel qa cloud mask layer
cl_mask_crop = pixel_qa.rio.clip(crop_json)
# Calculate NDVI
ndvi_xr = (all_bands[1]-all_bands[0]) / (all_bands[1]+all_bands[0])
# Clip NDVI layer
ndvi_crop = ndvi_xr.rio.clip(crop_json)
# Apply cloud mask to NDVI
ndvi_crop = ndvi_crop.where(~cl_mask_crop.isin(vals))
return ndvi_crop
# Set base file path for single scene data
harv_data_path = os.path.join('ndvi-automation',
'sites',
'HARV',
'landsat-crop',
'LC080130302017031701T1-SC20181023151837')
# Get sorted list of landsat tif files needed for NDVI for a single scene
harv_band_path = sorted(glob(os.path.join(harv_data_path, '*band*[4-5].tif')))
# Generate list of bands for NDVI calculation
all_bands_harv = []
# Function call in loop
for aband in harv_band_path:
cleaned_band = open_clean_bands(band_path=aband, valid_range=(0, 10000))
all_bands_harv.append(cleaned_band)
date = aband[50:58]
# Set variable for site directories
sites_path = glob(os.path.join('ndvi-automation', 'sites' + '/*/'))
# Get site name from directory path
vector_dir_harv = os.path.join(sites_path[1], 'vector')
site_name = os.path.basename(os.path.normpath(sites_path[1]))
# Format date from filename
site_date = pd.to_datetime(date, format='%Y%m%d')
# Open crop boundary
site_boundary_path = os.path.join(vector_dir_harv, site_name + '-crop.shp')
crop_bound = gpd.read_file(site_boundary_path)
# Set path to cloud mask layer
harv_qa_path = glob(os.path.join(harv_data_path, '*qa*'))
# Open the cloud mask layer
qa_layer = rxr.open_rasterio(harv_qa_path[0], masked=True).squeeze()
# List of Landsat 8 cloud no vals
vals = [328, 392, 840, 904, 1350, 352, 368, 416,
432, 480, 864, 880, 928, 944, 992, 480, 992]
# Function call
ndvi_clean = mask_crop_ndvi(all_bands=all_bands_harv,
crop_bound=crop_bound,
pixel_qa=qa_layer,
vals=vals)
# Compute the arithmetic mean, ignoring NaNs
mean_ndvi = np.nanmean(ndvi_clean)
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# Dict for df
ndvi_dict = {'site': site_name, 'mean_ndvi': mean_ndvi, 'date': [site_date]}
ndvi_mean_df = pd.DataFrame.from_dict(ndvi_dict)
ndvi_mean_df.set_index('date')
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# Supress warnings
warnings.filterwarnings(action='ignore', message='Mean of empty slice')
all_data = []
# Loop through file paths
for site_path in sites_path:
site_name = os.path.basename(os.path.normpath(site_path))
vector_dir = os.path.join(site_path, 'vector')
site_boundary_path = os.path.join(vector_dir, site_name + '-crop.shp')
crop_bound = gpd.read_file(site_boundary_path)
landsat_dir = os.path.join(site_path, 'landsat-crop')
all_scenes = sorted(glob(os.path.join(landsat_dir, 'LC08*')))
# Loop through files
for scene in all_scenes:
band_paths = sorted(glob(os.path.join(scene,
'*band*[4-5].tif')))
all_bands = []
# Function call in loop
for band in band_paths:
cleaned_band = open_clean_bands(band_path=band,
valid_range=(0, 10000))
all_bands.append(cleaned_band)
qa_path = glob(os.path.join(scene, '*pixel_qa*'))
qa_layer = rxr.open_rasterio(qa_path[0], masked=True).squeeze()
# Function call
ndvi_clean = mask_crop_ndvi(all_bands=all_bands,
crop_bound=crop_bound,
pixel_qa=qa_layer,
vals=vals)
# Compute the arithmetic mean, ignoring NaNs
ndvi_mean = np.nanmean(ndvi_clean)
# Grab date from filename convention
date = os.path.basename(os.path.normpath(band_paths[0]))[17:25]
site_data = [date, site_name, ndvi_mean]
all_data.append(site_data)
ndvi_mean_df = pd.DataFrame(data=all_data,
columns=['date', 'site', 'mean_ndvi'])
ndvi_mean_df['date'] = pd.to_datetime(ndvi_mean_df['date'])
ndvi_mean_df.set_index('date', inplace=True)
ndvi_mean_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points += 2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points += 2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points += 3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points += 3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# Plot mean NDVI for both sites across the year.
ndvi_mean_df.dropna(subset=['mean_ndvi'], inplace=True)
colors = {'HARV': 'purple',
'SJER': 'black'}
fig, ax = plt.subplots(figsize=(12, 8))
for site, group in ndvi_mean_df.groupby('site'):
ax.plot(group.index,
group.mean_ndvi,
marker='o',
color=colors[site],
label=site)
date_form = DateFormatter('%b')
ax.xaxis.set_major_formatter(date_form)
fig.suptitle('Mean NDVI, HARV and SJER Field Sites', x=.52, y=.95)
ax.set(title=' Landsat 8, Jan 2017 - Dec 2017',
xlabel='Month',
ylabel='Mean NDVI')
ax.legend()
fig.tight_layout()
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. The Normalized Difference Vegetation Index measures the levels of chlorophyll in vegetation, ranging from -1 to +1. The higher the measurement, the healthier and denser the vegetation likely is. Arranging flights over the HARV site would be best timed in the months of May through October, according to the 2017 values. Similarly, the months of March and April would be best for the SJER field site per the 2017 values. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. Monitoring vegetative changes over time in each site would require an increased persistence of data over time instead of the single year plotted above. Initial modifications to the workflow would include these longer time series datasets. Building on that longer time series, the workflow could then compare the timing of seasonal changes across years, under the hypothesis that seasonal triggers should not vary by more than 30-60 days. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value, you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
csvfile = os.path.join(data_path,
                       'ndvi-automation',
                       'outputs',
                       'harv_sjer_meanNDVI_clean.csv')
# Create the outputs directory if it does not already exist
os.makedirs(os.path.dirname(csvfile), exist_ok=True)
ndvi_mean_df.to_csv(csvfile)
print('************Complete*************')
###Output
************Complete*************
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Heidi Yoon** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Workflow for this notebook**Steps to process one scene of Landsat data:**1. Make a list of all the bands for one scene.2. Open and clean data for valid values for that scene. 3. Calculate the NDVI, mask for clouds, and calculate the mean NDVI for that scene.**Steps to process all scenes for one site of Landsat data:**1. Make a list of all scenes (dates) for one site.2. For each scene, open and clean data for valid values, calculate the NDVI, mask for clouds, and calculate the mean NDVI.3. Store the mean NDVI, date of the scene, and site name for each scene in a pandas dataframe.**Steps to process multiple sites of Landsat data:**1. Make a list of all the sites.2. For each site, get the data and clean for valid values for each scene, calculate the NDVI, mask for clouds, and calculate the mean NDVI for each scene.3. Store the mean NDVI, date, and site name for each scene in a pandas dataframe.4. Export the dataframe with mean NDVI to csv.
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import numpy as np
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import xarray as xr
import rioxarray as rxr
import earthpy as et
import earthpy.mask as em
import pyproj
from matplotlib.dates import DateFormatter
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# Download data and set working directory
et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME, "earth-analytics", "data"))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Data Source and Areas of Interest* In this notebook, we analyze Landsat 8 imagery and vector files for two field sites in the National Ecological Observatory Network (NEON). The first field site is the Harvard Forest and Quabbin Watershed (HARV), which is located approximately 65 miles west of Boston, Massachusetts. The second field site is the San Joaquin Experimental Range (SJER) located approximately 25 miles north of Fresno, California.* These data are available online as part of the EarthPy Data Subset. Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
def open_clean_band(band_path, clip_extent, valid_range=None):
"""A function that opens a Landsat band as an (rio)xarray object
Parameters
----------
band_path : str
A list of paths to the tif file
clip_extent : geopandas geodataframe
A geodataframe containing the clip extent of interest. NOTE:
this will fail if the clip extent is in a different CRS than the
raster data.
valid_range : tuple (optional)
The min and max valid range for the data. All pixels with values
outside of this range will be masked.
Returns
-------
An single xarray object with the Landsat band data.
"""
# Clip a band of landsat data to the area of interest
band = rxr.open_rasterio(band_path, masked=True).rio.clip(
clip_extent.geometry, from_disk=True).squeeze()
# Mask values outside of a valid range
if valid_range:
mask = ((band <= valid_range[0]) | (band > valid_range[1]))
cleaned_band = band.where(~mask, np.nan)
return cleaned_band
def masked_ndvi(all_bands, clip_extent, pixel_qa_path, vals):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
all_bands : list
A list containing two xarray objects for landsat bands 4 and 5
clip_extent: geopandas GeoDataFrame
A geodataframe containing the clip extent of interest. NOTE:
this will fail if the clip extent is in a different CRS than the
raster data.
pixel_qa_path: str
A path to a pixel qa tif file.
vals: list
A list of values needed to create the cloud mask
Returns
-----------
ndvi_crop : Xarray Dataset
a cropped and masked xarray object containing NDVI values
"""
# Open and clip landsat qa layer
pixel_qa = rxr.open_rasterio(
pixel_qa_path[0], masked=True).rio.clip(
clip_extent.geometry, from_disk=True).squeeze()
# Calculate NDVI
ndvi_xr = (all_bands[1]-all_bands[0]) / (all_bands[1]+all_bands[0])
# Apply cloud mask to NDVI
ndvi_mask = ndvi_xr.where(~pixel_qa.isin(vals))
return ndvi_mask
def open_site_vector(site_path):
"""A function that opens a shapefile for a site location.
Parameters
----------
site_path: str
A list of paths to the site directory.
Returns
-------
crop_bound: geopandas DataFrame
A geodataframe of the crop boundary for the site location.
"""
vector_dir = os.path.join(site_path, "vector")
site_name = os.path.basename(os.path.normpath(site_path))
site_boundary_path = os.path.join(
vector_dir, site_name + "-crop.shp")
# Open the crop boundary as a geodataframe
crop_bound = gpd.read_file(site_boundary_path)
return crop_bound
def mean_ndvi_df(folder_name):
"""Calculate mean NDVI for a landsat data directory.
Parameters
----------
folder_name: str
A list of paths to the landsat site folder.
Returns
-------
ndvi_df: pandas DataFrame
A DataFrame containing the mean NDVI, date of when the data was
measured, and the site name.
"""
# Make a list of all the dates in the directory
all_dates = glob(os.path.join(folder_name, "landsat-crop", "*"))
# Open the crop boundary for the site location
crop_bound = open_site_vector(folder_name)
# Initialize lists for mean NDVI, date, and site name
all_ndvi = []
dates = []
site = []
column_names = ["mean_ndvi", "site", "date"]
# For all the dates, list all of the band paths and qa path
for adate in all_dates:
band_paths = sorted(
glob(os.path.join(adate, "*band*[4-5].tif")))
landsat_qa_path = glob(os.path.join(adate, "*qa*"))
# Store the date and file name for each date.
dates.append(adate[-29:-21])
site.append(adate[22:26])
# Initialize list for bands 4 and 5 xarrays
all_bands = []
# For all the band paths, open and clean bands 4 and 5
for aband in band_paths:
band = open_clean_band(aband, crop_bound, (0, 10000))
all_bands.append(band)
# Calculate NDVI and mask for clouds, then calculate mean NDVI
avg_ndvi = masked_ndvi(
all_bands, crop_bound, landsat_qa_path, cloud_values).mean()
all_ndvi.append(avg_ndvi)
# Create a dataframe to store mean NDVI, date, site name
df = pd.DataFrame(columns=column_names)
df["mean_ndvi"] = xr.concat(all_ndvi, dim="array").to_series()
df["site"] = site
df["date"] = dates
df["date"] = pd.to_datetime(df["date"])
ndvi_df = df.set_index("date")
return ndvi_df
def ndvi_all_sites(path_name):
"""Calculate mean NDVI for all sites in a Landsat data directory
Parameters
----------
path_name: str
A list of paths to the Landsat data directory for all sites.
Returns
-------
ndvi_allsites: pandas dataframe
A dataframe containing the mean NDVI, site name, and date of
Landsat measurement.
"""
# Make a list of all the sites
all_sites = glob(os.path.join(path_name, "*"))
# Initialize the list of NDVI dataframes
ndvi_ls = []
for asite in all_sites:
# Calculate the mean NDVI for each site
ndvi_df = mean_ndvi_df(asite)
ndvi_ls.append(ndvi_df)
ndvi_allsites = pd.concat(ndvi_ls, axis=0)
return ndvi_allsites
###Output
_____no_output_____
###Markdown
How we process all of the Landsat scenes for the HARV site* We process all of the scenes for the HARV site in the cell below by using the function mean_ndvi_df(). First, we make a list of all of the scenes. Then for each scene, we process the bands using the function open_clean_band(), and we calculate the NDVI and mask for clouds using the function masked_ndvi(). We also use the function open_site_vector() to open the crop boundary for the HARV site. Once we have calculated the NDVI for a scene, we calculate the mean NDVI and store it with the date of the scene and site name in a pandas dataframe. The final dataframe is returned by the mean_ndvi_df() function.* In order to make the code run faster, we made some choices in our functions to optimize for speed. In the functions, open_clean_band() and masked_ndvi, we pipe the rioxarray commands and read from_disk. In the function masked_ndvi(), we also chose to apply the cloud mask at the end of the NDVI calculation.* We made the code more concise by using functions and using loops to open and clean bands 4 and 5 for all of the scenes and to mask and calculate NDVI for all of the scenes.
###Code
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# Define cloud mask values
high_cloud_confidence = (
em.pixel_flags['pixel_qa']['L8']['High Cloud Confidence'])
cloud = em.pixel_flags['pixel_qa']['L8']['Cloud']
cloud_shadow = em.pixel_flags['pixel_qa']['L8']['Cloud Shadow']
cloud_values = high_cloud_confidence + cloud + cloud_shadow
# Create dataframe of mean NDVI for the HARV site
ndvi_harv = mean_ndvi_df("ndvi-automation/sites/HARV")
# Remove NaN values
ndvi_harv_clean = ndvi_harv.dropna(how='any')
ndvi_harv_clean
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`). How we process the Landsat scenes for all sites* We process all of the scenes for all sites in the cell below by using the function ndvi_all_sites(). First, we make a list of all the sites. Then for each site, we calculate the mean NDVI using the function mean_ndvi_df. We concatenate the list of mean NDVI dataframes for all the sites and return the final dataframe which stores the mean NDVI, date of each scene, and site name.* In order to make the code run faster, we use the global pyproj context.* We made the code more concise by using functions and using loops to calculate NDVI for all of the scenes in each site. We also used the function open_site_vector() to open the corresponding vector shapefile for each site.
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# Set pyproj settings
pyproj.set_use_global_context(True)
# Calculate the mean NDVI for all sites
ndvi_harv_sjer = ndvi_all_sites("ndvi-automation/sites")
# Export mean NDVI dataframe to csv (create the outputs folder if needed)
out_path = os.path.join("ndvi-automation", "outputs")
os.makedirs(out_path, exist_ok=True)
ndvi_harv_sjer.to_csv(os.path.join(out_path, "ndvi_harv_sjer.csv"))
ndvi_harv_sjer
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
###Output
✅ Your data is stored in a DataFrame!
✅ Correct number of masked data values!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
Your total run time for processing the data was 0:00:57.273169.
➡ You received 10 out of 10 points for creating a dataframe.
###Markdown
Figure: Mean NDVI for two NEON sites in 2017
###Code
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# Remove NaN values
ndvi_clean = ndvi_harv_sjer.dropna(how='any')
# Define plot space
fig, ax = plt.subplots(figsize=(10, 10))
# Set plot variables
for site, site_df in ndvi_clean.groupby(["site"]):
if site == 'HARV':
color = "purple"
else:
color = "blue"
ax.plot(site_df.index,
site_df.mean_ndvi,
marker="o",
color=color,
label=site)
# Set titles and axes labels
ax.xaxis.set_major_formatter(DateFormatter("%b"))
ax.set(
title="Mean Normalized Difference Vegetation Index from 2017 (Landsat 8)",
xlabel="Month",
ylabel="Mean NDVI")
# Set legend
ax.legend(loc="upper left")
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Svetlana Kurakina** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Pseudocode for this workflow* Import libraries necessary to complete the workflow* Set home directory* Set paths to data* Specify cloud values using earthpy (need to do once)* Read study sites boundaries (need to do once for each site)* Read landsat bands and qa layers* Restrict landsat data to "valid range" of 0 to 10000* Clip extent of bands to match study sites boundaries* Clip extent of qa layers to match study sites boundaries* Calculate NDVI for each pair of red/infrared bands for each landsat scene* Apply cloud mask to resulting NDVI* Calculate Mean NDVI and append those to a list* Get site names from paths and append those to a list* Get scene dates from paths and append those to a list* Create dataframe and populate with lists of site names, dates, mean NDVIs* Set dataframe index to datetime* Drop NA data* Plot data* Export data to CSV
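To make the bulleted pseudocode above concrete, here is a minimal sketch of the site/scene loop it describes, assuming the `ndvi-automation/sites/<SITE>/landsat-crop/<SCENE>` layout used in this assignment. The `None` placeholder stands in for a per-scene helper such as the `mean_ndvi()` function defined later in this notebook; this is an illustration only, not the graded implementation.

```
import os
from glob import glob

import pandas as pd

rows = []
sites_path = os.path.join("ndvi-automation", "sites")
for site_dir in sorted(glob(os.path.join(sites_path, "*", ""))):
    site = os.path.basename(os.path.normpath(site_dir))
    for scene_dir in sorted(glob(os.path.join(site_dir, "landsat-crop", "*"))):
        scene = os.path.basename(os.path.normpath(scene_dir))
        date = scene[10:18]  # YYYYMMDD portion of the scene name
        # A helper (e.g. mean_ndvi(scene_dir, crop_bound)) would go here
        rows.append([site, date, None])

ndvi_sketch_df = pd.DataFrame(rows, columns=["site", "date", "mean_ndvi"])
ndvi_sketch_df["date"] = pd.to_datetime(ndvi_sketch_df["date"])
ndvi_sketch_df = ndvi_sketch_df.set_index("date")
```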
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
# YOUR CODE HERE
import os
from glob import glob
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
import numpy as np
import pandas as pd
import geopandas as gpd
import xarray as xr
import rioxarray as rxr
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.mask as em
import earthpy.spatial as es
import earthpy.plot as ep
# Download data and set working directory
data = et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME,
'earth-analytics',
'data'))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below MEAN NDVI for one HARV landsat scene without functions (below)* I want to be sure I understand all the transformations and calculations involved in the process* To achieve that, I start from processing all data for one HARV landsat scene without using any functions* I check my resulting dataframe using autograder below to make sure I am getting correct result* After that, I adjust my pseudocode and start working on functions* Once I get my functions I will use them to calculate df for single HARV landsat scene again* Once I am sure my functions work as intended, I will comment out code cell below, but I leave it for future reference
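Before walking through the real HARV scene below, the two core operations can be shown on tiny synthetic arrays: masking values outside the valid Landsat range of 0 to 10000, and computing NDVI as (NIR - Red) / (NIR + Red). This is purely illustrative; the actual bands are opened with rioxarray in the next cell.

```
import numpy as np
import xarray as xr

# Tiny stand-ins for Landsat band 4 (red) and band 5 (NIR) reflectance
red = xr.DataArray(np.array([[500., 1200., -9999.], [800., 20000., 1500.]]))
nir = xr.DataArray(np.array([[2500., 3000., 2800.], [3200., 3100., 40.]]))

# Keep only values inside the valid range of 0 to 10000
valid_range = (0, 10000)
red = red.where((red >= valid_range[0]) & (red <= valid_range[1]))
nir = nir.where((nir >= valid_range[0]) & (nir <= valid_range[1]))

# NDVI = (NIR - Red) / (NIR + Red); NaN cells are skipped by .mean()
ndvi = (nir - red) / (nir + red)
print(float(ndvi.mean()))
```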
###Code
# # Calculation Mean NDVI for one HARV landsat scene without functions
# # Set paths
# path_harv = os.path.join('ndvi-automation', 'sites', 'HARV')
# path_harv_scene = os.path.join(path_harv,'landsat-crop',
# 'LC080130302017031701T1-SC20181023151837')
# path_harv_bands = sorted(glob(os.path.join(path_harv_scene,
# '*band*[4-5].tif')))
# path_harv_crop_bound = os.path.join(path_harv, 'vector', 'HARV-crop.shp')
# path_harv_qa = os.path.join(
# path_harv_scene,
# 'LC08_L1TP_013030_20170317_20170328_01_T1_pixel_qa.tif'
# )
# # Open crop boundary
# harv_crop_bound = gpd.read_file(path_harv_crop_bound)
# # Open band 4 using rioxarray
# harv_band_4 = rxr.open_rasterio(path_harv_bands[0], masked=True).squeeze()
# #harv_band_4.plot()
# # Clip band 4
# harv_band_4_crop = harv_band_4.rio.clip(harv_crop_bound.geometry)
# # Specify the valid range
# valid_range = (0, 10000)
# if valid_range:
# mask = ((
# harv_band_4_crop < valid_range[0]) | (
# harv_band_4_crop > valid_range[1]))
# harv_band_4_crop = harv_band_4_crop.where(~xr.where(mask, True, False))
#harv_band_4_crop.plot()
# # Open band 5 using rioxarray
# harv_band_5 = rxr.open_rasterio(path_harv_bands[1], masked=True).squeeze()
# # Clip band 5
# harv_band_5_crop = harv_band_5.rio.clip(harv_crop_bound.geometry)
# # Specify the valid range for band 5
# if valid_range:
# mask = ((
# harv_band_5_crop < valid_range[0]) | (
# harv_band_5_crop > valid_range[1]))
# harv_band_5_crop = harv_band_5_crop.where(~xr.where(mask, True, False))
# #harv_band_5_crop.plot()
# # Calculate NDVI (since I used cropped bands it is already cropped)
# harv_ndvi = (harv_band_5_crop-harv_band_4_crop)/(
# harv_band_5_crop+harv_band_4_crop)
# # Grab cloud values using earthpy
# high_cloud_confidence = (
# em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
# cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
# cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
# all_masked_values = cloud_shadow + cloud + high_cloud_confidence
# # Open and crop harv qa layer
# harv_qa = rxr.open_rasterio(path_harv_qa).squeeze()
# harv_qa_cropped = harv_qa.rio.clip(harv_crop_bound.geometry)
# Create and apply cloud mask to NDVI
# harv_ndvi_cl_free = harv_ndvi.where(
# ~harv_qa_cropped.isin(all_masked_values))
#harv_ndvi_cl_free.plot()
# # Calculate mean NDVI
# print('My mean NDVI for HARV site is:', harv_ndvi_cl_free.mean().values)
# #Get site name from path
# harv_sitename = [os.path.basename(os.path.normpath(path_harv))]
# # Get date
# harv_date = [os.path.basename(os.path.normpath(path_harv_scene))[10:18]]
# # Get NDVI value
# harv_mean = [harv_ndvi_cl_free.mean().values]
# # Construct dataframe
# harv_df = pd.DataFrame(columns=["site","date","mean_ndvi"])
# harv_df['site'] = harv_sitename
# harv_df['date'] = harv_date
# harv_df['mean_ndvi'] = harv_mean
# harv_df['date'] = pd.to_datetime(harv_df['date'])
# harv_df.set_index('date', inplace=True)
# harv_df
###Output
_____no_output_____
###Markdown
Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
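The target of Task 1 is a one-row DataFrame with a datetime index called `date` and columns `site` and `mean_ndvi`. A toy example of that shape, with a made-up NDVI value, can help when reading the sanity checks further down:

```
import pandas as pd

toy_df = pd.DataFrame({"site": ["HARV"],
                       "date": ["20170317"],
                       "mean_ndvi": [0.5]})  # made-up value
toy_df["date"] = pd.to_datetime(toy_df["date"])
toy_df = toy_df.set_index("date")
print(toy_df)
print(type(toy_df.index))  # DatetimeIndex, as the tests below expect
```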
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# YOUR CODE HERE
# Define function which opens, crops, and cleans landsat bands
def open_clean_bands(band_path,
crop_extent,
valid_range=None,):
"""Open, crop and define valid range of values for single ladsat band.
Parameters
-----------
band_path : string
A path to the array to be opened
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
arr : xarray DataArray
An xarray DataArray with values in a valid range.
"""
# Open and crop bands
band = rxr.open_rasterio(
band_path, masked=True).rio.clip(
crop_extent.geometry, from_disk=True).squeeze()
# Only run this step if a valid range tuple is provided
if valid_range:
mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~xr.where(mask, True, False))
return band
# Define function that calculates NDVI and masks clouds using cropped qa layer
def mask_crop_ndvi(all_bands,
crop_bound,
pixel_qa_path,
vals):
"""Takes 2 landsat bands, calc. NDVI, masks clouds, gets NDVI mean value.
Parameters
-----------
all_bands : list
A list containing two xarray objects for landsat bands 4 and 5
crop_bound: geopandas GeoDataFrame
A geopandas dataframe to be used to crop
a pixel qa tif using rasterio mask().
pixel_qa_path: string
A path to a pixel qa tif file.
vals: list
A list of values needed to create the cloud mask.
Returns
-----------
ndvi_mean : array
an array object containing mean NDVI value
"""
crop_json = crop_bound.geometry
# Open and clip qa layer
pixel_qa = rxr.open_rasterio(
pixel_qa_path[0], masked=True).rio.clip(
crop_json, from_disk=True).squeeze()
# Calculate NDVI
ndvi_xr = (all_bands[1]-all_bands[0]) / (all_bands[1]+all_bands[0])
# Apply cloud mask to NDVI
ndvi_mask = ndvi_xr.where(~pixel_qa.isin(vals))
# Calculate mean NDVI
ndvi_mean = ndvi_mask.mean().values
return ndvi_mean
# Define function that constructs df with Mean NDVI value for a landsat scene
# Might not be that useful if the workflow needs to be scaled up;
# it served as an exploratory exercise for the author of this notebook
def construct_df(site_name,
scene_path,
ndvi_mean):
"""Contructs df indexed on date with NDVI value for a landsat scene.
Parameters
-----------
site_name : string
A string with the name of the study site
scene_path: string
A string with the path to landsat scene directory.
ndvi_mean: array
An array with NDVI mean value.
Returns
-----------
ndvi_mean_df : dataframe
a pandas Data Frame with date, site name and mean NDVI value
"""
#Prep site name
sitename = [site_name]
# Get date
date = [os.path.basename(scene_path)[10:18]]
# Get NDVI value
mean = [ndvi_mean]
# Construct dataframe
    ndvi_mean_df = pd.DataFrame(columns=["site", "date", "mean_ndvi"])
ndvi_mean_df['site'] = sitename
ndvi_mean_df['date'] = date
ndvi_mean_df['mean_ndvi'] = mean
ndvi_mean_df['date'] = pd.to_datetime(ndvi_mean_df['date'])
ndvi_mean_df.set_index('date', inplace=True)
return ndvi_mean_df
# Define function that uses open_clean_bands and mask_crop_ndvi
# Function is suitable to use with multiple landsat scenes
def mean_ndvi(scene_path,
crop_bound):
"""Opens landsat bands, crops and cleans data, calculates mean NDVI.
Parameters
-----------
scene_path : string
A string with the path of the landsat scene directory.
    crop_bound : geopandas GeoDataFrame
A geopandas dataframe to be used to crop
the raster data using rasterio mask().
Returns
-----------
ndvi : array
An array with NDVI mean value.
"""
# Construct paths
band_paths = glob(os.path.join(scene_path, '*band[4-5].tif'))
qa_paths = glob(os.path.join(scene_path, '*qa.tif'))
# Run open_clean_bands function
bands = []
for aband in band_paths:
clean_band = open_clean_bands(aband, crop_bound, (0, 10000))
bands.append(clean_band)
# Grab cloud values using earthpy
high_cloud_confidence = (
em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
# Run mask_crop_ndvi function
ndvi = mask_crop_ndvi(bands, crop_bound, qa_paths, all_masked_values)
return ndvi
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# YOUR CODE HERE
# Get a list of each directory
path = os.path.join("ndvi-automation", "sites")
# print (path)
# Get a list of both site directories
# Added sorted to make sure sites come in the same order on all machines
sites = sorted(glob(path + "/*/"))
# print(sites)
# Get the HARV site name
site_name = os.path.basename(os.path.normpath(sites[0]))
# site_name
# Set path to directory with HARV site boundary
vector_dir = os.path.join(sites[0],
"vector")
# print(vector_dir)
# Set path to HARV site boundary
site_boundary_path = os.path.join(vector_dir, site_name + "-crop.shp")
# print(site_boundary_path)
# Open crop boundary
crop_bound = gpd.read_file(site_boundary_path)
# crop_bound.plot()
# Set path to umbrella directory with HARV landsat data
landsat_dir = os.path.join(sites[0],
"landsat-crop")
# print(landsat_dir)
# Set path to HARV landsat scenes
landsat_dirs = sorted(glob(os.path.join(landsat_dir, "LC08*")))
# print(landsat_dirs)
# Set path to scene LC080130302017031701T1-SC20181023151837
adir = landsat_dirs[4]
#print(adir)
# Set paths to bands 4-5 tifs
band_paths = sorted(glob(os.path.join(adir, "*band*[4-5].tif")))
# print(band_paths)
# Use function to open, crop and clean bands 4 and 5
all_bands = []
for aband in band_paths:
# print("Opening up", aband)
cleaned_band = open_clean_bands(band_path=aband,
crop_extent=crop_bound,
valid_range=(0, 10000))
all_bands.append(cleaned_band)
# Grab cloud values using earthpy
high_cloud_confidence = (
em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
# Set path to qa tif
pixel_qa_path = glob(os.path.join(adir, "*qa*"))
# print(pixel_qa_path)
# Use function to calculate NDVI, and mask clouds with cropped qa layer
ndvi_harv_mean = mask_crop_ndvi(all_bands=all_bands,
crop_bound=crop_bound,
pixel_qa_path=pixel_qa_path,
vals=all_masked_values)
# Use function to get dataframe
single_scene_df = construct_df(site_name=site_name,
scene_path=adir,
ndvi_mean=ndvi_harv_mean)
single_scene_df
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2: In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene:

| | index | site | mean_ndvi |
|---|---|---|---|
| Date | | | |
| 2017-01-07 | 0 | SJER | .4 |

Be sure to call your dataframe at the end of the cell to ensure autograding works. HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
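The key difference from Task 1 is that a fully cloud-masked scene yields a NaN mean NDVI, and those rows must be kept in the DataFrame at this stage (the sanity check below counts them with `isna().sum()`); they are only dropped later for plotting. A toy illustration:

```
import numpy as np
import pandas as pd

toy = pd.DataFrame({"site": ["HARV", "HARV", "SJER"],
                    "mean_ndvi": [0.55, np.nan, 0.31]},
                   index=pd.to_datetime(["2017-03-17", "2017-04-02",
                                         "2017-03-12"]))
toy.index.name = "date"

print(toy["mean_ndvi"].isna().sum())  # number of fully cloud-masked scenes
print(toy.dropna())                   # what eventually gets plotted
```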
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# YOUR CODE HERE
# Get list of study site directories
harv_sjer_sites = sorted(glob(path + "/*/"))
# Construct NDVI list
all_ndvi = []
# Loop through sites and read two crop_bounds
for asite in harv_sjer_sites:
sitename = os.path.basename(os.path.normpath(asite))
site_boundary_path = os.path.join(
asite,
'vector',
sitename +
'-crop.shp'
)
crop_bound = gpd.read_file(site_boundary_path)
# Get a list of all landsat scenes
scene_dirs = sorted(glob(os.path.join(
path,
sitename,
'landsat-crop',
'*'
)))
# Loop though all landsat scenes
for ascene in scene_dirs:
# get date for each scene
date = ascene[-29:-21]
# run function which reads and cleans all bands and
        # calculates NDVI for each pair of red/infrared bands
# belonging to the same scene
ndvi = mean_ndvi(ascene, crop_bound)
# append site names, dates and mean ndvi to a list
all_ndvi.append([sitename, date, ndvi])
# Construct data frame using list above
ndvi = pd.DataFrame(all_ndvi,
                    columns=['site', 'date', 'mean_ndvi'])
# Convert string with date to a datetime object
ndvi['date'] = pd.to_datetime(ndvi['date'])
# Set dataframe index to a datetime
ndvi.set_index('date', inplace=True)
# Keep rows with missing NDVI (per the hint above) for the checks below;
# ndvi_clean drops them and is used for the final plot
ndvi_clean = ndvi[ndvi['mean_ndvi'] > 0]
ndvi
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# YOUR CODE HERE
# Set a list of sites to loop through
plot_sites = ['HARV', 'SJER']
# Set a list of unique colors
plot_colors = ['purple', 'green']
# Set figure
fig, ax = plt.subplots(figsize=(12, 6))
# Loop through dataframe and plot NDVI for each site using assigned color
for idx, asite in enumerate(plot_sites):
    temp = ndvi_clean[ndvi_clean['site'] == asite]
    ax.plot(temp.index,
            temp.mean_ndvi,
            label=plot_sites[idx],
            color=plot_colors[idx])
# Set plot title and axes labels
ax.set(title="Mean NDVI For HARV and SJER Study Sites\n (February 2017 - \
December 2017)\nInfluence of Clouds Removed",
       xlabel="Month",
       ylabel="Mean NDVI")
# Add legend
plt.legend()
# Add grid
plt.grid(color='grey', linestyle='-', linewidth=0.5)
# Format dates on x axis
# Add month names
date_form = DateFormatter("%b")
ax.xaxis.set_major_formatter(date_form)
# Add ticks for each months
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. YOUR ANSWER HERE1. I would plan flights for the HARV site starting from mid May and until the first week of October.2. Flights for SJER would have to be planned for a short window between mid February and the end of March. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. YOUR ANSWER HERE:1. I would probably end up making separate dataframes for each site.2. I think it may be a good idea to process more landsat scenes (for a few years). I can see seasonal changes in the data from one year, but the long-term trend would be interesting to explore.3. It would be interesting to compare NDVI patterns with temperature and precipitation variations across years. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
# Export final datafarame to csv stored in the outputs directory
# Check if output folder exists
# Create output folder if necessary
ndvi_output = os.path.join(
'ndvi-automation',
'outputs'
)
if os.path.exists(ndvi_output):
print('Output directory exists')
else:
print('Output directory does not exist but it is being created.')
os.makedirs(ndvi_output)
# Create path and filename
ndvi_output_csv = os.path.join(
ndvi_output,
'ndvi_output.csv'
)
# Export final df to csv
ndvi_clean.to_csv(ndvi_output_csv)
# Check that output csv exists
if os.path.exists(ndvi_output_csv):
print('Output CSV exists')
else:
print('Oh no, final CSV does not exist in the outputs directory.')
###Output
Output directory exists
Output CSV exists
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Carson Norris** Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise --- Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too High Level Overview**1. Loop through each landsat directory of tif files (each scene)** * create a list of paths to directories for both scenes * in each loop iteration, grab the date from the directory name * create an empty dictionary with a key for that date **2. Create a list of all tif files that you will need in the scene's directory****3. Open / crop / clean the tif files that you need for your analysis****4. Optional: combine into a single object****5. Calculate veg indices** * NDVI * NBR / dNBR **6. Landsat Data Only: Apply cloud mask to final NBR / NDVI layers**
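Step 1 of the overview above (grabbing the date from the directory name) can be done against the scene folder name itself, so the character positions do not depend on the length of the enclosing path. A small sketch using the HARV scene named elsewhere in this assignment; the slice positions are an assumption based on the `LC08...` naming pattern:

```
import os
from datetime import datetime

scene_dir = os.path.join("ndvi-automation", "sites", "HARV", "landsat-crop",
                         "LC080130302017031701T1-SC20181023151837")
scene_name = os.path.basename(os.path.normpath(scene_dir))

# Characters 10-17 of the scene name hold the acquisition date (YYYYMMDD)
acq_date = datetime.strptime(scene_name[10:18], "%Y%m%d")
print(acq_date)  # 2017-03-17 00:00:00
```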
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import geopandas as gpd
import xarray as xr
import rioxarray as rxr
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.mask as em
from matplotlib.dates import DateFormatter
# Get data
data = et.data.get_data('ndvi-automation')
# Set working directory
os.chdir(os.path.join(et.io.HOME,
'earth-analytics',
'data'))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
# Site data
path = os.path.join('ndvi-automation', 'sites')
all_sites_dirs = glob(path + "/*/")
for site_dirs in all_sites_dirs:
print(site_dirs)
site_name = os.path.basename(os.path.normpath(all_sites_dirs[1]))
site_name
# Shapefile data
vector_dir = os.path.join(all_sites_dirs[1], "vector")
# Create boundary
site_boundary_path = os.path.join(vector_dir, site_name + "-crop.shp")
crop_bound = gpd.read_file(site_boundary_path)
# Test boundary plot
crop_bound.plot()
plt.show()
# Test cell
landsat_dir = os.path.join(site_dirs,
"landsat-crop")
# This is the crop folder containing all of the .tif files
landsat_dirs = sorted(glob(os.path.join(landsat_dir, "LC08*")))
landsat_dirs
###Output
_____no_output_____
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
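The figure requirements above (one line per site, a distinct color for each, and a legend) map naturally onto a pandas `groupby('site')` loop. A toy sketch of that plotting pattern with made-up values; the real figure cell appears further down in this notebook:

```
import matplotlib.pyplot as plt
import pandas as pd

toy = pd.DataFrame({"site": ["HARV", "HARV", "SJER", "SJER"],
                    "mean_ndvi": [0.55, 0.68, 0.40, 0.22]},
                   index=pd.to_datetime(["2017-05-01", "2017-07-01",
                                         "2017-03-01", "2017-06-01"]))
site_colors = {"HARV": "purple", "SJER": "cyan"}

fig, ax = plt.subplots()
for site, df in toy.dropna().groupby("site"):
    ax.plot(df.index, df["mean_ndvi"], marker="o",
            label=site, color=site_colors[site])
ax.set(xlabel="Date", ylabel="Mean NDVI")
ax.legend()
# plt.show()  # left commented out, per the assignment guidelines above
```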
###Code
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# Function A
def open_clean_bands(band_path,
valid_range=None):
"""Open path to file, mask single band in file within specified
range values.
Parameters
----------
band_path : string
A path to the array to be opened
valid_range : tuple (optional)
A tuple of min and max range values for the data. Default = None
Returns
----------
    band : xarray DataArray
        A cropped xarray DataArray with values outside the valid range
        set to NaN
"""
    # Open the band with rioxarray and clip it to the site boundary
    # (crop_bound here comes from the notebook's global scope)
band = (rxr.open_rasterio(band_path, masked=True)
.rio.clip(crop_bound.geometry, from_disk=True)
.squeeze())
    # Mask values outside the provided valid range
    if valid_range:
        mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~mask, np.nan)
return band
# Function B
def mask_crop_ndvi(all_bands,
crop_bound,
pixel_qa,
vals):
"""Open and mask a single Landsat band using a pixel_qa layer.
Parameters
-----------
all_bands : list
A list containing paths to Landsat bands 4 and 5 as .tif files
pixel_qa : xarray DataArray
An xarray DataArray with pixel qa values that have not yet been
turned into a mask (0s and 1s)
crop_bounds : geopandas GeoDataFrame
A geopandas dataframe to be used to crop the raster data using
rasterio mask()
vals : list
A list of values needed to create the cloud mask
Returns
--------
mean_ndvi : xarray.DataArray
A cropped, masked xarray object containing NDVI values
"""
# Create empty list
bands = []
for band_path in all_bands_path:
clean_bands = open_clean_bands(band_path=band_path,
valid_range=(0, 10000))
bands.append(clean_bands)
# Open and clip cloud mask layer
cl_mask = (rxr.open_rasterio(pixel_qa_path, masked=True)
.rio.clip(crop_bound.geometry, from_disk=True).squeeze())
# Apply cloud mask to NDVI
all_masked_values = [328, 392, 840, 904, 1350, 352, 368, 416,
432, 480, 864, 880, 928, 944, 992, 480, 992]
# Calculate NDVI
ndvi_xr = (bands[1]-bands[0]) / (bands[1]+bands[0])
# Apply cloud mask to NDVI
all_masked_values = [328, 392, 840, 904, 1350, 352, 368, 416,
432, 480, 864, 880, 928, 944, 992, 480, 992]
ndvi_mask = ndvi_xr.where(~cl_mask.isin(all_masked_values))
# Calculate NDVI values
mean_ndvi = ndvi_mask.mean(skipna=True).item()
return mean_ndvi
# Define directory name
site_crop_dir = "landsat-crop"
# Create empty list
ndvi_list = []
# Loop through each site directory
for site_dir in all_sites_dirs:
print("I am looping through: ", site_dir)
# Get site name
site = os.path.normpath(site_dir).split(os.sep)[-1]
# Get a list of subdirectories for the site
print("I am working on", site, "field site now")
site_crop_dir_path = os.path.join(site_dir, site_crop_dir)
scene_dirs = sorted(glob(site_crop_dir_path + "/*/"))
# Shapefile data
vector_dir = os.path.join(site_dir, "vector")
# Create boundary
site_boundary_path = os.path.join(vector_dir, site + "-crop.shp")
crop_bound = gpd.read_file(site_boundary_path)
# Loop through each scene subdirectory for stored data
for scene_dir in scene_dirs:
print("Scene is processing", scene_dir.split(os.sep)[-2])
        date = scene_dir[50:58]  # YYYYMMDD slice of the scene dir name
# Grab only necessary bands
all_bands_path = sorted(glob(os.path.join(scene_dir,
"*band*[4-5].tif")))
# Grab QA band
pixel_qa_path = glob(os.path.join(scene_dir, "*qa*"))[0]
# Apply cloud mask to NDVI > confirm function operability
all_masked_values = [328, 392, 840, 904, 1350, 352, 368, 416,
432, 480, 864, 880, 928, 944, 992, 480, 992]
# Calculate NDVI
ndvi_values = mask_crop_ndvi(all_bands=all_bands_path,
pixel_qa=pixel_qa_path,
crop_bound=crop_bound,
vals=all_masked_values)
# Append NDVI to columns
outputs = [site, date, ndvi_values]
ndvi_list.append(outputs)
ndvi_list
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
ndvi_df = pd.DataFrame(ndvi_list,
                       columns=['site', 'date', 'mean_ndvi'])
ndvi_df['date'] = pd.to_datetime(ndvi_df['date'])
# Select the HARV scenes and index them on date
harv_ndvi_df = ndvi_df[ndvi_df["site"] == "HARV"].set_index("date")
final_harv_df = harv_ndvi_df.dropna(how='any')
final_harv_df
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2: In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene:

| | index | site | mean_ndvi |
|---|---|---|---|
| Date | | | |
| 2017-01-07 | 0 | SJER | .4 |

Be sure to call your dataframe at the end of the cell to ensure autograding works. HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
all_ndvi_df = ndvi_df.set_index("date")
all_ndvi_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
all_ndvi_df
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
site_colors = {'HARV': 'purple', 'SJER': 'cyan'}
fig, ax = plt.subplots(figsize=(10, 12))
fig.suptitle('Mean Normalized Difference Vegetation Index\n Jan 2017 - Dec 2017',
fontsize=20, fontweight='bold')
for site, df in all_ndvi_df.dropna().groupby('site'):
    ax.plot(df.index, df.mean_ndvi, label=site,
            color=site_colors[site], marker='o')
# Set axes labels
ax.xaxis.set_major_formatter(DateFormatter("%b"))
ax.set(xlabel="Month",
ylabel="Mean NDVI")
# Add legends
ax.legend(['HARV', 'SJER'], loc='upper right',
bbox_to_anchor=(1.5, 1), borderaxespad=0)
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. For the HARV site the best recommended flight time is June to August. The SJER site has a much shorter growing season, between March and early June, before a precipitous drop to low NDVI rates. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. Within the for loop one can replace the code that summarizes by the mean. One could focus on a comparison of maximum values across the sites, perhaps to view changes over time in the context of an event trigger. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
# Export DataFrame to .csv file
ndvi_df_csv = all_ndvi_df.dropna()
# Export to directory
ndvi_df_csv.to_csv(os.path.join(et.io.HOME,
"earth-analytics",
"ea-2022-04-ndvi-automation-cnorristellar",
"ndvi_df.csv"))
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Jacquelyn Witte** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Pseudocode for this workflow There are three required files to get a single mean NDVI value: 1. Landsat tif 2. Landsat QC tif 3. Landsat shapefile The basic flow to calculate the mean NDVI for a single date of measurements:- Read the image tifs- Read the QA tif- Read the shapefile- Merge the individual tif bands into a single xarray- Crop the dataArray using the shapefile- Clean the cropped dataArray - remove 0's and high values- Mask the dataArray using the QA tif- Calculate the NDVI- Calculate the mean NDVI- Turn it into a dataFrame- Plot itThe idea is to create many functions to perform most of the outlined steps above. When looping through all the data directories to calculate a single mean NDVI value and grab it's metadata, the sequence of code will be a series of function calls only.Note, that I am choosing to read all the bands so there is the option to look at RGB or CIR in the future. Function 1 - Retrieve tif files bands_tif = sorted(os.path.join(landsat_path,'*band[2-5]*.tif')) qa_tif = os.path.join(landsat_path,'*qa*.tif') Function 2 - Key metadata: date and site site = filepath.split('/')[2] date = datetime.strptime(filepath.split('T')[0][-10:-2], "%Y%m%d") Function 3 - Consolidated Landsat image, cropped and cloud-free For loop over tif files and append to a list. all_bands = [] for i, aband in enumerate(files): all_bands.append(rxr.open_rasterio(aband, masked=True).squeeze()) Assign a band number to the new xarray object all_bands[i]["band"] = i+1 Turn list of bands into a single xarray object - Create xarray.DataArray with xr.concat() Crop the data to the shapefile - Use dataArray.rio.clip(crop_boundary.geometry) method IMPORTANT: Clean the dataArray - remove 0's and high values I went with a less elegant but explicit and clear code that I can understand. 
data_nozeros_xr = data_xr_crop.where(data_xr_crop > 0, np.nan) data_nozeros_xr = data_nozeros_xr.where(data_nozeros_xr < 10000, np.nan) Function 4 - Apply cloud maskReference: https://github.com/earthlab-education/ea-python-course-notebooks/ blob/main/2022/completed-demos/05-l1-landsat-cloud-masks.ipynb Read the QA file qa_xr = rxr.open_rasterio(qa_file).squeeze() Create cloud mask using earthpy mask package for Landsat imagery high_cloud_confidence = ( em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"]) cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"] cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"] Add up all the mask values all_masked_values = cloud_shadow + cloud + high_cloud_confidence Apply the masking to the data xarray data_cld_free_xr = data_xr.where(~qa_xr.isin(all_masked_values)) Function 5 - Calculate NDVI and take the mean ndvi_xr = es.normalized_diff(data_xr[3], data_xr[2]) IMPORTANT: Replace dataArray NAN with numpy NAN. Otherwise there are warnings when creating a dataFrame -------------------------------------------------------------------------- Main code to process mean NDVI time series from multiple sites Get the site name from the base folders base_path = os.path.join('ndvi-automation', 'sites') sitenames = os.listdir(base_path) Initialize the desired variables that will define the dataFrame date = [] site = [] mean_ndvi = [] Parent loop - over the sites for s in sitenames: Read the shapefile from the base_path shapefile = glob(os.path.join(base_path, s, 'vector', '*shp'))[0] landsat_buffer_shp = gpd.read_file(shapefile) Loop through all the Landsat directories per site for fdir in sorted(glob(os.path.join(base_path, s, 'landsat-crop', '*'))): Get the Bands and QA files per data directory Call Function 1 Get the site name and date Call Function 2 Append to date and site Read the Landsat data - bands into a single dataArray, cropped to the shapefile Call Function 3 Apply cloud mask Call Function 4 Calculate the mean ndvi Call Function 5 Append to mean_ndvi Create a pandas dataFrame via a dictionary dict = {'Date': date, 'site': site, 'mean_ndvi': mean_ndvi} ndvi_df = pd.DataFrame(dict).set_index('Date') ------------- Create the Figure ---------------------- Reference to ignore NaN: https://www.bmc.com/blogs/pandas-nan-missing-data/ fig, ax = plt.subplots(figsize=(12, 6)) for s, df in ndvi_df.dropna().groupby('site'): ax.plot(df['mean_ndvi'], 'o-', label=s) title = 'Mean NDVI from Landsat 8 (Cloud-free)\nMeasurements taken in 2017' ax.set(title=title, xlabel='Month', ylabel='NDVI') ax.legend()
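Function 4 above relies on earthpy's Landsat 8 pixel_qa flag lists. The sketch below shows only the masking step, on synthetic arrays; it assumes earthpy is installed (as it is for the rest of this notebook) and that the three flag lists combine with `+` exactly as in the pseudocode. The value 322 is used here as an illustrative "clear" pixel.

```
import numpy as np
import xarray as xr
import earthpy.mask as em

# Combine the qa values that flag cloud shadow, cloud, and high cloud confidence
vals = (em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
        + em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
        + em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])

# Synthetic NDVI and pixel_qa layers
ndvi = xr.DataArray(np.array([[0.61, 0.47], [0.53, 0.70]]))
qa = xr.DataArray(np.array([[322, 352], [322, 480]]))

# Keep NDVI only where the qa value is NOT one of the cloud-related flags
ndvi_cloud_free = ndvi.where(~qa.isin(vals))
print(ndvi_cloud_free.values)
```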
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import earthpy as et
import earthpy.mask as em
import earthpy.plot as ep
import earthpy.spatial as es
import geopandas as gpd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import numpy as np
import pandas as pd
import rioxarray as rxr
import seaborn as sns
import xarray as xr
# Prettier plotting with seaborn
sns.set(font_scale=1.3, style="whitegrid")
# Download data
et.data.get_data('ndvi-automation')
# Change to data directory
data_dir = os.path.join(et.io.HOME,
'earth-analytics',
'data')
os.chdir(data_dir)
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
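Both final figures in this document format the x axis as month abbreviations. A minimal sketch of that axis formatting on throwaway data, using the matplotlib.dates tools imported elsewhere in this notebook:

```
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

dates = pd.date_range("2017-01-01", "2017-12-01", freq="MS")
fig, ax = plt.subplots()
ax.plot(dates, range(len(dates)), marker="o")  # throwaway y values
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b"))
# plt.show()  # left commented out, per the assignment guidelines above
```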
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
###Output
_____no_output_____
###Markdown
Function 1 - Retrieve tif files
###Code
def find_tifs(landsat_path):
"""Finds all Landsat tif files in a single directory
Extracts the bands and the quality flag tifs separately
Parameters
----------
landsat_path : String
Path to the directory of tif files
Returns
-------
    bands_files : list
        Sorted list of paths to the band 2-5 tif files
    qa_file : str
        Path to the quality flag (qa) tif file
"""
bands_tif = os.path.join(landsat_path,
'*band[2-5]*.tif')
qa_tif = os.path.join(landsat_path,
'*qa*.tif')
bands_files = sorted(glob(bands_tif))
qa_file = glob(qa_tif)[0]
return bands_files, qa_file
###Output
_____no_output_____
###Markdown
Function 2 - Retrieve key metadata: date and site
###Code
def get_site_date(filepath):
"""Gets the date and site name from the file path
Parameters
----------
    filepath : str
        Path to a single Landsat scene directory
    Returns
    -------
    site : str
        Four-letter NEON site name parsed from the path
    date : datetime
        Acquisition date parsed from the scene directory name
"""
# https://stackoverflow.com/questions/22804002/how-to-split-path-with-slashes
site = filepath.split(os.sep)[2]
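    # The scene directory name embeds the acquisition date as YYYYMMDD
    # (e.g. LC080130302017031701T1-... contains 20170317)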
date = datetime.strptime(filepath.split('T')[0][-10:-2], "%Y%m%d")
return site, date
###Output
_____no_output_____
###Markdown
Function 3 - Return consolidated Landsat bands that are cropped and cleaned of invalid values
###Code
def landsat_read_rgbbands(files, crop_boundary):
"""Consolidates Landsat RGB+NIR bands into a single dataArray
Takes a list of Landsat RGB+NIR bands for a single date and returns
a single dataArray of consolidated bands 2-5
Band 2 = Blue -> Index 0
Band 3 = Green -> Index 1
Band 4 = Red -> Index 2
Band 5 = NIR -> Index 3
Parameters
----------
files: List
A list of Landsat tif images
    crop_boundary : geopandas GeoDataFrame
        Crop extent used to clip each band
    Returns
    -------
    data_nozeros_xr : xarray DataArray
        All bands in files consolidated, cropped to the crop boundary, and
        cleaned of values outside the valid range (0-10000)
"""
all_bands = []
for i, aband in enumerate(files):
all_bands.append(
(rxr.open_rasterio(aband, masked=True)
# Including from_disk=True makes code run faster
.rio.clip(crop_boundary.geometry, from_disk=True)
.squeeze())
)
# Assign a band number to the new xarray object
all_bands[i]["band"] = i+1
# Turn list of bands into a single xarray object
data_xr_crop = xr.concat(all_bands, dim="band")
# IMPORTANT: Clean the dataArray - remove 0's and high values
data_nozeros_xr = data_xr_crop.where(data_xr_crop > 0, np.nan)
data_nozeros_xr = data_nozeros_xr.where(data_nozeros_xr < 10000, np.nan)
return data_nozeros_xr
###Output
_____no_output_____
###Markdown
Function 4 - Apply cloud mask
###Code
def apply_cloud_mask(data_xr, qa_file, crop_boundary):
"""Applies a cloud mask to Landsat dataArray
Parameters
----------
qa_file : String
Path to the Landsat quality flag file
data_xr : dataArray
Landsat 8 band consolidated dataArray
    crop_boundary : geopandas GeoDataFrame
        Crop extent used to clip the quality flag layer
    Returns
    -------
    data_cld_free_xr : xarray DataArray
        The input dataArray with cloud-affected pixels set to NaN
"""
# Read the quality flags file
qa_xr = (rxr.open_rasterio(qa_file, masked=True)
.rio.clip(crop_boundary.geometry, from_disk=True)
.squeeze())
# Create cloud mask using earthpy mask package for Landsat imagery
high_cloud_confidence = (
em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
# Add up all the mask values
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
# Apply the masking to the data xarray
data_cld_free_xr = data_xr.where(~qa_xr.isin(all_masked_values))
return data_cld_free_xr
###Output
_____no_output_____
###Markdown
Function 5 - Calculate NDVI and return the mean
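The index computed here is the normalized difference of the near-infrared and red reflectance, $$\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}},$$ where, in the band stack built by Function 3, index 3 holds band 5 (NIR) and index 2 holds band 4 (Red); values range from -1 to 1, with higher values indicating denser vegetation.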
###Code
def calc_mean_ndvi(data_xr):
"""Calculates the NDVI and returns the mean
Reference: https://stackoverflow.com/questions/49867345/
how-to-deal-with-inf-values-when-computting-the-average-of-values-of-a-list-in-p
Parameters
----------
data_xr : dataArray
The Landsat data
Returns
-------
    result : float
        The mean NDVI, or np.nan if the mean is not finite
"""
ndvi_xr = es.normalized_diff(data_xr[3], data_xr[2])
# Calculating my own NDVI
# ndvi_xr = (data_xr[3] - data_xr[2]) / (data_xr[3] + data_xr[2])
# ndvi_clean = ndvi_xr.where(np.isfinite(ndvi_xr.values))
# Replace dataArray NAN with numpy NAN
ndvi_mean = ndvi_xr.mean()
if np.isfinite(ndvi_mean):
result = float(ndvi_mean)
else:
result = np.nan
return result
###Output
_____no_output_____
###Markdown
Start of the main code - where the magic happens Calculate mean NDVI from Landsat 8 for a single date- Create dataframe of mean NDVI in this cell using the functions created above- Important: to use the ungraded tests below as a sanity check, name your columns: mean_ndvi and site- Call the dataframe at the end of the cell so the tests run on it!- Be sure that the date column is an index of type date- HINT: the time series lessons may help you remember how to do this!
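As a quick illustration of the HINT above (hypothetical placeholder values only, not the Landsat results), a date column can be parsed with `pd.to_datetime` and then set as the index:

```python
# Hypothetical illustration of building a datetime-indexed DataFrame;
# the values below are placeholders, not data from the workflow.
import pandas as pd

demo = pd.DataFrame({'Date': ['2017-03-17'],
                     'site': ['HARV'],
                     'mean_ndvi': [0.28]})
demo['Date'] = pd.to_datetime(demo['Date'])
demo = demo.set_index('Date')
```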
###Code
base_path = os.path.join('ndvi-automation',
'sites')
landsat_path = os.path.join(base_path,
'HARV',
'landsat-crop',
'LC080130302017031701T1-SC20181023151837')
landsat_bufferfile = os.path.join(base_path,
'HARV',
'vector',
'HARV-crop.shp')
# Read shapefile
landsat_buffer_shp = gpd.read_file(landsat_bufferfile)
# Get the Bands and QA files
landsat_bands_files, landsat_qa_file = find_tifs(landsat_path)
# Get the metadata = site name and date
site, date = get_site_date(landsat_path)
# Read the Landsat data - consolidate the bands into a single dataArray
# Crop to the shapefile
landsat_xr = landsat_read_rgbbands(landsat_bands_files,
landsat_buffer_shp)
# Apply cloud mask
landsat_cld_free_xr = apply_cloud_mask(landsat_xr,
landsat_qa_file,
landsat_buffer_shp)
###Output
_____no_output_____
###Markdown
Plot the cloud-free, cropped dataArray as a check
###Code
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(6, 10))
ep.plot_rgb(landsat_xr.values,
rgb=[2, 1, 0],
ax=ax1,
title='Landsat Original')
ep.plot_rgb(landsat_cld_free_xr.values,
rgb=[2, 1, 0],
ax=ax2,
title='Landsat Cloud mask applied')
# Hmmm doesn't look very different. Onward!
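# Added sanity check (an illustration; assumes the variables created above):
# since the RGB previews can look nearly identical when cloud cover is low,
# count how many pixels the cloud mask actually turned into NaN.
newly_masked = int(landsat_cld_free_xr.isnull().sum()
                   - landsat_xr.isnull().sum())
print("Pixels newly masked as cloud/cloud shadow:", newly_masked)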
###Output
_____no_output_____
###Markdown
Calculate the NDVI from the cloud-free Landsat image
###Code
mean_ndvi = calc_mean_ndvi(landsat_cld_free_xr)
# Create a pandas dataFrame
ndvi_df = pd.DataFrame([[date, site, mean_ndvi]],
columns=['Date', 'site', 'mean_ndvi']
).set_index('Date')
ndvi_df
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`). Generate mean NDVI dataFrame for all sites in the `base_path` folder- Important: to use the ungraded tests below as a sanity check, name your columns: mean_ndvi and site- Don't forget to set date as the index and make the values of type datetime
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Get the site names
sitenames = os.listdir(base_path)
# Initialize the desired variables
date = []
site = []
mean_ndvi = []
# Loop over the sites
for s in sitenames:
# Get the shapefile
shapefile = glob(os.path.join(base_path, s, 'vector', '*shp'))[0]
print(shapefile)
# Read shapefile
landsat_buffer_shp = gpd.read_file(shapefile)
# Loop through all the Landsat directories per site
for fdir in sorted(glob(os.path.join(base_path, s, 'landsat-crop', '*'))):
# Get the Bands and QA files per data directory
landsat_bands_files, landsat_qa_file = find_tifs(fdir)
# Get the site name and date
site_temp, date_temp = get_site_date(fdir)
site.append(site_temp)
date.append(date_temp)
# Read the Landsat data - bands into a single dataArray
# Cropped to the shapefile
landsat_xr = landsat_read_rgbbands(landsat_bands_files,
landsat_buffer_shp)
# Apply cloud mask
landsat_cld_free_xr = apply_cloud_mask(landsat_xr,
landsat_qa_file,
landsat_buffer_shp)
# Calculate the mean ndvi
mean_ndvi.append(calc_mean_ndvi(landsat_cld_free_xr))
###Output
ndvi-automation/sites/SJER/vector/SJER-crop.shp
ndvi-automation/sites/HARV/vector/HARV-crop.shp
###Markdown
Create the final mean NDVI dataFrame
###Code
# Create a pandas dataFrame via a dictionary
dict = {'Date': date,
'site': site,
'mean_ndvi': mean_ndvi}
ndvi_df = pd.DataFrame(dict).set_index('Date')
ndvi_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
###Output
✅ Your data is stored in a DataFrame!
✅ Correct number of masked data values!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
Your total run time for processing the data was 0:00:09.887659.
➡ You received 10 out of 10 points for creating a dataframe.
###Markdown
Figure of mean NDVI over the 2 NEON field sites
###Code
# Add only the plot code to this cell
# Ref: https://www.bmc.com/blogs/pandas-nan-missing-data/
fig, ax = plt.subplots(figsize=(12, 6))
for s, df in ndvi_df.dropna().groupby('site'):
ax.plot(df['mean_ndvi'],
'o-',
label=s)
# Define the date format
date_fmt = DateFormatter("%b")
ax.xaxis.set_major_formatter(date_fmt)
# Add labels
title = 'Mean NDVI from Landsat 8 (Cloud-free)\nMeasurements taken in 2017'
ax.set(title=title,
xlabel='Month',
ylabel='NDVI')
ax.legend()
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. NDVI measures the greenness of a terrain where high values (near 1.0) indicate dense vegetation, i.e. rainforest, and near zero values correspond to the absence of vegetation (Reference: https://earthobservatory.nasa.gov/features/MeasuringVegetation:~:text=The%20most%20common%20measurement%20is,rainforests%20(0.6%20to%200.8).). Based on the plot above, to capture high vegetation seasons I would fly mid-May through September for the HARV domain (essentially the summer months) and March through April over SJER (early spring months). Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. Well, for one I would use a longer time series because then I can subtract the monthly mean from a monthly climatology, i.e. May 2017 minus (all the Mays for 10 years). I would also add other observations such as soil moisture, temperature, fires, and precipitation that impact vegetation. There may be strong correlations that can explain observed monthly variations in NDVI. Finally, for the dataset given, I can examine CIR imagery which is good for (1) identifying plant species, (2) estimating biomass of vegetation, (3) assessing soil moisture. Reference: https://www.mngeo.state.mn.us/chouse/airphoto/cir.html Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. Complete Bonus - Export to a CSV file Initially, I exported to the outputs/ folder in the ndvi-automation/ data folder but this file has to also be saved to my assignment folder so I can upload it to the github repo.
###Code
# This path is specific to my assignment repo so I can add it to gitHub
# output_path = os.path.join(et.io.HOME,
# 'earth-analytics',
# 'ea-2022-04-ndvi-automation-jacquiewitte')
# This path is reproducible
output_path = os.path.join('ndvi-automation',
'outputs',
'Landsat8_ndvi_neon2017.csv')
# Converting to CSV file
ndvi_df.to_csv(output_path)
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name:** Kristen Tortorelli --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Workflow Pseudocode1. Open data for single landsat scene/date: * Get list of .tif files in folder for single scene/date with glob. * Subset list to grab only bands 4 and 5 (those needed to calculate NDVI). * Sort file list. * Open and crop bands with open_rasterio using shp file in data directory. 2. Calculate average NDVI value for single landsat scene/date. 3. Get list of all landsat dates/scenes folders for one site using glob.4. Use steps 1 and 2 above to get average NDVI value for each scene. * Grab date and site from file name. 5. Save NDVI, date, and site for each date/scene to pandas df. 6. Export pandas dataframe with mean_ndvi, site, and date to csv file.
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
# Import necessary libraries
import os
from glob import glob
import shutil
import warnings
import matplotlib.pyplot as plt
from matplotlib import patches as mpatches, colors
from datetime import datetime
import pandas as pd
import geopandas as gpd
import seaborn as sns
import numpy as np
from numpy import ma
import xarray as xr
import rioxarray as rxr
import earthpy as et
import earthpy.plot as ep
import earthpy.mask as em
import earthpy.spatial as es
# Set consistent plotting style
sns.set_style("white")
sns.set(font_scale=1.5)
# Download the data
data = et.data.get_data('ndvi-automation')
# Create variable for data path
data_path = os.path.join(et.io.HOME, 'earth-analytics', 'data')
# Check that path exists and if so set to working directory
if os.path.exists(data_path):
print("This directory exists, and is set to current working directory.")
os.chdir(data_path)
# If directory does not exist, create it and set to working directory
else:
print("This directory does not exist, but is being created.")
os.mkdir(data_path)
os.chdir(data_path)
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# Define custom functions
def normalized_diff(b1, b2):
"""Take two numpy arrays and calculate the normalized difference.
Math will be calculated (b1-b2) / (b1+b2). The arrays must be of the
same shape.
Parameters
----------
b1, b2 : numpy arrays
Two numpy arrays that will be used to calculate the normalized difference.
Math will be calculated (b1-b2) / (b1+b2).
Returns
----------
    avg_ndvi : numpy float
        The average of the element-wise (b1-b2) / (b1+b2) result. Inf values
        are set to nan, and nan values are masked before averaging.
"""
if not (b1.shape == b2.shape):
raise ValueError("Both arrays should have the same dimensions")
n_diff = (b1 - b2) / (b1 + b2)
# Set inf values to nan and provide custom warning
if np.isinf(n_diff).any():
warnings.warn(
"Divide by zero produced infinity values that will be replaced with nan values",
Warning)
n_diff[np.isinf(n_diff)] = np.nan
# Mask invalid values
if np.isnan(n_diff).any():
n_diff = np.ma.masked_invalid(n_diff)
# Calculate average NDVI value with np.mean
avg_ndvi = np.mean(n_diff)
return avg_ndvi
def process_bands(paths, path_to_crop, stack=False):
"""
Open, clean and crop a list of raster files using rioxarray.
Parameters
----------
paths : list
A list of paths to raster files that could be stacked (of the same
resolution, crs and spatial extent).
path_to_crop : string
A string path to the geodataframe containing the crop geometry that
you wish to crop your data to.
stack : boolean
If True, return a stacked xarray object. If false will return a list
of xarray objects.
Returns
-------
Either a list of xarray objects or a stacked xarray object
"""
crop_layer = gpd.read_file(path_to_crop)
clip_bound = crop_layer.geometry
all_bands = []
ndvi_values = []
for i, aband in enumerate(paths):
cleaned_band = rxr.open_rasterio(aband,
masked=True).rio.clip(clip_bound,
from_disk=True).squeeze()
# Clean the data
valid_range = (0, 10000)
# Only run this step if a valid range tuple is provided
if valid_range:
mask = ((cleaned_band < valid_range[0]) | (
cleaned_band > valid_range[1]))
cleaned_band = cleaned_band.where(
~xr.where(mask, True, False))
cleaned_band["band"] = i+1
all_bands.append(cleaned_band)
if stack:
return xr.concat(all_bands, dim="band")
else:
print("Returning a list of xarray objects.")
return all_bands
def remove_clouds(path_to_qa):
"""Opens landsat qa file, selects cloud/shadow values, creates cloud mask.
Parameters
----------
path_to_qa : string
Path to pixel_qa file in landsat data that will be used to clean data
for presence of clouds and shadows.
Returns
----------
cl_mask : xarray object
The cloud mask xarray object that will be used to clean landsat data.
"""
landsat_qa = rxr.open_rasterio(path_to_qa).squeeze()
high_cloud_confidence = em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"]
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
cl_mask = landsat_qa.isin(all_masked_values)
return cl_mask
def combine_scenes(path_to_all_scenes, crop_path):
"""Open, cleans, crops list of raster files, cleans data with cloud mask,
captures average ndvi, date, and site name and saves to pandas df.
Parameters
----------
path_to_all_scenes: string
Path to directory with all landsat scenes/date folders to be combined.
crop_path: string
A string path to the geodataframe containing the crop geometry that
you wish to crop your data to.
Returns
----------
df : pandas dataframe
Dataframe containing average NDVI values, date, and site name.
"""
dates_temp = []
ndvi_values_temp = []
for ascene in path_to_all_scenes:
paths_list = sorted(glob(os.path.join(ascene, "*band[4-5]*.tif")))
landsat_scene = process_bands(paths_list, crop_path, stack=True)
# Mask data with pixel QA layer
qa_path = glob(os.path.join(ascene, '*pixel_qa.tif'))
landsat_scene_cl_free = landsat_scene.where(
~remove_clouds(qa_path[0]))
landsat_scene_ndvi = normalized_diff(
landsat_scene_cl_free[1], landsat_scene_cl_free[0])
ndvi_values_temp.append(landsat_scene_ndvi)
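        # The path slice [50:58] holds the acquisition date (YYYYMMDD)
        # embedded in the scene directory name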
date = ascene[50:58]
datetime_object = datetime.strptime(date, '%Y%m%d')
dates_temp.append(datetime_object)
df = pd.DataFrame({'mean_ndvi': ndvi_values_temp}, index=dates_temp)
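    # The path slice [22:26] holds the four-letter NEON site code (e.g. HARV)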
df['site'] = ascene[22:26]
df['mean_ndvi'] = df['mean_ndvi'].apply(pd.to_numeric, errors='coerce')
df.drop(df[df['mean_ndvi'] == 0].index, inplace=True)
return df
###Output
_____no_output_____
###Markdown
Using Functions for Optimized Code These functions defined above help to make my notebook code more efficient and optimized. Instead of repeating several lines of code for every time I need to calculate NDVI, open/clean/crop a list of raster files, remove cloud interference from landsat data, or combine NDVI values for several sites/scenes into a pandas dataframe, I can use the functions defined above. These functions reduce the potential for error, make it easier to add more data to this analysis, and help someone else understand my code better.
###Code
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# Define paths to data and crop geodataframe files
landsat_dirpath_harv = os.path.join("ndvi-automation",
"sites",
"HARV",
"landsat-crop",
"LC080130302017031701T1-SC20181023151837",
"*band[4-5]*.tif")
crop_dirpath_harv = os.path.join("ndvi-automation",
"sites",
"HARV",
"vector",
"HARV-crop.shp")
crop_dirpath_sjer = os.path.join("ndvi-automation",
"sites",
"SJER",
"vector",
"SJER-crop.shp")
# Open crop files for both sites with geopandas
harv_crop = gpd.read_file(crop_dirpath_harv)
sjer_crop = gpd.read_file(crop_dirpath_sjer)
# Sort list of .tif files for single date in HARV landsat data
landsat_paths_harv = sorted(glob(landsat_dirpath_harv))
# Open, clean, and crop data for single date with process_bands
landsat_harv_march = process_bands(
landsat_paths_harv, crop_dirpath_harv, stack=True)
# Calculate average NDVI for this date with normalized_diff
landsat_harv_ndvi = normalized_diff(
landsat_harv_march[1], landsat_harv_march[0])
# Save NDVI value, date, and site name to pandas df
ndvi_values = [landsat_harv_ndvi]
date_single = '20170317'
datetime_object_single = [datetime.strptime(date_single, '%Y%m%d')]
df_harv_single_date = pd.DataFrame(
{'mean_ndvi': ndvi_values}, index=datetime_object_single)
df_harv_single_date['site'] = 'HARV'
df_harv_single_date
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Data Source for Above AnalysisThe data I used for the analysis above is landsat data collected on March 17, 2017 over the Harvard Forest (HARV) site in the Eastern United States. I used only bands 4 and 5 (Red and NIR) for this analysis, since NDVI uses only those two bands in the calculation. I cropped this data to our specific area of interest (defined in the crop geodataframe object provided with data). I used the process_bands function to open, clean, and crop the data, and I used the normalized_diff function to calculate NDVI. I then saved this information (along with site name and date) to a pandas dataframe. Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# Define paths to all dates/scenes in both sites
landsat_scenes_harv = os.path.join("ndvi-automation",
"sites",
"HARV",
"landsat-crop",
"*")
landsat_scenes_sjer = os.path.join("ndvi-automation",
"sites",
"SJER",
"landsat-crop",
"*")
# Sort lists
landsat_scene_paths_harv = sorted(glob(landsat_scenes_harv))
landsat_scene_paths_sjer = sorted(glob(landsat_scenes_sjer))
# Use combine_sites to find average ndvi values for all dates for both sites
df1 = combine_scenes(landsat_scene_paths_harv, crop_dirpath_harv)
df2 = combine_scenes(landsat_scene_paths_sjer, crop_dirpath_sjer)
# Combine pandas dataframes for both sites
dfs = [df1, df2]
mean_ndvi_df_total = pd.concat(dfs)
mean_ndvi_df_total
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
###Output
✅ Your data is stored in a DataFrame!
❌ The amount of null data in your dataframe is incorrect.
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
Your total run time for processing the data was 0:00:07.271149.
➡ You received 8 out of 10 points for creating a dataframe.
###Markdown
Data Sources and Analysis Process The data I used in the above analysis are landsat data for two sites (San Joaquin Experimental Range (SJER) in Southern California and Harvard Forest (HARV) in the Eastern United States). The data spans several dates in the year of 2017, but after removing clouds and cleaning the data, there are only valid mean_ndvi values from March to December 2017. Again, I used only bands 4 and 5 in each scene to calculate average NDVI, and I cropped the data to our AOI. I used the process_bands, normalized_diff, and remove_clouds functions within the combine_scenes function. The combine_scenes function opens and cleans several raster files, removes cloud interference, calculates average NDVI for each scene, and combines those values with the dates and site names in a pandas dataframe.
###Code
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# Plot average NDVI values for both sites
fig, (ax) = plt.subplots(figsize=(12, 12))
for site_name, group in mean_ndvi_df_total.groupby('site'):
group.groupby('site').plot(y='mean_ndvi',
label=site_name,
linewidth=3.0,
ax=ax,
alpha=.8)
# Set plot title and axes labels
ax.set(title="Mean NDVI Values from Landsat Data for Two Sites (Clouds Removed)\n \
March - December 2017",
xlabel="Date",
ylabel="Mean NDVI")
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Export pandas dataframe to csv file
path_to_output = os.path.join("ndvi-automation",
"outputs",
"ea-2022-04-ndvi-automation-kristentortorelli-df-output.csv")
path_to_hw_csv = os.path.join(et.io.HOME, 'earth-analytics',
'assignments',
'ea-2022-04-ndvi-automation-kristentortorelli')
mean_ndvi_df_total.to_csv(path_to_output)
shutil.copy(path_to_output, path_to_hw_csv)
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Christy Sandberg** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too **PSEUDOCODE**TAKE FILEPATH AND CALCULATE NDVI AT SITES WITH INFO STORED IN A SET FORMAT AT THE GIVEN FILEPATH* Create glob list of scene folders at given file location.* Create empty list to store info about each landsat scene.* Loop through each landsat scene folder create function for this? - get the site name from site directory name (site name is in directory two levels above scene directory) - get the data acquisition date from scene directory name ([10-18]) - get crop shape file ('.shp') from vector directory ('SITES//vector') - get lists of tif files using (glob lists using wildcards) - red and near-infrared band tif files needed to calculate NDVI - the qa layer needed to create cloud mask - create 'qa_pixel from qa band and crop shape* open 'ndvi_bands' (red and near infrared bands), to process ndvi for scene (use valid range of 0-10000)* calculate NDVI using n_diff = (b1 - b2) / (b1 + b2)* mask the NDVI using the 'cloud_mask'* append 'site', 'date', and 'mean_ndvi' to list (mean_ndvi = ndvi.mean().values)* Convert list to dataframe - column names for the final DataFrame should be 'mean_ndvi' and 'site' - convert date to datetime, and use as index - create plot showing mean ndvi for each site over time (include legend)* Export/Output the final mean_ndvi_df to a csv file* save to 'outputs' folder (create that folder if it doesn't already exist)
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
# YOUR CODE HERE
# Import necessary packages
import os
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
import geopandas as gpd
import pandas as pd
import rioxarray as rxr
import xarray as xr
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.spatial as es
import earthpy.plot as ep
import earthpy.mask as em
# Get data and set working directory
data = et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME,
"earth-analytics",
"data"))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points) The two functions I've created for this assignment are:1. **'process_single_scene'** - this function will collect four variables needed from each scene to populate the final dataframe with 'site', 'date' and 'mean_ndvi'. 1. site name 2. data acquisition date 3. cloud mask layer from scene's 'qa' file 4. list of red and near-infrared bands This function will open the red and near-infrared bands needed for the NDVI calculation, clipping to a crop shape that is identified earlier in the function. It will also remove any values outside of a given valid range of values (if that valid range is provided to the function). The actual NDVI calculation occurs outside the 'process_single_scene' function. 2. **'calculate_masked_ndvi'** - this function will calculate the NDVI from two bands provided in the 'ndvi_list', and then mask the NDVI values from a cloud mask layer generated from the 'pixel_qa' variable provided by the 'process_single_scene' function and a 'cloud_vals' list that can be maintained inside the function.
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# YOUR CODE HERE
# function to process single landsat scene in directory
def process_single_scene(path_to_scene_dir, valid_range):
"""Get four variables needed from each landsat scene to create NDVI df.
Can be used when Site and Scene directories are organized in a standard
format, where the Site directory has a 'vector' directory that stores the
crop shape, and a 'landsat-crop' directory that stores the tif files.
Parameters
----------
path_to_scene_dir : str
this string is a filepath to a directory containing tif files from a
single landsat scene.
    valid_range : tuple
        A (min, max) range of valid values for a Landsat tif file; if given
        (not None), each band is cleaned to the range provided.
Returns
-------
site : str
NEON site name - four letter abbreviation
date : str
date the Landsat satellite collected the data
pixel_qa : xarray.core.dataarray
landsat pixel_qa layer
ndvi_bands : list
list of two cropped bands to be used for NDVI calculation
"""
# get the site name ('site') from site directory name
path_to_site_dir = os.path.dirname(os.path.dirname(
os.path.normpath(path_to_scene_dir)))
site = os.path.basename(os.path.normpath(path_to_site_dir))
# get the data acquisition date ('date') from scene directory name
date = os.path.basename(os.path.normpath(path_to_scene_dir))[10:18]
# get shapefile needed for 'crop_shape'
crop_shape_path = glob(os.path.join(path_to_site_dir, 'vector', '*.shp'))
crop_shape = gpd.read_file(os.path.normpath(crop_shape_path[0]))
# create cloud mask layer from qa band
# open the qa band needed for the cloud mask, clip to crop_shape
pixel_qa_band_path = glob(os.path.join(path_to_scene_dir, "*qa.tif"))[0]
pixel_qa = rxr.open_rasterio(
pixel_qa_band_path, masked=True).rio.clip(
crop_shape.geometry, from_disk=True).squeeze()
# get 'ndvi_bands' list; info needed to calculate NDVI at later step
# open the red and infrared bands and clip to crop_shape
ndvi_bands_path = sorted(
glob(os.path.join(path_to_scene_dir, "*band[4-5]*")))
ndvi_bands = []
for band_path in ndvi_bands_path:
band = rxr.open_rasterio(band_path, masked=True).rio.clip(
crop_shape.geometry, from_disk=True).squeeze()
# mask values to valid range (if valid_range is given)
        if valid_range:
            mask = ((band < valid_range[0]) | (band > valid_range[1]))
            band = band.where(~xr.where(mask, True, False))
        ndvi_bands.append(band)
# function returns four variables
return(site, date, pixel_qa, ndvi_bands)
# function to calculate NDVI, with data removed when it is hidden by clouds
def calculate_masked_ndvi(ndvi_bands, pixel_qa):
"""calculates NDVI; removes data that is hidden by clouds
Calculates normalized difference from two arrays of same shape. Math will
be calculated (b1-b2) / (b1+b2). Also removes values that are obscured by
clouds, applying the pixel_qa layer when it's values are found in the
'cloud_vals' list maintained inside this function.
Parameters
----------
ndvi_bands : list
List of two numpy arrays of same shape
pixel_qa : xarray.core.dataarray
landsat pixel_qa layer
Returns
----------
ndvi_masked : numpy array
The element-wise result of (b1-b2) / (b1+b2) calculation after the
cloud mask is applied
"""
ndvi = (ndvi_bands[1]-ndvi_bands[0]) / (ndvi_bands[1]+ndvi_bands[0])
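    # Landsat 8 pixel_qa values for cloud shadow, cloud, and high cloud
    # confidence (list maintained manually inside this function)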
    cloud_vals = [328, 392, 840, 904, 1350, 352, 368, 416, 432, 480, 864,
                  880, 928, 944, 992]
ndvi_masked = ndvi.where(~pixel_qa.isin(cloud_vals))
return(ndvi_masked)
###Output
_____no_output_____
###Markdown
**The process for creating a dataframe showing columns for 'site', 'mean_ndvi' and indexed by date is tested using a filepath to a single landsat scene.**
###Code
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# YOUR CODE HERE
# In the cell below, create a single dataframe containing MEAN NDVI, the site
# name, and the date of the data for the HARV site scene
# HARV/landsat-crop/LC080130302017031701T1-SC20181023151837.
# The column names for the final DataFrame should bemean_ndvi, and site, and
# the data should be indexed on the date.
# create empty list
scene_ndvi_list=[]
# create filepath to single landsat scene directory
harv_scene_path = os.path.join('ndvi-automation', 'sites', 'HARV',
'landsat-crop',
'LC080130302017031701T1-SC20181023151837')
# use 'process_single_scene' function to return needed variables
site_loc, acq_date, pixel_qa, ndvi_bands = process_single_scene(
    harv_scene_path, (0, 10000))
# use 'calculate_masked_ndvi' function to get NDVI values
masked_ndvi = calculate_masked_ndvi(ndvi_bands, pixel_qa)
# append scene variables to list
scene_ndvi_list.append([acq_date, site_loc, masked_ndvi.mean().values])
# convert scene_ndvi_list to dataframe with date as datetime index
scene_ndvi_df = pd.DataFrame(data=scene_ndvi_list,
columns=['date', 'site', 'mean_ndvi'])
scene_ndvi_df['date'] = pd.to_datetime(scene_ndvi_df['date'], yearfirst=True,
format='%Y-%m-%d')
scene_ndvi_df = scene_ndvi_df.set_index('date')
scene_ndvi_df
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`). **The process used for creating a dataframe of a single landsat scene is expanded with a loop, to collect info on all landsat scenes in a directory.**
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# YOUR CODE HERE
# Create glob list of all scene folders at given file location
scene_dirs = glob(os.path.join('ndvi-automation', 'sites', '*',
'landsat-crop', '*'))
# Create empty list to store info about each landsat scene.
full_ndvi_list = []
# Loop through each landsat scene folder
for scene in scene_dirs:
# use 'process_single_scene' function to get the site name, data
# acquisition date, pixel_qa & ndvi_bands list
    site_loc, acq_date, pixel_qa, ndvi_bands = process_single_scene(
        scene, (0, 10000))
    # use 'calculate_masked_ndvi' function to get ndvi with the cloud mask
masked_ndvi = calculate_masked_ndvi(ndvi_bands, pixel_qa)
# append 'site', 'date', and 'mean_ndvi' to list
full_ndvi_list.append([acq_date, site_loc, masked_ndvi.mean().values])
# convert list to dataframe with date as datetime index
full_ndvi_df = pd.DataFrame(data=full_ndvi_list,
columns=['date', 'site', 'mean_ndvi'])
full_ndvi_df['date'] = pd.to_datetime(full_ndvi_df['date'], yearfirst=True,
format='%Y-%m-%d')
full_ndvi_df = full_ndvi_df.set_index('date')
# call dataframe at the end of the cell for autograding
full_ndvi_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
###Output
✅ Your data is stored in a DataFrame!
❌ The amount of null data in your dataframe is incorrect.
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
Your total run time for processing the data was 0:00:10.489686.
➡ You received 8 out of 10 points for creating a dataframe.
###Markdown
**NOTE:** The cell below removes the 'nan' values from 'full_ndvi_df', keeping only the values needed to plot a continuous line for each NEON site. Doing this removes 15 'nan' rows from the dataframe. The sanity check above, which expects exactly 15 nan values (`student_ndvi_df.mean_ndvi.isna().sum() == 15`), still fails even though 15 nan rows are being removed here. *(Thanks to Lana, and her discussion post on this!)*
###Code
# clean 'full_ndvi_df' for plot, removing 'nan' values by keeping only the
# rows with mean_ndvi > 0
ndvi_for_plot = full_ndvi_df[full_ndvi_df['mean_ndvi'] > 0]
print(ndvi_for_plot)
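# A minimal alternative sketch (assumes 'full_ndvi_df' from above): dropping
# rows whose mean_ndvi is missing also gives a continuous line to plot, while
# keeping any valid values that happen to be <= 0. 'ndvi_for_plot_alt' is a
# hypothetical name used only for this illustration.
ndvi_for_plot_alt = full_ndvi_df.dropna(subset=['mean_ndvi'])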
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# YOUR CODE HERE
# create plot showing mean ndvi for each site over time (include legend)
colors = {"HARV": "firebrick",
"SJER": "teal"}
fig, ax = plt.subplots()
for label, df in ndvi_for_plot.groupby("site"):
# print(df)
ax.plot(df.index,
df.mean_ndvi,
label=label,
color=colors[label])
plt.legend()
plt.setp(ax.get_xticklabels(), rotation=90)
ax.set(ylabel="Mean NDVI",
xlabel="Date",
title="Mean Nornalized Differential Vegetation Index (NDVI)\n"
"Jan - Dec 2017\nLandsat 8 with Clouds Removed")
# plt.show()
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below Eric Nutt --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
import os
from glob import glob
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import pandas as pd
import seaborn as sns
import geopandas as gpd
import numpy as np
import xarray as xr
import rioxarray as rxr
import earthpy as et
import earthpy.spatial as es
import earthpy.plot as ep
import earthpy.mask as em
# Prettier plotting with seaborn
sns.set_style('white')
sns.set(font_scale=1.5)
# Download data and set working directory
data = et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME,
'earth-analytics',
'data'))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
###Output
_____no_output_____
###Markdown
Necessary Functions Defined: Below, the function `open_clean_bands` is defined for opening and cleaning individual landsat bands. A second function, `mask_crop_ndvi`, is defined for masking, cropping, and calculating a mean NDVI value from the corresponding xarray objects.
###Code
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# Define function to open and clean a single landsat band
def open_clean_bands(band_path,
valid_range=None):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
band_path : string
A path to the array to be opened
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
arr : xarray DataArray
An xarray DataArray with values that should be masked set to 1 for True (Boolean)
"""
    # Open a single band with rioxarray and clip it to the site boundary
    # (crop_bounds is a GeoDataFrame defined in the processing loops below)
    band = (rxr.open_rasterio(band_path, masked=True)
            .rio.clip(crop_bounds.geometry, from_disk=True)
            .squeeze())
    # Mask values that fall outside of the provided valid range
    if valid_range:
        mask = ((band <= valid_range[0]) | (band > valid_range[1]))
        band = band.where(~mask, np.nan)
    return band
# Define function to open and mask single landsat band
def mask_crop_ndvi(all_bands,
crop_bound,
pixel_qa_path,
vals):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
all_bands : list
a list containing the xarray objects for landsat bands 4 and 5
crop_bound: geopandas GeoDataFrame
A geopandas dataframe to be used to crop the raster data using rasterio mask().
pixel_qa: xarray DataArray
An xarray DataArray with pixel qa values that have not yet been turned into a mask (0s and 1s)
vals: list
A list of values needed to create the cloud mask
Returns
-----------
ndvi_mean : Xarray object
an xarray object containing NDVI mean values
"""
    # Open and clean both bands using a loop
    bands = []
    for aband in all_bands:
        cleaned_band = open_clean_bands(band_path=aband,
                                        valid_range=(0, 10000))
        bands.append(cleaned_band)
    # Open and clip the pixel_qa (cloud mask) layer
    cl_mask = (rxr.open_rasterio(pixel_qa_path, masked=True).squeeze()
               .rio.clip(crop_bound.geometry, from_disk=True)
               .squeeze())
    # Calculate NDVI: (band 5 - band 4) / (band 5 + band 4)
    ndvi_xr = (bands[1] - bands[0]) / (bands[1] + bands[0])
    # Apply the cloud mask to NDVI using the provided pixel_qa values
    ndvi_crop = ndvi_xr.where(~cl_mask.isin(vals))
    # Return the mean NDVI value for the cropped, cloud-masked scene
    mean_ndvi = ndvi_crop.mean(skipna=True).item()
    return mean_ndvi
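# A tiny, self-contained sketch (synthetic values only) of the same NDVI and
# cloud-mask logic the functions above apply to real landsat data. The qa
# value 352 appears in the 'all_masked_values' cloud list used later in this
# notebook; every other number here is made up purely for illustration.
demo_red = xr.DataArray(np.array([[2000.0, 2500.0], [1800.0, 2200.0]]))
demo_nir = xr.DataArray(np.array([[4000.0, 4500.0], [3600.0, 5000.0]]))
demo_qa = xr.DataArray(np.array([[322, 352], [322, 322]]))
demo_ndvi = (demo_nir - demo_red) / (demo_nir + demo_red)
# Keep only the pixels whose qa value is NOT flagged as cloud / cloud shadow
demo_ndvi_clear = demo_ndvi.where(~demo_qa.isin([352]))
# The masked pixel is skipped when taking the mean, as in the workflow above
demo_mean_ndvi = demo_ndvi_clear.mean(skipna=True).item()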
###Output
_____no_output_____
###Markdown
Loop Code to Produce Dataframe: Below is the for-loop code that generates a dataframe containing the site name, date, and mean NDVI value for the HARV site. The loops use the functions defined above to process the xarray objects and obtain mean NDVI values.
###Code
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
# Define the directory name
landsat_dir = "landsat-crop"
# Get a list of each directory
path = os.path.join("ndvi-automation", "sites")
# Get a list of both site directories (We will talk more about automation next week)
sites = glob(path + "/*/")
# Create an empty list
ndvi_list = []
# Loop through each site directory
for site_files in sites:
#print("I am looping through", site_files)
asite = os.path.split(os.path.normpath(site_files))
# Get site_name
site_name = os.path.basename(os.path.normpath(asite[1]))
# Open up the shapefile for clipping your landsat data to the study area
vector_dir = os.path.join(site_files, "vector")
# print(vector_dir)
# Open crop boundary
site_boundary_path = os.path.join(vector_dir, site_name + "-crop.shp")
# print(site_boundary_path)
crop_bound = gpd.read_file(site_boundary_path)
# print(crop_bound)
# Get a list of subdirectories for that site
new_path = os.path.join(site_files, landsat_dir)
all_dirs = glob(new_path + "/*/")
all_dirs.sort()
# Loop through each subdirectory where your data are stored
for adir in all_dirs:
#print("now processing", adir)
dir_name = os.path.basename(os.path.normpath(adir))
# print(dir_name)
date = dir_name[10:18]
all_bands_path = glob(os.path.join(adir, "*band*[4-5].tif"))
all_bands_path.sort()
# print(all_bands_path)
pixel_qa_path = glob(os.path.join(adir, "*qa*"))[0]
# print(pixel_qa_path)
crop_bounds = crop_bound
# print(crop_bounds)
all_masked_values = [328, 392, 840, 904, 1350, 352, 368, 416,
432, 480, 864, 880, 928, 944, 992, 480, 992]
# Calculate NDVI
ndvi_value = mask_crop_ndvi(all_bands=all_bands_path,
crop_bound=crop_bounds,
pixel_qa_path=pixel_qa_path,
vals=all_masked_values)
# Append columns
ndvi_list.append([site_name, date, ndvi_value])
ndvi_list
# Create final dataframe and rename columns
ndvi_final_df = pd.DataFrame(ndvi_list,
columns=["site", "date", "mean_ndvi"])
ndvi_final_df['date'] = pd.to_datetime(ndvi_final_df['date'])
ndvi_df = ndvi_final_df.set_index('date')
# Remove SJER site values
harv_ndvi_df = ndvi_df[ndvi_df['site'] != 'SJER']
# Remove ndvi no-data values (NaN)
harv_ndvi_clean = harv_ndvi_df[harv_ndvi_df['mean_ndvi'] > 0]
harv_ndvi_clean
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`). For loop code for both sitesThe code below is similar to the code above, but generates a dataframe with mean ndvi values for both the SJER and HARV sites. The subsequent dataframe is then used to generate a plot of the data for analysis.
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# Define the directory name
landsat_dir = "landsat-crop"
# Get a list of each directory
path = os.path.join("ndvi-automation", "sites")
# Get a list of both site directories (We will talk more about automation next week)
sites = glob(path + "/*/")
# Create an empty list
ndvi_list = []
# Loop through each site directory
for site_files in sites:
#print("I am looping through", site_files)
asite = os.path.split(os.path.normpath(site_files))
# Get site_name
site_name = os.path.basename(os.path.normpath(asite[1]))
# Open up the shapefile for clipping your landsat data to the study area
vector_dir = os.path.join(site_files, "vector")
# print(vector_dir)
# Open crop boundary
site_boundary_path = os.path.join(vector_dir, site_name + "-crop.shp")
# print(site_boundary_path)
crop_bound = gpd.read_file(site_boundary_path)
# print(crop_bound)
# Get a list of subdirectories for that site
new_path = os.path.join(site_files, landsat_dir)
all_dirs = glob(new_path + "/*/")
all_dirs.sort()
# Loop through each subdirectory where your data are stored
for adir in all_dirs:
#print("now processing", adir)
dir_name = os.path.basename(os.path.normpath(adir))
# print(dir_name)
date = dir_name[10:18]
all_bands_path = glob(os.path.join(adir, "*band*[4-5].tif"))
all_bands_path.sort()
# print(all_bands_path)
pixel_qa_path = glob(os.path.join(adir, "*qa*"))[0]
# print(pixel_qa_path)
crop_bounds = crop_bound
# print(crop_bounds)
# all_masked_values = [328, 392, 840, 904, 1350, 352, 368, 416,
# 432, 480, 864, 880, 928, 944, 992, 480, 992]
# print(all_masked_values)
# Calculate NDVI
ndvi_value = mask_crop_ndvi(all_bands=all_bands_path,
crop_bound=crop_bounds,
pixel_qa_path=pixel_qa_path,
vals=all_masked_values)
# Append columns
ndvi_list.append([site_name, date, ndvi_value])
ndvi_list
# Create final dataframe and rename columns
ndvi_final_df = pd.DataFrame(ndvi_list,
columns=["site", "date", "mean_ndvi"])
ndvi_final_df['date'] = pd.to_datetime(ndvi_final_df['date'])
ndvi_df = ndvi_final_df.set_index('date')
ndvi_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Code to export ndvi dataframe to a csv file
ndvi_df.to_csv('harv_sjer_mean_ndvi.csv')
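# A hedged alternative sketch (not required): the assignment hints suggest
# keeping outputs separate from inputs, so the same dataframe could instead
# be written into an 'outputs' folder, created here if it does not exist.
ndvi_outputs_dir = os.path.join('ndvi-automation', 'outputs')
os.makedirs(ndvi_outputs_dir, exist_ok=True)
ndvi_df.to_csv(os.path.join(ndvi_outputs_dir, 'harv_sjer_mean_ndvi.csv'))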
###Output
_____no_output_____
###Markdown
Plot Code for Figure 1: The for-loop code below generates a line plot depicting the mean NDVI values from Jan 2017 to Dec 2017 for the HARV and SJER sites.
###Code
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
fig, ax = plt.subplots(figsize=(15, 10))
fig.suptitle('Mean Normalized Difference Vegetation Index (NDVI)\nJan 2017 - Dec 2017\nLandsat 8 With Clouds Removed',
fontsize=20, fontweight='bold')
for s, df in ndvi_df.dropna().groupby('site'):
ax.plot(df['mean_ndvi'], 'o-', label=s)
ax.legend(['HARV','SJER'])
ax.set(xlabel="Month",
       ylabel="Mean NDVI")
# Define date format
date_form = DateFormatter("%b")
ax.xaxis.set_major_formatter(date_form)
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name:** Rachel Michaels --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too 1. Import required packages2. Download the data3. Set the working directory.4. Create paths to sites5. Create cloud mask – get the cloud pixel values from earthpy6. Create a function to extract site name and datetime from directory path names, using the path to the directory that contains the information of interest and the date and site name location within that directory path as index lists as the function parameters7. Create a function that will open, crop and specify valid ranges of a landsat band, using the path to the band, the cropping extent, and the valid range as function parameters8. Create dataframe of mean NDVI a. Create an empty list that will hold site, date, and mean NDVI information b. Create a for loop to loop through site paths i. Get list of scene paths of both sites using glob ii. Get shapefiles for each site using glob and pulling out index 0 iii. Open shapefiles iv. Create a nested for loop to loop through each scene 1. Go through each scene directory and pull out date and site information using the function created earlier in the notebook 2. Go through each scene and create sorted list of bands in each scene using glob. Only bands 4 and 5 are needed for calculating NDVI 3. Go through each scene and get qa pixel layers using glob and pulling out index 0. This will pop out each qa pixel layer as the loop loops through each scene so that it's not in list form and can be worked with 4. Open the qa layer 5. Crop the qa layer using the shapefile opened in the first layer of the loop 6. Create an empty list that will hold bands 4 and 5 once they are cleaned and free of clouds 7. Create another for loop inside the already nested loop a. Clean the bands using the previously created function that will open the band, crop it using its associate shapefile, and specify landsat's valid range b. Apply cloud mask to band c. Append list so that it holds the cloud free bands. This list will be used to calculate mean NDVI 8. Calculate mean NDVI 9. Append the mean NDVI to the list holding the site information (the function that pulled site and date information from scene directory paths created a list as the output) 10. Append this list of lists to the empty list created outside the for loop at the top9. Convert list into a pandas dataframe10. 
Set index on date
11. Create figure
    a. Set figure space
    b. Create overall figure title
    c. Create a for loop to loop through the dataframe and create individual dataframes grouped by site for plotting
    d. Set axes labels
    e. Format the date on the x axis
    f. Create a legend
12. Drop NA values from the dataframe for exporting
13. Export the pandas dataframe to a .csv file
14. Create a figure that displays mean NDVI at the HARV and SJER locations over a year, with mean NDVI on the y-axis and the month on the x-axis, using the pandas dataframe created in the previous step.
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import matplotlib.pyplot as plt
import pandas as pd
import rioxarray as rxr
import xarray as xr
import geopandas as gpd
import earthpy as et
import earthpy.mask as em
import numpy as np
from matplotlib.dates import DateFormatter
# Download the data
et.data.get_data('ndvi-automation')
# Create a path to the directory
directory_path = os.path.join(et.io.HOME, "earth-analytics", "data")
# Set working directory
os.chdir(directory_path)
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
# Create paths to sites
site_paths = glob(os.path.join("ndvi-automation", "sites", "*"))
site_paths
# Create cloud mask
# Get the cloud pixel values from earthpy
high_cloud_confidence = (
em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
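# A quick illustrative check (synthetic qa values only): 352 and 480 are in
# the earthpy cloud lists combined above, while 322 is an arbitrary non-cloud
# value, so the expected result given those lists is [False, True, True].
demo_qa_vals = np.array([322, 352, 480])
demo_is_cloud = np.isin(demo_qa_vals, all_masked_values)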
###Output
_____no_output_____
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
###Output
_____no_output_____
###Markdown
Create functions to extract site name and datetime from directory path names and open, crop and specify valid ranges of a landsat band.
###Code
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# Function to extract sitename and datetime from directory path names
def extract_sitename_date(directory_path,
sitename_location,
datetime_location):
"""Extract sitename and datetime from directory path name.
Parameters
-----------
directory_path : string
A path to the directory name
sitename_location : index list
List of first and last index of the site name
datetime_location : index list
List of first and last index of the date
Returns
-----------
list : list of site names and datetime information
"""
# Create an empty list to append sitename and date information
site_name_date_list = []
# Assign datetime location to an object
date_location = directory_path[datetime_location[0]:
datetime_location[1]]
# Specify datetime format
format = "%Y%m%d"
    # Use datetime and the format string to create the date variable
date = datetime.strptime(date_location, format)
# Assign sitename information to a variable
site = directory_path[sitename_location[0]: sitename_location[1]]
# Append site variable to list
site_name_date_list.append(site)
# Append date variable to list
site_name_date_list.append(date)
return site_name_date_list
# Function to clean landsat bands
def open_clean_bands(band_path,
crop_extent,
valid_range=None):
"""Open, crop and specify valid ranges of a landsat band.
Parameters
-----------
band_path : string
A path to the array to be opened
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
arr : xarray DataArray
An xarray DataArray with values that should be masked set to 1 for True (Boolean)
"""
# TODO add tests to ensure the arrays are the same .shape
band = rxr.open_rasterio(band_path, masked=True).rio.clip(crop_extent.geometry,
from_disk=True).squeeze()
# Only run this step if a valid range tuple is provided
if valid_range:
mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~xr.where(mask, True, False))
return band
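# A minimal usage sketch of 'extract_sitename_date' (illustrative only; the
# scene name below is one of the real HARV scene folders and the index lists
# match the calls made later in this notebook):
demo_scene_path = os.path.join("ndvi-automation", "sites", "HARV",
                               "landsat-crop",
                               "LC080130302017031701T1-SC20181023151837")
# Characters 22-25 hold the site name and characters 50-57 hold YYYYMMDD
demo_site, demo_date = extract_sitename_date(demo_scene_path,
                                             [22, 26], [50, 58])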
# Create dataframe of mean NDVI in this cell using the functions created above
# Create path to HARV data
harv_path = os.path.join("ndvi-automation", "sites", "HARV")
# Open and clean all HARV bands
harv_scene_info = []
# Establish the scene directory path that is of interest
scene_path = sorted(glob(os.path.join(harv_path, "landsat-crop",
"LC080130302017031701T1-SC20181023151837")))
# Set the path to the associated shapefile
bound = os.path.join(harv_path, "vector", "HARV-crop.shp")
# Open the shapefile
harv_boundary = gpd.read_file(bound)
# Create a nested for loop to be able to work with each .tif file (band)
# in the scene, again this is necessary when working with multiple scenes
for tif in scene_path:
# Get site and date info from the scene directory path
site_info = extract_sitename_date(tif, [22, 26], [50, 58])
# Grab bands 4 and 5 (these are the bands needed for calculating NDVI)
harv_bands = sorted(glob(os.path.join(tif, "*band[4-5]*")))
# Set the path to the qa layer in the scene directory
qa_layer_path = os.path.join(tif,
"LC08_L1TP_013030_20170317_20170328_01_T1_pixel_qa.tif")
# Open the qa layer
opened_layer = rxr.open_rasterio(qa_layer_path, masked=True)
# Crop the qa layer using the boundary associated with the scene and
# opened in a previous step
cropped_layer = opened_layer.rio.clip(harv_boundary.geometry).squeeze()
# Create an empty list to store bands after they are cleaned of clouds
tif_bands = []
# Create an additional loop that is nested inside the other two that will
# be used to work with each band inside the scene directory
for a_band in harv_bands:
# Clean the band using the previously created function
# The function opens, crops, and sets landsat's valid range
clean_band = open_clean_bands(
a_band, harv_boundary, valid_range=(0, 10000))
# Apply the cloud mask to the clean band
cloud_free_band = clean_band.where(
~cropped_layer.isin(all_masked_values))
        # Append the band to the list that will be used to calculate mean NDVI
tif_bands.append(cloud_free_band)
# Calculate mean NDVI using the list that is storing the clean bands
# that are free of clouds
mean_ndvi = np.nanmean(
(tif_bands[1]-tif_bands[0]) / (tif_bands[1]+tif_bands[0]))
# Append the mean NDVI to the list that was the result of the function
# that grabbed site and date information from the scene directory path name
site_info.append(mean_ndvi)
# Append this lists of lists to the list outside of the nested for
# loops at the top
harv_scene_info.append(site_info)
# Convert list into a pandas dataframe
harv_info_df = pd.DataFrame(harv_scene_info, columns=[
"site", "date", "mean_ndvi"])
# Set index
harv_date_as_index = harv_info_df.set_index("date")
# Call dataframe
harv_date_as_index
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2:In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene| | index | site | mean_ndvi | |---|---|---|---|| Date | | | || 2017-01-07 | 0 | SJER | .4 | Be sure to call your dataframe at the end of the cell to ensure autograding works.HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`). 1. Create dataframe of mean NDVI a. Create an empty list that will hold site, date, and mean NDVI information b. Create a for loop to loop through site paths i. Get list of scene paths of both sites using glob ii. Get shapefiles for each site using glob and pulling out index 0 iii. Open shapefiles iv. Create a nested for loop to loop through each scene 1. Go through each scene directory and pull out date and site information using the function created earlier in the notebook 2. Go through each scene and create sorted list of bands in each scene using glob. Only bands 4 and 5 are needed for calculating NDVI 3. Go through each scene and get qa pixel layers using glob and pulling out index 0. This will pop out each qa pixel layer as the loop loops through each scene so that it's not in list form and can be worked with 4. Open the qa layer 5. Crop the qa layer using the shapefile opened in the first layer of the loop 6. Create an empty list that will hold bands 4 and 5 once they are cleaned and free of clouds 7. Create another for loop inside the already nested loop a. Clean the bands using the previously created function that will open the band, crop it using its associate shapefile, and specify landsat's valid range b. Apply cloud mask to band c. Append list so that it holds the cloud free bands. This list will be used to calculate mean NDVI 8. Calculate mean NDVI 9. Append the mean NDVI to the list holding the site information (the function that pulled site and date information from scene directory paths created a list as the output) 10. Append this list of lists to the empty list created outside the for loop at the top The below cell runs quickly and efficiently by using loops and functions to process data, which minimize repetition.
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Create an empty list that will hold site, date, and mean ndvi information
all_site_info = []
# Create a for loop to loop through site paths
for site in site_paths:
# Get list of scene paths of both sites using glob
dirs = glob(os.path.join(site, "landsat-crop", "*"))
# Get shapefiles for each site using glob and pulling out index 0
bounds = glob(os.path.join(site, "vector", "*-crop.shp"))[0]
# Open shapefiles
opened_bound = gpd.read_file(bounds)
# Create a nested for loop to loop through each scene
for all_dirs in dirs:
# Go through each scene directory and pull out date and site
# information using the function created earlier in the notebook
site_info = extract_sitename_date(all_dirs, [22, 26], [50, 58])
# Go through each scene and create sorted list of bands in each scene
# using glob. Only bands 4 and 5 are needed for calculating NDVI
scene_bands = sorted(glob(os.path.join(all_dirs, "*band[4-5]*")))
# Go through each scene and get qa pixel layers using glob and pulling
# out index 0. This will pop out each qa pixel layer as the loop loops
# through each scene so that it's not in list form and can be worked with
qa_layer_paths = glob(os.path.join(all_dirs, "*pixel_qa*"))[0]
# Open the qa layer
opened_layer = rxr.open_rasterio(qa_layer_paths, masked=True)
# Crop the qa layer using the shapefile opened in the first layer of
# the loop
cropped_layer = opened_layer.rio.clip(opened_bound.geometry).squeeze()
# Create an empty list that will hold bands 4 and 5 once they are
# cleaned and free of clouds
site_bands = []
# Create another for loop inside the already nested loop
for band in scene_bands:
# Clean the bands using the previously created function that will
# open the band, crop it using its associate shapefile, and specify
# landsat's valid range
clean_band = open_clean_bands(
band, opened_bound, valid_range=(0, 10000))
# Apply cloud mask to band
cloud_free_band = clean_band.where(
~cropped_layer.isin(all_masked_values))
# Append list so that it holds the cloud free bands. This list will
# be used to calculate mean NDVI
site_bands.append(cloud_free_band)
# Calculate mean NDVI
mean_ndvi = np.nanmean(
(site_bands[1]-site_bands[0]) / (site_bands[1]+site_bands[0]))
# Append the mean NDVI to the list holding the site information (the
# function that pulled site and date information from scene directory
# paths created a list as the output)
site_info.append(mean_ndvi)
# Append this list of lists to the empty list created outside the for
# loop at the top
all_site_info.append(site_info)
# Convert list into a pandas dataframe
site_info_df = pd.DataFrame(all_site_info, columns=[
"site", "date", "mean_ndvi"])
# Set index on date
indexed_site_info_df = site_info_df.set_index("date")
# Call dataframe
indexed_site_info_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points += 2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points += 2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points += 3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points += 3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
###Output
✅ Your data is stored in a DataFrame!
✅ Correct number of masked data values!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
Your total run time for processing the data was 0:00:24.302815.
➡ You received 10 out of 10 points for creating a dataframe.
###Markdown
Create a figure that displays mean NDVI at the HARV and SJER locations over a year, with mean NDVI on the y-axis and the month on the x-axis using the pandas dataframe created above.
###Code
# Add only the plot code to this cell
# Set figure space
fig, ax = plt.subplots(figsize=(12, 7))
# Create overall figure title
fig.suptitle(
"Mean Normalized Difference Vegetaion Index (NDVI) \nJan 2017 - Dec 2017 \nLandsat 8 with Clouds Removed")
# Create a for loop to loop through dataframe and create individual dataframes
# grouped by site for plotting
for site, site_name_df in indexed_site_info_df.dropna().groupby("site"):
ax.plot(site_name_df.index, site_name_df.mean_ndvi, marker="o", label=site)
# Set axes labels
ax.set(xlabel="Month",
ylabel="Mean NDVI")
# Format date on x axis
ax.xaxis.set_major_formatter(DateFormatter("%b"))
# Create a legend
ax.legend()
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. I would recommend that the flights take place in April for the SJER site. I would recommend that HARV flights take place in July. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. I could possibly create NDVI difference maps to examine changes between time points (months, years, etc.). Due to the way my code is set up, I could also continue to add data to the HARV and SJER directories as it becomes available and run this same code to continue to monitor changes. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
# Drop na values from dataframe for exporting
no_nan_df = indexed_site_info_df.dropna()
# Export pandas dataframe to csv file
# Reproducible output: create the outputs directory if needed, then export
ndvi_outputs_dir = os.path.join(directory_path, "ndvi-automation", "outputs")
os.makedirs(ndvi_outputs_dir, exist_ok=True)
no_nan_df.to_csv(os.path.join(ndvi_outputs_dir, "ndvi_df.csv"))
# Export to my local repository
# no_nan_df.to_csv(os.path.join(et.io.HOME, "earth-analytics",
# "2022_spring",
# "assignments",
# "04_assignment",
# "ea-2022-04-ndvi-automation-rami8797",
# "ndvi_df.csv"))
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below Leah Manak --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Pseudocode for assignment workflow- Import libraries that will be used in this notebook- Make sure unused libraries are not in the list- Get the home directory using "et.data.get_data('ndvi-automation')"- Create data paths to our data from 'ndvi-automation' downloaded from earthpy- Create cloud mask and get cloud pixel values from earthpy- Create two functions: 1. extract the sitename and datetime from pathnames 2. open, crop, and identify ranges of a landsat band- Clip extent of bands to match study sites boundaries- Clip extent of qa layers to match study sites boundaries- Calculate NDVI - Add a cloud mask to the NDVI values- Calculate Mean NDVI, get site names and the dates, and create a list of lists- Make a pandas dataframe including the site names, mean NDVIs, and dates - Make sure the dataframe index is in datetime and that the date column is the index- Make sure the NA data is not included- Plot the data with date on the x-axis and mean NDVI values on the y axis- Download the data to a CSV file
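For reference, the NDVI step in the plan above is a simple band ratio; a minimal sketch (the `red` and `nir` names are placeholders for already-opened, cropped, and cloud-masked Landsat band 4 and band 5 arrays):
```
# Minimal NDVI sketch; `red` and `nir` are assumed to be xarray DataArrays
# for Landsat 8 bands 4 and 5, already cropped and cloud-masked.
ndvi = (nir - red) / (nir + red)

# One summary value per scene; NaN (masked) pixels are ignored.
mean_ndvi = float(ndvi.mean(skipna=True))
```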
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import matplotlib.pyplot as plt
import pandas as pd
import rioxarray as rxr
import xarray as xr
import geopandas as gpd
import earthpy as et
import earthpy.mask as em
from datetime import datetime
import numpy as np
from matplotlib.dates import DateFormatter
# Download the data
et.data.get_data('ndvi-automation')
# Create a path to the directory
directory_path = os.path.join(et.io.HOME,
"earth-analytics",
"data")
# Set working directory
os.chdir(directory_path)
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
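For the date indexing requirement above, the key detail is converting the date strings to datetime before setting them as the index; a minimal sketch with a single hypothetical row:
```
import pandas as pd

# One hypothetical [site, date string, mean NDVI] row for illustration.
rows = [["HARV", "20170317", 0.28]]

df = pd.DataFrame(rows, columns=["site", "date", "mean_ndvi"])
df["date"] = pd.to_datetime(df["date"], format="%Y%m%d")
df = df.set_index("date")
```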
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
# A function to extract the datetime and sitename from the directory paths
def extract_date_sitename(directory_path,
sitename_location,
datetime_location):
"""Extract datetime and sitename from directory path names.
Parameters
-----------
directory_path : string
A path to the directory name
sitename_location : index list
Index of sitename location in directory path name
datetime_location : index list
Index of datetime location in directory path name
Returns
-----------
list : list of the datetime location and sitename information
"""
# Create an empty list to append both sitename and date information
sitename_date = []
# Assign datetime location to an object and specify datetime format
date_location = directory_path[datetime_location[0]: datetime_location[1]]
format = "%Y%m%d"
    # Create a date variable using the new object and the datetime format
date = datetime.strptime(date_location, format)
# Create a location variable called "site"
site = directory_path[sitename_location[0]: sitename_location[1]]
# Append site and date variables to list
sitename_date.append(site)
sitename_date.append(date)
# Return the populated sitename_date list
return sitename_date
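# Example of how this function is called below (path shown for illustration):
# for a scene directory such as
# "ndvi-automation/sites/HARV/landsat-crop/LC080130302017031701T1-SC20181023151837",
# sitename_location=[22, 26] slices out "HARV" and datetime_location=[50, 58]
# slices out "20170317", which strptime then parses into a datetime object.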
# A function to open clean landsat bands
def open_clean_bands(band_path,
crop_extent,
valid_range=None):
"""Open, crop, and identify the range of the bands.
Parameters
-----------
band_path : string
A path to the array that we will open
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
    band : xarray DataArray
        The band clipped to the crop extent, with values outside the
        valid range masked (set to NaN)
"""
    # Open the band and clip it to the crop extent so the arrays share the same shape
band = rxr.open_rasterio(
band_path, masked=True).rio.clip(
crop_extent.geometry,from_disk=True).squeeze()
# This last step is only for a valid tuple
if valid_range:
mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~xr.where(mask, True, False))
return band
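# Example usage (illustrative, not executed here): with `a_band` pointing to a
# Landsat band GeoTIFF (band 4 or 5) and `harv_boundary` holding the site
# boundary GeoDataFrame, open_clean_bands(a_band, harv_boundary,
# valid_range=(0, 10000)) returns the band clipped to the boundary with
# out-of-range values set to NaN.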
# Create dataframe of mean NDVI in this cell using the functions created above
# Create path to the two sites "SJER" and "HARV"
site_paths = glob(os.path.join("ndvi-automation",
"sites",
"*"))
site_paths
# Create cloud mask and get the cloud pixel values from earthpy
high_cloud_confidence = (
em.pixel_flags["pixel_qa"]["L8"]["High Cloud Confidence"])
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
harv_path = glob(os.path.join("ndvi-automation", "sites", "HARV"))
# Open and clean all HARV: first create empty HARV list
harv_info = []
# Create a for loop for the HARV path
for path in harv_path:
# Establish the scene directory path
scene_path = glob(os.path.join(path, "landsat-crop",
"LC080130302017031701T1-SC20181023151837"))
# Set the path to the cropped shapefile and open as the HARV boundary
bound = os.path.join(path, "vector", "HARV-crop.shp")
harv_boundary = gpd.read_file(bound)
# Create a nested for loop associated with each .tif file (band 4-5)
for tifs in scene_path:
# Get site and date info from the scene directory path
site_info = extract_date_sitename(tifs, [22, 26], [50, 58])
# Order the bands 4-5 with glob
harv_bands = sorted(glob(os.path.join(tifs, "*band[4-5]*")))
# Set the path to the qa layer in the scene directory and open it
qa_layer_path = os.path.join(tifs,
"LC08_L1TP_013030_20170317_20170328_01_T1_pixel_qa.tif")
qa_layer = rxr.open_rasterio(qa_layer_path, masked=True)
# Crop the qa layer using the harv_boundary
cropped_qa = qa_layer.rio.clip(harv_boundary.geometry).squeeze()
# New empty list for bands without cloud interference
tif_bands = []
# Create an additional loop for the bands in harv
for a_band in harv_bands:
# Clean the band using the open_clean_bands function
clean_band = open_clean_bands(
a_band, harv_boundary, valid_range=(0, 10000))
# Apply the cloud mask to the clean band
band_cloud_mask = clean_band.where(
~qa_layer.isin(all_masked_values))
# Add clean bands to empty for calculating the mean NDVI
tif_bands.append(band_cloud_mask)
# Calculate mean NDVI using tif_bands list
mean_ndvi = np.nanmean(
(tif_bands[1]-tif_bands[0]) / (tif_bands[1]+tif_bands[0]))
# Append the mean NDVI to the site_info list
site_info.append(mean_ndvi)
        # Append site_info to the "harv_info" list created before the loops
harv_info.append(site_info)
# Create a pandas dataframe with the harv_info list
harv_df = pd.DataFrame(harv_info, columns=[
"site", "date", "mean_ndvi"])
# Set index from the date
harv_final = harv_df.set_index("date")
# Call dataframe
harv_final
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2: In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene:

|            | index | site | mean_ndvi |
|------------|-------|------|-----------|
| Date       |       |      |           |
| 2017-01-07 | 0     | SJER | .4        |

Be sure to call your dataframe at the end of the cell to ensure autograding works. HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# YOUR CODE HERE
# Create an empty list with site, date, and mean ndvi information
location_info = []
# make a for loop for the location paths
for site in site_paths:
# Get list of the location path with glob
locations = glob(os.path.join(site, "landsat-crop", "*"))
# grab the shapefiles from the locations and make the index 0
bounds = glob(os.path.join(site, "vector", "*-crop.shp"))[0]
# Open the shapefiles
opened_bound = gpd.read_file(bounds)
# Create a nested for loop for all locations
for all_locations in locations:
# Extract date and site info using "extract_date_sitename" function
site_info = extract_date_sitename(all_locations, [22, 26], [50, 58])
    # Create sorted list of bands 4 & 5 in each location using glob
scene_bands = sorted(glob(os.path.join(all_locations, "*band[4-5]*")))
# Extract qa pixel layers using glob and pulling out index 0.
qa_layer_paths = glob(os.path.join(all_locations, "*pixel_qa*"))[0]
# Open the qa layer
opened_layer = rxr.open_rasterio(qa_layer_paths, masked=True)
# Crop the qa layer using the 'opened_layer' shapefile
cropped_layer = opened_layer.rio.clip(opened_bound.geometry).squeeze()
# Create an empty list for cleaned bands 4 and 5
site_bands = []
# Create a for loop to clean bands 4&5
for band in scene_bands:
# Clean the bands using 'open_clean_bands' function
clean_band = open_clean_bands(
band, opened_bound, valid_range=(0, 10000))
# Apply cloud mask
cloud_free_band = clean_band.where(
~cropped_layer.isin(all_masked_values))
# Append list with the cloud free bands
site_bands.append(cloud_free_band)
# Calculate mean NDVI
mean_ndvi = np.nanmean(
(site_bands[1]-site_bands[0]) / (site_bands[1]+site_bands[0]))
# Append the mean NDVI to the empty site_info list
site_info.append(mean_ndvi)
# Append this list of lists the empty location_info list
location_info.append(site_info)
# Create a pandas dataframe
location_info_df = pd.DataFrame(location_info, columns=[
"site", "date", "mean_ndvi"])
# Set index on date
indexed_location_df = location_info_df.set_index("date")
final_NDVI_df = indexed_location_df.sort_values(by="date")
# Call dataframe
final_NDVI_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
fig, ax = plt.subplots(figsize = (12, 7))
for site, df in final_NDVI_df.dropna().groupby("site"):
if site in ["HARV"]:
site_name = 'HARV'
color = 'goldenrod'
else:
site_name = 'SJER'
color = 'purple'
ax.plot(df.index, df.mean_ndvi, label = site_name, marker = 'o',
color = color)
ax.set(title = "Mean Normalized Difference Vegetation Index (NDVI)\
for two sites (HARV & SJER) \n Mar 2017 - Dec 2017 (cloud-free data)",
xlabel = "Month", ylabel = "Mean NDVI")
ax.xaxis.set_major_formatter(DateFormatter("%b"))
ax.legend(title = "Site")
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. Based on the plot, I would recommend the flights to take place in different times for the different locations. I would say April for the SJER site, and July for the HARV site. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. Instead of comparing the two sites, I would create a separate plot for each site. Each location plot would have a line for each year as a separate color, showing how the vegetation might change each month over a span of a certain amount of years. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
# CSV needs to have no nan values... drop them with .dropna()
final_NDVI_df_csv = final_NDVI_df.dropna()
# Export pandas dataframe to csv file
final_NDVI_df_csv.to_csv(os.path.join(
directory_path,
"ndvi-automation",
"outputs",
"ndvi_df.csv"))
# Export to personal
final_NDVI_df_csv.to_csv(os.path.join(et.io.HOME,
"earth-analytics",
"earth-analytics-python-env",
"ea-2022-04-ndvi-automation-LManak",
"ndvi_df.csv"))
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name: Thomas Schoenrock-Rossiter** --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Workflow pseudocodeRun first for one month at one site - next run for all months in one site - last run for all months in both sitesPreparation - - Import packages- Set working directory to 'data'- Download data from earthpyAccess Landsat band files for single scene -- Create path to Landsat data- Create list of Landsat bands in scene (into df?)- Subset bands into just RGB, NIR- Stack bands into single raster- clip stacked raster by boundary layer- Output raster for individual monthAccess Landsat band files for all scenes in a site -- Get list of all directories- For bands in each scene directory perform the above steps- Output, into a list or df, the average ndvi and data for each sceneAccess Landsat band files for all scenes in both sites -Calculate ndvi -- Run ndvi expression on red and NIR bands- Calculate mean ndvi for site- Output ndvi df/raster- Output ndvi values to spreadsheetPlot ndvi -- Create ndvi graph
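As a small illustration of the directory-walking steps above, scene discovery can be sketched with `glob` (the folder names follow the `ndvi-automation` download layout used later in this notebook):
```
import os
from glob import glob

# One directory per site (HARV and SJER) under the earthpy download.
for site_dir in glob(os.path.join("ndvi-automation", "sites", "*")):
    # One directory per Landsat scene for that site.
    for scene_dir in glob(os.path.join(site_dir, "landsat-crop", "*")):
        # Bands 4 (red) and 5 (NIR), sorted so the red band comes first.
        band_paths = sorted(glob(os.path.join(scene_dir, "*band*[4-5].tif")))
```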
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
from glob import glob
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import patches as mpatches
from matplotlib import colors
from matplotlib.dates import DateFormatter
import seaborn as sns
import numpy as np
import pandas as pd
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
from datetime import datetime
import pyproj
pyproj.set_use_global_context(True)
import rioxarray as rxr
import rasterio
from rasterio.plot import plotting_extent
import xarray as xr
import geopandas as gpd
import earthpy as et
import earthpy.spatial as es
import earthpy.plot as ep
import earthpy.mask as em
from shapely.geometry import mapping, box
sns.set(font_scale = 1.2, style = "darkgrid")
sns.set_style("ticks")
landsat_data = et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME, 'earth-analytics', 'data'))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
###Output
_____no_output_____
###Markdown
The cell below contains the three functions that were used to streamline the code and reduce repetition. The third function combines the previous two (one opens a single band and crops it, the other takes the cropped bands and masks them using a qa layer) so that they can be called together in the subsequent loop. The loop applies this code to each landsat scene file for both sites before outputting a list.
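A minimal sketch of how the composed helper described above is meant to be used, mirroring the loop that follows (the `scene_dir`, `crop_bound`, `site_name`, and `date` names are assumed to already be defined):
```
# Illustrative call of the composed helper: it opens and cleans bands 4 and 5,
# applies the cloud mask, and returns the scene's mean NDVI value.
mean_ndvi = output_ndvi(scene_dir, crop_bound)
ndvi_list.append([site_name, date, mean_ndvi])
```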
###Code
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
#Create paths to both sites
site_path = os.path.join("ndvi-automation",
"sites")
# path to sjer and harv sites
study_sites = glob(os.path.join(site_path + "/*/"))
print(study_sites)
# Function to open a single landsat band, crop it, and mask invalid values
def open_clean_bands(band_path,
crop_extent,
valid_range=None,):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
band_path : string
A path to the array to be opened
crop_extent : vector
A path to the vector file to use as crop boundary
valid_range : tuple (optional)
A tuple of min and max range of values for the data. Default = None
Returns
-----------
    band : xarray DataArray
        The band clipped to the crop extent, with values outside the
        valid range masked (set to NaN)
"""
band = rxr.open_rasterio(band_path, masked = True).rio.clip\
(crop_extent.geometry, from_disk = True).squeeze()
# Only run this step if a valid range tuple is provided
if valid_range:
mask = ((band < valid_range[0]) | (band > valid_range[1]))
band = band.where(~xr.where(mask, True, False))
return band
# Function to apply the cloud mask and calculate mean NDVI
def mask_crop_ndvi(all_bands,
crop_bound,
pixel_qa,
masked_values):
"""Open and mask a single landsat band using a pixel_qa layer.
Parameters
-----------
all_bands : list
a list containing the xarray objects for landsat bands 4 and 5
crop_bound: geopandas GeoDataFrame
A geopandas dataframe to be used to crop the raster data using
rasterio mask().
pixel_qa: xarray DataArray
An xarray DataArray with pixel qa values that have not yet been
turned into a mask (0s and 1s)
masked_values: list
A list of cloud mask values
Returns
-----------
    mean_ndvi : numpy scalar
        The mean of the cropped and cloud-masked NDVI values
"""
crop_json = crop_bound.geometry
# Clip pixel qa cloud mask layer
cl_mask_crop = pixel_qa.rio.clip(crop_json)
# Calculate NDVI
ndvi_xr = (all_bands[1]-all_bands[0]) / (all_bands[1]+all_bands[0])
# Clip NDVI layer
ndvi_crop = ndvi_xr.rio.clip(crop_json)
# Apply cloud mask to NDVI
ndvi_crop = ndvi_crop.where(~pixel_qa.isin(masked_values))
return ndvi_crop.mean().values
# Third function, which combines the two functions above
def output_ndvi(scene_path, crop_extent):
"""Function that combines the above two functions to open, stack, crop
and mask Landsat scenes.
Parameters
-----------
scene_path : string
A path to the array to be opened
crop_extent: vector
A path to the vector file to use as crop boundary
Returns
-----------
    ndvi : numpy scalar
        The mean NDVI of the cropped and cloud-masked scene
"""
band_paths = glob(os.path.join(scene_path, "*band*[4-5].tif"))
pixel_qa_path = glob(os.path.join(scene_path, "*qa*"))
landsat_qa = rxr.open_rasterio(pixel_qa_path[0], masked = True).squeeze()
high_cloud_confidence = em.pixel_flags["pixel_qa"]["L8"]\
["High Cloud Confidence"]
cloud = em.pixel_flags["pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags["pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
    # Run the open_clean_bands function on each band path
bands = []
for band_path in band_paths:
cleaned_band = open_clean_bands(band_path = band_path,
crop_extent = crop_extent,
valid_range = (0, 10000))
bands.append(cleaned_band)
    # Run the mask_crop_ndvi function on the cleaned bands
ndvi = mask_crop_ndvi(all_bands = bands,
crop_bound = crop_extent,
pixel_qa = landsat_qa,
masked_values = all_masked_values)
#print(ndvi)
return ndvi
ndvi_list = []
for site_file in study_sites:
all_dirs = glob(os.path.join(site_file, 'landsat-crop', '*'))
site_name = os.path.basename(os.path.normpath(site_file))
crop_bound_path = os.path.join(site_file, 'vector', site_name +
"-crop.shp")
crop_bound = gpd.read_file(crop_bound_path)
for adir in all_dirs:
date = adir[-29 : -21]
ndvi = output_ndvi(adir, crop_bound)
ndvi_list.append([site_name, date, ndvi])
ndvi_final_df = pd.DataFrame(ndvi_list,
columns = ["site", "date", "mean_ndvi"])
ndvi_final_df['date'] = pd.to_datetime(ndvi_final_df['date'])
ndvi_final_df.set_index('date', inplace = True)
# This filter was used because .dropna() did not seem to work here
ndvi_df = ndvi_final_df[ndvi_final_df['mean_ndvi'] > 0]
ndvi_df
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
###Output
_____no_output_____
###Markdown
Task 2: In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene:

|            | index | site | mean_ndvi |
|------------|-------|------|-----------|
| Date       |       |      |           |
| 2017-01-07 | 0     | SJER | .4        |

Be sure to call your dataframe at the end of the cell to ensure autograding works. HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
#export csv file
csv_output_path = os.path.join('ndvi-automation', 'outputs', 'mean_ndvi.csv')
ndvi_df.to_csv(csv_output_path)
ndvi_df
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
fig, ax = plt.subplots(figsize = (14, 10))
for site, df in ndvi_df.groupby("site"):
if site in ["HARV"]:
site_name = 'HARV'
color = 'red'
else:
site_name = 'SJER'
color = 'green'
ax.plot(df.index, df.mean_ndvi, label = site_name, marker = 'o',
color = color)
ax.set(title = "Mean Normalized Difference Vegetation Index (NDVI)\
for two sites (HARV & SJER) \n Mar 2017 - Dec 2017 (Cleaned data)",
xlabel = "Month", ylabel = "Mean NDVI")
ax.legend(title = "Site")
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Earth Analytics Education - EA Python Course Spring 2021 Important - Assignment Guidelines1. Before you submit your assignment to GitHub, make sure to run the entire notebook with a fresh kernel. To do this first, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart & Run All)2. Always replace the `raise NotImplementedError()` code with your code that addresses the activity challenge. If you don't replace that code, your notebook will not run.``` YOUR CODE HEREraise NotImplementedError()```3. Any open ended questions will have a "YOUR ANSWER HERE" within a markdown cell. Replace that text with your answer also formatted using Markdown.4. **DO NOT RENAME THIS NOTEBOOK File!** If the file name changes, the autograder will not grade your assignment properly.6. When you create a figure, comment out `plt.show()` to ensure the autograder can grade your plots. For figure cells, DO NOT DELETE the code that says `DO NOT REMOVE LINE BELOW`.``` DO NOT REMOVE LINE BELOW student_plot1_ax = nb.convert_axes(plt)```* Only include the package imports, code, and outputs that are required to run your homework assignment.* Be sure that your code can be run on any operating system. This means that: 1. the data should be downloaded in the notebook to ensure it's reproducible 2. all paths should be created dynamically using the `os.path.join` Follow to PEP 8 Syntax Guidelines & Documentation* Run the `autopep8` tool on all cells prior to submitting (HINT: hit shift + the tool to run it on all cells at once!* Use clear and expressive names for variables. * Organize your code to support readability.* Check for code line length* Use comments and white space sparingly where it is needed* Make sure all python imports are at the top of your notebook and follow PEP 8 order conventions* Spell check your Notebook before submitting it.For all of the plots below, be sure to do the following:* Make sure each plot has a clear TITLE and, where appropriate, label the x and y axes. Be sure to include UNITS in your labels. Add Your Name Below **Your Name:**Jensen Widtfeldt --- Week 04 and 05 Homework - Automate NDVI WorkflowFor this assignment, you will write code to generate a plot of the mean normalized difference vegetation index (NDVI) for two different sites in the United States across one year of data:* San Joaquin Experimental Range (SJER) in Southern California, United States* Harvard Forest (HARV) in the Northeastern United StatesThe data that you will use for this week is available from **earthpy** using the following download: `et.data.get_data('ndvi-automation')` Assignment GoalsYour goal in this assignment is to create the most efficient and concise workflow that you can that allows for:1. The code to scale if you added new sites or more time periods to the analysis.2. Someone else to understand your workflow.3. The LEAST and most efficient (i.e. runs fast, minimize repetition) amount of code that completes the task. HINTS* Remove values outside of the landsat valid range of values as specified in the metadata, as needed.* Keep any output files SEPARATE FROM input files. Outputs should be created in an outputs directory that is created in the code (if needed) and/or tested for.* Use the functions that we demonstrated during class to make your workflow more efficient.* BONUS - if you chose - you can export your data as a csv file. You will get bonus points for doing this. 
Assignment RequirementsYour submission to the GitHub repository should include:* This Jupyter Notebook file (.ipynb) with: * The code to create a plot of mean NDVI across a year for 2 NEON Field Sites: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object * The **data should be cleaned to remove the influence of clouds**. See the [earthdatascience website for an example of what your plot might look like with and without removal of clouds](https://www.earthdatascience.org/courses/earth-analytics-python/create-efficient-data-workflows/).* BONUS: Create one output `.csv` file that has 3 columns - NDVI, Date and Site Name - with values for SJER and HARV.Your notebook should:* Have *at least* 2 well documented and well named functions with docstrings.* Include a Markdown cell at the top of the notebook that outlines the overall workflow using pseudocode (i.e. plain language, not code)* Include additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processing * how the code is optimized to run fast and be more concise Replace this cell with your pseudocode for this workflowIf you happen to be a diagram person a diagram is ok too Psuedo-workflow! 1. Gather and open the data- Download data- List files in download- Sort / filter for right files2. Calculate the NDVI and stats- Open the raster data and crop bands as needed- Calculate NDVI- Calculate other key metrics if needed- Save into sharable form (CSV)3. Do for other sites- Use functions and loops for new site- Rinse and repeat!
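One detail the plan above leaves implicit is cloud screening; a small sketch of assembling the Landsat 8 pixel QA values to mask, mirroring the earthpy flags used in the code below:
```
import earthpy.mask as em

# QA values treated as cloud-contaminated for Landsat 8 surface reflectance.
l8_flags = em.pixel_flags["pixel_qa"]["L8"]
masked_values = (l8_flags["Cloud Shadow"]
                 + l8_flags["Cloud"]
                 + l8_flags["High Cloud Confidence"])

# Pixels whose QA value is in `masked_values` are then excluded, for example:
# ndvi_clean = ndvi.where(~pixel_qa.isin(masked_values))
```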
###Code
# Autograding imports - do not modify this cell
import matplotcheck.autograde as ag
import matplotcheck.notebook as nb
import matplotcheck.timeseries as ts
from datetime import datetime
# Import needed packages in PEP 8 order
# and no unused imports listed (10 points total)
import os
import re
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import geopandas as gpd
import rioxarray as rxr
import xarray as xr
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.mask as em
import earthpy.spatial as es
import earthpy.plot as ep
# Get the data!
data = et.data.get_data('ndvi-automation')
os.chdir(os.path.join(et.io.HOME,
"earth-analytics",
"data"))
# DO NOT MODIFY THIS CELL
# Tests that the working directory is set to earth-analytics/data
path = os.path.normpath(os.getcwd())
student_wd_parts = path.split(os.sep)
if student_wd_parts[-2:] == ['earth-analytics', 'data']:
print("\u2705 Great - it looks like your working directory is set correctly to ~/earth-analytics/data")
else:
print("\u274C Oops, the autograder will not run unless your working directory is set to earth-analytics/data")
###Output
✅ Great - it looks like your working directory is set correctly to ~/earth-analytics/data
###Markdown
Figure 1: Plot 1 - Mean NDVI For Each Site Across the Year (50 points)Create a plot of the mean normalized difference vegetation index (NDVI) for the two different sites in the United States across the year: * NDVI on the x axis and formatted dates on the y for both NEON sites on one figure/axis object.* Each site should be identified with a different color in the plot and legend.* The final plot **data should be cleaned to remove the influence of clouds**.* Be sure to include appropriate title and axes labels.Add additional cells as needed for processing data (e.g. defining functions, etc), but be sure to:* follow the instructions in the code cells that have been provided to ensure that you are able to use the sanity check tests that are provided. * include only the plot code in the cell identified for the final plot code below Task 1: In the cell below, create a single dataframe containing MEAN NDVI, the site name, and the date of the data for the HARV site scene `HARV/landsat-crop/LC080130302017031701T1-SC20181023151837`. The column names for the finalDataFrame should be`mean_ndvi`, and `site`, and the data should be **indexed on the date**. Use the functions that we reviewed in class (or create your own versions of them) to implement your code In the Cell below Place All Functions Needed to Run this Notebook (20 points)
###Code
### DO NOT REMOVE THIS LINE OR EDIT / MOVE THIS CELL ###
start_time = datetime.now()
# In this cell place all of the functions needed to run your notebook
# You will be graded here on function application, docstrings, efficiency so ensure
# All functions are placed here!
def open_clean_bands(band_path,
crop_extent,
valid_range=None):
""""Open and mask a landsat band with squeeze.
Parameters
----------
band_path : string
Path to the array you use
valid_range : tuple
A range for min and max values for the data.
Returns
-------
band : xarray DataArray
An xarray with invalid values that are masked
"""
band = (rxr.open_rasterio(band_path, masked=True)
.rio.clip(crop_extent.geometry, from_disk=True)
.squeeze())
# specify the valid range
if valid_range:
        mask = (band < valid_range[0]) | (band > valid_range[1])
band = band.where(~mask, np.nan)
return band
# Function 2: Mask cloud bands and crop
def mask_crop_ndvi(all_band_paths,
crop_bound,
pixel_qa_path,
vals):
"""Open a landsat band, mask potential clouds, and calculate NDVI.
Parameters
-----------
all_band_paths : list
a list for the xarray objects (using landsat bands 4 and 5)
crop_bound: gpd GeoDataFrame
A geopandas dataframe to crop the raster data (rasterio)
pixel_qa_path: xarray DataArray
An xarray DataArray with pixel qa values
vals: list
A list of values needed to create the cloud mask
Returns
-----------
    ndvi_mask : xarray DataArray
a cropped and masked xarray object containing NDVI values
"""
# open all bands
bands = []
for band_path in all_band_paths:
band = open_clean_bands(
band_path=band_path,
crop_extent=crop_bound,
valid_range=(0, 10000))
bands.append(band)
# open and mask cloud layer
cl_mask = (rxr.open_rasterio(pixel_qa_path[0], masked=True)
.rio.clip(crop_bound.geometry, from_disk=True)
.squeeze())
    # Final NDVI calculation
ndvi_xr = (bands[1]-bands[0]) / (bands[1]+bands[0])
# apply cloud mask to NDVI
ndvi_mask = ndvi_xr.where(~cl_mask.isin(vals))
return ndvi_mask
###Output
_____no_output_____
###Markdown
Create code to navigate directories for Figure 1
###Code
# Background code prior to functions
# Navigate the site data
path = os.path.join("ndvi-automation",
"sites")
all_sites = glob(path + "/*/")
# define path to HARV sites
site_name = os.path.basename(os.path.normpath(all_sites[0]))
#Open shapefile for first site
vector_dir = os.path.join(all_sites[0], "vector")
site_boundary_path = os.path.join(vector_dir,
site_name + "-crop.shp")
crop_bound = gpd.read_file(site_boundary_path)
crop_bound.plot()
plt.show()
# explore HARV landsat paths
HARV_landsat_dirs = sorted(glob(os.path.join(
all_sites[0], "landsat-crop", "*")))
# pick the right directory for HARV LC080130302017031701T1-SC20181023151837
HARV_dir = HARV_landsat_dirs[4]
# grab the bands needed for NDVI
HARV_band_paths = sorted(glob(os.path.join(HARV_dir,
"*band*[4-5].tif")))
# get components
HARV_path = os.path.normpath(HARV_dir)
HARV_path_components = HARV_path.split(os.sep)
HARV_date = HARV_path_components[-1][10:18]
# Test Function 1 with a loop!
bands = []
for band_path in HARV_band_paths:
band = open_clean_bands(
band_path=band_path,
crop_extent=crop_bound,
valid_range=(0, 10000))
bands.append(band)
# calculate NDVI
ndvi_2 = es.normalized_diff(bands[1], bands[0])
ep.plot_bands(ndvi_2,
cmap="Greys",
vmin=-1)
ndvi_2_mean = ndvi_2.mean()
print(ndvi_2_mean)
# test Function 2 with another loop for HARV
# prep by creating cloud masks for functions to deal with pesky clouds
high_cloud_confidence = em.pixel_flags[
"pixel_qa"]["L8"]["High Cloud Confidence"]
cloud = em.pixel_flags[
"pixel_qa"]["L8"]["Cloud"]
cloud_shadow = em.pixel_flags[
"pixel_qa"]["L8"]["Cloud Shadow"]
all_masked_values = cloud_shadow + cloud + high_cloud_confidence
# Prep by open cloud mask layer for HARV
HARV_pixel_qa_path = glob(os.path.join(HARV_dir, "*qa*"))
#Now use with a for loop to generate NDVI for HARV site
HARV_ndvi_clean = []
for band_path in HARV_band_paths:
ndvi_clean = mask_crop_ndvi(all_band_paths=HARV_band_paths,
crop_bound=crop_bound,
pixel_qa_path=HARV_pixel_qa_path,
vals=all_masked_values)
site=HARV_path_components[2]
date=HARV_band_paths[0][-27:-19]
mean_ndvi=ndvi_clean.mean().values
# create output
output = [site,date,mean_ndvi]
HARV_ndvi_clean.append(output)
#create dataframe from output and set date index
HARV_df = pd.DataFrame(HARV_ndvi_clean,
columns=["site","date","mean_ndvi"])
HARV_df['date'] = pd.to_datetime(HARV_df['date'],
format='%Y-%m-%d')
HARV_df_indexed = HARV_df.set_index("date")
# test view the final cropped and cleaned NDVI data
ndvi_clean.plot.imshow(vmin=-1,
vmax=1)
# Create dataframe of mean NDVI in this cell using the functions created above
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Call the dataframe at the end of the cell so the tests run on it!
# Be sure that the date column is an index of type date
# HINT: the time series lessons may help you remember how to do this!
#clean and view the HARV dataframe
HARV_df_final = HARV_df_indexed[:-1]
HARV_df_final
# This cell is testing your data output above
student_ndvi_ts_single_site = _
single_scene_points = 0
# Ensure the data is stored in a dataframe.
if isinstance(student_ndvi_ts_single_site, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
single_scene_points += 1
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Ensure that the date column is the index
if isinstance(student_ndvi_ts_single_site.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
single_scene_points += 2
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_ts_single_site.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
single_scene_points += 2
else:
print('\u274C The data in your date column is not datetime.')
# Ensure the site name is correct
if student_ndvi_ts_single_site.site.values[0] == 'HARV':
print('\u2705 You have the correct site name!')
single_scene_points += 5
else:
print('\u274C You do not have the correct site name.')
if np.allclose(0.281131628228094, student_ndvi_ts_single_site.mean_ndvi.values[0]):
print('\u2705 You have the correct mean NDVI value!')
single_scene_points += 5
else:
print('\u274C You do not have the correct mean ndvi value.')
print("\n \u27A1 You received {} out of 15 points for creating a dataframe.".format(
single_scene_points))
single_scene_points
###Output
✅ Your data is stored in a DataFrame!
✅ You have the index set to the date column!
✅ The data in your date column is datetime!
✅ You have the correct site name!
✅ You have the correct mean NDVI value!
➡ You received 15 out of 15 points for creating a dataframe.
###Markdown
Task 2: In the cell below, process all of the landsat scenes. Create a DataFrame that contains the following information for each scene:

|            | index | site | mean_ndvi |
|------------|-------|------|-----------|
| Date       |       |      |           |
| 2017-01-07 | 0     | SJER | .4        |

Be sure to call your dataframe at the end of the cell to ensure autograding works. HINT: FOR THIS STEP, leave any rows containing missing values (`NAN`).
###Code
# Create dataframe of NDVI including the cleaning data to deal with clouds
# Important: to use the ungraded tests below as a sanity check,
# name your columns: mean_ndvi and site
# Don't forget to set date as the index and make the values of type datetime
# Create dataframe by looping through file paths
# Create empty list for dataframe
ndvi_list = []
# loop for each site
for site_dir in all_sites:
print("Looping through", site_dir)
asite = os.path.normpath(site_dir).split(os.sep)[-1]
print("Working through", asite)
# define crop_bound for each site
site_boundary_path = os.path.join(path, asite,
"vector", asite + "-crop.shp")
site_crop_bound = gpd.read_file(site_boundary_path)
site_crop_bound.plot()
plt.show()
#get a list of subdirectories for each site
new_path=os.path.join(site_dir, "landsat-crop")
all_dirs=glob(new_path + "/*/")
# loop through the subdirectories to get the data!
for single_dir in all_dirs:
# pull out date from subdirectory name
scene_date = single_dir.split(os.sep)[-2][-29:-21]
# Create path for the pixel_qa_layer for each subdirectory scene
scene_pixel_qa_path = glob(os.path.join(single_dir, "*qa*"))
# define band paths used for NDVI calcs in each subdirectory
total_band_paths = sorted(glob(os.path.join(single_dir,
"*band*[4-5].tif")))
# calc NDVI
ndvi = mask_crop_ndvi(all_band_paths=total_band_paths,
crop_bound=site_crop_bound,
pixel_qa_path=scene_pixel_qa_path,
vals=all_masked_values)
mean_ndvi = ndvi.mean(skipna=True).item()
# create output
output = [asite, scene_date, mean_ndvi]
#append
ndvi_list.append(output)
#create dataframe
ndvi_df = pd.DataFrame(ndvi_list,
columns=["site","date","mean_ndvi"])
ndvi_df['date'] = pd.to_datetime(ndvi_df['date'], format='%Y-%m-%d')
ndvi_df_indexed = ndvi_df.set_index("date")
ndvi_df_indexed
# Last sanity check before creating your plot (10 points)
# Ensure that you call your dataframe at the bottom of the cell above
# and that it has columns called: mean_ndvi and site
# Ensure the data is stored in a dataframe.
student_ndvi_df = _
df_points = 0
if isinstance(student_ndvi_df, pd.DataFrame):
print('\u2705 Your data is stored in a DataFrame!')
df_points +=2
else:
print('\u274C It appears your data is not stored in a DataFrame. ',
'To see what type of object your data is stored in, check its type with type(object)')
# Check that dataframe contains the appropriate number of NAN values
if student_ndvi_df.mean_ndvi.isna().sum() == 15:
print('\u2705 Correct number of masked data values!')
df_points +=2
else:
print('\u274C The amount of null data in your dataframe is incorrect.')
# Ensure that the date column is the index
if isinstance(student_ndvi_df.index, pd.core.indexes.datetimes.DatetimeIndex):
print('\u2705 You have the index set to the date column!')
df_points +=3
else:
print('\u274C You do not have the index set to the date column.')
# Ensure that the date column is datetime
if isinstance(student_ndvi_df.index[0], pd._libs.tslibs.timestamps.Timestamp):
print('\u2705 The data in your date column is datetime!')
df_points +=3
else:
print('\u274C The data in your date column is not datetime.')
# Output for timer, # DO NOT MODIFY
end_time = datetime.now()
total_time = end_time - start_time
print(
"Your total run time for processing the data was {0}.".format(total_time))
print("\n \u27A1 You received {} out of 10 points for creating a dataframe.".format(
df_points))
df_points
# Add only the plot code to this cell
# This is the final figure of mean NDVI
# for both sites across the year
# with data cleaned to deal with clouds
# Create plot
fig, ax = plt.subplots(figsize=(12, 12))
fig.suptitle("Annual NDVI Comparison\n SJER and HARV Sites", fontsize = 24)
# Loop over each site and plot its mean NDVI on the shared axis
for site, df in ndvi_df_indexed.dropna().groupby('site'):
if site == "HARV":
loc = "HARV"
color = "blue"
else:
loc = "SJER"
color = "orange"
ax.plot(df.index,
df.mean_ndvi,
label=loc,
color=color,
marker="o")
ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left',
prop={'size': 11})
ax.set(xlabel = "Date",
ylabel = "NDVI")
### DO NOT REMOVE LINES BELOW ###
final_masked_solution = nb.convert_axes(plt, which_axes="current")
# Ignore this cell for the autograding tests
# Ignore this cell for the autograding tests
###Output
_____no_output_____
###Markdown
Question 1 (10 points)Imagine that you are planning NEON’s upcoming flight season to capture remote sensing data in these locations and want to ensure that you fly the area when the vegetation is the most green.When would you recommend the flights take place for each site? Answer the question in 2-3 sentences in the Markdown cell below. Answer For the HARV site, the data shows the highest vegetation density from May to October. For the SJER site, March and April show the highest vegetation amounts. Question 2 (10 points)How could you modify your workflow to look at vegetation changes over time in each site? Answer the question in 2-3 sentences in the Markdown cell below. Answer To look for vegetation changes over time, you can compare the NDVI for the same month year-over-year. A higher NDVI for the same month year-over-year would indicate that the selected year has denser vegetation than previous years. Do not edit this cell! (10 points)The notebook includes:* additional Markdown cells throughout the notebook to describe: * the data that you used - and where it is from * how data are being processed * how the code is optimized to run fast and be more concise Do not edit this cell! (20 points)The notebook will also be checked for overall clean code requirements as specified at the **top** of this notebook. Some of these requirements include (review the top cells for more specifics): * Notebook begins at cell [1] and runs on any machine in its entirety.* PEP 8 format is applied throughout (including lengths of comment and code lines).* No additional code or imports in the notebook that is not needed for the workflow.* Notebook is fully reproducible. This means: * reproducible paths using the os module. * data downloaded using code in the notebook. * all imports at top of notebook. BONUS - Export a .CSV File to Share (10 points possible)This is optional - if you export a **.csv** file with the columns specified above: Site, Date and NDVI Value you can get an additional 10 points.* FULL CREDIT: File exists in csv format and contains the columns specified.We will check your github repo for this file!
###Code
outpath = os.path.join("ndvi-automation", "outputs", "mean_ndvi_both_sites.csv")
ndvi_df_export = ndvi_df_indexed.reset_index()
ndvi_df_export.to_csv(outpath, index=False)
ndvi_df_export
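# A minimal sketch (not required by the assignment) of the year-over-year
# comparison described in the Question 2 answer above: mean NDVI per site,
# year and month, which could be compared across years once scenes from
# multiple years are processed.
yearly_monthly_ndvi = (ndvi_df_export
                       .assign(year=ndvi_df_export['date'].dt.year,
                               month=ndvi_df_export['date'].dt.month)
                       .groupby(['site', 'year', 'month'])['mean_ndvi']
                       .mean())
yearly_monthly_ndvi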
###Output
_____no_output_____ |
Class imbalance.ipynb | ###Markdown
Ranks of predictions are still basically the same
###Code
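# Spearman rank correlations between the predicted probabilities of the
# original model (lr) and the rebalanced models (lr_bal, lr_smote);
# values near 1 mean the ranking of observations is essentially unchanged.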
pd.DataFrame([lr.predict_proba(X)[:,1], lr_bal.predict_proba(X)[:,1]]).T.corr(method='spearman')
pd.DataFrame([lr.predict_proba(X)[:,1], lr_smote.predict_proba(X)[:,1]]).T.corr(method='spearman')
f, ax = plt.subplots(figsize=(6, 6))
ax.scatter(lr.decision_function(X), lr_bal.decision_function(X), c=y)
add_identity(ax, color='gray', ls='--')
plt.ylabel('Balanced logit')
plt.xlabel('Unbalanced logit')
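# Shift the balanced logits by the difference in intercepts to check whether
# the two models differ mainly by an intercept offset.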
f, ax = plt.subplots(figsize=(6, 6))
ax.scatter(lr.decision_function(X), lr_bal.decision_function(X) + (lr.intercept_ - lr_bal.intercept_)[0], c=y)
add_identity(ax, color='gray', ls='--')
plt.ylabel('Balanced logit (+ diff in intercept)')
plt.xlabel('Unbalanced logit')
f, ax = plt.subplots(figsize=(6, 6))
ax.scatter(lr.decision_function(X), lr_smote.decision_function(X), c=y)
add_identity(ax, color='gray', ls='--')
plt.ylabel('Balanced logit (SMOTE)')
plt.xlabel('Unbalanced logit')
###Output
_____no_output_____ |
AT_Lesson_0_(Trial_Class)_Class_Copy_v0_14.ipynb | ###Markdown
Lesson 0: COVID-19 Outbreak Analysis Teacher-Student ActivitiesWe all know that coronavirus is spreading on a daily basis in India. So, let's try to visualise how fast it is spreading.First, let's look at the dashboard created by Johns Hopkins University. You can look at the following live dashboard to see the real-time trend.[COVID-19 Live Dashboard](https://www.arcgis.com/apps/opsdashboard/index.html/bda7594740fd40299423467b48e9ecf6)Now, let's create a similar map for India using Python to visualise the most affected states in India due to coronavirus. After the class, you can share it with your parents, relatives and friends by sending them the link to the map. --- **At this point, the student should share/present their screen with the teacher.** --- Activity 1: Run Source CodeThis is the source code for the map to be created. You will learn to write it after signing up for the applied tech course. Right now, you just have to execute the code.
###Code
# Student Action: Run the code below.
# Download data
!git clone https://github.com/CSSEGISandData/COVID-19.git
# Install 'geocoder'
!pip install geocoder
# Importing modules
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
import geocoder
import folium
from folium import plugins
# DataFrame for the world
conf_csv = '/content/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
conf_df = pd.read_csv(conf_csv)
grouped_conf_df = conf_df.groupby(by = ['Country/Region'], as_index = False).sum()
# DataFrame for India
india_df = pd.read_csv("https://api.covid19india.org/csv/latest/state_wise.csv")
india_df = india_df.iloc[1:36, :]
state_latitudes = []
state_longitudes = []
for i in india_df.index:
state = india_df['State'][i]
state_lat = geocoder.osm(state).lat
state_lng = geocoder.osm(state).lng
state_latitudes.append(state_lat)
state_longitudes.append(state_lng)
state_latitudes = pd.Series(data = state_latitudes, index = india_df.index)
state_longitudes = pd.Series(data = state_longitudes, index = india_df.index)
india_df['Latitude'] = state_latitudes
india_df['Longitude'] = state_longitudes
# state_coordinates = [(19.7515, 75.7139), # Maharashtra
# (11.1271, 78.6569), # Tamil Nadu
# (15.9129, 79.7400), # Andhra Pradesh
# (15.317, 75.7139), # Karnataka
# (28.7041, 77.1025), # Delhi
# (26.8467, 80.9462), # UP
# (22.9868, 87.8550), # WB
# (25.0961, 85.3131), # Bihar
# (18.1124, 79.0193), # Telangana
# (22.2587, 71.1924), # Gujarat
# (26.2006, 92.9376), # Assam
# (27.0238, 74.2179), # Rajasthan
# (20.9517, 85.0985), # Odisha
# (29.0588, 76.0856), # Haryana
# (22.9734, 78.6569), # Madhya Pradesh
# (10.8505, 76.2711), # Kerala
# (31.1471, 75.3412), # Punjab
# (33.7782, 76.5762), # Jammu and Kashmir
# (23.6102, 85.2799), # Jharkhand
# (21.2787, 81.8661), # Chattisgarh
# (30.0668, 79.0193), # Uttarakhand
# (15.2993, 74.1240), # Goa
# (23.9408, 91.9882), # Tripura
# (11.9416, 79.8083), # Puducherry
# (24.6637, 93.9063), # Manipur
# (31.1048, 77.1734), # Himachal Pradesh
# (26.1584, 94.5624), # Nagaland
# (28.2180, 94.7278), # Arunachal Pradesh
# (11.7401, 92.6586), # Andaman and Nicobar
# (34.1700, 77.5800), # Ladakh
# (30.7333, 76.7794), # Chandigarh
# (20.1809, 73.0169), # Dadra and Nagar Haveli
# (25.4670, 91.3662), # Meghalaya
# (27.5330, 88.5122), # Sikkim
# (23.1645, 92.9376), # Mizoram
# ]
# ind_state_lat = pd.Series([s[0] for s in state_coordinates], index = india_df.index)
# ind_state_lng = pd.Series([s[1] for s in state_coordinates], index = india_df.index)
# india_df['Latitude'] = ind_state_lat
# india_df['Longitude'] = ind_state_lng
# DataFrame for the US
us_conf_csv = '/content/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv'
us_conf_df = pd.read_csv(us_conf_csv)
us_conf_df = us_conf_df.dropna()
grouped_us_conf_df = us_conf_df.groupby(by = ['Combined_Key'], as_index = False).sum()
# Function to get total confirmed cases in a country
def get_total_confirmed_cases_for_country(country_name):
total_cases_country = conf_df[conf_df['Country/Region'] == country_name].iloc[:, 4:].apply(sum, axis = 0)
total_cases_country.index = pd.to_datetime(total_cases_country.index)
return total_cases_country
# Function to get total confirmed cases in the world
def get_total_confirmed_global_cases():
global_cases = conf_df.iloc[:, 4:].apply(sum, axis=0)
global_cases.index = pd.to_datetime(global_cases.index)
return global_cases
# Function to create a line plot
def line_plot(your_name, plot_background, fig_width, fig_height, country_name, colour, linewidth, markertype):
dt_series = None
if country_name != 'global':
dt_series = get_total_confirmed_cases_for_country(country_name)
else:
dt_series = get_total_confirmed_global_cases()
plt.style.use(plot_background)
plt.figure(figsize = (fig_width, fig_height))
plt.title(f'{country_name.upper()}: Total Coronavirus Cases Reported\nCreated by {your_name.upper()}\nPowered by WhiteHat Jr', fontsize = 16)
plt.plot(dt_series.index, dt_series, c = colour, lw = linewidth, marker = markertype, markersize = 7)
plt.xticks(rotation = 45)
plt.ylabel("Total Cases")
plt.grid(linestyle='--', c='grey')
plt.show()
# Add minimap
def add_minimap(map_name):
# Plugin for mini map
minimap = plugins.MiniMap(toggle_display = True)
map_name.add_child(minimap) # Add minimap
plugins.ScrollZoomToggler().add_to(map_name) # Add scroll zoom toggler to map
plugins.Fullscreen(position='topright').add_to(map_name) # Add full screen button to map
# Add title to map
def add_title(map_name, country, your_name):
title_html = '''
<h2 align="center" style="font-size:20px"><b>Coronavirus Total Confirmed Cases in {}</b></h2>
<h4 align="center" style="font-size:16px"><i>Created by</i> {}</h4>
<h4 align="center" style="font-size:16px"><i>Powered by</i>
<a href="https://www.whitehatjr.com/">WhiteHat Jr</a>
</h4>
'''.format(country, your_name.upper())
return map_name.get_root().html.add_child(folium.Element(title_html))
# Function to create folium maps using for India, US and the world
def folium_map_with_circles(your_name, country, map_width, map_height, left_margin, top_margin, map_tile, zoom, circle_color, minimap):
last_col = conf_df.columns[-1]
if country == 'India':
india_map = folium.Map(location = [22.3511148, 78.6677428],
width = map_width, height = map_height,
left = f"{left_margin}%", top = f"{top_margin}%",
tiles = map_tile, zoom_start = zoom)
if minimap == True:
add_minimap(india_map)
add_title(india_map, country, your_name)
for i in india_df.index:
folium.Circle(radius = float(india_df.loc[i, 'Confirmed']) / 3,
location = [india_df.loc[i, 'Latitude'], india_df.loc[i, 'Longitude']],
popup = "{}\n {}\n on {}".format(india_df.loc[i, 'State'],
india_df.loc[i, 'Confirmed'],
india_df.loc[i, 'Last_Updated_Time']),
color = circle_color,
fill = True).add_to(india_map)
return india_map
elif country == 'US':
us_map = folium.Map(location = [39.381266, -97.922211],
width = map_width, height = map_height,
left = f"{left_margin}%", top = f"{top_margin}%",
tiles = map_tile, zoom_start = zoom)
if minimap == True:
add_minimap(us_map)
add_title(us_map, country, your_name)
for i in grouped_us_conf_df.index:
folium.Circle(location = [grouped_us_conf_df.loc[i, 'Lat'], grouped_us_conf_df.loc[i, 'Long_']],
radius = int(grouped_us_conf_df.loc[i, last_col]),
popup = "{}\n {}\n on {}".format(grouped_us_conf_df.loc[i, 'Combined_Key'],
grouped_us_conf_df.loc[i, last_col],
last_col),
color = circle_color,
fill = True).add_to(us_map)
return us_map
elif country == 'World':
world_map = folium.Map(location = [0, 0],
width = map_width, height = map_height,
left = f"{left_margin}%", top = f"{top_margin}%",
tiles = map_tile, zoom_start = zoom)
if minimap == True:
add_minimap(world_map)
add_title(world_map, country, your_name)
for i in grouped_conf_df.index:
folium.Circle(location = [grouped_conf_df.loc[i, 'Lat'], grouped_conf_df.loc[i, 'Long']],
radius = int(grouped_conf_df.loc[i, last_col]) / 2,
popup = "{}\n {}\n on {}".format(grouped_conf_df.loc[i, 'Country/Region'],
grouped_conf_df.loc[i, last_col],
last_col),
color = circle_color,
fill = True).add_to(world_map)
return world_map
else:
print("\nWrong input! Enter either India, US or World.\n")
# Total confirmed cases in the descending order.
grouped_conf_df = conf_df.groupby(by='Country/Region', as_index=False).sum()
desc_grp_conf_df = grouped_conf_df.sort_values(by=conf_df.columns[-1], ascending=False)
# Function to create a bar plot displaying the top 10 countries having the most number of coronavirus confirmed cases.
def bar_plot(your_name, num_countries, width, height):
last_col = conf_df.columns[-1]
latest_date = datetime.datetime.strptime(last_col, '%m/%d/%y').strftime('%B %d, %Y') # Modify the latest date in the 'Month DD, YYYY' format.
plt.figure(figsize = (width, height))
plt.title(f'Top {num_countries} Countries with Highest COVID-19 Confirmed Cases\nCreated by {your_name.upper()}\nPowered by WhiteHat Jr',
fontsize = 16)
sns.barplot(desc_grp_conf_df[last_col].head(num_countries), desc_grp_conf_df['Country/Region'].head(num_countries), orient = 'h')
plt.xlabel(f'Total Confirmed Cases (in millions) as of {latest_date}')
plt.show()
# Non-cumulative Confirmed Cases.
non_cum_conf_df = desc_grp_conf_df.iloc[:, :4]
for i in range(len(desc_grp_conf_df.columns[3:]) - 1):
series = desc_grp_conf_df[desc_grp_conf_df.columns[3 + (i + 1) ]] - desc_grp_conf_df[desc_grp_conf_df.columns[3 + i]]
non_cum_conf_df[desc_grp_conf_df.columns[3 + (i + 1)]] = series
# Function to get the total non-cumulative confirmed cases in a country.
def get_total_daily_confirmed_cases_for_country(country_name):
total_daily_cases = non_cum_conf_df[non_cum_conf_df['Country/Region'] == country_name].iloc[:, 4:].apply(sum, axis = 0)
total_daily_cases.index = pd.to_datetime(total_daily_cases.index)
return total_daily_cases
# Line plot for the daily (non-cumulative) confirmed cases in various countries.
def daily_cases_line_plot(your_name, num_countries, width, height):
plt.figure(figsize=(width, height))
plt.title(f'Non-Cumulative COVID-19 Confirmed Cases\nCreated by {your_name.upper()}\nPowered by WhiteHat Jr', fontsize = 16)
for region in non_cum_conf_df.iloc[:num_countries, :]['Country/Region']:
total_conf_cases = get_total_daily_confirmed_cases_for_country(region)
plt.plot(total_conf_cases.index[53:], total_conf_cases[53:], lw=2.5, label=region)
plt.xticks(rotation=45)
plt.legend()
plt.grid('major', linestyle='--', c='grey')
plt.show()
###Output
_____no_output_____
###Markdown
--- Activity 2: Line Plot^Let's create a line plot to visualise the total number of confirmed cases in India till yesterday. For the line plot, the dataset that we have on coronavirus is maintained at Johns Hopkins University, which gets updated according to US time. Hence, we have data updated till yesterday. To view this dataset, write `conf_df[conf_df['Country/Region'] == 'India']` in the code cell below.
###Code
# Student Action: Write conf_df[conf_df['Country/Region'] == 'India'] to view the dataset for India that will be used to create a line plot.
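# Illustrative example of the line described above:
conf_df[conf_df['Country/Region'] == 'India']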
###Output
_____no_output_____
###Markdown
So, in this dataset, we have data for the total confirmed cases in India starting from January 22, 2020. The date given here is in the `MM/DD/YY` format where - `MM` stands for month- `DD` stands for day- `YY` stands for yearNow, let's create a line plot. To create a line plot, you need to use the `line_plot()` function which takes the following inputs:- Name of the person who is creating the line plot which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`).- The background style of the line plot which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`).. Here is the list of most commonly used background styles: 1. `'dark_background'` (most preferred) 2. `'ggplot'` 3. `'seaborn'` 4. `'fivethirtyeight'` and many more.- Width of the line plot (numeric value).- Height of the line plot (numeric value).- Name of the country which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`).- Colour of the lines which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`). Here's the list of most commonly used colours: 1. `'red'` 2. `'cyan'` 3. `'magenta'` 4. `'yellow'` 5. `'green'`- The width of the line (numeric value)- The marker style on the line plot which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`). Here is the list of the most commonly used marker styles: 1. `'o'` for a circular marker 2. `'*'` for a starred marker 3. `'^'` for a upper triangular marker
###Code
# Student Action: Create a line plot for the total confirmed cases in India using the 'line_plot()' function.
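# Illustrative example call (the name, style, figure size, colour, line width
# and marker below are assumptions; any of the documented values will work):
line_plot('Student Name', 'dark_background', 12, 6, 'India', 'cyan', 2.5, 'o')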
###Output
_____no_output_____
###Markdown
**Note:** The `line_plot()` function is NOT a standard Python function. It is a user-defined function created at WhiteHat Jr using Python to simplify the line plot creation process. You will learn to create your own user-defined function in the subsequent classes in this course. --- Activity 3: Map^^Let's create a map for India. For this, we are going to use a dataset showing state-wise data for India. To view the first five rows for the total confirmed cases in India, call the `head()` function on the `india_df` variable which stores the data.
###Code
# Student Action: List the first five rows of the dataset containing the total number of confirmed cases in India.
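# Illustrative example of the call described above:
india_df.head()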
###Output
_____no_output_____
###Markdown
Let's now create a map for India to show the state-wise total confirmed cases of coronavirus. Using the latitude and longitude values (which are numeric values with decimal), we can create circular markers on a map. For this, you need to use the `folium_map_with_circles()` function which takes the following inputs:- Name of the person who is creating the map which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`).- Name of the country for which a map needs to be created. It should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`). For the map only three values are supported: 1. `'India'` 2. `'US'` 3. `'World'`- Width of the map (numeric value).- Height of the map (numeric value).- Left margin for the map (numeric value).- Top margin for the map (numeric value).- The background style of the map which should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`). Here is the list of most commonly used background styles: 1. `'OpenStreetMap'` 2. `'Stamen Terrain'` 3. `'Stamen Toner'`- Initial zoom in value (a numeric value)- Colour of the circles on the map should be a text value enclosed within single-quotes (`''`) or double-quotes (`""`). Here's the list of most commonly used colours: 1. `'red'` 2. `'blue'` 3. `'magenta'` 4. `'yellow'` 5. `'green'`- Whether you want the map to have a minimap or not; `True` for **yes** and `False` for **no**.
###Code
# Student Action: Create a map for India to show the state-wise total confirmed cases of coronavirus.
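# Illustrative example call (the name, dimensions, margins, tile, zoom and
# colour below are assumptions; any of the documented values will work).
# The map is stored in a variable so it can be exported in the next activity.
india_covid_map = folium_map_with_circles('Student Name', 'India', 900, 550,
                                          0, 0, 'OpenStreetMap', 4, 'red', True)
india_covid_map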
###Output
_____no_output_____
###Markdown
**Note:** The `folium_map_with_circles()` function is NOT a standard Python function. It is a user-defined function created at WhiteHat Jr using Python to simplify the map creation process. You will learn to create your own user-defined function in the subsequent classes in this course.Let's export the above map as an HTML file. You can make it a web page like a website and share it with your parents or friends. To do this, you need to use the `save()` function which is a standard Python function. The input to this function should be a path (or location) of the directory where you want to store the HTML file. Also, name the file as `index.html`. This is very important.
###Code
# Student Action: Export the world map as an HTML file.
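# Illustrative example (assumes the map object from the previous cell is
# named india_covid_map, as in the example above):
india_covid_map.save('index.html')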
###Output
_____no_output_____ |
site/en/tutorials/distribute/multi_worker_with_keras.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment.Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable, you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
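###Markdown
 A minimal sketch (not used by the rest of this tutorial) of the `communication_options` override described above. Only the options object is built here; the constructor call is left commented out because creating a second strategy instance in the same process can raise the `RuntimeError` mentioned in the note that follows.
###Code
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
# strategy = tf.distribute.MultiWorkerMirroredStrategy(
#     communication_options=communication_options)
###Output
_____no_output_____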
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file, so you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use convenient `tf.keras.models.load_model` API, and continue with further work. Here, assume only using single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackBackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000training examples and 10,000 test examples of the handwritten digits 0–9,formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use `tf.keras.Sequential` API to build and compile a simple convolutional neural networks Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras model, please see [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results in single worker to make sure everything works correctly. You should expect to see the loss dropping and accuracy approaching 1.0 as epoch advances.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
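###Markdown
 A minimal sketch (an assumption, not part of the original tutorial) of the learning-rate adjustment mentioned earlier: one common heuristic is to scale the base rate linearly with the number of workers, since the global batch size grows with the size of the cluster.
###Code
base_learning_rate = 0.001
scaled_learning_rate = base_learning_rate * NUM_WORKERS
# This optimizer could then be passed to model.compile() inside strategy.scope().
scaled_optimizer = tf.keras.optimizers.SGD(learning_rate=scaled_learning_rate)
###Output
_____no_output_____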
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard = False
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
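###Markdown
A minimal sketch of the collective-communication override mentioned in the performance tips above (an illustration, not part of the original notebook): in a fresh program, the strategy would be constructed once with the `communication` argument before any other TensorFlow ops are created. The variable name `nccl_strategy` is hypothetical, and `NCCL` is only useful when NCCL-capable GPUs are available.
###Code
# Hedged sketch: override the automatic collective choice with NCCL.
# Run this in a fresh program, before other TensorFlow ops or strategies exist.
nccl_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
###Output
_____no_output_____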
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates how to perform multi-worker distributed training with a Keras model and the `Model.fit` API using the `tf.distribute.MultiWorkerMirroredStrategy` API. With the help of this strategy, a Keras model that was designed to run on a single-worker can seamlessly work on multiple workers with minimal code changes.To learn how to use the `MultiWorkerMirroredStrategy` with Keras and a custom training loop, refer to [Custom training loop with Keras and MultiWorkerMirroredStrategy](multi_worker_with_ctl.ipynb).This tutorial contains a minimal multi-worker example with two workers for demonstration purposes. Choose the right strategy Before you dive in, make sure that `tf.distribute.MultiWorkerMirroredStrategy` is the right choice for your accelerator(s) and training. These are two common ways of distributing training with data parallelism:* _Synchronous training_, where the steps of training are synced across the workers and replicas, such as `tf.distribute.MirroredStrategy`, `tf.distribute.TPUStrategy`, and `tf.distribute.MultiWorkerMirroredStrategy`. All workers train over different slices of input data in sync, and aggregating gradients at each step.* _Asynchronous training_, where the training steps are not strictly synced, such as `tf.distribute.experimental.ParameterServerStrategy`. All workers are independently training over the input data and updating variables asynchronously.If you are looking for multi-worker synchronous training without TPU, then `tf.distribute.MultiWorkerMirroredStrategy` is your choice. It creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keeps the variables in sync. For those interested, check out the `tf.distribute.experimental.CommunicationOptions` parameter for the collective implementation options we are providing.For an overview of `tf.distribute.Strategy` APIs, refer to [Distributed training in TensorFlow](../../guide/distributed_training.ipynb). SetupStart with some necessary imports:
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment:* In a real-world application, each worker would be on a different machine. For the purposes of this tutorial, all the workers will run on **this** machine. So disable all GPUs to prevent errors caused by all workers trying to use the same GPU.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
* Reset the `TF_CONFIG` environment variable (you'll learn more about this later):
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
* Make sure that the current directory is on Python's path—this allows the notebook to import the files written by `%%writefile` later:
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Finally, import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
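###Markdown
As a quick, hedged sanity check (a sketch, not part of the original notebook): with `CUDA_VISIBLE_DEVICES` set to `"-1"` above, TensorFlow should report an empty list of visible GPUs.
###Code
# With all GPUs disabled above, this should print an empty list.
print(tf.config.list_physical_devices('GPU'))
###Output
_____no_output_____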
###Markdown
Dataset and model definition Next, create an `mnist_setup.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist_setup.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Model training on a single workerTry training the model for a small number of epochs and observe the results of _a single worker_ to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist_setup
batch_size = 64
single_worker_dataset = mnist_setup.mnist_dataset(batch_size)
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker configurationNow let's enter the world of multi-worker training. A cluster with jobs and tasksIn TensorFlow, distributed training involves a `'cluster'`with several jobs, and each of the jobs may have one or more `'task'`s.You will need the `TF_CONFIG` configuration environment variable for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration for each worker that is part of the cluster.There are two components of a `TF_CONFIG` variable: `'cluster'` and `'task'`.* A `'cluster'` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs, such as `'worker'` or `'chief'`. - In multi-worker training with `tf.distribute.MultiWorkerMirroredStrategy`, there is usually one `'worker'` that takes on more responsibilities, such as saving a checkpoint and writing a summary file for TensorBoard, in addition to what a regular `'worker'` does. Such `'worker'` is referred to as the chief worker (with a job name `'chief'`). - It is customary for the worker with `'index'` `0` to be the `'chief'`.* A `'task'` provides information on the current task and is different for each worker. It specifies the `'type'` and `'index'` of that worker.Below is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
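###Markdown
A small, hedged sketch (not part of the original notebook) of reading basic facts back out of the `tf_config` dict defined above: the number of workers in the cluster, and whether this task would act as the chief (by convention, the `'worker'` with `'index'` `0`). The variable names `num_workers` and `is_chief` are hypothetical.
###Code
# Derive the worker count and the chief flag from the example `tf_config`.
num_workers = len(tf_config['cluster']['worker'])        # 2 in this example
is_chief = (tf_config['task']['type'] == 'worker' and
            tf_config['task']['index'] == 0)             # worker 0 acts as chief
print(num_workers, is_chief)
###Output
_____no_output_____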
###Markdown
Note that `tf_config` is just a local variable in Python. To use it for training configuration, serialize it as a JSON and place it in a `TF_CONFIG` environment variable.
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
In the example configuration above, you set the task `'type'` to `'worker'` and the task `'index'` to `0`. Therefore, this machine is the _first_ worker. It will be appointed as the `'chief'` worker.Note: Other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `'cluster'` dict, but different task `'type'`s or task `'index'`es, depending on the roles of those machines. In practice, you would create multiple workers on external IP addresses/ports and set a `TF_CONFIG` variable on each worker accordingly. For illustration purposes, this tutorial shows how you may set up a `TF_CONFIG` variable with two workers on a `localhost`:- The first (`'chief'`) worker's `TF_CONFIG` as shown above.- For the second worker, you will set `tf_config['task']['index']=1` Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this Jupyter Notebook process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
... then you can access the environment variable from the subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this method to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way in a real-world scenario—this tutorial is just showing how to do it with a minimal multi-worker example. Train the model To train the model, firstly create an instance of the `tf.distribute.MultiWorkerMirroredStrategy`:
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. With the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
###Output
_____no_output_____
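###Markdown
As a quick, hedged check (a sketch, not part of the original notebook): variables created inside the strategy scope are distributed/mirrored variable wrappers rather than plain `tf.Variable`s. The exact class name printed may vary across TensorFlow versions.
###Code
# Inspect the type of one of the model variables created under the scope.
print(type(multi_worker_model.weights[0]))
###Output
_____no_output_____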
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you encounter `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist_setup.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist_setup
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_setup.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
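###Markdown
Before moving on, here is a quick sanity check of the batch-size arithmetic described above (a sketch, not part of the original notebook), using the two-worker `tf_config` defined earlier.
###Code
# With 2 workers and a per-worker batch size of 64, the global batch size
# passed to `Dataset.batch` is 128; each worker then processes 128 / 2 = 64
# examples per step.
per_worker_batch_size = 64
num_workers = len(tf_config['cluster']['worker'])
global_batch_size = per_worker_batch_size * num_workers
print(global_batch_size, global_batch_size // num_workers)
###Output
_____no_output_____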
###Markdown
Serialize the `TF_CONFIG` to JSON and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can inspect what happened in a log file later.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now, inspect what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
If you recheck the logs written by the first worker, you'll learn that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Note: This may run slower than the test run at the beginning of this tutorial because running multiple workers on a single machine only adds overhead. The goal here is not to improve the training time but to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depth So far, you have learned how to perform a basic multi-worker setup. The rest of the tutorial goes over other factors, which may be useful or important for real use cases, in detail. Dataset shardingIn multi-worker training, _dataset sharding_ is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`.To learn more about _auto-sharding_, refer to the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn the auto sharding off, so that each replica processes every example (_not recommended_):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist_setup.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Evaluation If you pass the `validation_data` into `Model.fit` as well, it will alternate between training and evaluation for each epoch. The evaluation work is distributed across the same set of workers, and its results are aggregated and available to all workers.Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set the `validation_steps`.A repeated dataset (by calling `tf.data.Dataset.repeat`) is recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. Performance To tweak the performance of multi-worker training, you can try the following:- `tf.distribute.MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation): - `RING` implements ring-based collectives using gRPC as the cross-host communication layer. - `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives. - `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number of GPUs, the type of GPUs, and the network interconnects in the cluster. To override the automatic choice, specify the `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor. For example: ```python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CommunicationImplementation.NCCL) ```- Cast the variables to `tf.float` if possible: - The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how to do this. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists.Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You can do this by preserving the training state in the distributed file system of your choice, such that upon a restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore the training state upon a restart from a job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds the support to single-worker training for a consistent experience, and removed the fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new `BackupAndRestore` callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the saving destination needs to be different for each worker.- For non-chief workers, you will need to save the model to a temporary directory.- For the chief, you will need to save to the provided model directory.The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location.The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving.You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason for saving on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.Using the `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`:- `task_type` tells you what the current job is (e.g. `'worker'`).- `task_id` tells you the identifier of the worker.- The worker with `task_id == 0` is designated as the chief worker.In the code snippet below, the `write_filepath` function provides the file path to write, which depends on the worker's `task_id`:- For the chief worker (with `task_id == 0`), it writes to the original file path. - For other workers, it creates a temporary directory—`temp_dir`—with the `task_id` in the directory path to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # Note: there are two possible `TF_CONFIG` configurations.
  #   1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this Colab section, the `task_type is None` case
# is added because it is effectively run with only a single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are using only a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()` (note that `strategy = tf.distribute.MultiWorkerMirroredStrategy()`, as defined earlier):
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model.Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by the `tf.train.CheckpointManager`, so that only the latest checkpoint is preserved:
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save and remove the checkpoints the non-chief workers had saved:
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore the model, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callback The `tf.keras.callbacks.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch. Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state. To use it, provide an instance of `tf.keras.callbacks.BackupAndRestore` at the `Model.fit` call. With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster will pause until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker will rejoin the cluster. Then, every worker will read the checkpoint file that was previously saved and pick up its former state, thereby allowing the cluster to get back in sync. Then, the training will continue. The `BackupAndRestore` callback uses the `CheckpointManager` to save and restore the training state, which generates a file called `checkpoint` that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collisions. Currently, the `BackupAndRestore` callback supports single-worker training with no strategy or with `MirroredStrategy`, and multi-worker training with `MultiWorkerMirroredStrategy`. Below are two examples for both multi-worker training and single-worker training:
###Code
# Multi-worker training with `MultiWorkerMirroredStrategy`
# and the `BackupAndRestore` callback.
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
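###Markdown
And here is the corresponding single-worker sketch (hedged: the original notebook stops at the multi-worker example above, and the backup directory name below is an assumption). The same callback is used, just without a multi-worker strategy.
###Code
# Single-worker training with the `BackupAndRestore` callback.
# `/tmp/backup_single` is a hypothetical directory; as noted above, a
# `backup_dir` should not be re-used to store unrelated checkpoints.
single_worker_callbacks = [
    tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____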
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. Setup First, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment.Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll learn more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So JSON-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file, so you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look at what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depth So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset sharding In multi-worker training, dataset sharding is needed to ensure convergence and performance. The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding). Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are using only a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoring On the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackBackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
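###Markdown
And a corresponding single-worker sketch (hedged: the original notebook shows only the multi-worker example above, and the backup directory name below is an assumption).
###Code
# Single-worker training with the `BackupAndRestore` callback (no strategy).
# `/tmp/backup_single` is a hypothetical directory.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____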
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. Setup First, set up TensorFlow and the necessary imports.
###Code
import json
import os
import sys
import tensorflow as tf
# Reset `TF_CONFIG` if the notebook is restarted.
os.environ.pop('TF_CONFIG', None)
# Be sure that the current directory is on python's path
# This allows the notebook to import the files written by %%writefile
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Dataset and model definition Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: to demonstrate a minimal multi-worker example.
Choose the right strategy
In TensorFlow there are two main forms of distributed training:
* Synchronous training, where the steps of training are synced across the workers and replicas, and
* Asynchronous training, where the training steps are not strictly synced.
`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide. To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
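###Markdown
The constructor also accepts a `communication` argument to override the automatic choice of collective implementation described in the note below. A hedged sketch, assuming the `tf.distribute.experimental.CollectiveCommunication` enum referenced later in this tutorial (the variable name is illustrative and the cell is shown as a sketch rather than run here):
```python
# Sketch only: explicitly request NCCL-based collectives instead of AUTO.
strategy_nccl = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
```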
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training.
`MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
Train the model
With the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:
1. It uses `%%bash`, which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.
2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.
The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can see what happened.
So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases.
Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance. The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding, see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
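###Markdown
If you want to keep autosharding enabled but shard by individual elements rather than by input files, the `AutoShardPolicy.DATA` policy can be set in the same way; a minimal sketch (the dataset name below is illustrative):
```python
# Sketch: shard by elements of the dataset rather than by input files.
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
multi_worker_dataset_by_data = multi_worker_dataset.with_options(options)
```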
###Markdown
Evaluation
If you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
Prediction
Currently `model.predict` doesn't work with `MultiWorkerMirroredStrategy`.
Performance
You now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.
* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.
* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.
Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.
Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds support for single-worker training for a consistent experience, and removed fault tolerance functionality from the existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback.
ModelCheckpoint callback
The `ModelCheckpoint` callback no longer provides fault tolerance functionality; please use the [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead. The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible for loading the model manually. Optionally, the user can choose to save and restore the model/weights outside the `ModelCheckpoint` callback.
Model saving and loading
To save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.
The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing, which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting the chief and workers save to the same model directory will result in errors due to contention.
With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.
In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of the chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with the id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
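###Markdown
The section above also mentions `tf.saved_model.save` as an alternative to `model.save`; a minimal sketch using the same per-worker `write_model_path` would look like this (an alternative export path, not an additional required step):
```python
# Sketch: export in the SavedModel format; each worker still writes to its
# own `write_model_path` as computed by `write_filepath` above.
tf.saved_model.save(multi_worker_model, write_model_path)
```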
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoring
On the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
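###Markdown
For the single-worker case (no distribution strategy) mentioned above, a minimal sketch could look like the following. The backup directory `/tmp/backup_single` is hypothetical and kept separate so the multi-worker `backup_dir` is not re-used:
```python
# Sketch: single-worker training with the BackupAndRestore callback,
# reusing `single_worker_dataset` from earlier in this tutorial.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
```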
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras
Overview
This tutorial demonstrates how to perform multi-worker distributed training with a Keras model and the `Model.fit` API using the `tf.distribute.Strategy` API—specifically the `tf.distribute.MultiWorkerMirroredStrategy` class. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.
For those interested in a deeper understanding of `tf.distribute.Strategy` APIs, the [Distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports.
To learn how to use the `MultiWorkerMirroredStrategy` with Keras and a custom training loop, refer to [Custom training loop with Keras and MultiWorkerMirroredStrategy](multi_worker_with_ctl.ipynb).
Note that the purpose of this tutorial is to demonstrate a minimal multi-worker example with two workers.
Setup
Start with some necessary imports:
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment:1. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. In a real-world application, each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
2. Reset the `TF_CONFIG` environment variable (you'll learn more about this later):
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
3. Make sure that the current directory is on Python's path—this allows the notebook to import the files written by `%%writefile` later:
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist.py` file with a simple model and dataset setup. This Python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Model training on a single workerTry training the model for a small number of epochs and observe the results of _a single worker_ to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker configuration
Now let's enter the world of multi-worker training.
A cluster with jobs and tasks
In TensorFlow, distributed training involves a `'cluster'` with several jobs, and each of the jobs may have one or more `'task'`s.
You will need the `TF_CONFIG` configuration environment variable for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration for each worker that is part of the cluster.
There are two components of a `TF_CONFIG` variable: `'cluster'` and `'task'`.
* A `'cluster'` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs, such as `'worker'` or `'chief'`.
  - In multi-worker training with `tf.distribute.MultiWorkerMirroredStrategy`, there is usually one `'worker'` that takes on responsibilities, such as saving a checkpoint and writing a summary file for TensorBoard, in addition to what a regular `'worker'` does. Such a `'worker'` is referred to as the chief worker (with a job name `'chief'`).
  - It is customary for the `'worker'` with `'index'` `0` to be appointed the `'chief'` (in fact, this is how `tf.distribute.Strategy` is implemented).
* A `'task'` provides information about the current task and is different for each worker. It specifies the `'type'` and `'index'` of that worker.
Below is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
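###Markdown
For reference, the second worker's `TF_CONFIG` would contain the same `'cluster'` dict but a task `'index'` of `1`; a minimal sketch (the variable name is illustrative, and the tutorial later simply updates `tf_config['task']['index']` in place):
```python
# Hypothetical TF_CONFIG for the second worker: same cluster, task index 1.
second_worker_tf_config = {
    'cluster': {
        'worker': ['localhost:12345', 'localhost:23456']
    },
    'task': {'type': 'worker', 'index': 1}
}
```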
###Markdown
Note that`tf_config` is just a local variable in Python. To be able to use it for a training configuration, this dict needs to be serialized as a JSON and placed in a `TF_CONFIG` environment variable. In the example configuration above, you set the task `'type'` to `'worker'` and the task `'index'` to `0`. Therefore, this machine is the _first_ worker. It will be appointed as the `'chief'` worker and do more work than the others.Note: Other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `'cluster'` dict, but different task `'type'`s or task `'index'`es, depending on the roles of those machines. For illustration purposes, this tutorial shows how you may set up a `TF_CONFIG` variable with two workers on a `localhost`.In practice, you would create multiple workers on external IP addresses/ports and set a `TF_CONFIG` variable on each worker accordingly.In this tutorial, you will use two workers:- The first (`'chief'`) worker's `TF_CONFIG` is shown above.- For the second worker, you will set `tf_config['task']['index']=1` Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent.For example, you can set an environment variable in this Jupyter Notebook process as follows:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
Then, you can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use a similar method to pass the `TF_CONFIG` to the worker subprocesses. In a real-world scenario, you wouldn't launch your jobs this way, but it's sufficient in this example. Choose the right strategyIn TensorFlow, there are two main forms of distributed training:* _Synchronous training_, where the steps of training are synced across the workers and replicas, and* _Asynchronous training_, where the training steps are not strictly synced (for example, [parameter server training](parameter_server_training.ipynb)).This tutorial demonstrates how to perform synchronous multi-worker training using an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
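###Markdown
As the note below describes, the constructor also takes a `communication_options` parameter for overriding the automatic choice of collective implementation. A hedged sketch based on the snippet shown later in the Performance section (the variable names are illustrative, and the cell is a sketch rather than something run here):
```python
# Sketch only: explicitly request NCCL-based collectives.
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
strategy_nccl = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)
```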
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter: 1) `RING` implements ring-based collectives using gRPC as the cross-host communication layer; 2) `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives; and 3) `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you encounter `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can inspect what happened in a log file later.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now, inspect what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
If you recheck the logs written by the first worker, you'll learn that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial.Running multiple workers on a single machine only adds overhead.The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depth
So far, you have learned how to perform a basic multi-worker setup. During the rest of the tutorial, you will learn about other factors, which may be useful or important for real use cases, in detail.
Dataset sharding
In multi-worker training, _dataset sharding_ is needed to ensure convergence and performance. The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about _auto-sharding_, refer to the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).
Here is a quick example of how to turn the auto sharding off, so that each replica processes every example (_not recommended_):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Evaluation
If you pass the `validation_data` into `Model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking the `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set the `validation_steps`. A repeated dataset is also recommended for evaluation (a minimal sketch of such a call follows this cell).
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
Performance
You now have a Keras model that is all set up to run in multiple workers with the `MultiWorkerMirroredStrategy`. To tweak performance of multi-worker training, you can try the following:
- `tf.distribute.MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation):
  - `RING` implements ring-based collectives using gRPC as the cross-host communication layer.
  - `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives.
  - `AUTO` defers the choice to the runtime.
  The best choice of collective implementation depends upon the number of GPUs, the type of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify the `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor. For example:
  ```python
  communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
  ```
- Cast the variables to `tf.float` if possible:
  - The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You can do this by preserving the training state in the distributed file system of your choice, such that upon a restart of the instance that previously failed or was preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore the training state upon a restart from a job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds support for single-worker training for a consistent experience, and removed fault tolerance functionality from the existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback.
ModelCheckpoint callback
The `ModelCheckpoint` callback no longer provides fault tolerance functionality; please use the [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead. The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible for loading the model manually. Optionally, the user can choose to save and restore the model/weights outside the `ModelCheckpoint` callback.
Model saving and loading
To save your model using `model.save` or `tf.saved_model.save`, the saving destination needs to be different for each worker.
- For non-chief workers, you will need to save the model to a temporary directory.
- For the chief, you will need to save to the provided model directory.
The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.
The reason for saving on the chief and workers at the same time is because you might be aggregating variables during checkpointing, which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting the chief and workers save to the same model directory will result in errors due to contention.
Using the `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`:
- `task_type` tells you what the current job is (e.g. `'worker'`).
- `task_id` tells you the identifier of the worker.
- The worker with `task_id == 0` is designated as the chief worker.
In the code snippet below, the `write_filepath` function provides the file path to write, which depends on the worker's `task_id`:
- For the chief worker (with `task_id == 0`), it writes to the original file path.
- For other workers, it creates a temporary directory—`temp_dir`—with the `task_id` in the directory path to write in:
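###Markdown
As mentioned in the Evaluation section above, distributed evaluation only requires passing a validation dataset (batched with a global batch size) and `validation_steps` to `Model.fit`. A minimal sketch, reusing the MNIST helper purely for illustration (in practice you would pass a real validation split, and the step count here is arbitrary):
```python
# Sketch: training with distributed evaluation after each epoch.
val_dataset = mnist.mnist_dataset(global_batch_size)  # illustrative only
multi_worker_model.fit(multi_worker_dataset,
                       epochs=3,
                       steps_per_epoch=70,
                       validation_data=val_dataset,
                       validation_steps=10)
```
The next cell returns to model saving and defines the `_is_chief` check and the `write_filepath` helper described above.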
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# Note: there are two possible `TF_CONFIG` configuration.
  # 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this Colab section, the `task_type is None` case
# is added because it is effectively run with only a single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()` (note that `strategy = tf.distribute.MultiWorkerMirroredStrategy()`, as defined earlier):
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model.Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by the `tf.train.CheckpointManager`, so that only the latest checkpoint is preserved:
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save and remove the checkpoints the non-chief workers had saved:
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore the model, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackThe `tf.keras.callbacks.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.BackupAndRestore` at the `Model.fit` call.With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then, the training continues.The `BackupAndRestore` callback uses the `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, the `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy
# and the BackupAndRestore callback.
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
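###Markdown
For the single-worker case (no distribution strategy) mentioned above, a minimal sketch could look like the following; the backup directory `/tmp/backup_single` is hypothetical and kept separate so the multi-worker `backup_dir` is not re-used:
```python
# Sketch: single-worker training with the BackupAndRestore callback,
# reusing `single_worker_dataset` from earlier in this tutorial.
single_worker_callbacks = [
    tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
```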
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras
Overview
This tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.
The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs.
Setup
First, set up TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing dataset
Now, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras model
Here we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.
Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results with a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created.
`MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
Train the model with MultiWorkerMirroredStrategy
With the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.
Note: In this Colab, the following code can run with the expected result, but this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect a speed-up from training on multiple machines.
Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when the dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# Creation of dataset needs to be after MultiWorkerMirroredStrategy object
# is instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch size
In multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly sent to `model.fit()` without needing to be sharded manually; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker training.
If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
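###Markdown
To actually train with the de-sharded dataset created above, it would be passed to `model.fit()` in place of `train_datasets`; a minimal sketch (if you turn autosharding off, you would typically shard the dataset yourself, otherwise each replica sees every example):
```python
# Sketch: training on the dataset with autosharding turned off.
multi_worker_model.fit(x=train_datasets_no_auto_shard,
                       epochs=3,
                       steps_per_epoch=5)
```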
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as in the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed to `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per-worker batch size the same as before.
Performance
You now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.
* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.
* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.
Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.
ModelCheckpoint callback
To take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
If a worker gets preempted, the whole cluster pauses until the preempted worker is restarted. Once the worker rejoins the cluster, other workers will also restart. Now, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.If you inspect the directory containing the `filepath` you specified in `ModelCheckpoint`, you may notice some temporarily generated checkpoint files. Those files are needed for recovering the previously lost instances, and they will be removed by the library at the end of `tf.keras.Model.fit()` once your multi-worker training exits successfully. Save/Restore outside `ModelCheckpoint` callbackIf you want to save your model using `model.save` or `tf.saved_model.save`, you will need to save the model to a temporary directory on the workers and to the provided model directory on the chief. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The models saved in all the directories are identical, and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.If you want to restore a checkpoint, you will need to find the latest checkpoint in the model directory using `tf.train.latest_checkpoint` and call `restore` with it. This means that on workers you will be saving to a temporary directory but restoring from the model directory to which only the chief checkpoints. The `ModelCheckpoint` callback encompasses this save-and-restore logic. This is why you may have noticed additional temporary directories created during training.The reason we need to save on both the chief and the workers is that we might be aggregating variables during checkpointing, which requires both the chief and the workers to participate in the allreduce communication protocol. Letting the chief and workers save to the same model directory will result in errors due to contention.
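The code in the next cell stubs out `is_chief`; a minimal sketch of a real implementation (assuming `TF_CONFIG` is set as described earlier in this tutorial, and that the worker with `index` 0 acts as the chief) might parse the task information from the environment:
```
import json
import os

def is_chief():
  # Parse the cluster/task description this worker was launched with.
  tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
  task = tf_config.get('task', {})
  # By convention, the worker with index 0 acts as the chief when the cluster
  # spec has no dedicated 'chief' job.
  return task.get('type', 'worker') == 'worker' and task.get('index', 0) == 0
```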
###Code
# Saving a model
# Let `is_chief` be a utility function that inspects the cluster spec and
# current task type and returns True if the worker is the chief and False
# otherwise.
def is_chief():
return True
if is_chief():
# This is the model directory; ideally it would be a cloud bucket.
path = '/tmp/model_dir'
else:
# Save to a path that is unique across workers.
worker_id = 1
path = '/tmp/model_dir/worker_tmp_' + str(worker_id)
multi_worker_model.save(path)
# Restoring a checkpoint
# On the Chief
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
# On the Workers
# This is the path that the chief saves the model to
model_dir_path = '/tmp/model_dir'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
latest_checkpoint = tf.train.latest_checkpoint(model_dir_path)
status = checkpoint.restore(latest_checkpoint)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of the `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
import os
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images. In this example, we will use the training portion of the dataset for demonstration.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# We need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility, like saving checkpoints and writing summary files for TensorBoard, in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task`, on the other hand, provides information about the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such a setting is the first worker, which will be appointed as the chief worker and do more work than the other workers. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but a different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
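Before creating the strategy in the next cell, here is a minimal sketch of the learning-rate adjustment mentioned above. Linear scaling by the number of workers is a common heuristic rather than something this tutorial's code does, and the names below are illustrative only.
```
# A sketch only: scale the base learning rate with the number of workers,
# since the global batch size grows proportionally to the cluster size.
num_workers = 2
base_learning_rate = 0.001
scaled_learning_rate = base_learning_rate * num_workers  # 0.002 for 2 workers
```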
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's gRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect a speed-up from training on multiple machines.Note: If you have an infinite dataset (by calling `.repeat()` on the dataset), you must specify the number of steps to run through the `steps_per_epoch` argument to `model.fit()`. In that case, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. If you have a finite dataset, setting `steps_per_epoch` is optional. In particular, if the sharding is not balanced (for example, this could happen if you have a file-based dataset with more files than workers, and some workers get files that contain more data than others; you can shard the data more evenly by manually setting `tf.data.experimental.AutoShardPolicy`, more details [here](https://www.tensorflow.org/tutorials/distribute/input#sharding)), and `steps_per_epoch` is not set or is set to be greater than the size of the smallest shard divided by the per-worker batch size, you might get partial batches towards the end of training.
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 256.
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training with `MultiWorkerMirroredStrategy`, sharding the dataset is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are passed directly to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of dataset sharding automatically. It shards the dataset at the file level, which may create skewed shards. In extreme cases where there is only one file, only the first shard (i.e. worker) will get training or evaluation data, and as a result all workers will get errors.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
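# Alternatively (a sketch, not required by this tutorial): instead of turning
# automatic sharding off entirely, you could switch from the default file-based
# sharding to data-based sharding, which shards by elements rather than files.
options_data_shard = tf.data.Options()
options_data_shard.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_data_shard = multi_worker_dataset.with_options(options_data_shard)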
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as it was in the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed to `tf.data.Dataset.batch()`) divided by the number of workers, so with this change we keep the per-worker batch size the same as before. EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation using `validation_data` is distributed across the same set of workers, and the evaluation results are aggregated and available to all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy`. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak the performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value for the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart before continuing.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training.
We are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds support for single-worker training for a consistent experience, and removed the fault tolerance functionality from the existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callbackThe `ModelCheckpoint` callback no longer provides fault tolerance functionality; please use the [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, the user is responsible for loading the model manually in order to continue training from the checkpoint.Optionally, users can choose to save and restore the model/weights outside the `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The models saved in all the directories are identical, and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and the workers at the same time is that you might be aggregating variables during checkpointing, which requires both the chief and the workers to participate in the allreduce communication protocol. On the other hand, letting the chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is the chief, we take advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of the chief (the worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with the id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, the model should later be loaded only from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, we assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
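The next cell shows this single-worker case. If you instead wanted to continue multi-worker training with the restored model, a minimal sketch (assuming the same `strategy` and `multi_worker_dataset` objects from earlier in this tutorial) would be to load the model inside the strategy's scope:
```
# A sketch only: loading under the strategy scope makes the restored variables
# mirrored across the workers, so multi-worker training can continue.
with strategy.scope():
  restored_model = tf.keras.models.load_model(model_path)
restored_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
```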
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that we have the model restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're ready to save, and then to remove the checkpoints the non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest saved checkpoint using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: The `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.The BackupAndRestore callback provides fault tolerance functionality by backing up the model and the current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` in the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.The `BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called `checkpoint` that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be reused to store other checkpoints, in order to avoid name collisions.Currently, the `BackupAndRestore` callback supports single-worker training with no strategy, MirroredStrategy, and multi-worker training with MultiWorkerMirroredStrategy.Below are two examples, for multi-worker training and for single-worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
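# Single-worker training with BackupAndRestore (the second example mentioned
# above). This is a minimal sketch; the separate backup directory name is an
# assumption, chosen so the two runs do not share a `backup_dir`.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(
        backup_dir='/tmp/backup_single_worker')]
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)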
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of the `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment.Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on Python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
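As the note below explains, `MultiWorkerMirroredStrategy` also accepts an explicit collective implementation through `CommunicationOptions`. A minimal sketch of overriding the default `AUTO` is shown here for reference only; it is not executed in this notebook, since creating a second strategy instance in the same process is not supported:
```
# Sketch only: request NCCL collectives explicitly instead of AUTO.
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
nccl_strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)
```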
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file, so you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding, see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
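# A further illustrative sketch (not from the original tutorial): shard by
# data instead, so each worker reads all files but keeps only its own examples.
options_data = tf.data.Options()
options_data.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_data_shard = multi_worker_dataset.with_options(options_data)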
###Output
_____no_output_____
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # If `task_type` is None, this may be operating as a single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not need to call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackBackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
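# Single-worker training (no strategy): a sketch of the second example noted
# above; a separate backup_dir avoids re-using the multi-worker one.
single_worker_callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)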
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment.Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
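As the next cell describes, every worker shares the same `cluster` but has its own `task`. For example, the second worker's configuration (an illustrative sketch mirroring the dict above) would differ only in the task index:
```
# Second worker: identical cluster, task index 1.
tf_config_worker_1 = {
    'cluster': {
        'worker': ['localhost:12345', 'localhost:23456']
    },
    'task': {'type': 'worker', 'index': 1}
}
```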
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file, so you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding, see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
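The next section discusses running evaluation inside `Model.fit`. As a preview, here is a minimal sketch of what that could look like; the validation dataset, global batch size, and step counts below are illustrative assumptions, not part of the original tutorial:
```
# Sketch: distributed evaluation via validation_data; values are illustrative.
val_dataset = mnist.mnist_dataset(global_batch_size)
multi_worker_model.fit(multi_worker_dataset,
                       epochs=3,
                       steps_per_epoch=70,
                       validation_data=val_dataset,
                       validation_steps=10)
```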
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # Note: there are two possible `TF_CONFIG` configurations.
  # 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this colab section, we also add `task_type is None`
# case because it is effectively run with only single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not need to call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackBackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
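As a brief aside on the learning-rate note above: one common heuristic (an assumption used here purely for illustration, not a recommendation made by this tutorial) is to scale the learning rate linearly with the number of workers, for example:
```
# Illustrative only: linear learning-rate scaling with the number of workers.
NUM_WORKERS = 2
BASE_LEARNING_RATE = 0.001
scaled_optimizer = tf.keras.optimizers.SGD(
    learning_rate=BASE_LEARNING_RATE * NUM_WORKERS)
```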
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are passed directly to `model.fit()` without needing to shard; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as in the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers; with this change we are keeping the per-worker batch size the same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak the performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value for the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import os
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images. In this example, we will use the training portion of the dataset for demonstration.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# We need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and accuracy approaching 1.0 as epochs advance.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility, such as saving checkpoints and writing a summary file for TensorBoard, in addition to what a regular `worker` does. Such a worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task`, on the other hand, provides information about the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine with this setting is the first worker, which will be appointed as the chief worker and do more work than the other workers. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but a different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size; a short sketch of one common heuristic follows this cell. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
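As a minimal, hedged sketch of the learning-rate note above (the variable names below are hypothetical and not used elsewhere in this notebook), one common heuristic is to scale the learning rate linearly with the number of workers:
```
num_workers = 2                            # matches the two-worker `TF_CONFIG` example above
base_learning_rate = 0.001                 # the rate used in `build_and_compile_cnn_model()`
scaled_learning_rate = base_learning_rate * num_workers
# `scaled_learning_rate` would then be passed to the optimizer when compiling the model.
```
Whether and how to adjust the learning rate is problem-dependent; linear scaling is only one common choice. The next cell creates the strategy.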
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's gRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect a speed-up from training on multiple machines.Note: Always pass in the `steps_per_epoch` argument to `model.fit()` since `MultiWorkerMirroredStrategy` does not support last partial batch handling. When using `steps_per_epoch`, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. Hence, make sure to call `.repeat()` on the dataset so it has an adequate number of examples for N epochs. If your dataset is not a repeated dataset, `steps_per_epoch` should be set based on the amount of training data on each worker so that all workers would perform the same number of steps of training or evaluation, which is required by allreduce. In particular, if the sharding is not balanced, `steps_per_epoch` should be set to the size of the smallest shard divided by the per-worker batch size.
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and with `num_workers = 4` this becomes 256.
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
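# Illustrative sketch only (`eval_dataset` is hypothetical): to also run distributed
# evaluation, pass a globally batched, repeated validation dataset together with
# `validation_steps`, e.g.:
# multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70,
#                        validation_data=eval_dataset, validation_steps=10)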
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training with `MultiWorkerMirroredStrategy`, sharding the dataset is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly passed to `model.fit()` without needing to shard; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically. It shards the dataset at the file level, which may create skewed shards. In extreme cases where there is only one file, only the first shard (i.e. worker) will get training or evaluation data and as a result all workers will get errors.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as in the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers; with this change we are keeping the per-worker batch size the same as before. EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available to all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy`. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak the performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value for the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. 
We are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds support for single-worker training for a consistent experience, and have removed the fault tolerance functionality from the existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callbackThe `ModelCheckpoint` callback no longer provides fault tolerance functionality; please use the [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible for loading the model manually.Optionally, users can choose to save and restore model/weights outside the `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is that you might be aggregating variables during checkpointing, which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, we take advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of the chief (the worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with the worker id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As we described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, we assume we are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, we can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.The `BackupAndRestore` callback provides fault tolerance functionality by backing up the model and the current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.The `BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, the `BackupAndRestore` callback supports single-worker training with no strategy or with `MirroredStrategy`, and multi-worker training with `MultiWorkerMirroredStrategy`.Below are two examples, for multi-worker training and single-worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
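# Single-worker training with the same callback (illustrative sketch; the backup
# directory name is hypothetical and kept separate from the multi-worker run above
# to avoid checkpoint name collisions).
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single_worker')]
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)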
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates how to perform multi-worker distributed training with a Keras model and the `Model.fit` API using the `tf.distribute.MultiWorkerMirroredStrategy` API. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.To learn how to use the `MultiWorkerMirroredStrategy` with Keras and a custom training loop, refer to [Custom training loop with Keras and MultiWorkerMirroredStrategy](multi_worker_with_ctl.ipynb).This tutorial contains a minimal multi-worker example with two workers for demonstration purposes. Choose the right strategy Before you dive in, make sure that `tf.distribute.MultiWorkerMirroredStrategy` is the right choice for your accelerator(s) and training. There are two common ways of distributing training with data parallelism:* _Synchronous training_, where the steps of training are synced across the workers and replicas, such as `tf.distribute.MirroredStrategy`, `tf.distribute.TPUStrategy`, and `tf.distribute.MultiWorkerMirroredStrategy`. All workers train over different slices of input data in sync, aggregating gradients at each step.* _Asynchronous training_, where the training steps are not strictly synced, such as `tf.distribute.experimental.ParameterServerStrategy`. All workers are independently training over the input data and updating variables asynchronously.If you are looking for multi-worker synchronous training without TPUs, then `tf.distribute.MultiWorkerMirroredStrategy` is your choice. It creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. For those interested, check out the `tf.distribute.experimental.CommunicationOptions` parameter for the collective implementation options.For an overview of `tf.distribute.Strategy` APIs, refer to [Distributed training in TensorFlow](../../guide/distributed_training.ipynb). SetupStart with some necessary imports:
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment:* In a real-world application, each worker would be on a different machine. For the purposes of this tutorial, all the workers will run on **this** machine. So disable all GPUs to prevent errors caused by all the workers trying to use the same GPU.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
* Reset the `TF_CONFIG` environment variable (you'll learn more about this later):
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
* Make sure that the current directory is on Python's path—this allows the notebook to import the files written by `%%writefile` later:
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Finally, import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist_setup.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist_setup.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Model training on a single workerTry training the model for a small number of epochs and observe the results of _a single worker_ to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist_setup
batch_size = 64
single_worker_dataset = mnist_setup.mnist_dataset(batch_size)
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker configurationNow let's enter the world of multi-worker training. A cluster with jobs and tasksIn TensorFlow, distributed training involves a `'cluster'` with several jobs, and each of the jobs may have one or more `'task'`s.You will need the `TF_CONFIG` configuration environment variable for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration for each worker that is part of the cluster.There are two components of a `TF_CONFIG` variable: `'cluster'` and `'task'`.* A `'cluster'` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs, such as `'worker'` or `'chief'`. - In multi-worker training with `tf.distribute.MultiWorkerMirroredStrategy`, there is usually one `'worker'` that takes on more responsibilities, such as saving a checkpoint and writing a summary file for TensorBoard, in addition to what a regular `'worker'` does. Such a `'worker'` is referred to as the chief worker (with a job name `'chief'`). - It is customary for the worker with `'index'` `0` to be the `'chief'`.* A `'task'` provides information on the current task and is different for each worker. It specifies the `'type'` and `'index'` of that worker.Below is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Note that `tf_config` is just a local variable in Python. To use it for training configuration, serialize it as a JSON and place it in a `TF_CONFIG` environment variable.
###Code
json.dumps(tf_config)
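# For reference (illustrative), the serialized string looks like:
# '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0}}'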
###Output
_____no_output_____
###Markdown
In the example configuration above, you set the task `'type'` to `'worker'` and the task `'index'` to `0`. Therefore, this machine is the _first_ worker. It will be appointed as the `'chief'` worker.Note: Other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `'cluster'` dict, but different task `'type'`s or task `'index'`es, depending on the roles of those machines. In practice, you would create multiple workers on external IP addresses/ports and set a `TF_CONFIG` variable on each worker accordingly. For illustration purposes, this tutorial shows how you may set up a `TF_CONFIG` variable with two workers on `localhost`:- The first (`'chief'`) worker's `TF_CONFIG` as shown above.- For the second worker, you will set `tf_config['task']['index']=1`. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this Jupyter Notebook process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
... then you can access the environment variable from the subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this method to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way in a real-world scenario—this tutorial is just showing how to do it with a minimal multi-worker example. Train the model To train the model, firstly create an instance of the `tf.distribute.MultiWorkerMirroredStrategy`:
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's gRPC servers are started at the time `MultiWorkerMirroredStrategy` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. With the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you encounter `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist_setup.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist_setup
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
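# With the two-worker `TF_CONFIG` used in this tutorial, the global batch size is
# 64 * 2 = 128; autosharding then gives each worker 64 examples per step, the same
# as in the single-worker run.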
multi_worker_dataset = mnist_setup.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above, note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
Serialize the `TF_CONFIG` to JSON and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses `%%bash`, which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate: it waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can inspect what happened in a log file later.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now, inspect what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
If you recheck the logs written by the first worker, you'll learn that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Note: This may run slower than the test run at the beginning of this tutorial because running multiple workers on a single machine only adds overhead. The goal here is not to improve the training time but to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depth So far, you have learned how to perform a basic multi-worker setup. The rest of the tutorial goes over other factors, which may be useful or important for real use cases, in detail. Dataset shardingIn multi-worker training, _dataset sharding_ is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`.To learn more about _auto-sharding_, refer to the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn the auto sharding off, so that each replica processes every example (_not recommended_):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist_setup.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
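# Illustrative note: with auto-sharding disabled, every worker sees every example.
# If manual sharding is desired instead, one option (names here are hypothetical) is
# `multi_worker_dataset.shard(num_shards=num_workers, index=worker_index)`.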
###Output
_____no_output_____
###Markdown
Evaluation If you pass the `validation_data` into `Model.fit` as well, it will alternate between training and evaluation for each epoch. The evaluation work is distributed across the same set of workers, and its results are aggregated and available to all workers.Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set the `validation_steps`.A repeated dataset (by calling `tf.data.Dataset.repeat`) is recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what an Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. Performance To tweak the performance of multi-worker training, you can try the following:- `tf.distribute.MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation): - `RING` implements ring-based collectives using gRPC as the cross-host communication layer. - `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives. - `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number of GPUs, the type of GPUs, and the network interconnects in the cluster. To override the automatic choice, specify the `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor. For example: ```python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CommunicationImplementation.NCCL) ```- Cast the variables to `tf.float` if possible: - The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how to do this. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists.Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You can do this by preserving the training state in the distributed file system of your choice, such that upon a restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore the training state upon a restart from a job failure for multi-worker training. The TensorFlow team is introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds the support to single-worker training for a consistent experience, and removed the fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new `BackupAndRestore` callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible for loading the model manually.Optionally, users can choose to save and restore model/weights outside the `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the saving destination needs to be different for each worker.- For non-chief workers, you will need to save the model to a temporary directory.- For the chief, you will need to save to the provided model directory.The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location.The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving.You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason for saving on the chief and workers at the same time is that you might be aggregating variables during checkpointing, which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.Using the `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`:- `task_type` tells you what the current job is (e.g. `'worker'`).- `task_id` tells you the identifier of the worker.- The worker with `task_id == 0` is designated as the chief worker.In the code snippet below, the `write_filepath` function provides the file path to write, which depends on the worker's `task_id`:- For the chief worker (with `task_id == 0`), it writes to the original file path. - For other workers, it creates a temporary directory—`temp_dir`—with the `task_id` in the directory path to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # Note: there are two possible `TF_CONFIG` configurations.
  # 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this Colab section, the `task_type is None` case
# is added because it is effectively run with only a single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work.Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()` (note that `strategy = tf.distribute.MultiWorkerMirroredStrategy()`, as defined earlier):
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model.Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by the `tf.train.CheckpointManager`, so that only the latest checkpoint is preserved:
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save and remove the checkpoints the non-chief workers had saved:
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore the model, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackThe `tf.keras.callbacks.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.BackupAndRestore` at the `Model.fit` call.With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster will pause until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker will rejoin the cluster. Then, every worker will read the checkpoint file that was previously saved and pick up its former state, thereby allowing the cluster to get back in sync. Then, the training will continue.The `BackupAndRestore` callback uses the `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, the `BackupAndRestore` callback supports single-worker training with no strategy or with `MirroredStrategy`, and multi-worker training with `MultiWorkerMirroredStrategy`.Below are two examples for both multi-worker training and single-worker training:
###Code
# Multi-worker training with `MultiWorkerMirroredStrategy`
# and the `BackupAndRestore` callback.
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
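# Single-worker training with the same callback (illustrative sketch; the backup
# directory name is hypothetical and kept separate from the multi-worker run above
# to avoid checkpoint name collisions).
single_worker_callbacks = [
    tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup_single_worker')]
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)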
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distribute_strategy.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy.
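As a side note on the `TF_CONFIG` example above: every additional machine needs the same `cluster` dict but its own task `index`. A minimal sketch of the second worker's setting (again, not to be executed in Colab, and assuming `os` and `json` are imported) might look like:
```
# Hypothetical second worker (task index 1); requires `import os, json` and must
# be set before the `tf.distribute.Strategy` instance is created.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
})
```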
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
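To override the automatic choice of collective implementation described above, you could pass the `communication` argument when constructing the strategy. This is only a sketch and is not executed in this notebook, since the strategy has already been created above:
```
# Sketch only: explicitly request NCCL collectives. This assumes NVIDIA GPUs with
# NCCL available on every worker; otherwise keep the default AUTO.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
```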
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
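###Markdown
If a worker fails and is restarted, the recovery flow described above amounts to every worker re-running the same training code: the `ModelCheckpoint` callback pointed at the same shared `filepath` restores the preserved training state before training continues. A minimal sketch, under that assumption:
```
# Sketch only: after all workers restart, re-run the same code with the same
# callback and the same shared checkpoint directory; training resumes from the
# previously saved state.
with strategy.scope():
  multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
```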
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates how to perform multi-worker distributed training with a Keras model and the `Model.fit` API using the `tf.distribute.Strategy` API—specifically the `tf.distribute.MultiWorkerMirroredStrategy` class. With the help of this strategy, a Keras model that was designed to run on a single-worker can seamlessly work on multiple workers with minimal code change.For those interested in a deeper understanding of `tf.distribute.Strategy` APIs, the [Distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports.To learn how to use the `MultiWorkerMirroredStrategy` with Keras and a custom training loop, refer to [Custom training loop with Keras and MultiWorkerMirroredStrategy](multi_worker_with_ctl.ipynb).Note that the purpose of this tutorial is to demonstrate a minimal multi-worker example with two workers. SetupStart with some necessary imports:
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment:1. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. In a real-world application, each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
2. Reset the `TF_CONFIG` environment variable (you'll learn more about this later):
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
3. Make sure that the current directory is on Python's path—this allows the notebook to import the files written by `%%writefile` later:
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist.py` file with a simple model and dataset setup. This Python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
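      # The final Dense layer outputs raw logits; the loss below is configured with from_logits=True.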
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Model training on a single workerTry training the model for a small number of epochs and observe the results of _a single worker_ to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker configurationNow let's enter the world of multi-worker training. A cluster with jobs and tasksIn TensorFlow, distributed training involves: a `'cluster'`with several jobs, and each of the jobs may have one or more `'task'`s.You will need the `TF_CONFIG` configuration environment variable for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration for each worker that is part of the cluster.There are two components of a `TF_CONFIG` variable: `'cluster'` and `'task'`.* A `'cluster'` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs, such as `'worker'` or `'chief'`. - In multi-worker training with `tf.distribute.MultiWorkerMirroredStrategy`, there is usually one `'worker'` that takes on responsibilities, such as saving a checkpoint and writing a summary file for TensorBoard, in addition to what a regular `'worker'` does. Such `'worker'` is referred to as the chief worker (with a job name `'chief'`). - It is customary for the `'chief'` to have `'index'` `0` be appointed to (in fact, this is how `tf.distribute.Strategy` is implemented).* A `'task'` provides information of the current task and is different for each worker. It specifies the `'type'` and `'index'` of that worker.Below is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Note that`tf_config` is just a local variable in Python. To be able to use it for a training configuration, this dict needs to be serialized as a JSON and placed in a `TF_CONFIG` environment variable. In the example configuration above, you set the task `'type'` to `'worker'` and the task `'index'` to `0`. Therefore, this machine is the _first_ worker. It will be appointed as the `'chief'` worker and do more work than the others.Note: Other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `'cluster'` dict, but different task `'type'`s or task `'index'`es, depending on the roles of those machines. For illustration purposes, this tutorial shows how you may set up a `TF_CONFIG` variable with two workers on a `localhost`.In practice, you would create multiple workers on external IP addresses/ports and set a `TF_CONFIG` variable on each worker accordingly.In this tutorial, you will use two workers:- The first (`'chief'`) worker's `TF_CONFIG` is shown above.- For the second worker, you will set `tf_config['task']['index']=1` Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent.For example, you can set an environment variable in this Jupyter Notebook process as follows:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
Then, you can access the environment variable from a subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use a similar method to pass the `TF_CONFIG` to the worker subprocesses. In a real-world scenario, you wouldn't launch your jobs this way, but it's sufficient in this example. Choose the right strategyIn TensorFlow, there are two main forms of distributed training:* _Synchronous training_, where the steps of training are synced across the workers and replicas, and* _Asynchronous training_, where the training steps are not strictly synced (for example, [parameter server training](parameter_server_training.ipynb).This tutorial demonstrates how to perform synchronous multi-worker training using an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter: 1) `RING` implements ring-based collectives using gRPC as the cross-host communication layer; 2) `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives; and 3) `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
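For reference, a hedged sketch of how the `CommunicationOptions` parameter mentioned above could be passed when constructing the strategy (not run here, since the strategy has already been created):
```python
# Sketch only: explicitly request NCCL collectives (assumes NVIDIA GPUs with NCCL
# available on every worker); otherwise the default AUTO is usually fine.
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)
nccl_strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)
```
The rest of this tutorial keeps the default options.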
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you encounter `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
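# Scale the global batch size by the number of workers so that each worker still
# processes `per_worker_batch_size` (64) examples per step.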
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above, note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
Next, JSON-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can inspect what happened in a log file later.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now, inspect what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
If you recheck the logs written by the first worker, you'll learn that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial.Running multiple workers on a single machine only adds overhead.The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depthSo far, you have learned how to perform a basic multi-worker setup.During the rest of the tutorial, you will learn about other factors, which may be useful or important for real use cases, in detail. Dataset shardingIn multi-worker training, _dataset sharding_ is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`.To learn more about _auto-sharding_, refer to the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn the auto sharding off, so that each replica processes every example (_not recommended_):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=global_batch_size)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
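###Markdown
Instead of turning autosharding off entirely, you can also change the policy. As a hedged sketch (following the same `tf.data.Options` pattern as above), `tf.data.experimental.AutoShardPolicy.DATA` shards by individual examples rather than by files: each worker reads the full dataset but keeps only its own subset of records, which can be useful when the input is not file-based.
```python
# Sketch only: shard by data instead of by file.
options_by_data = tf.data.Options()
options_by_data.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_sharded_by_data = multi_worker_dataset.with_options(options_by_data)
```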
###Markdown
EvaluationIf you pass the `validation_data` into `Model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking the `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers.Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set the `validation_steps`.A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PerformanceYou now have a Keras model that is all set up to run in multiple workers with the `MultiWorkerMirroredStrategy`.To tweak performance of multi-worker training, you can try the following:- `tf.distribute.MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation): - `RING` implements ring-based collectives using gRPC as the cross-host communication layer. - `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives. - `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number of GPUs, the type of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify the `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor. For example: ```python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL) ```- Cast the variables to `tf.float` if possible: - The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists.Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You can do this by preserving the training state in the distributed file system of your choice, such that upon a restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore the training state upon a restart from a job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the saving destination needs to be different for each worker.- For non-chief workers, you will need to save the model to a temporary directory.- For the chief, you will need to save to the provided model directory.The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location.The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving.You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason for saving on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.Using the `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`:- `task_type` tells you what the current job is (e.g. `'worker'`).- `task_id` tells you the identifier of the worker.- The worker with `task_id == 0` is designated as the chief worker.In the code snippet below, the `write_filepath` function provides the file path to write, which depends on the the worker's `task_id`:- For the chief worker (with `task_id == 0`), it writes to the original file path. - For other workers, it creates a temporary directory—`temp_dir`—with the `task_id` in the directory path to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # Note: there are two possible `TF_CONFIG` configurations.
  #   1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this Colab section, the `task_type is None` case
# is added because it is effectively run with only a single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API, and continue with further work.Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()` (note that `strategy = tf.distribute.MultiWorkerMirroredStrategy()`, as defined earlier):
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model.Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by the `tf.train.CheckpointManager`, so that only the latest checkpoint is preserved:
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save and remove the checkpoints the non-chief workers had saved:
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore the model, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackThe `tf.keras.callbacks.experimental.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `Model.fit` call.With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then, the training continues.The `BackupAndRestore` callback uses the `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, the `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy
# and the BackupAndRestore callback.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
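###Markdown
For completeness, here is a minimal single-worker sketch, assuming the same `mnist` module, the `single_worker_dataset` defined earlier, and a separate, hypothetical backup directory `/tmp/backup_single`:
```python
# Sketch only: single-worker training with the BackupAndRestore callback.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
```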
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
  # Note: the `%tensorflow_version` magic only exists in Colab; this notebook installs the nightly build instead.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
  # Scale the MNIST pixel values from the [0, 255] range to the [0, 1] range.
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# The dataset needs to be created after the MultiWorkerMirroredStrategy object
# is instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with the specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not produce a model of sufficient quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
  # Note: the `%tensorflow_version` magic only exists in Colab; this notebook installs the nightly build instead.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
  # Scale the MNIST pixel values from the [0, 255] range to the [0, 1] range.
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
  # Creation of dataset, and model building/compiling need to be within `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
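As a complement to the Performance note above, here is a minimal, hedged sketch of overriding the automatic collective choice with `NCCL` via the `communication` parameter (it assumes NCCL-capable NVIDIA GPUs on every worker, and that a program creates only one strategy instance); the checkpoint example continues below.
```
# Hedged sketch: request NCCL collectives explicitly instead of relying on `AUTO`.
# Create only one strategy per program; this call would replace the plain constructor
# call used earlier, not run in addition to it.
nccl_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
```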
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building a Keras model, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building a Keras model, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building a Keras model, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change. The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on Python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
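Before the environment-variable demonstration below, here is a small, hedged sketch of what the second worker's configuration would look like, together with a hypothetical `is_chief` helper (not part of the tutorial) that encodes the convention that the worker with `index` 0 is the chief.
```
# Hedged sketch: the second worker uses the same `cluster` dict but a different `task` index.
tf_config_worker_1 = {
    'cluster': {
        'worker': ['localhost:12345', 'localhost:23456']
    },
    'task': {'type': 'worker', 'index': 1}
}

def is_chief(task):
  # Hypothetical helper: by convention, the worker with index 0 takes on the chief duties.
  return task['type'] == 'worker' and task['index'] == 0

print(is_chief(tf_config['task']))           # True: the first worker defined above
print(is_chief(tf_config_worker_1['task']))  # False
```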
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
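As a quick, hedged sanity check of the batch-size arithmetic described above (illustrative only; the directory listing follows below):
```
# Hedged sketch: with 2 workers, each worker still processes batches of 64 examples.
per_worker_batch_size = 64
num_workers = len(tf_config['cluster']['worker'])        # 2 in this tutorial
global_batch_size = per_worker_batch_size * num_workers  # 128 is what `Dataset.batch` receives
assert global_batch_size // num_workers == per_worker_batch_size
```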
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses `%%bash`, which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate; it waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file where you can see what happened.So, wait a few seconds for the process to start up:
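If you prefer to stay in Python, a hypothetical alternative (not used by this tutorial) is to launch the worker with `subprocess.Popen`, inheriting the environment plus `TF_CONFIG`; either way, wait for the process to start up as shown below.
```
# Hedged sketch: launch main.py in the background and redirect its output to job_0.log.
import subprocess
worker_0 = subprocess.Popen(
    ['python', 'main.py'],
    stdout=open('job_0.log', 'w'),
    stderr=subprocess.STDOUT,
    env={**os.environ, 'TF_CONFIG': json.dumps(tf_config)})
```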
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=global_batch_size)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
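###Markdown
If the input pipeline is backed by a single file (or very few files), file-based auto-sharding cannot split the work evenly across workers. A minimal sketch of switching to example-level sharding instead, assuming the same `mnist` helper module as above:
```python
options = tf.data.Options()
# Shard by individual examples rather than by input files.
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA

global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
dataset_sharded_by_data = multi_worker_dataset.with_options(options)
```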
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
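###Markdown
The text above also mentions single worker training. A minimal sketch of that case, assuming the `mnist` module and `single_worker_dataset` defined earlier in this notebook, and using a separate `backup_dir` so it does not collide with the multi-worker backup:
```python
# Single-worker training with the same fault-tolerance callback (no strategy).
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
```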
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
import os
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images. In this example, you will use the training portion of the dataset for demonstration.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere, you use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with the MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately. So, for example, if you have 2 workers, you should set the task `index` to `0` and `1` separately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
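###Markdown
For completeness, the second machine in the two-worker `TF_CONFIG` sketched earlier would use the same `cluster` dict but task `index` 1. As with the snippet above, this is illustration only and should not be executed in Colab:
```python
# Hypothetical TF_CONFIG for the second worker (task index 1) -- do not run in Colab.
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
})
```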
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: If you have an infinite dataset (by calling `.repeat()` on the dataset), you must specify the number of steps to run through the `steps_per_epoch` argument to `model.fit()`. In that case, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. If you have a finite dataset, setting `steps_per_epoch` is optional. In particular, if the sharding is not balanced (for example, this could happen if you have a file-based dataset with more files than workers, and some workers get files that contain more data than others; you can shard the data more evenly by manually setting `tf.data.experimental.AutoShardPolicy`, more details [here](https://www.tensorflow.org/tutorials/distribute/input#sharding)), and `steps_per_epoch` is not set or set to be greater than the size of the smallest shard divided by the per-worker batch size, you might get partial batches towards the end of training.
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously, you used 64,
# and now this becomes 256.
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
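###Markdown
As a side note on the `steps_per_epoch` discussion above: with a finite dataset (no `.repeat()`), `steps_per_epoch` can be omitted. A rough sketch, reusing the MNIST arrays and the `multi_worker_model` built above:
```python
# A finite dataset: no .repeat(), so `steps_per_epoch` is optional.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
finite_dataset = tf.data.Dataset.from_tensor_slices(
    (x_train / np.float32(255), y_train.astype(np.int64))).batch(global_batch_size)

multi_worker_model.fit(finite_dataset, epochs=1)  # each epoch runs through the data once
```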
###Markdown
Dataset sharding and batch sizeIn multi-worker training with `MultiWorkerMirroredStrategy`, sharding the dataset is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly passed to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically. It shards the dataset at the file level which may create skewed shards. In extreme cases where there is only one file, only the first shard (i.e. worker) will get training or evaluation data and as a result all workers will get errors.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` API. For example:
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, you use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change you are keeping the per worker batch size same as before. EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. 
The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, user is responsible to load the model manually.Optionally user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time, is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
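###Markdown
To make the behaviour of `write_filepath` concrete, here is a small sketch with hypothetical task ids (worker 0 is the chief, worker 1 is not):
```python
print(write_filepath('/tmp/keras-model', 'worker', 0))  # -> /tmp/keras-model
print(write_filepath('/tmp/keras-model', 'worker', 1))  # -> /tmp/workertemp_1/keras-model
```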
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, and can continue with the training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application, each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on Python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
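###Markdown
One quick way to confirm the note above, that this is effectively single-worker training until `TF_CONFIG` is set, is to inspect the strategy. This is only a sketch; the exact values depend on your runtime.
```python
# With no TF_CONFIG set, there is only the single local worker.
print('Replicas in sync:', strategy.num_replicas_in_sync)
print('Cluster task:', strategy.cluster_resolver.task_type,
      strategy.cluster_resolver.task_id)
```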
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses `%%bash`, which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html), to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate: it waits for all the workers to be ready before training starts.The backgrounded worker process won't print output to this notebook, so `&>` redirects its output to a file so that you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=global_batch_size)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
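###Markdown
The Evaluation discussion that follows notes that distributed evaluation only needs a validation dataset batched with the global batch size, plus `validation_steps`. A minimal sketch of what such a `model.fit` call could look like, reusing the `mnist` helpers (the validation split here is illustrative only; in practice you would use a held-out split):
```python
# Sketch: distributed evaluation alongside training.
global_batch_size = 64
val_dataset = mnist.mnist_dataset(global_batch_size)  # illustrative; ideally a held-out split

multi_worker_model.fit(multi_worker_dataset,
                       epochs=3,
                       steps_per_epoch=70,
                       validation_data=val_dataset,
                       validation_steps=10)
```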
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, the model should later be loaded only from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are using only a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're ready to save, and to remove the checkpoints that non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
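###Markdown
The paragraph above promises examples for both multi-worker and single-worker training, but only the multi-worker one is shown. As a hedged sketch, single-worker training (no strategy) with the same callback might look like the following; it reuses the `mnist` module and `single_worker_dataset` from earlier cells, and `/tmp/backup_single` is just a hypothetical backup directory:
###Code
# Single-worker training with fault tolerance via BackupAndRestore.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____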
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
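###Markdown
As a hedged illustration of the `communication` parameter described in the note below: to override the runtime's automatic choice of collective implementation, you would pass it when constructing the strategy. Since collective ops must be configured at program startup, a real program would create only one strategy, so this is shown as a snippet rather than as another runnable cell:
```
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
```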
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# Creation of dataset needs to be after MultiWorkerMirroredStrategy object
# is instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly sent to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
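###Markdown
A hedged variation on the snippet above: instead of turning autosharding off entirely, you can keep it enabled but request per-example (`DATA`) sharding, which shards by skipping elements across workers rather than by input files. The dataset name below is only an illustrative choice:
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
train_datasets_data_sharded = train_datasets.with_options(options)
###Output
_____no_output_____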
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distribute_strategy.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf #nightly-gpu
except Exception:
pass
import tensorflow_datasets as tfds
tf.enable_v2_behavior()
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly sent to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard = False
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, some necessary imports.
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application, each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
Reset the `TF_CONFIG` environment variable; you'll see more about this later.
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
Be sure that the current directory is on Python's path. This allows the notebook to import the files written by `%%writefile` later.
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.Here is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
There are two components of `TF_CONFIG`: `cluster` and `task`.* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker. In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are. For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1` Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable. Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
You can access the environment variable from a subprocess:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example. Choose the right strategyIn TensorFlow there are two main forms of distributed training:* Synchronous training, where the steps of training are synced across the workers and replicas, and* Asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
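###Markdown
As a hedged illustration of the `CommunicationOptions` parameter mentioned in the note below: to override the runtime's automatic choice of collective implementation, you would pass the options when constructing the strategy. Since a program should create a single strategy at startup, this is shown as a snippet rather than as another runnable cell:
```
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)
```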
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of the strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy`, you'll need to run worker processes and pass a `TF_CONFIG` to them. Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses `%%bash`, which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate; it waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so `&>` redirects its output to a file so that you can see what happened.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now look what's been output to the worker's logfile so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi worker training in depthSo far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases. Dataset shardingIn multi-worker training, dataset sharding is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding, see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
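###Markdown
A hedged variation on the snippet above: rather than turning autosharding off, you can keep it enabled but request per-example (`DATA`) sharding, which shards by skipping elements across workers instead of sharding by input files. The dataset name below is only illustrative:
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
multi_worker_dataset_data_sharded = multi_worker_dataset.with_options(options)
###Output
_____no_output_____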
###Markdown
EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this approach, if training was interrupted or finished, the user is responsible for loading the model manually in order to continue training from the checkpoint.Optionally, the user can choose to save and restore the model or weights outside the `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The models saved in all the directories are identical, and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and the workers at the same time is that you might be aggregating variables during checkpointing, which requires both the chief and the workers to participate in the allreduce communication protocol. On the other hand, letting the chief and the workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is the chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of the chief (the worker with id 0), it writes to the original file path; for the others, it creates a temporary directory (with the worker id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# Note: there are two possible `TF_CONFIG` configurations.
# 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this colab section, we also add `task_type is None`
# case because it is effectively run with only single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, the model should later be loaded only from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, use the convenient `tf.keras.models.load_model` API and continue with further work. Here, assume you are using only a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're ready to save, and to remove the checkpoints that non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
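###Markdown
Returning briefly to the evaluation workflow described earlier: below is a minimal, hedged sketch of passing `validation_data` to `model.fit`. The tutorial's `mnist.py` only builds a training dataset, so the test-split pipeline and its variable names here are hypothetical additions; note the global batch size, the explicit `validation_steps`, and the repeated validation dataset, as recommended above.
###Code
import numpy as np

# Hypothetical validation pipeline built from the MNIST test split.
_, (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_test = x_test / np.float32(255)
y_test = y_test.astype(np.int64)
global_batch_size = 64
validation_dataset = tf.data.Dataset.from_tensor_slices(
    (x_test, y_test)).repeat().batch(global_batch_size)

multi_worker_model.fit(multi_worker_dataset,
                       epochs=2,
                       steps_per_epoch=20,
                       validation_data=validation_dataset,
                       validation_steps=10)
###Output
_____no_output_____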
###Markdown
BackupAndRestore callbackBackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.`BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.Below are two examples for both multi-worker training and single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
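###Markdown
And a single worker example. This is a minimal sketch that assumes the `mnist` module and `single_worker_dataset` from the earlier cells are still available; the backup directory below is only illustrative.
###Code
# Single worker training (no strategy) with the same BackupAndRestore callback.
# '/tmp/backup_single_worker' is an illustrative backup directory; it should not
# be re-used for other checkpoints.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(
        backup_dir='/tmp/backup_single_worker')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____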
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when dataset is imbalanced on multiple workers.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
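# The effective per-worker batch size is 128 / 2 = 64, the same as before.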
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# Creation of dataset needs to be after MultiWorkerMirroredStrategy object
# is instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly passed to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
If a worker gets preempted, the whole cluster pauses until the preempted worker is restarted. Once the worker rejoins the cluster, other workers will also restart. Now, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.If you inspect the directory containing the `filepath` you specified in `ModelCheckpoint`, you may notice some temporarily generated checkpoint files. Those files are needed for recovering the previously lost instances, and they will be removed by the library at the end of `tf.keras.Model.fit()` upon successful exiting of your multi-worker training. Save/Restore outside `ModelCheckpoint` callbackIf you want to save your model using `model.save` or `tf.saved_model.save`, you will need to save the model to a temporary directory on the workers and to the provided model directory on the chief. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.If you want to restore a checkpoint, you will need to find the latest checkpoint in the model directory using `tf.train.latest_checkpoint` and call `restore` with this. This means that on workers you will be saving to a temporary directory but restoring from the model directory to which only the chief checkpoints. `ModelCheckpoint` callback encompasses this save and restore logic. This is why you may have noticed additional temporary directories created during training.The reason we need to save on the chief and workers is because we might be aggregating variables during checkpointing which requires the chief and workers to participate in the allreduce communication protocol. Letting chief and workers save to the same model directory will result in errors due to contention.
###Code
# Saving a model
# Let `is_chief` be a utility function that inspects the cluster spec and
# current task type and returns True if the worker is the chief and False
# otherwise.
def is_chief():
return True
if is_chief():
# This is the model directory; ideally it would be a cloud bucket.
path = '/tmp/model_dir'
else:
# Save to a path that is unique across workers.
worker_id = 1
path = '/tmp/model_dir/worker_tmp_' + str(worker_id)
multi_worker_model.save(path)
# Restoring a checkpoint
# On the Chief
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
# On the Workers
# This is the path that the chief saves the model to
model_dir_path = '/tmp/model_dir'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
latest_checkpoint = tf.train.latest_checkpoint(model_dir_path)
status = checkpoint.restore(latest_checkpoint)
###Output
_____no_output_____
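###Markdown
As recommended above, it is good practice to clean up the temporary directories the workers created once training has completed. A minimal sketch of such cleanup logic, assuming the illustrative `/tmp/model_dir` layout and the placeholder `is_chief` utility from the previous cell:
###Code
# A sketch of cleanup logic: after training, the chief removes the temporary
# 'worker_tmp_*' directories that the workers saved to. The paths follow the
# illustrative layout used above.
if is_chief():
  model_dir = '/tmp/model_dir'
  for entry in tf.io.gfile.listdir(model_dir):
    if entry.startswith('worker_tmp_'):
      tf.io.gfile.rmtree(model_dir + '/' + entry)
###Output
_____no_output_____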
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import os
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images. In this example, we will use the training part of the dataset for demonstration.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# We need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
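###Markdown
As the note below explains, the constructor also accepts a `communication` argument for choosing the collective implementation. A minimal sketch of overriding the automatic choice, shown for reference only, since a strategy instance has already been created above and collective ops must be configured at program startup:```strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)```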
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: If you have an infinite dataset (by calling `.repeat()` on the dataset), you must specify the number of steps to run through `steps_per_epoch` argument to `model.fit()`. In that case, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. If you have a finite dataset, setting `steps_per_epoch` is optional. In particular, if the sharding is not balanced (for example, this could happen if you have a file-based dataset with the number of files more than the number of workers and some workers get files that contain more data than others. You can shard the data more evenly by manually setting `tf.data.experimental.AutoShardPolicy`, more details [here](https://www.tensorflow.org/tutorials/distribute/inputsharding)), and `steps_per_epoch` is not set or set to be greater than the size of the smallest shard divided by the per-worker batch size, you might get partial batches towards the end of training.
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 256.
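# The effective per-worker batch size is 256 / 4 = 64, the same as before.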
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training with `MultiWorkerMirroredStrategy`, sharding the dataset is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly passed to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically. It shards the dataset at the file level, which may create skewed shards. In extreme cases where there is only one file, only the first shard (i.e. worker) will get training or evaluation data, and as a result all workers will get errors.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. EvaluationIf you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PredictionCurrently `model.predict` doesn't work with `MultiWorkerMirroredStrategy.` PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue.Note:Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. 
We are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, user is responsible to load the model manually.Optionally user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason you need to save on the chief and workers at the same time, is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, we take advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# If `task_type` is None, this may be operating as single worker, which works
# effectively as chief.
return task_type is None or task_type == 'chief' or (
task_type == 'worker' and task_id == 0)
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As we described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones that the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API and continue with further work. Here, we assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save the model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're ready to save the checkpoint and then remove the checkpoints that the non-chief workers saved.
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackNote: The `tf.keras.callbacks.experimental.BackupAndRestore` callback is only available in tf-nightly.The BackupAndRestore callback provides fault tolerance by backing up the model and the current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before the interruption is thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then every worker reads the checkpoint file that was previously saved and picks up its former state, allowing the cluster to get back in sync, and training continues.The `BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called `checkpoint` that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints, in order to avoid name collisions.Currently, the `BackupAndRestore` callback supports a single worker with no strategy, MirroredStrategy, and multi-worker training with MultiWorkerMirroredStrategy.Below are two examples, one for multi-worker training and one for single worker training.
###Code
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
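###Markdown
And a single worker example. This is a minimal sketch that reuses `single_worker_dataset` and `build_and_compile_cnn_model` from the earlier cells; the backup directory below is only illustrative.
###Code
# Single worker training (no strategy) with the same BackupAndRestore callback.
# '/tmp/backup_single_worker' is an illustrative backup directory; it should not
# be re-used for other checkpoints.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(
        backup_dir='/tmp/backup_single_worker')]
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____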
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
!pip install tf-nightly
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images. In this example, we will use the training part of the dataset for demonstration.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# We need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results on a single worker to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Always pass in `steps_per_epoch` argument to `model.fit()` since `MultiWorkerMirroredStrategy` does not support last partial batch handling. When using `steps_per_epoch`, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. Hence, make sure to call `.repeat()` on the dataset so it has an adequate number of examples for N epochs.
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 256.
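# The effective per-worker batch size is 256 / 4 = 64, the same as before.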
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly passed to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
If a worker gets preempted, the whole cluster pauses until the preempted worker is restarted. Once the worker rejoins the cluster, other workers will also restart. Now, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.If you inspect the directory containing the `filepath` you specified in `ModelCheckpoint`, you may notice some temporarily generated checkpoint files. Those files are needed for recovering the previously lost instances, and they will be removed by the library at the end of `tf.keras.Model.fit()` upon successful exiting of your multi-worker training. Save/Restore outside `ModelCheckpoint` callbackIf you want to save your model using `model.save` or `tf.saved_model.save`, you will need to save the model to a temporary directory on the workers and to the provided model directory on the chief. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.If you want to restore a checkpoint, you will need to find the latest checkpoint in the model directory using `tf.train.latest_checkpoint` and call `restore` with this. This means that on workers you will be saving to a temporary directory but restoring from the model directory to which only the chief checkpoints. `ModelCheckpoint` callback encompasses this save and restore logic. This is why you may have noticed additional temporary directories created during training.The reason we need to save on the chief and workers is because we might be aggregating variables during checkpointing which requires the chief and workers to participate in the allreduce communication protocol. Letting chief and workers save to the same model directory will result in errors due to contention.
###Code
# Saving a model
# Let `is_chief` be a utility function that inspects the cluster spec and
# current task type and returns True if the worker is the chief and False
# otherwise.
def is_chief():
return True
if is_chief():
# This is the model directory; ideally it would be a cloud bucket.
path = '/tmp/model_dir'
else:
# Save to a path that is unique across workers.
worker_id = 1
path = '/tmp/model_dir/worker_tmp_' + str(worker_id)
multi_worker_model.save(path)
# Restoring a checkpoint
# On the Chief
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
# On the Workers
# This is the path that the chief saves the model to
model_dir_path = '/tmp/model_dir'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
latest_checkpoint = tf.train.latest_checkpoint(model_dir_path)
status = checkpoint.restore(latest_checkpoint)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000training examples and 10,000 test examples of the handwritten digits 0–9,formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use `tf.keras.Sequential` API to build and compile a simple convolutional neural networks Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras model, please see [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results in single worker to make sure everything works correctly. You should expect to see the loss dropping and accuracy approaching 1.0 as epoch advances.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
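For completeness, here is what the second worker in the two-worker cluster above would export (a sketch added here, not meant to be executed in Colab for the same reason as the snippet above); the same `cluster` dict is used and only the task `index` changes. The strategy itself is created in the next cell.

```
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
})
```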
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard = False
train_datasets_no_auto_shard = train_datasets.with_options(options)
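# (Sketch added for illustration; not part of the original tutorial.) With
# auto-sharding turned off, each worker could shard the data manually instead.
# `worker_index` is a hypothetical stand-in for this worker's task index from
# `TF_CONFIG`; here it is simply set to 0.
worker_index = 0
train_datasets_manual_shard = train_datasets_no_auto_shard.shard(
    num_shards=NUM_WORKERS, index=worker_index)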
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
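To make the batch-size arithmetic above concrete, here is a tiny sketch (added for illustration, using the `BATCH_SIZE` and `NUM_WORKERS` values from the earlier cells) before the fault-tolerance example below:

```python
global_batch = 64 * 2                  # BATCH_SIZE * NUM_WORKERS, passed to Dataset.batch()
per_worker_batch = global_batch // 2   # what each worker processes per step
assert per_worker_batch == 64          # unchanged from the single-worker run
```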
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs. SetupFirst, setup TensorFlow and the necessary imports.
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset. The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000training examples and 10,000 test examples of the handwritten digits 0–9,formatted as 28x28-pixel monochrome images. In this example, we will take the training part of the datasets to demonstrate.
###Code
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# We need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use `tf.keras.Sequential` API to build and compile a simple convolutional neural networks Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras model, please see [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results in single worker to make sure everything works correctly. You should expect to see the loss dropping and accuracy approaching 1.0 as epoch advances.
###Code
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task. The first component `cluster` is the same for all workers, and the second component `task` is different on each worker and specifies the `type` and `index` of that worker. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
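As an aside before creating the strategy below: a worker process can recover this configuration from the environment. The following is a sketch added for illustration (not part of the original notebook); it falls back to an empty config so it also runs where `TF_CONFIG` is unset, as in this Colab.

```python
import json
import os

tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
workers = tf_config.get('cluster', {}).get('worker', [])
task = tf_config.get('task', {})
print('num_workers:', len(workers), 'task:', task.get('type'), task.get('index'))
```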
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.pyL928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with expected result, but however this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect speed-up with training on multiple machines.Note: Always pass in `steps_per_epoch` argument to `model.fit()` since `MultiWorkerMirroredStrategy` does not support last partial batch handling. When using `steps_per_epoch`, `model.fit()` does not create a new iterator from the input every epoch, but continues from wherever the last epoch ended. Hence, make sure to call `.repeat()` on the dataset so it has an adequate number of examples for N epochs.
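Since `steps_per_epoch` must always be supplied here, one way to choose it (a sketch added for illustration, assuming the 60,000-example MNIST training split and the global batch size defined in the next cell) is to divide the dataset size by the global batch size; the cell below instead uses a small value of 70 purely for demonstration.

```python
num_train_examples = 60000          # size of the MNIST training split
global_batch_size = 64 * 4          # per_worker_batch_size * num_workers, as in the next cell
steps_per_epoch = num_train_examples // global_batch_size
print(steps_per_epoch)              # 234 steps cover roughly one full pass over the data
```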
###Code
num_workers = 4
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 64 * 4 = 256.
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in above code snippet, the datasets are directly sent to `model.fit()` without needing to shard; this is because `tf.distribute.Strategy` API takes care of the dataset sharding automatically in multi-worker trainings.If you prefer manual sharding for your training, automatic sharding can be turned off via `tf.data.experimental.DistributeOptions` api. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
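# (Sketch added for illustration; not part of the original tutorial.) Instead of
# turning sharding off entirely, tf.data can also be asked to shard by examples
# rather than by files:
options_shard_by_data = tf.data.Options()
options_shard_by_data.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_shard_by_data = multi_worker_dataset.with_options(options_shard_by_data)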
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `global_batch_size = per_worker_batch_size * num_workers`, which is `num_workers` times as large as the case it was for single worker, because the effective per worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per worker batch size same as before. PerformanceYou now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
###Markdown
If a worker gets preempted, the whole cluster pauses until the preempted worker is restarted. Once the worker rejoins the cluster, other workers will also restart. Now, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.If you inspect the directory containing the `filepath` you specified in `ModelCheckpoint`, you may notice some temporarily generated checkpoint files. Those files are needed for recovering the previously lost instances, and they will be removed by the library at the end of `tf.keras.Model.fit()` upon successful exiting of your multi-worker training. Save/Restore outside `ModelCheckpoint` callbackIf you want to save your model using `model.save` or `tf.saved_model.save`, you will need to save the model to a temporary directory on the workers and to the provided model directory on the chief. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. We recommend that you have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.If you want to restore a checkpoint, you will need to find the latest checkpoint in the model directory using `tf.train.latest_checkpoint` and call `restore` with this. This means that on workers you will be saving to a temporary directory but restoring from the model directory to which only the chief checkpoints. `ModelCheckpoint` callback encompasses this save and restore logic. This is why you may have noticed additional temporary directories created during training.The reason we need to save on the chief and workers is because we might be aggregating variables during checkpointing which requires the chief and workers to participate in the allreduce communication protocol. Letting chief and workers save to the same model directory will result in errors due to contention.
###Code
# Saving a model
# Let `is_chief` be a utility function that inspects the cluster spec and
# current task type and returns True if the worker is the chief and False
# otherwise.
def is_chief():
return True
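# (Sketch added for illustration; not part of the original tutorial.) One possible
# real implementation of the utility inspects `TF_CONFIG` directly; when only
# `worker` task types are used, worker 0 is treated as the chief. The name
# `is_chief_from_tf_config` is a hypothetical helper, not a TensorFlow API.
import json, os
def is_chief_from_tf_config():
  tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
  task = tf_config.get('task', {})
  return task.get('type', 'worker') == 'worker' and task.get('index', 0) == 0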
if is_chief():
  # This is the model directory; ideally it would be a cloud bucket.
path = '/tmp/model_dir'
else:
# Save to a path that is unique across workers.
worker_id = 1
path = '/tmp/model_dir/worker_tmp_' + str(worker_id)
multi_worker_model.save(path)
# Restoring a checkpoint
# On the Chief
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
# On the Workers
# This is the path that the chief saves the model to
model_dir_path = '/tmp/model_dir'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
latest_checkpoint = tf.train.latest_checkpoint(model_dir_path)
status = checkpoint.restore(latest_checkpoint)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewThis tutorial demonstrates how to perform multi-worker distributed training with a Keras model and the `Model.fit` API using the `tf.distribute.Strategy` API—specifically the `tf.distribute.MultiWorkerMirroredStrategy` class. With the help of this strategy, a Keras model that was designed to run on a single-worker can seamlessly work on multiple workers with minimal code changes.For those interested in a deeper understanding of `tf.distribute.Strategy` APIs, the [Distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports.To learn how to use the `MultiWorkerMirroredStrategy` with Keras and a custom training loop, refer to [Custom training loop with Keras and MultiWorkerMirroredStrategy](multi_worker_with_ctl.ipynb).Note that the purpose of this tutorial is to demonstrate a minimal multi-worker example with two workers. SetupStart with some necessary imports:
###Code
import json
import os
import sys
###Output
_____no_output_____
###Markdown
Before importing TensorFlow, make a few changes to the environment:1. Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. In a real-world application, each worker would be on a different machine.
###Code
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
###Output
_____no_output_____
###Markdown
2. Reset the `TF_CONFIG` environment variable (you'll learn more about this later):
###Code
os.environ.pop('TF_CONFIG', None)
###Output
_____no_output_____
###Markdown
3. Make sure that the current directory is on Python's path—this allows the notebook to import the files written by `%%writefile` later:
###Code
if '.' not in sys.path:
sys.path.insert(0, '.')
###Output
_____no_output_____
###Markdown
Now import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Dataset and model definition Next, create an `mnist_setup.py` file with a simple model and dataset setup. This Python file will be used by the worker processes in this tutorial:
###Code
%%writefile mnist_setup.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Model training on a single workerTry training the model for a small number of epochs and observe the results of _a single worker_ to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
###Code
import mnist_setup
batch_size = 64
single_worker_dataset = mnist_setup.mnist_dataset(batch_size)
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
Multi-worker configurationNow let's enter the world of multi-worker training. A cluster with jobs and tasksIn TensorFlow, distributed training involves: a `'cluster'`with several jobs, and each of the jobs may have one or more `'task'`s.You will need the `TF_CONFIG` configuration environment variable for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration for each worker that is part of the cluster.There are two components of a `TF_CONFIG` variable: `'cluster'` and `'task'`.* A `'cluster'` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs, such as `'worker'` or `'chief'`. - In multi-worker training with `tf.distribute.MultiWorkerMirroredStrategy`, there is usually one `'worker'` that takes on responsibilities, such as saving a checkpoint and writing a summary file for TensorBoard, in addition to what a regular `'worker'` does. Such `'worker'` is referred to as the chief worker (with a job name `'chief'`). - It is customary for the `'chief'` to have `'index'` `0` be appointed to (in fact, this is how `tf.distribute.Strategy` is implemented).* A `'task'` provides information of the current task and is different for each worker. It specifies the `'type'` and `'index'` of that worker.Below is an example configuration:
###Code
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
###Output
_____no_output_____
###Markdown
Here is the same `TF_CONFIG` serialized as a JSON string:
###Code
json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Note that`tf_config` is just a local variable in Python. To be able to use it for a training configuration, this dict needs to be serialized as a JSON and placed in a `TF_CONFIG` environment variable. In the example configuration above, you set the task `'type'` to `'worker'` and the task `'index'` to `0`. Therefore, this machine is the _first_ worker. It will be appointed as the `'chief'` worker and do more work than the others.Note: Other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `'cluster'` dict, but different task `'type'`s or task `'index'`es, depending on the roles of those machines. For illustration purposes, this tutorial shows how you may set up a `TF_CONFIG` variable with two workers on a `localhost`.In practice, you would create multiple workers on external IP addresses/ports and set a `TF_CONFIG` variable on each worker accordingly.In this tutorial, you will use two workers:- The first (`'chief'`) worker's `TF_CONFIG` is shown above.- For the second worker, you will set `tf_config['task']['index']=1` Environment variables and subprocesses in notebooks Subprocesses inherit environment variables from their parent.For example, you can set an environment variable in this Jupyter Notebook process as follows:
###Code
os.environ['GREETINGS'] = 'Hello TensorFlow!'
###Output
_____no_output_____
###Markdown
Then, you can access the environment variable from a subprocesses:
###Code
%%bash
echo ${GREETINGS}
###Output
_____no_output_____
###Markdown
In the next section, you'll use a similar method to pass the `TF_CONFIG` to the worker subprocesses. In a real-world scenario, you wouldn't launch your jobs this way, but it's sufficient in this example. Choose the right strategyIn TensorFlow, there are two main forms of distributed training:* _Synchronous training_, where the steps of training are synced across the workers and replicas, and* _Asynchronous training_, where the training steps are not strictly synced (for example, [parameter server training](parameter_server_training.ipynb)).This tutorial demonstrates how to perform synchronous multi-worker training using an instance of `tf.distribute.MultiWorkerMirroredStrategy`.`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The `tf.distribute.Strategy` [guide](../../guide/distributed_training.ipynb) has more details about this strategy.
###Code
strategy = tf.distribute.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training. `MultiWorkerMirroredStrategy` provides multiple implementations via the `tf.distribute.experimental.CommunicationOptions` parameter: 1) `RING` implements ring-based collectives using gRPC as the cross-host communication layer; 2) `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives; and 3) `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the modelWith the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
###Code
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
###Output
_____no_output_____
###Markdown
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you encounter `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated. To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.Like the `mnist_setup.py` file written earlier, here is the `main.py` that each of the workers will run:
###Code
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist_setup
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_setup.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
###Output
_____no_output_____
###Markdown
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers. The current directory now contains both Python files:
###Code
%%bash
ls *.py
###Output
_____no_output_____
###Markdown
So json-serialize the `TF_CONFIG` and add it to the environment variables:
###Code
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
###Code
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
###Output
_____no_output_____
###Markdown
There are a few things to note about the above command:1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file so that you can inspect what happened in a log file later.So, wait a few seconds for the process to start up:
###Code
import time
time.sleep(10)
###Output
_____no_output_____
###Markdown
Now, inspect what's been output to the worker's log file so far:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed. So update the `tf_config` for the second worker's process to pick up:
###Code
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
###Output
_____no_output_____
###Markdown
Launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
###Code
%%bash
python main.py
###Output
_____no_output_____
###Markdown
If you recheck the logs written by the first worker, you'll learn that it participated in training that model:
###Code
%%bash
cat job_0.log
###Output
_____no_output_____
###Markdown
Unsurprisingly, this ran _slower_ than the test run at the beginning of this tutorial.Running multiple workers on a single machine only adds overhead.The goal here was not to improve the training time, but only to give an example of multi-worker training.
###Code
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
###Output
_____no_output_____
###Markdown
Multi-worker training in depthSo far, you have learned how to perform a basic multi-worker setup.During the rest of the tutorial, you will learn about other factors, which may be useful or important for real use cases, in detail. Dataset shardingIn multi-worker training, _dataset sharding_ is needed to ensure convergence and performance.The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`.To learn more about _auto-sharding_, refer to the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/inputsharding).Here is a quick example of how to turn the auto sharding off, so that each replica processes every example (_not recommended_):
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist_setup.mnist_dataset(batch_size=global_batch_size)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
###Output
_____no_output_____
###Markdown
EvaluationIf you pass the `validation_data` into `Model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking the `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers.Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set the `validation_steps`.A repeated dataset is also recommended for evaluation.Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted. PerformanceYou now have a Keras model that is all set up to run in multiple workers with the `MultiWorkerMirroredStrategy`.To tweak performance of multi-worker training, you can try the following:- `tf.distribute.MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation): - `RING` implements ring-based collectives using gRPC as the cross-host communication layer. - `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives. - `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number of GPUs, the type of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify the `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor. For example: ```python communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL) ```- Cast the variables to `tf.float` if possible: - The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.pyL466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists.Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You can do this by preserving the training state in the distributed file system of your choice, such that upon a restart of the instance that previously failed or preempted, the training state is recovered.When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.Note: Previously, the `ModelCheckpoint` callback provided a mechanism to restore the training state upon a restart from a job failure for multi-worker training. The TensorFlow team are introducing a new [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback, which also adds the support to single-worker training for a consistent experience, and removed the fault tolerance functionality from existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new `BackupAndRestore` callback. ModelCheckpoint callback`ModelCheckpoint` callback no longer provides fault tolerance functionality, please use [`BackupAndRestore`](scrollTo=kmH8uCUhfn4w) callback instead.The `ModelCheckpoint` callback can still be used to save checkpoints. 
But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible for loading the model manually.Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback. Model saving and loadingTo save your model using `model.save` or `tf.saved_model.save`, the saving destination needs to be different for each worker.- For non-chief workers, you will need to save the model to a temporary directory.- For the chief, you will need to save to the provided model directory.The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location.The model saved in all the directories is identical, and typically only the model saved by the chief should be referenced for restoring or serving.You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.The reason for saving on the chief and workers at the same time is that you might be aggregating variables during checkpointing, which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.Using the `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`:- `task_type` tells you what the current job is (e.g. `'worker'`).- `task_id` tells you the identifier of the worker.- The worker with `task_id == 0` is designated as the chief worker.In the code snippet below, the `write_filepath` function provides the file path to write, which depends on the worker's `task_id`:- For the chief worker (with `task_id == 0`), it writes to the original file path. - For other workers, it creates a temporary directory—`temp_dir`—with the `task_id` in the directory path to write in:
###Code
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
  # Note: there are two possible `TF_CONFIG` configurations.
  # 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this Colab section, the `task_type is None` case
# is added because it is effectively run with only a single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
###Output
_____no_output_____
###Markdown
With that, you're now ready to save:
###Code
multi_worker_model.save(write_model_path)
###Output
_____no_output_____
###Markdown
As described above, later on the model should only be loaded from the path the chief saved to, so let's remove the temporary ones the non-chief workers saved:
###Code
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
###Output
_____no_output_____
###Markdown
Now, when it's time to load, let's use the convenient `tf.keras.models.load_model` API, and continue with further work.Here, assume you are only using a single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()` (note that `strategy = tf.distribute.MultiWorkerMirroredStrategy()`, as defined earlier):
###Code
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, training can continue.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
Checkpoint saving and restoringOn the other hand, checkpointing allows you to save your model's weights and restore them without having to save the whole model.Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by the `tf.train.CheckpointManager`, so that only the latest checkpoint is preserved:
###Code
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
###Output
_____no_output_____
###Markdown
Once the `CheckpointManager` is set up, you're now ready to save and remove the checkpoints the non-chief workers had saved:
###Code
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
###Output
_____no_output_____
###Markdown
Now, when you need to restore the model, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
###Code
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
###Output
_____no_output_____
###Markdown
BackupAndRestore callbackThe `tf.keras.callbacks.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.To use it, provide an instance of `tf.keras.callbacks.BackupAndRestore` at the `Model.fit` call.With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then, the training continues.The `BackupAndRestore` callback uses the `CheckpointManager` to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints in order to avoid name collision.Currently, the `BackupAndRestore` callback supports single-worker training with no strategy or with `MirroredStrategy`, and multi-worker training with `MultiWorkerMirroredStrategy`.Below are two examples for both multi-worker training and single-worker training:
###Code
# Multi-worker training with `MultiWorkerMirroredStrategy`
# and the `BackupAndRestore` callback.
callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist_setup.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
###Output
_____no_output_____
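###Markdown
As referenced above, the single-worker case is not shown in the original cell; the following is a hedged sketch (an addition, not from the original tutorial) that reuses the same `BackupAndRestore` callback without a distribution strategy. It assumes `mnist_setup` and `single_worker_dataset` from the earlier cells.
###Code
# Single-worker training with the `BackupAndRestore` callback (sketch).
single_worker_callbacks = [
    tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup_single_worker')]
single_worker_model = mnist_setup.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset,
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)
###Output
_____no_output_____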
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Multi-worker training with Keras OverviewThis tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide gives an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of the `tf.distribute.Strategy` APIs. SetupFirst, set up TensorFlow and the necessary imports.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
Preparing datasetNow, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000 training examples and 10,000 test examples of the handwritten digits 0–9, formatted as 28x28-pixel monochrome images.
###Code
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build the Keras modelHere we use the `tf.keras.Sequential` API to build and compile a simple convolutional neural network Keras model to train with our MNIST dataset.Note: For a more comprehensive walkthrough of building Keras models, please see the [TensorFlow Keras Guide](https://www.tensorflow.org/guide/kerassequential_model).
###Code
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Let's first try training the model for a small number of epochs and observe the results in single-worker mode to make sure everything works correctly. You should expect to see the loss dropping and the accuracy approaching 1.0 as the epochs advance.
###Code
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Multi-worker ConfigurationNow let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility, like saving checkpoints and writing a summary file for TensorBoard, in addition to what a regular `worker` does. Such a worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task`, on the other hand, provides information about the current task. In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine with this setting is the first worker, which will be appointed as the chief worker and do more work than the other workers. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but a different task `type` or task `index` depending on what the roles of those machines are.For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.```os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0}})``` Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size. Choose the right strategyIn TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
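For completeness, here is a hedged sketch (not part of the original tutorial) of the `TF_CONFIG` that the second worker in the example above would set: the same `cluster` dict, but task `index` 1.
```
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 1}
})
```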
###Code
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
_____no_output_____
###Markdown
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy.__init__()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. `MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. Train the model with MultiWorkerMirroredStrategyWith the integration of the `tf.distribute.Strategy` API into `tf.keras`, the only change you need to make to distribute the training to multiple workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect a speed-up from training on multiple machines.Note: Since `MultiWorkerMirroredStrategy` does not support last partial batch handling, pass the `steps_per_epoch` argument to `model.fit()` when the dataset is imbalanced across multiple workers.
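To override the automatic `AUTO` choice mentioned above, the `communication` argument can be passed to the constructor; a hedged sketch (not one of the original cells, and assuming NCCL-capable GPUs):
```
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
```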
###Code
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
multi_worker_model = build_and_compile_cnn_model()
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch. Note that the numbers here are for demonstration
# purposes only and may not sufficiently produce a model with good quality.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
###Output
_____no_output_____
###Markdown
Dataset sharding and batch sizeIn multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet, the datasets are directly sent to `model.fit()` without needing to be sharded manually; this is because the `tf.distribute.Strategy` API takes care of dataset sharding automatically in multi-worker training.If you prefer manual sharding for your training, automatic sharding can be turned off via the `tf.data.experimental.DistributeOptions` API. Concretely,
###Code
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
###Output
_____no_output_____
###Markdown
Another thing to notice is the batch size for the `datasets`. In the code snippet above, we use `GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS`, which is `NUM_WORKERS` times as large as it was for the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed in `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we are keeping the per-worker batch size the same as before. PerformanceYou now have a Keras model that is all set up to run on multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak the performance of multi-worker training.* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done. Fault toleranceIn synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. We do this by preserving the training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or was preempted, the training state is recovered.Since all the workers are kept in sync in terms of training epochs and steps, other workers would need to wait for the failed or preempted worker to restart to continue. ModelCheckpoint callbackTo take advantage of fault tolerance in multi-worker training, provide an instance of `tf.keras.callbacks.ModelCheckpoint` at the `tf.keras.Model.fit()` call. The callback will store the checkpoint and training state in the directory corresponding to the `filepath` argument to `ModelCheckpoint`.
###Code
# Replace the `filepath` argument with a path in the file system
# accessible by all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
###Output
_____no_output_____ |
notebooks/Generate_features_AK.ipynb | ###Markdown
Generate features for each protein sequenceWith each protein sequence generated and compiled using Bio.Entrez, the following script will generate corresponding features to train the prediction model and to visualize each protein's gene expression. After this, the features feed into WIDI's MODEL.
###Code
# import relevant libraries
import Bio
from Bio.SeqUtils.ProtParam import ProteinAnalysis
import pandas as pd
# test Biopython functionality
ProtA = ProteinAnalysis("MAEGEITTFTALTEKFNLPPGNYKKPKLLYCSNGGHFLRILPDGTVDGT"
"RDRSDQHIQLQLSAESVGEVYIKSTETGQYLAMDTSGLLYGSQTPSEEC"
"LFLERLEENHYNTYTSKKHAEKNWFVGLKKNGSCKRGPRTHYGQKAILF"
"LPLPV")
MW = ProtA.molecular_weight()
count_AA = ProtA.count_amino_acids()
arom = ProtA.aromaticity()
iso_el = ProtA.isoelectric_point()
print(MW)
print(count_AA)
print(arom)
print(iso_el)
###Output
17103.1617
{'A': 6, 'C': 3, 'D': 5, 'E': 12, 'F': 6, 'G': 14, 'H': 5, 'I': 5, 'K': 12, 'L': 18, 'M': 2, 'N': 7, 'P': 8, 'Q': 6, 'R': 6, 'S': 10, 'T': 13, 'V': 5, 'W': 1, 'Y': 8}
0.09868421052631579
7.7224523544311525
###Markdown
PSEUDOCODE: take 1 protein sequence from dataframe (1 column, iterate through rows): for each protein sequence run MW run count AA run arom run iso_el output to new columns
###Code
# generate overall structure for features generation
d = pd.read_csv('compiled_features.csv')
df = pd.DataFrame(d)
MW_features = []
count_features = []
arom_features = []
iso_features = []
for i in df['SEQUENCE'].values:
ProtA = ProteinAnalysis(i)
MW = ProtA.molecular_weight()
MW_features.append(MW)
#count_AA = ProtA.count_amino_acids()
#count_features.append(count_AA)
arom = ProtA.aromaticity()
arom_features.append(arom)
iso_e = ProtA.isoelectric_point()
iso_features.append(iso_e)
df['MW'] = MW_features
#df['COUNT_AA'] = count_features
df['AROM'] = arom_features
df['ISO_E'] = iso_features
def create_BioPy_features(df):
    """
    Takes in a dataframe with a column named 'SEQUENCE' and uses Biopython to
    generate additional features from each protein sequence to help better train
    a machine learning model to predict log2FC
    """
    # initiate results lists
    MW_features = []
    arom_features = []
    iso_features = []
    # Determine features for each element of the 'SEQUENCE' column of the dataframe
    for seq in df['SEQUENCE'].values:
        ProtA = ProteinAnalysis(seq)
        MW_features.append(ProtA.molecular_weight())
        arom_features.append(ProtA.aromaticity())
        iso_features.append(ProtA.isoelectric_point())
    # attach the new feature columns and return the augmented dataframe
    df['MW'] = MW_features
    df['AROM'] = arom_features
    df['ISO_E'] = iso_features
    return df
# testing new function
d = pd.read_csv('compiled_features.csv')
df = pd.DataFrame(d)
new_df = create_BioPy_features(df)
new_df
df.head()
df.to_csv('compiled_features_complete.csv')
df
df.iloc[0]
df['PROT_SEQ']
df.values
###Output
_____no_output_____ |
old_notebooks/tcav_keras_azure.ipynb | ###Markdown
Run TCAV with Keras, using code from https://gist.github.com/Gareth001/e600d2fbc09e690c4333388ec5f06587
###Code
tcav = None
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import decode_predictions
from keras.models import Model, load_model
import keras.backend as K
import model as tcav_model
import tcav as tcav
import utils as utils
import activation_generator as act_gen
import tensorflow as tf
import utils_plot as utils_plot
#import keras_model as keras_model
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input
import numpy as np
from nltk.corpus import wordnet as wn
import os
import operator
from os import listdir
from os.path import isfile, join
import subprocess
from PIL import Image
import requests
from io import BytesIO
import urllib.request
sess = K.get_session()
#model = InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
#model.save('v3_model.h5')
tf.logging.set_verbosity(0)
model = None
model = load_model('v3_model.h5')
endpoints_v3 = dict(
input=model.inputs[0].name,
input_tensor=model.inputs[0],
logit=model.outputs[0].name,
prediction=model.outputs[0].name,
prediction_tensor=model.outputs[0],
)
tf.logging.set_verbosity(0)
working_dir = '/home/tyler/Desktop/tcav_on_azure'
label_path = os.path.join(working_dir,'labels.txt')
mymodel = tcav_model.KerasModelWrapper(sess,
label_path, [299, 299, 3], endpoints_v3,
'InceptionV3_public', (-1, 1))
###Output
WARNING:tensorflow:From /data/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
###Markdown
Making Predictions
###Code
img_path = '/Users/tyler/Desktop/dissertation/programming/tcav/2012_val/val/ILSVRC2012_val_00000001.JPEG'
img_path = os.path.join(working_dir,'concepts/random500_3/ILSVRC2012_val_00017255.JPEG')
img = image.load_img(img_path, target_size=(299, 299))
img
img_array = np.array(img)
x = np.expand_dims(img_array, axis=0)
x = preprocess_input(x)
pred = mymodel.get_predictions(x)
decode_predictions(pred, top=5)
###Output
_____no_output_____
###Markdown
Run TCAV
###Code
#print(model.summary())
#working_dir = '/Users/tyler/Desktop/dissertation/programming/tcav'
working_dir = '/home/tyler/Desktop/tcav_on_azure'
activation_dir = working_dir + '/activations/'
cav_dir = working_dir + '/cavs/'
source_dir = working_dir + '/concepts/'
target = 'zebra'
concepts = ['dotted_sub_2']#,'striped_sub_2']
#'mixed0','mixed1', 'mixed2', 'mixed3', 'mixed4', 'mixed5', 'mixed6', 'mixed7', 'mixed8', 'mixed9_0', 'mixed9', 'mixed9_1', 'mixed10'
bottlenecks = ['mixed9']
alphas = [0.1]
#source_dir
act_generator = None
act_generator = act_gen.ImageActivationGenerator(mymodel, source_dir, activation_dir, max_examples=50)
act_generator.max_examples
tf.logging.set_verbosity(1)
mytcav = tcav.TCAV(sess,
target, concepts, bottlenecks,
act_generator, alphas,
cav_dir = cav_dir,
num_random_exp=1)
results = mytcav.run(run_parallel=True)
#mymodel.ends
results
###Output
_____no_output_____ |
sagemaker/xgboost/1.xgboost_direct_marketing_sagemaker.ipynb | ###Markdown
Targeted direct marketing with Amazon SageMaker XGBoost_**Supervised learning with Gradient Boosted Trees: solving a binary classification problem with imbalanced classes**_---This notebook was translated into Korean from the following source, with some of the code modified. - https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_applying_machine_learning/xgboost_direct_marketing/xgboost_direct_marketing_sagemaker.ipynb --- Table of contents1. [Background](Background)1. [Setup](Setup)1. [Data](Data) 1. [Exploration](Exploration) 1. [Transformation](Transformation)1. [Training](Training)1. [Hosting](Hosting)1. [Evaluation](Evaluation)1. [Extensions](Extensions)--- BackgroundDirect marketing, through mail, email, phone, and so on, is a common way to acquire customers. Because our resources and a customer's time are limited, we need to focus on the subgroup of customers who are likely to engage with a particular offer. Predicting those potential customers based on information such as demographics, past interactions, and environmental factors is a common machine learning problem. This notebook works through an example of predicting whether a customer will enroll for a bank term deposit after one or more phone calls. The steps are as follows:* Prepare the Amazon SageMaker notebook.* Download data from the internet into Amazon SageMaker.* Investigate and transform the data so it can be used by SageMaker algorithms.* Train a model using the Gradient Boosting algorithm.* Evaluate the model's performance.* Apply the model to future predictions.--- Setup_This notebook was tested on an ml.m4.xlarge instance._Let's start by specifying:- The S3 bucket and prefix: these should be in the same region as the notebook instance, training instances, and hosting instances. - The IAM role arn: used by the training and hosting jobs to access the data.
###Code
# !pip install -U sagemaker
import sagemaker
sagemaker.__version__
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker.Session().default_bucket() # replace with an existing bucket if needed
prefix = 'sagemaker/DEMO-xgboost-dm' # prefix used for all data stored within the bucket
# Define IAM role
import boto3
from sagemaker import get_execution_role
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Next, load the Python libraries needed for the analysis.
###Code
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker # Amazon SageMaker's Python SDK provides many helper functions
# from sagemaker.predictor import csv_serializer # Converts strings for HTTP POST requests on inference
###Output
_____no_output_____
###Markdown
--- DataDownload the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip).\[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
###Code
!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
!apt-get install unzip -y
!unzip -o bank-additional.zip
###Output
--2021-01-23 06:33:48-- https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
Resolving sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)... 52.218.152.105
Connecting to sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)|52.218.152.105|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 432828 (423K) [application/zip]
Saving to: ‘bank-additional.zip.3’
bank-additional.zip 100%[===================>] 422.68K 1.16MB/s in 0.4s
2021-01-23 06:33:49 (1.16 MB/s) - ‘bank-additional.zip.3’ saved [432828/432828]
/bin/sh: apt-get: command not found
Archive: bank-additional.zip
inflating: bank-additional/bank-additional-names.txt
inflating: bank-additional/bank-additional.csv
inflating: bank-additional/bank-additional-full.csv
###Markdown
Load the data into a pandas DataFrame and take a look at its contents.
###Code
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 20) # Keep the output on one page
data
###Output
_____no_output_____
###Markdown
Let's talk about the data. At a high level, we can see:* There are more than 30,000 customer records, with 20 features for each customer.* Features are a mix of numeric and categorical values.* The data appears to be sorted by `time`, `contact`, and so on. _**Feature descriptions:**_*Demographics:** `age`: Age of the customer (numeric)* `job`: Category of job (categorical: 'admin.', 'services', ...)* `marital`: Marital status (categorical: 'married', 'single', ...)* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)*Past customer events:** `default`: Has credit in default? (categorical: 'no', 'unknown', ...)* `housing`: Has a housing loan? (categorical: 'no', 'yes', ...)* `loan`: Has a personal loan? (categorical: 'no', 'yes', ...)*Past direct marketing contacts:** `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)* `month`: Last contact month (categorical: 'may', 'nov', ...)* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)* `duration`: Last contact duration in seconds (numeric). Important: if duration is 0 then `y` is 'no'. *Campaign information:** `campaign`: Number of contacts performed during this campaign (numeric, includes last contact)* `pdays`: Number of days since the client was last contacted in a previous campaign (numeric)* `previous`: Number of contacts performed before this campaign (numeric)* `poutcome`: Outcome of the previous campaign (categorical: 'nonexistent','success', ...)*External environment factors:** `emp.var.rate`: Employment variation rate (quarterly) (numeric)* `cons.price.idx`: Consumer price index (monthly) (numeric)* `cons.conf.idx`: Consumer confidence index (monthly) (numeric)* `euribor3m`: Euribor 3-month rate (daily) (numeric)* `nr.employed`: Number of employees (quarterly) (numeric)*Target variable:** `y`: Has the client subscribed to a term deposit? (binary: 'yes','no') ExplorationLet's do some exploratory data analysis (EDA). First, look at the distributions of the features.
###Code
# Frequency tables for each categorical feature
for column in data.select_dtypes(include=['object']).columns:
display(pd.crosstab(index=data[column], columns='% observations', normalize='columns'))
# Histograms for each numeric features
display(data.describe())
%matplotlib inline
hist = data.hist(bins=30, sharey=True, figsize=(10, 10))
###Output
_____no_output_____
###Markdown
A few things to note:* About 90% of the values of `y` are "no", so most customers did not subscribe to a term deposit.* Many features have "unknown" values, and their share differs by feature. We should think carefully about what causes an "unknown" value and how to handle it. * Even though "unknown" is treated as its own category, in reality each such observation belongs to one of the other categories of that feature.* Many features have categories with only a small number of observations. If a small category turns out to be highly predictive of the target, do we have enough evidence (samples) to generalize from it?* The contact timing is heavily skewed: about a third of contacts happened in May and less than 1% in December. What would that mean for predictions we make next December?* There are no missing values in the numeric features; missing values have already been imputed. * `pdays` is above 1,000 for most customers, which tells us there were not many previous contacts.* Some numeric features have long tails. Do we need to treat the bulk and the long tail of these features separately?* Some numeric features (especially the external market indicators) seem to fall into a few distinct groups. Should we convert them into categorical variables?Next, let's look at how these features relate to the target value we want to predict.
###Code
for column in data.select_dtypes(include=['object']).columns:
if column != 'y':
display(pd.crosstab(index=data[column], columns=data['y'], normalize='columns'))
for column in data.select_dtypes(exclude=['object']).columns:
print(column)
hist = data[[column, 'y']].hist(by='y', bins=30)
plt.show()
###Output
_____no_output_____
###Markdown
We can see that:* Customers who are "blue-collar", "married", with an "unknown" default status, contacted by "telephone" in "may" yield very few "yes" responses for the term deposit.* The distributions of the numeric variables differ depending on whether the customer subscribed ("yes") or not ("no"), but the relationships are not clear-cut. Next, let's look at how the features relate to one another.
###Code
display(data.corr())
pd.plotting.scatter_matrix(data, figsize=(12, 12))
plt.show()
###Output
_____no_output_____
###Markdown
Notice that:다음을 확인할 수 있습니다.* 속성간 상호 연관성은 다양합니다. 일부는 매우 음의 상관관계를, 또 다른 일부는 양의 상관관계를 보여줍니다. * 속성간의 관계는 대부분 비선형이며, 연관성이 크지 않습니다. 변형데이터 클린징은 대부분의 머신러닝 프로젝트에서 필요한 작업입니다. 이 작업은 적절히 수행되지 않으면 결과에 악영향을 끼치며, 주관적인 판단이 많이 개입됩니다. 몇가지 일반적인 기술들은 다음과 같습니다.* 결측치의 처리 : 일부 머신러닝 알고리즘은 결측치를 처리할 수 있는 경우도 있지만 대부분은 그렇지 않습니다. 이를 처리하는 옵션은: * 결측값 제거 : 결측값이 매우 일부분일 경우 적용합니다. * 결측속성 제거 : 다량의 결측값을 가지는 속성이 일부분일 경우 적용합니다. * 결측값 채우기(imputing) : 다음 책[books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) 전체에서 이 주제에 대해 다루고 있습니다. 일반적인 선택은 결측값을 해당 속성의 다른 값들의 평균이나 최빈값(mode)으로 대체하는 것입니다.* 명목형(categorical) 속성을 수치형 속성으로 변환 : 가장 일반적인 방법은 원 핫 인코딩(one hot encoding)이라 불리는, 각 명목값들을 컬럼으로 정의한 후 해당값에 매칭되는 여부에 따라 1 또는 0의 값을 가지도록 변환하는 것입니다.* 분포가 고르지 않은 데이터 : Gradient Boosted Trees와 같은 비선형 모델에서도 좋지 않은 영향을 가져오며, 회기(regression)와 같은 파라미터 방식에서도 과도하게 편향된 데이터는 정확도가 떨어지는 결과를 리턴할 수 있습니다. 간혹 로그(log)값을 취하는 것으로 충분히 정규분포로 변환하는 경우도 있고 개별 범위로 구분하여 명목형 번수로 변환한 후 다시 원 핫 인코딩으로 적용할 수도 있습니다.* 보다 복잡한 데이터 타입 처리 : 본 노트북에서 다루지는 않지만 이미지, 텍스트, 또는 다양한 grain을 가지는 데이터들에 대해서도 추가 변형이 필요합니다. 다행히 이들 중 일부는 이미 처리되어 있습니다. 그리고 지금 우리가 다루려고 하는 알고리즘은 드문드문하거나(sparce) 분포가 일정하지 않은 경우에도 잘 동작하는 경향이 있습니다. 따라서 본 예제에서는 최소한의 전처리만 하겠습니다.
###Code
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators
###Output
_____no_output_____
###Markdown
Another question to ask before building a model is whether a given feature actually contributes to the final goal. For example, if the goal is to provide the best possible prediction, think about whether that data will be available at the time of the prediction. Knowing whether it will rain would be a big advantage when predicting umbrella sales, but predicting future weather may be harder than predicting umbrella sales without the weather information. In such a case, including past weather as a model feature could distort the apparent accuracy. By this logic, we will drop the economic indicators, which would have to be forecast, and the `duration` feature from the data.We could use the economic indicator values from the previous quarter instead, but those values are unlikely to be realistic in an actual production setting.
###Code
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
###Output
_____no_output_____
###Markdown
When building a model whose primary goal is to predict the target value for new data, it is important to understand overfitting. Supervised learning models are designed to minimize the error between their predictions and the actual target values. This last part is important: in the quest for higher accuracy, machine learning models often pick up a bias toward minor idiosyncrasies of the data they have seen. If those idiosyncrasies do not recur in new data, the accuracy of real predictions will drop below the level seen during training. The most common way to prevent this is to have the model judge its fit not only on the training data but also on new data, using methods such as holdout validation, cross-validation, and leave-one-out validation. In this example we simply split the data randomly into three groups: the model is trained on 70% of the data, 20% is used to evaluate accuracy on new data, and the remaining 10% is set aside as a final test set to check performance. Note that the split also randomly shuffles the row order.
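For reference, a hedged sketch (not used in this notebook) of what a k-fold cross-validation split over the same data could look like with scikit-learn:
```
from sklearn.model_selection import KFold

# Hypothetical 5-fold split over model_data; each fold yields train/validation frames
kf = KFold(n_splits=5, shuffle=True, random_state=1729)
for train_idx, val_idx in kf.split(model_data):
    fold_train, fold_val = model_data.iloc[train_idx], model_data.iloc[val_idx]
```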
###Code
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
###Output
_____no_output_____
###Markdown
Amazon SageMaker's XGBoost container accepts data in libSVM or CSV format. This example uses CSV. In the CSV file, the first column must be the target variable and the file must not contain a header row. Here we write out the data after splitting it into train|validation|test datasets.
###Code
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
pd.concat([test_data['y_yes'], test_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('test.csv', index=False, header=False)
pd.concat([test_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('test_features.csv', index=False, header=False)
###Output
_____no_output_____
###Markdown
Next, copy the files to S3 so that SageMaker's managed training environment can access the data.
###Code
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'test/test.csv')).upload_file('test.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'test/test_features.csv')).upload_file('test.csv')
###Output
_____no_output_____
###Markdown
--- TrainingMany of the features in our data have skewed distributions. Some features are highly correlated with one another, and some have non-linear relationships with the target. We also need good accuracy for predicting future marketing outcomes, and it is important to be able to explain why the model makes the decisions it does. Given all of this, an algorithm like gradient boosted trees is a very good candidate. Gradient boosted trees work by combining many small models, each of which tries to correct the shortcomings of the previous ones; the collection of simple models often outperforms other large, complex models. There are other SageMaker notebooks that explain how gradient boosted trees differ from other algorithms, so refer to those for details.`xgboost` is a very popular open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has a successful track record in many machine learning competitions. Let's start with a simple `xgboost` model trained using SageMaker's managed, distributed training framework. First, specify the ECR container that holds SageMaker's XGBoost implementation.
###Code
from sagemaker import image_uris
container = image_uris.retrieve('xgboost', region='us-east-1', version='latest')
###Output
_____no_output_____
###Markdown
Because we use the CSV file format, we create `s3_input` objects that point to the file locations in S3 (created below with `sagemaker.inputs.TrainingInput`) and specify the content type as CSV.
###Code
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
###Output
_____no_output_____
###Markdown
Next, create an estimator by specifying the following parameters:1. The `xgboost` algorithm container to use1. The IAM role1. The training instance type and count 1. The S3 location for output data 1. The algorithm hyperparameters Then run the `.fit()` command with the following parameter:1. The S3 location of the training data. This example uses both the training and validation datasets, so both channels are specified.
###Code
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sess)
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2021-01-23 06:34:42 Starting - Starting the training job...
2021-01-23 06:35:07 Starting - Launching requested ML instancesProfilerReport-1611383682: InProgress
......
2021-01-23 06:36:08 Starting - Preparing the instances for training......
2021-01-23 06:37:11 Downloading - Downloading input data
2021-01-23 06:37:11 Training - Downloading the training image...
2021-01-23 06:37:36 Uploading - Uploading generated training model[34mArguments: train[0m
[34m[2021-01-23:06:37:31:INFO] Running standalone xgboost training.[0m
[34m[2021-01-23:06:37:31:INFO] File size need to be processed in the node: 4.35mb. Available memory size in the node: 8417.98mb[0m
[34m[2021-01-23:06:37:31:INFO] Determined delimiter of CSV input is ','[0m
[34m[06:37:31] S3DistributionType set as FullyReplicated[0m
[34m[06:37:31] 28831x59 matrix with 1701029 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-01-23:06:37:31:INFO] Determined delimiter of CSV input is ','[0m
[34m[06:37:31] S3DistributionType set as FullyReplicated[0m
[34m[06:37:31] 8238x59 matrix with 486042 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[06:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.100482#011validation-error:0.103545[0m
[34m[06:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.099858#011validation-error:0.103545[0m
[34m[06:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.099476#011validation-error:0.10403[0m
[34m[06:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.099025#011validation-error:0.10403[0m
[34m[06:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.099476#011validation-error:0.10318[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.099372#011validation-error:0.10318[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.09906#011validation-error:0.10318[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.099025#011validation-error:0.102938[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.099164#011validation-error:0.102816[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.098817#011validation-error:0.103666[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.098817#011validation-error:0.103787[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.098817#011validation-error:0.103545[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.098852#011validation-error:0.103545[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.098574#011validation-error:0.103666[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.098609#011validation-error:0.10403[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.098401#011validation-error:0.103909[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 26 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.098401#011validation-error:0.10403[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.098297#011validation-error:0.103545[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.098054#011validation-error:0.103545[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.098158#011validation-error:0.10318[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.098193#011validation-error:0.103787[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.098193#011validation-error:0.103302[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.098124#011validation-error:0.10318[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.098124#011validation-error:0.103545[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.097881#011validation-error:0.103909[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.097777#011validation-error:0.104273[0m
[34m[06:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.097742#011validation-error:0.104151[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.097707#011validation-error:0.104394[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.097291#011validation-error:0.104394[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.097152#011validation-error:0.104637[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.097256#011validation-error:0.104758[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.097083#011validation-error:0.104758[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.097083#011validation-error:0.104637[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.097083#011validation-error:0.10488[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.097152#011validation-error:0.10488[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 28 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.097256#011validation-error:0.104758[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.097187#011validation-error:0.104394[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 26 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.097118#011validation-error:0.104516[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.097152#011validation-error:0.104516[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.096736#011validation-error:0.104637[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[40]#011train-error:0.09691#011validation-error:0.104758[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.096736#011validation-error:0.104637[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.096771#011validation-error:0.10488[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[43]#011train-error:0.096806#011validation-error:0.10488[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.096736#011validation-error:0.105001[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 14 pruned nodes, max_depth=2[0m
[34m[45]#011train-error:0.096806#011validation-error:0.105123[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.096459#011validation-error:0.104516[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 26 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.096424#011validation-error:0.104394[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[48]#011train-error:0.096528#011validation-error:0.104273[0m
[34m[06:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.096563#011validation-error:0.103666[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 38 pruned nodes, max_depth=5[0m
[34m[50]#011train-error:0.096597#011validation-error:0.10403[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.096528#011validation-error:0.104394[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.096112#011validation-error:0.104394[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.096077#011validation-error:0.104394[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.09632#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 16 pruned nodes, max_depth=3[0m
[34m[55]#011train-error:0.09632#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 30 pruned nodes, max_depth=4[0m
[34m[56]#011train-error:0.096147#011validation-error:0.104516[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.09632#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.096112#011validation-error:0.104394[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 34 pruned nodes, max_depth=5[0m
[34m[59]#011train-error:0.096042#011validation-error:0.104273[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 28 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.096008#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[61]#011train-error:0.096042#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[62]#011train-error:0.096077#011validation-error:0.104516[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 30 pruned nodes, max_depth=3[0m
[34m[63]#011train-error:0.096147#011validation-error:0.104273[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[64]#011train-error:0.096216#011validation-error:0.10403[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.09632#011validation-error:0.104151[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[66]#011train-error:0.096181#011validation-error:0.104273[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[67]#011train-error:0.095904#011validation-error:0.104151[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[68]#011train-error:0.096008#011validation-error:0.104516[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 26 pruned nodes, max_depth=0[0m
[34m[69]#011train-error:0.096042#011validation-error:0.104516[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[70]#011train-error:0.096077#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.095938#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[72]#011train-error:0.095938#011validation-error:0.104758[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 28 pruned nodes, max_depth=2[0m
[34m[73]#011train-error:0.096008#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 12 pruned nodes, max_depth=0[0m
[34m[74]#011train-error:0.095869#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0[0m
[34m[75]#011train-error:0.095938#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[76]#011train-error:0.095904#011validation-error:0.104637[0m
[34m[06:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.0958#011validation-error:0.104758[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3[0m
[34m[78]#011train-error:0.09573#011validation-error:0.104758[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[79]#011train-error:0.095765#011validation-error:0.105123[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.095834#011validation-error:0.104637[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.095592#011validation-error:0.104758[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 22 pruned nodes, max_depth=4[0m
[34m[82]#011train-error:0.095557#011validation-error:0.104394[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 26 pruned nodes, max_depth=3[0m
[34m[83]#011train-error:0.095557#011validation-error:0.104273[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 32 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.095453#011validation-error:0.104758[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=4[0m
[34m[85]#011train-error:0.095453#011validation-error:0.105001[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[86]#011train-error:0.095453#011validation-error:0.10488[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0[0m
[34m[87]#011train-error:0.095453#011validation-error:0.10488[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[88]#011train-error:0.095349#011validation-error:0.10488[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 30 pruned nodes, max_depth=2[0m
[34m[89]#011train-error:0.095037#011validation-error:0.105365[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 14 pruned nodes, max_depth=0[0m
[34m[90]#011train-error:0.095106#011validation-error:0.105487[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 42 pruned nodes, max_depth=0[0m
[34m[91]#011train-error:0.095037#011validation-error:0.105487[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0[0m
[34m[92]#011train-error:0.095106#011validation-error:0.105365[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[93]#011train-error:0.095314#011validation-error:0.10488[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[94]#011train-error:0.095314#011validation-error:0.105123[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 24 pruned nodes, max_depth=3[0m
[34m[95]#011train-error:0.095314#011validation-error:0.105123[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[96]#011train-error:0.095279#011validation-error:0.105123[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[97]#011train-error:0.094828#011validation-error:0.105487[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 22 pruned nodes, max_depth=2[0m
[34m[98]#011train-error:0.094863#011validation-error:0.105365[0m
[34m[06:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[99]#011train-error:0.094759#011validation-error:0.104758[0m
2021-01-23 06:38:09 Completed - Training job completed
Training seconds: 49
Billable seconds: 49
###Markdown
--- HostingNow that the `xgboost` model has been trained on the input data, let's deploy it to an endpoint for real-time inference.
###Code
xgb_predictor = xgb.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
###Output
---------------!
###Markdown
--- EvaluationThere are many ways to assess the performance of a machine learning model. Here we simply compare actual values to predicted values, building a confusion matrix from whether a customer subscribed to a term deposit (`1`) or not (`0`).To do that, we need to send the inference data to the endpoint and receive the results. The data is currently stored as NumPy arrays in the notebook instance's memory. To send it in an HTTP POST request, we serialize it as CSV and then decode the CSV that is returned.*Note: when doing CSV-format inference with SageMaker XGBoost, the request data must not include the target column.*
###Code
from sagemaker.serializers import CSVSerializer
xgb_predictor.serializer = CSVSerializer()
###Output
_____no_output_____
###Markdown
Create a simple function that calls the endpoint:1. Loop over the test dataset1. Split it into mini-batches of `rows` rows1. Convert each mini-batch into CSV string payloads (dropping the target variable)1. Invoke the XGBoost endpoint and receive the predictions1. Convert the CSV predictions that are returned back into a NumPy array
###Code
def predict(data, rows=500):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = ''
for array in split_array:
predictions = ','.join([predictions, xgb_predictor.predict(array).decode('utf-8')])
return np.fromstring(predictions[1:], sep=',')
predictions = predict(test_data.drop(['y_no', 'y_yes'], axis=1).to_numpy())
###Output
_____no_output_____
###Markdown
Build a confusion matrix comparing the predictions with the actual values.
###Code
pd.crosstab(index=test_data['y_yes'], columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
###Output
_____no_output_____
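###Markdown
The discussion below mentions adjusting the decision threshold to trade off false positives against false negatives. Here is a hedged sketch (not part of the original notebook) that re-cuts the same predicted probabilities at a lower, hypothetical threshold and rebuilds the confusion matrix:
###Code
# Hypothetical example: classify as 'yes' above 0.3 instead of the default 0.5
cutoff = 0.3
pd.crosstab(index=test_data['y_yes'], columns=(predictions > cutoff).astype(int),
            rownames=['actuals'], colnames=['predictions'])
###Output
_____no_output_____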
###Markdown
Of roughly 4,000 potential customers, the model predicted that 136 would subscribe to a term deposit, and 94 of them actually did. It also turns out that 389 customers subscribed even though the model did not predict they would. This may be a lower performance than hoped for, but with minimal effort we achieved accuracy similar to the results introduced [here](http://media.salford-systems.com/video/tutorial/2015/targeted_marketing.pdf), and there is room for further improvement. _Because there is a random element in the algorithm's sampling, your numbers may not exactly match the results above._ --- ExtensionsThis example analyzed a relatively small dataset, but SageMaker's distributed, managed training and real-time model hosting can easily be applied to problems that require much larger amounts of data. To further improve prediction accuracy, you can adjust the threshold to change the balance of false positives and false negatives, as in the sketch after the confusion matrix above. In a real-world setting, you would spend more time examining the data's features and engineering additional customer information from the current dataset. (Optional) Clean up resourcesRun the cell below when you have completely finished with this example. The following command deletes the endpoint hosted on SageMaker that was created in the inference step. If you do not delete the endpoint, it will continue to incur charges.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____ |
DS_Sprint_Challenge_7.ipynb | ###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train_master = pd.read_csv(train_url)
test_master = pd.read_csv(test_url)
assert train_master.shape == (51916, 17)
assert test_master.shape == (17306, 17)
train = train_master
test = test_master
from sklearn.model_selection import train_test_split
train_size = 0.8
train,val = train_test_split(train,
train_size = train_size,
random_state=42)
train.shape,val.shape,test.shape
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding. Exploration/Cleaning Iterations
###Code
train.isna().sum()
#The value here is used when encoding
train.Risk[train.Risk.isna()]
train.City.value_counts().head()
#The number of non-chicago entries is so small that we can drop it
#You can get rid of head in the above code to see all values.
###Output
_____no_output_____
###Markdown
Datetime
###Code
#Test code to check if my function would work properly
time = pd.to_datetime(train['Inspection Date'],infer_datetime_format=True)
#Turning the datetime value into a time in days from beginning of dataset
# train['time'] = time-min(time)
# train['time'] = train.time.dt.days
# train.time
#TODO
#Splitting up data into day, month, and year
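#For example (sketch), the .dt accessor pulls out calendar components:
# train['year'] = time.dt.year
# train['month'] = time.dt.month
# train['day'] = time.dt.day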
###Output
_____no_output_____
###Markdown
Change Type
###Code
#I actually can't do this because of the nans. I'm using an imputer
#to deal with the nans right now. I would need to deal with them differently
#to change the type of the column to an int.
# train['License #'].astype(int)
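# Note (sketch): pandas' nullable integer dtype tolerates NaNs, so
# train['License #'].astype('Int64') would work without imputing first
# (assumes pandas >= 0.24).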
###Output
_____no_output_____
###Markdown
Encodings
###Code
train.Risk.str.contains('1')
#this is a mask... now I have to use it to replace the values in the columns
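# One way to apply such a mask (illustrative sketch):
# train.loc[train.Risk.str.contains('1', na=False), 'Risk'] = 3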
train.head()
###Output
_____no_output_____
###Markdown
Risk should be encoded matching the numbers but the other categories can be mapped anyhow (for now). I thought I would set up a dictionary here, and encode manually, but the ordinal encoder allows you to pass it a dictionary. I will encode using that feature. Wrangle Function
###Code
def wrangle(X):
# avoid set with copy warnings
X = X.copy()
#Drop city and state (basically all the same)
X = X.drop(['City','State'],1)
#Drop Violations (leaky data)
X = X.drop(['Violations'],1)
#Drop Location (information contained in lat/long columns + in dictionary format)
X = X.drop(['Location'],1)
#Drop inspection id (spurious correlations?, but it could be a matter of tuning my models)
X = X.drop(['Inspection ID'],1)
#HIGH CARDINALITY DROPS:
#Drop Address, DBA Name, and AKA name
X = X.drop(['Address','DBA Name','AKA Name'],1)
#Create a "time in days" column
time = pd.to_datetime(X['Inspection Date'],infer_datetime_format=True)
X['time_days']= time-min(time)
X['time_days'] = X.time_days.dt.days
#Drop the Inspection Date
X = X.drop(['Inspection Date'],1)
#Change some of the floats to ints:
return X
train = wrangle(train)
test = wrangle(test)
val = wrangle(val)
test.shape, val.shape, train.shape
###Output
_____no_output_____
###Markdown
Future Work If I had more time, or knew more NLP, then instead of dropping high-cardinality columns like the name I could look for trends in the sets of words that appear and create categories based on those words. A simple one would be "main food": a lot of restaurants have either the cuisine or the food in their names ("Burrito Beach", "Frank's Chicago Shrimp House"...). I could pull this information out and use it to help predict the inspection result. Leaky Data: The "Violations" column in this dataset isn't something that would be known before an inspection; it is a report produced after the inspection. This data is leaky, and if it were properly encoded as numbers you would find that it accurately predicts inspection failure, because failing an inspection means there was a "critical" violation. What you COULD do with this information is use it as a target. You could create a model that predicts the violations and then run the predicted violations through an if statement to get pass/fail scores. This way you could create a model that gives more detailed output (at the possible cost of some accuracy). But you can't use this information to predict other information. If you were predicting the violations you might find interesting trends: maybe certain areas of the city usually have certain types of violations, and if you had more categorical data about types of restaurants you might be able to find trends by cuisine (for example a vegetarian restaurant vs. a non-vegetarian restaurant, or a chain store vs. a family-owned store). Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.** Baseline
###Code
train.Fail.value_counts(normalize=True)
#74% accurate if you guessed majority class of "Pass"
###Output
_____no_output_____
###Markdown
Modeling
###Code
train.head()
train.dtypes
target = 'Fail'
features = train.columns.drop(target)
print(features)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
#Found here: https://stackoverflow.com/questions/50092911/how-to-map-categorical-data-to-category-encoders-ordinalencoder-in-python-pandas
#Not working when I pass it as an argument to the ordinal encoder. I'll check it out later.
import numpy as np
ordinal_cols_mapping = [{
"col":"Risk",
"mapping": [
('Risk 1 (High)',3),
('Risk 2 (Medium)',2),
('Risk 3 (Low)',1),
('NaN',np.nan)
]},
]
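# A possible reason the above fails: in category_encoders 2.x the value of
# "mapping" is expected to be a dict (or pd.Series) rather than a list of
# tuples, e.g. (illustrative, assuming the 2.x API):
# ordinal_cols_mapping = [{
#     "col": "Risk",
#     "mapping": {'Risk 1 (High)': 3, 'Risk 2 (Medium)': 2, 'Risk 3 (Low)': 1}
# }]
# which could then be passed as ce.OrdinalEncoder(mapping=ordinal_cols_mapping).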
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
#Some of the visualization libraries can't use pipelines so
#I'm going to just make one for the encoding/imputing
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
X_test_processed = processor.transform(X_test)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100,
max_depth = 15,
max_features =7,
n_jobs = -2)
model.fit(X_train_processed, y_train)
model.score(X_val_processed, y_val)
#Around baseline, BUT the ROC_AUC might be better than baseline
from sklearn.metrics import roc_auc_score
#The first column is the probability of 0 (pass)
#Second column is probability of 1 (fail)
model.predict_proba(X_val_processed)[:,1]
#Verification of class
model.predict(X_val_processed)
#It seems like you have to give the ROC_AUC metric the
#probabilities of the positive class. It's a little strange here
#because the positive class (a result of 1 from the model) corresponds to
#"Fail"
y_pred_proba = model.predict_proba(X_val_processed)[:,1]
print("Baseline ROC_AUC should be 0.50")
print('My ROC_AUC Score',roc_auc_score(y_val,y_pred_proba))
###Output
Baseline ROC_AUC should be 0.50
My ROC_AUC Score 0.7276069551572897
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values Partial Dependence Plot
###Code
#for pdp I have to have one pipeline with everything in it
#This is because it expects the values of the input to be
#a dataframe, and encoding/imputing turns the df into an array
#I can try a workaround by creating a df from the array.
pdp_model = make_pipeline(processor,
model)
pdp_model.fit(X_train,y_train)
from pdpbox.pdp import pdp_isolate, pdp_plot
def my_pdp(feature,df,my_model):
isolated = pdp_isolate(model = my_model,
dataset = df,
model_features = df.columns,
feature= feature)
return pdp_plot(isolated,feature_name=feature)
#This only works on the int column in my dataset
#I'm not sure why...
my_pdp('time_days',X_val,pdp_model)
#Tried some debugging, but didn't work out.
# X_val.head()
# my_pdp('Inspection Type',X_val,model)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍕 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.isnull().sum()
train['State'].value_counts()
train['Violations'].value_counts()
def wrangle(X):
X = X.copy()
X.columns = X.columns.str.replace(' ', '_')
to_drop = ['Violations','Location',]
X = X.drop(columns=to_drop)
# Convert date_recorded to datetime
X['Inspection_Date'] = pd.to_datetime(X['Inspection_Date'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['Inspection_Date'].dt.year
X['month_recorded'] = X['Inspection_Date'].dt.month
X['day_recorded'] = X['Inspection_Date'].dt.day
X = X.drop(columns='Inspection_Date')
return X
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['Fail'], random_state=42)
train = wrangle(train)
test = wrangle(test)
val = wrangle(val)
target = 'Fail'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test.drop(columns=target)
y_test = test[target]
X_test.shape
import category_encoders as ce
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=220, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, pipeline.predict(X_val))
###Output
_____no_output_____
###Markdown
For some reason, I cannot get the roc_auc_score any better than this. The score actually went up as the model became less accurate.
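One likely reason: `roc_auc_score` above was given hard class labels from `pipeline.predict`, while ROC AUC is meant to rank predicted probabilities. A sketch of the probability-based version:
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]  # probability of the positive class (Fail = 1)
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Scoring probabilities usually gives a more meaningful ROC AUC than scoring the thresholded labels.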
###Code
ytest = y_test.to_list()
train.columns
column = 'month_recorded'
# Fit without column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train.drop(columns=column), y_train)
score_without = pipeline.score(X_val.drop(columns=column), y_val)
print(f'Validation Accuracy without {column}: {score_without}')
# Fit with column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Compare the error with & without column
print(f'Drop-Column Importance for {column}: {score_with - score_without}')
###Output
Validation Accuracy without month_recorded: 0.7482665639445301
Validation Accuracy with month_recorded: 0.7540446841294299
Drop-Column Importance for month_recorded: 0.005778120184899871
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color=['blue','green']);
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values().index
#Here we use ELI5 to display the weights
eli5.show_weights(
permuter,
top=None, # No limit: show permutation importances for all features
feature_names=feature_names # must be a list
)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
Requirement already satisfied: category_encoders==2.* in /usr/local/lib/python3.6/dist-packages (2.1.0)
Requirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.21.3)
Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.16.5)
Requirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.24.2)
Requirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.5.1)
Requirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.10.1)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.3.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders==2.*) (0.14.0)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2018.9)
Requirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2.5.3)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders==2.*) (1.12.0)
Requirement already satisfied: eli5 in /usr/local/lib/python3.6/dist-packages (0.10.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from eli5) (1.3.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from eli5) (1.12.0)
Requirement already satisfied: attrs>16.0.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (19.2.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from eli5) (2.10.3)
Requirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.8.5)
Requirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.21.3)
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (1.16.5)
Requirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from eli5) (0.10.1)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->eli5) (1.1.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->eli5) (0.14.0)
Requirement already satisfied: pandas-profiling==2.* in /usr/local/lib/python3.6/dist-packages (2.3.0)
Requirement already satisfied: astropy in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (3.0.5)
Requirement already satisfied: matplotlib>=1.4 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (3.0.3)
Requirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.24.2)
Requirement already satisfied: jinja2>=2.8 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (2.10.3)
Requirement already satisfied: confuse>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (1.0.0)
Requirement already satisfied: phik>=0.9.8 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.9.8)
Requirement already satisfied: htmlmin>=0.1.12 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.1.12)
Requirement already satisfied: missingno>=0.4.2 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.*) (0.4.2)
Requirement already satisfied: numpy>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from astropy->pandas-profiling==2.*) (1.16.5)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.*) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.*) (2.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.*) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.*) (2.5.3)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pandas-profiling==2.*) (2018.9)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.8->pandas-profiling==2.*) (1.1.1)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from confuse>=1.0.0->pandas-profiling==2.*) (3.13)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (1.3.1)
Requirement already satisfied: nbconvert>=5.3.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (5.6.0)
Requirement already satisfied: pytest>=4.0.2 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (5.2.1)
Requirement already satisfied: numba>=0.38.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (0.40.1)
Requirement already satisfied: pytest-pylint>=0.13.0 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (0.14.1)
Requirement already satisfied: jupyter-client>=5.2.3 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.*) (5.3.3)
Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from missingno>=0.4.2->pandas-profiling==2.*) (0.9.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=1.4->pandas-profiling==2.*) (41.2.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=1.4->pandas-profiling==2.*) (1.12.0)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (1.4.2)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (2.1.3)
Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (3.1.0)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.8.4)
Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.4.2)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.3)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (4.5.0)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.6.0)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (4.3.3)
Requirement already satisfied: nbformat>=4.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (4.4.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (1.8.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (0.1.7)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (19.2)
Requirement already satisfied: pluggy<1.0,>=0.12 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (0.13.0)
Requirement already satisfied: importlib-metadata>=0.12; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (0.23)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (1.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (7.2.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (19.2.0)
Requirement already satisfied: llvmlite>=0.25.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik>=0.9.8->pandas-profiling==2.*) (0.29.0)
Requirement already satisfied: pylint>=1.4.5 in /usr/local/lib/python3.6/dist-packages (from pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (2.4.2)
Requirement already satisfied: tornado>=4.1 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.*) (4.5.3)
Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.*) (17.0.0)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.5.1)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (0.2.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (4.4.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.*) (2.6.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < "3.8"->pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.*) (0.6.0)
Requirement already satisfied: mccabe<0.7,>=0.6 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (0.6.1)
Requirement already satisfied: isort<5,>=4.2.5 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (4.3.21)
Requirement already satisfied: astroid<2.4,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (2.3.1)
Requirement already satisfied: typed-ast<1.5,>=1.4.0; implementation_name == "cpython" and python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (1.4.0)
Requirement already satisfied: lazy-object-proxy==1.4.* in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (1.4.2)
Requirement already satisfied: wrapt==1.11.* in /usr/local/lib/python3.6/dist-packages (from astroid<2.4,>=2.3.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.*) (1.11.2)
Requirement already satisfied: pdpbox in /usr/local/lib/python3.6/dist-packages (0.2.0)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.24.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pdpbox) (1.16.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from pdpbox) (1.3.1)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.14.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.21.3)
Requirement already satisfied: matplotlib>=2.1.2 in /usr/local/lib/python3.6/dist-packages (from pdpbox) (3.0.3)
Requirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (from pdpbox) (5.4.8)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas->pdpbox) (2018.9)
Requirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas->pdpbox) (2.5.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (2.4.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (1.1.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.5.0->pandas->pdpbox) (1.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=2.1.2->pdpbox) (41.2.0)
Requirement already satisfied: shap in /usr/local/lib/python3.6/dist-packages (0.30.2)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from shap) (3.0.3)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from shap) (0.24.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from shap) (1.3.1)
Requirement already satisfied: ipython in /usr/local/lib/python3.6/dist-packages (from shap) (5.5.0)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.6/dist-packages (from shap) (0.15.0)
Requirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.6/dist-packages (from shap) (4.28.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from shap) (0.21.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from shap) (1.16.5)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->shap) (2.5.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->shap) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->shap) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->shap) (2.4.2)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2018.9)
Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (41.2.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (0.7.5)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (1.0.18)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.3.3)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.4.0)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.7.0)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (2.1.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (0.8.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (2.3)
Requirement already satisfied: imageio>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (2.4.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (1.0.3)
Requirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (4.3.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->shap) (0.14.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib->shap) (1.12.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython->shap) (0.1.7)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython->shap) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython->shap) (0.6.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.3.0->scikit-image->shap) (0.46)
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make all four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.head(2)
train.select_dtypes('object').nunique() > 50 #examining categorical features with more than 50 unique values
#columns with high cardinality to drop
columns_drop = ['DBA Name', 'AKA Name','Address', 'Inspection Date', 'Inspection Type', 'Violations', 'Location',
'Risk', 'Fail', #risk and fail (target) shouldn't leak into features
'Inspection ID', 'License #'] #IDs aren't useful
columns_drop
features = train.columns[~train.columns.isin(columns_drop)] #list of features to use for prediction
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
#train test split
from sklearn.model_selection import train_test_split
train, val = train_test_split(train,
train_size = 0.8, test_size=0.2,
stratify = train['Fail'],
random_state=42)
X_train = train[features]
X_val = val[features]
X_test = test[features]
y_train = train['Fail']
y_val = val['Fail']
y_test = test['Fail']
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from category_encoders import OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
#make pipeline
pipeline = make_pipeline(OrdinalEncoder(),
SimpleImputer(),
StandardScaler(),
RandomForestClassifier(n_estimators = 500,
n_jobs=-1,
random_state=42))
#fit pipeline
pipeline.fit(X_train, y_train)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
##RANDOM SEARCH
from sklearn.model_selection import RandomizedSearchCV
#select hyperparameters
hyperparameters = {'simpleimputer__strategy': ['mean', 'median'],
'randomforestclassifier__max_depth': range(0, 50, 2),
'randomforestclassifier__min_samples_split': range(0, 500, 5),
'randomforestclassifier__min_samples_leaf': range(0, 500, 5)}
#apply search
search = RandomizedSearchCV(pipeline,
hyperparameters,
random_state = 42,
n_iter = 20,
cv = 5)
#fit search to trian set
best_model = search.fit(X_train, y_train)
y_pred_proba = best_model.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
best_model.best_params_
###Output
_____no_output_____
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
##PERMUTATION IMPORTANCE W/ ELI5
#pipeline without model (to use eli5)
small_pipeline = make_pipeline(OrdinalEncoder(),
SimpleImputer(),
StandardScaler())
#transform X_train and X_test
X_train_transformed = small_pipeline.fit_transform(X_train)
X_val_transformed = small_pipeline.transform(X_val)
#isolated random forest
model = RandomForestClassifier(max_depth=12,
min_samples_leaf=45,
min_samples_split=395)
model.fit(X_train_transformed, y_train)
from eli5.sklearn import PermutationImportance
import eli5
#instantiate permuter
permuter = PermutationImportance(model,
scoring = 'accuracy',
n_iter = 5,
random_state=42)
#fit permuter to validation set
permuter.fit(X_val_transformed, y_val)
features = X_val.columns.tolist()
#show weights
eli5.show_weights(permuter,
top=None,
feature_names = features)
X_val.columns
## PARTIAL DEPENDENCE PLOTS
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'Facility Type'
isolated = pdp_isolate(
model = model,
dataset = X_val,
model_features = X_val.columns,
feature =feature)
pdp_plot(isolated, feature_name=feature)
###Output
_____no_output_____
###Markdown
Leak I believe the column that would hurt the model in the real world is the "Risk" column. The problem with this column is that it implies that the inspectors have prior insight into the cleanliness of the restaurant, while the purpose of the model is to predict the cleanliness of the restaurant (whether they pass the test or not). I had a similar problem in my Unit 2 Build project. I was building a model to predict whether car accidents would result in major injury or fatality (Y/N), and a set of attributes counting the number of fatalities/major injuries/minor injuries per pedestrian/cyclist/driver leaked into my model, giving me a very accurate model that was useless because the most important features could not be obtained before real-world accidents occur.
###Code
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍕 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url, index_col=0)
test = pd.read_csv(test_url, index_col=0)
#assert train.shape == (51916, 16)
#assert test.shape == (17306, 16)
###Output
_____no_output_____
###Markdown
I stopped the shape assertion due to import errors. This is also why I deleted the index to each file. Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
test.shape
train.shape
test.head()
###Output
_____no_output_____
###Markdown
Errors I am consistently getting this type of error message; I have spent an hour and have only been able to download the file a couple of times without issue. I should be able to re-run my cells without dealing with this while completing the sprint challenge. The message reads: "Google Drive can't scan this file for viruses. We are experiencing technical difficulties. Would you still like to download this file? food-inspections-train.csv (61M) Download anyway"
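One possible workaround (an assumption on my part, not part of the original challenge) is to download the files once with the `gdown` package, which handles Drive's confirmation page, and then read the local copies:
###Code
# Sketch of a workaround; gdown is usually preinstalled in Colab
# (otherwise: !pip install gdown).
import gdown
gdown.download(train_url, 'food-inspections-train.csv', quiet=False)
gdown.download(test_url, 'food-inspections-test.csv', quiet=False)
train = pd.read_csv('food-inspections-train.csv', index_col=0)
test = pd.read_csv('food-inspections-test.csv', index_col=0)
###Output
_____no_output_____
###Markdown
With local copies, re-running cells no longer depends on Drive's virus scanner.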
###Code
train.head()
###Output
_____no_output_____
###Markdown
Examine Facility Type as a good feature to re-engineer. Looks like it could be a categorical value but is likely full of random stuff.
###Code
y = train['Facility Type']
print(y.nunique()) #this finds number of unique values
print(y.unique()) #this generates the contents of the array of unique values
train['Facility Type'].isnull().sum()
train.isnull().sum()
###Output
_____no_output_____
###Markdown
Dropping rows that contain NaN in 'Facility Type'
###Code
train = train[train['Facility Type'].notnull()]
columns = ['City','AKA Name','License #','State','Zip','Violations','Latitude','Longitude','Location']
###Output
_____no_output_____
###Markdown
Re-engineering the 'Facility Type' column. This was done by hunt and peck; if I had more time I'd build a function for it (a sketch of one follows the cell below).
###Code
train['Facility Type'] = train['Facility Type'].str.lower()
daycare = train['Facility Type'].str.contains('daycare')
store = train['Facility Type'].str.contains('store')
restuarant = train['Facility Type'].str.contains('restuarant')
grocery = train['Facility Type'].str.contains('grocery')
cafeteria = train['Facility Type'].str.contains('cafeteria')
bar = train['Facility Type'].str.contains('bar')
coffee = train['Facility Type'].str.contains('coffee')
assisted = train['Facility Type'].str.contains('assist')
shop = train['Facility Type'].str.contains('shop')
care = train['Facility Type'].str.contains('care')
nursing = train['Facility Type'].str.contains('nursing')
club = train['Facility Type'].str.contains('club')
kitchen = train['Facility Type'].str.contains('kitchen')
cafe = train['Facility Type'].str.contains('cafe')
school = train['Facility Type'].str.contains('school')
dcare = train['Facility Type'].str.contains('day care')
hall = train['Facility Type'].str.contains('hall')
venue = train['Facility Type'].str.contains('venue')
diner = train['Facility Type'].str.contains('diner')
bakery = train['Facility Type'].str.contains('bakery')
rooftop = train['Facility Type'].str.contains('rooftop')
gasstation = train['Facility Type'].str.contains('gas station')
train.loc[daycare, 'Facility Type'] = 'Daycare'
train.loc[store, 'Facility Type'] = 'Store'
train.loc[restuarant, 'Facility Type'] = 'Restuarant'
train.loc[grocery, 'Facility Type'] = 'Grocery'
train.loc[cafeteria, 'Facility Type'] = 'Cafeteria'
train.loc[bar, 'Facility Type'] = 'Bar'
train.loc[coffee, 'Facility Type'] = 'Coffee'
train.loc[assisted, 'Facility Type'] = 'Assisted Living'
train.loc[shop, 'Facility Type'] = 'Shop'
train.loc[nursing, 'Facility Type'] = 'Assisted Living'
train.loc[club, 'Facility Type'] = 'Club'
train.loc[kitchen, 'Facility Type'] = 'Kitchen'
train.loc[cafe, 'Facility Type'] = 'Cafe'
train.loc[school, 'Facility Type'] = 'School'
train.loc[dcare, 'Facility Type'] = 'Daycare'
train.loc[hall, 'Facility Type'] = 'Venue'
train.loc[venue, 'Facility Type'] = 'Venue'
train.loc[diner, 'Facility Type'] = 'Diner'
train.loc[bakery, 'Facility Type'] = 'Bakery'
train.loc[rooftop, 'Facility Type'] = 'Rooftop'
train.loc[gasstation, 'Facility Type'] = 'Gas Station'
facilities = ['Gas Station','Rooftop','Bakery','Diner','Venue','Daycare',
'School','Cafe','Kitchen','Club','Assisted Living','Shop',
'Coffee','Bar','Cafeteria','Grocery','Restuarant','Store',
'Daycare']
###Output
_____no_output_____
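###Markdown
The hunt-and-peck mapping above could also be driven by a keyword list; a rough sketch (keyword order mirrors the assignment order above, so later keywords overwrite earlier matches):
###Code
def consolidate_facility_type(series):
    """Map raw facility-type strings onto the coarser categories used above."""
    keyword_map = [
        ('daycare', 'Daycare'), ('store', 'Store'), ('grocery', 'Grocery'),
        ('bar', 'Bar'), ('school', 'School'), ('bakery', 'Bakery'),
        # ...extend with the remaining keyword/category pairs from the cell above
    ]
    series = series.str.lower()
    out = series.copy()
    for keyword, category in keyword_map:
        out[series.str.contains(keyword, na=False)] = category
    return out

# e.g. train['Facility Type'] = consolidate_facility_type(train['Facility Type'])
###Output
_____no_output_____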
###Markdown
Selecting only those rows in train which have been standardized through my re-engineering
###Code
train = train[train['Facility Type'].isin(facilities)]
train['Facility Type'].value_counts()
train.shape
###Output
_____no_output_____
###Markdown
Original shape of train was (51916, 16). Now it's (14239, 16). Shouldn't matter as I still have lots of data and my goal is to make as quick of a model as I can before fine tuning.
###Code
train.head()
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['Facility Type'], random_state=42)
print(train.shape, val.shape, test.shape)
target = 'Fail'
features = train.columns.drop(target)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
The baseline is 74%, the proportion of the time food-handling establishments pass their inspections.
###Code
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Train Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
###Output
Train Accuracy 1.0
Validation Accuracy 0.7212078651685393
###Markdown
Validation accuracy of the model (about 72%) falls below the 74% majority-class baseline.
###Code
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
column = 'Violations'
# Fit without column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train.drop(columns=column), y_train)
score_without = pipeline.score(X_val.drop(columns=column), y_val)
print(f'Validation Accuracy without {column}: {score_without}')
# Fit with column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Compare the error with & without column
print(f'Drop-Column Importance for {column}: {score_with - score_without}')
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
y_pred_proba = pipeline.predict_proba(X_val)[:, -1] # Probability for the last class
roc_auc_score(y_val, y_pred_proba)
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
import matplotlib.pyplot as plt
plt.scatter(fpr, tpr)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
###Output
_____no_output_____
###Markdown
As you can see, the curve sits only barely above the diagonal, so the model has some discriminative value, but not much. Clearly more work needs to be done on the model. Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
pip install eli5
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
import eli5
from eli5.sklearn import PermutationImportance
# 1. Calculate permutation importances
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
eli5.show_weights(
permuter,
top=None, # show permutation importances for all features
feature_names=feature_names # must be a list
)
###Output
_____no_output_____
###Markdown
This seems to show clearly that none of the features are especially important. Inspection Type comes closest to mattering, but even its weight is small. It seems that a lot of detailed work is needed to clean and organize the dataset before a better model can be built.
###Code
!pip install PDPbox
!pip install xgboost
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_val)
print('Gradient Boosting R^2', r2_score(y_val, y_pred))
import matplotlib.pyplot as plt
features
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'License #'
isolated = pdp_isolate(
model=gb,
dataset=X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature)
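# A minimal sketch, not in the original: a 2-feature interaction PDP with pdpbox.
# 'Zip' is an assumed second column name; substitute any two numeric features from X_val.
from pdpbox.pdp import pdp_interact, pdp_interact_plot
interact_features = ['License #', 'Zip']
interaction = pdp_interact(
    model=gb,
    dataset=X_val,
    model_features=X_val.columns,
    features=interact_features
)
pdp_interact_plot(interaction, feature_names=interact_features, plot_type='grid');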
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
Successfully installed category-encoders-2.1.0
Successfully installed eli5-0.10.1
Successfully installed astroid-2.3.1 confuse-1.0.0 htmlmin-0.1.12 isort-4.3.21 lazy-object-proxy-1.4.2 mccabe-0.6.1 pandas-profiling-2.3.0 phik-0.9.8 pluggy-0.13.0 pylint-2.4.2 pytest-5.2.1 pytest-pylint-0.14.1 typed-ast-1.4.0
Successfully installed pdpbox-0.2.0
Successfully installed shap-0.30.2
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make all four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.head()
train.isnull().sum()
def engineer_features(X):
X = X.copy()
    # Note: assigning Series.dropna() back to a column is a no-op, so the original
    # per-column dropna calls did nothing. Missing values are left in place here
    # and handled by the SimpleImputer in the modeling pipeline.
X = X.drop(columns=['Violations', 'DBA Name', 'License #', 'Location', 'City', 'State', 'AKA Name'])
X['Inspection Date'] = pd.to_datetime(X['Inspection Date'], infer_datetime_format=True)
return X
train = engineer_features(train)
test = engineer_features(test)
print(train.shape)
train.head(5)
from sklearn.model_selection import train_test_split
target = train['Fail']
X_trainval, X_test, y_trainval, y_test = train_test_split(
train, target, train_size =0.8, test_size=0.2, stratify=target, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, train_size =0.8, test_size=0.2,
stratify=y_trainval, random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
# Keeping 'Fail' (the target) and 'Inspection Date' in X leaked information
# and produced an ROC AUC of 1, so drop both before modeling.
def edit(X):
    X = X.copy()
    X = X.drop(columns=['Inspection Date', 'Fail'])
    return X
X_train = edit(X_train)
X_val = edit(X_val)
X_test = edit(X_test)
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
model = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=1000, n_jobs=-1, random_state=42)
)
model.fit(X_train, y_train);
model.named_steps
from sklearn.metrics import roc_auc_score
rf_y_pred = model.predict(X_val)  # the fitted pipeline is named `model`
roc_auc_score(y_val, rf_y_pred)
rf_y_pred_proba = model.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, rf_y_pred_proba)
from xgboost import XGBClassifier
xgb = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
XGBClassifier(n_estimators=1000, n_jobs=-1)
)
xgb.fit(X_train, y_train);
xgb_y_pred = xgb.predict(X_val)
roc_auc_score(y_val, xgb_y_pred)
xgb_y_pred_proba = xgb.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, xgb_y_pred_proba)
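# Added step, not part of the original cells: score the same XGBoost pipeline
# on the held-out test split created earlier.
xgb_test_proba = xgb.predict_proba(X_test)[:, 1]
print('Test ROC AUC:', roc_auc_score(y_test, xgb_test_proba))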
###Output
_____no_output_____
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
import matplotlib.pyplot as plt
%matplotlib inline
rf = model.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy="median")
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)  # transform only; refitting on validation data would leak
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_transformed, y_train)
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=2,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
eli5.show_weights(
permuter,
top=None,
feature_names = feature_names
)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍕 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.head()
train.describe()
train.isnull().sum()
train.shape
train = train.drop(columns=['DBA Name', 'AKA Name', 'Address', 'Location'])
test.head()
test = test.drop(columns=['DBA Name', 'AKA Name', 'Address', 'Location'])
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.impute import SimpleImputer
target = 'Fail'
features = train.columns.drop([target])
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
DecisionTreeClassifier(max_depth=3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_test, y_test))
import graphviz
from sklearn.tree import export_graphviz
tree = pipeline.named_steps['decisiontreeclassifier']
dot_data = export_graphviz(
tree,
out_file=None,
feature_names=X_train.columns,
class_names=y_train.unique().astype(str),
filled=True,
impurity=False,
proportion=True
)
graphviz.Source(dot_data)
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
from sklearn.model_selection import train_test_split
train1, val = train_test_split(train, random_state = 42)
train1.shape, val.shape
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
target = 'Fail'
features = ['Inspection ID', 'Risk', 'Inspection Type', 'Violations', 'License #', 'Zip', ]
X_train = train1[features]
y_train = train1[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(max_depth=3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
from sklearn.metrics import roc_auc_score
y_pred_proba = pipeline.predict_proba(X_test)[:, -1]  # probability of the positive class (last column for binary)
roc_auc_score(y_test, y_pred_proba)
###Output
_____no_output_____
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
import category_encoders as ce
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
df = train1
df = df.dropna()
target = 'Fail'
features = df.columns.drop([target, 'Inspection Type', 'Violations'])  # drop the target too, so it doesn't leak into X
X = df[features]
y = df[target]
# Use Ordinal Encoder, outside of a pipeline
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'Risk'  # the target itself can't be used as a predictor; 'Risk' is used here as an illustrative feature
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
from pdpbox.pdp import pdp_interact, pdp_interact_plot
# pdp_interact expects exactly two features; 'Risk' and 'Zip' are an illustrative pair
interact_features = ['Risk', 'Zip']
interaction = pdp_interact(
    model=model,
    dataset=X_encoded,
    model_features=X_encoded.columns,
    features=interact_features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=interact_features);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge To demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.To earn a score of "3", also do all the stretch goals.You are permitted and encouraged to do as much data exploration as you want. Part 1, Confusion Matrix- 1.1. Calculate accuracy- 1.2. Calculate precision- 1.3. Calculate recall Part 2, Log Transformation- 2.1. Log-transform the target- 2.2. Plot the target's distribution, before and after the transformation Part 3, ROC AUC- 3.1. Fit classification model- 3.2. Get ROC AUC score Part 4, Model interpretation visualizations- 4.1. Make _either_ a Partial Dependence Plot _or_ a Shapley Values Force Plot, for either model. Stretch Goals- Get a lower validation error than the example regression model provided in Part 2.- Find and explain leakage in the classification problem.- Make _both_ a Partial Dependence Plot _and_ a Shapley Values Force Plot.
###Code
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install packages in Colab
!pip install --upgrade category_encoders eli5 pandas-profiling pdpbox plotly shap
# Check category_encoders version
import category_encoders as ce
from distutils.version import StrictVersion
assert StrictVersion(ce.__version__) >= StrictVersion('2.0.0')
###Output
_____no_output_____
###Markdown
Part 1, Confusion MatrixImagine this is the confusion matrix for a binary classification model. Use the confusion matrix to calculate the model's accuracy, precision, and recall. Predicted Negative Positive Actual Negative 85 58 Positive 8 36 1.1. Calculate accuracy
###Code
# Accuracy = total % of correct predictions
accuracy = (85+36)/(85+58+8+36)
print(accuracy)
###Output
0.6470588235294118
###Markdown
1.2. Calculate precision
###Code
# Precision = % of positive predictions that were correct
precision = 36/(36+58)
print(precision)
###Output
0.3829787234042553
###Markdown
1.3. Calculate recall
###Code
# Recall = % of positive cases that were predicted as positive
recall = 36/(36+8)
print(recall)
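# Added cross-check, not in the original: rebuild the confusion matrix with
# sklearn and confirm the hand-computed metrics.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
y_true = np.array([0]*(85+58) + [1]*(8+36))           # actual negatives, then actual positives
y_pred = np.array([0]*85 + [1]*58 + [0]*8 + [1]*36)   # predictions in the same order
print(accuracy_score(y_true, y_pred),
      precision_score(y_true, y_pred),
      recall_score(y_true, y_pred))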
###Output
0.8181818181818182
###Markdown
Part 2, Log TransformationThis part uses real-world sales data from a German drugstore chain, from Jan 2, 2013 — July 31, 2015.There are three dataframes:- **train**: historical sales data for 120 stores- **val**: historical sales data for 40 different stores- **test**: historical sales data for 40 different stores Run this cell to load the data
###Code
import pandas as pd
train = pd.read_csv('https://drive.google.com/uc?export=download&id=1YWiyOhY_BiECf-vO8_KrknsHd75HqTfs')
val = pd.read_csv('https://drive.google.com/uc?export=download&id=1Azi1KBv63GdzEn2M0x3eYRvekSaLFnPt')
test = pd.read_csv('https://drive.google.com/uc?export=download&id=1Ab7mg_Vt_bRL7ObiTLPLHbU3sTiVPzc3')
assert train.shape == (94080, 18)
assert val.shape == (31360, 18)
assert test.shape == (31360, 18)
###Output
_____no_output_____
###Markdown
2.1. Log-transform the target, for the train, validation, and test sets.
###Code
target = 'Sales'
y_train_stores = train[target]
y_val_stores = val[target]
y_test_stores = test[target]
import numpy as np
# Complete this code cell
y_train_log_stores = np.log1p(y_train_stores)
y_val_log_stores = np.log1p(y_val_stores)
y_test_log_stores = np.log1p(y_test_stores)
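# Added sanity check, not in the original: log1p and expm1 should round-trip.
assert np.allclose(np.expm1(y_train_log_stores), y_train_stores)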
###Output
_____no_output_____
###Markdown
2.2. Plot the distribution of the train set target, before and after the transformation.
###Code
# Plotting distribution of target
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(y_train_stores);
plt.title('Original Target Distribution')
plt.show()
# Plotting distribution of target after taking the log of it
sns.distplot(y_train_log_stores);
plt.title('Log Target Distribution')
plt.show()
###Output
_____no_output_____
###Markdown
STRETCH GOAL: Get a lower validation error than this example regression model Can you improve on this validation error? Make any changes and use any tools or techniques you want.Data Dictionary:- **Store** - a unique Id for each store- **Year**, **Month**, **Day**, **DayOfWeek** - The date, from Jan 2, 2013 — July 31, 2015.- **Sales** - the units of inventory sold on a given date (this is the target)- **Customers** - the number of customers on a given date- **Promo** - indicates whether a store is running a promo on that day- **SchoolHoliday** - indicates the closure of public schools- **StoreType** - differentiates between 4 different store models: a, b, c, d- **Assortment** - describes an assortment level: a = basic, b = extra, c = extended- **CompetitionDistance** - distance in meters to the nearest competitor store- **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened- **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating- **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2- **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store(The train, validation, and test sets do _not_ have different date ranges. But they _do_ have different store ids. This problem is _not_ about forecasting future sales from past sales. This problem is about predicting sales at unknown stores, from sales at known stores.)
###Code
import category_encoders as ce
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
# Assign to X matrix
features = train.columns.drop([target, 'Store'])
X_train_stores = train[features]
X_val_stores = val[features]
X_test_stores = test[features]
# Define a pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
RandomForestRegressor(n_estimators=10, random_state=42, n_jobs=-1)
)
# Fit on train set, with log-transformed target
pipeline.fit(X_train_stores, y_train_log_stores)
# Predict for validation set
y_pred_log_stores = pipeline.predict(X_val_stores)
# Convert prediction's units, from log-sales to sales
y_pred_stores = np.expm1(y_pred_log_stores)
# Get validation mean absolute error
mae = mean_absolute_error(y_val_stores, y_pred_stores)
print(f'Validation Mean Absolute Error: +/− {mae:.0f} sales, on average')
# Making new model and getting better MAE on validation set
from sklearn.ensemble import GradientBoostingRegressor
pipeline2 = make_pipeline(
ce.OrdinalEncoder(),
GradientBoostingRegressor(random_state = 10)
)
pipeline2.fit(X_train_stores, y_train_log_stores)
y_pred_log_stores = pipeline2.predict(X_val_stores)
y_pred_stores = np.expm1(y_pred_log_stores)
mae = mean_absolute_error(y_val_stores,y_pred_stores)
print(f'My new model\'s Validation Mean Absolute Error: +/- {mae:.0f} sales, on average')
###Output
My new model's Validation Mean Absolute Error: +/- 719 sales, on average
###Markdown
Part 3, ROC AUCFor this part, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failed.The target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to load the data
###Code
import pandas as pd
train_total = pd.read_csv('https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5')
test_total = pd.read_csv('https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a')
assert train_total.shape == (51916, 17)
assert test_total.shape == (17306, 17)
train_total.head()
train = train_total.drop('Violations', axis = 1)
test = test_total.drop('Violations', axis = 1)
y_train = train['Fail']
y_train.head()
X_train = train.drop('Fail', axis = 1)
X_train.head()
###Output
_____no_output_____
###Markdown
3.1. Fit classification model.You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.Fit a model with the train set. Use cross-validation, or use a three-way split (by randomly splitting the train set into train and validation sets).
###Code
# Splitting data into training and validation sets
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, train_size = .8, test_size = .2, stratify = y_train)
# Making pipeline and fitting basic decision tree
!pip install --upgrade category_encoders
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
class_pipeline = make_pipeline(ce.OrdinalEncoder()
,IterativeImputer()
,RandomForestClassifier())
class_pipeline.fit(X_train, y_train)
print('Accuracy score: ', class_pipeline.score(X_val,y_val))
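# A minimal alternative sketch, not in the original: estimate ROC AUC with 3-fold
# cross-validation on the training split instead of a single validation split.
from sklearn.model_selection import cross_val_score
cv_auc = cross_val_score(class_pipeline, X_train, y_train, cv=3, scoring='roc_auc')
print('Cross-validated ROC AUC:', cv_auc.mean())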
###Output
Requirement already up-to-date: category_encoders in /usr/local/lib/python3.6/dist-packages (2.0.0)
###Markdown
3.2. Get ROC AUC score. Use your model to predict probabilities that food establishments failed inspections.Get your Validation ROC AUC score. (Multiple times, if you try multiple iterations.)Get your Test ROC AUC score. (One time, at the end.)
###Code
from sklearn.metrics import roc_auc_score
y_pred_proba = class_pipeline.predict_proba(X_val)[:,1]
print('Validation ROC AUC Score: ', roc_auc_score(y_val, y_pred_proba))
y_test = test['Fail']
X_test = test.drop('Fail', axis = 1)
y_pred_proba = class_pipeline.predict_proba(X_test)[:,1]
print('Test ROC AUC Score: ', roc_auc_score(y_test,y_pred_proba))
###Output
Validation ROC AUC Score: 0.6578621906204172
Test ROC AUC Score: 0.654592626360129
###Markdown
STRETCH GOAL: Find and explain leakageThe dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections.You should be able to get an ROC AUC test score > 0.65 without using the feature with leakage.
###Code
# There is leakage in the 'Violations' feature. If we're trying to predict
# whether a place will pass or fail an inspection beforehand, we wouldn't know
# what violations the inspection found. Using the Violations feature would
# therefore leak information from the future into our model.
# Engineering features to get the violation information
# This is bad practice, just to see how good of a model I could have gotten by 'cheating'
def count_critical_violations(violation):
violation_string = str(violation)
count = violation_string.count('CRITICAL VIOLATION')
return count
def count_serious_violations(violation):
violation_string = str(violation)
count = violation_string.count('SERIOUS VIOLATION')
return count
def count_minor_violations(violation):
violation_string = str(violation)
count = violation_string.count('MINOR VIOLATION')
return count
def get_violations(df):
df = df.copy()
df['critical_violation_count'] = df['Violations'].apply(count_critical_violations)
df['serious_violation_count'] = df['Violations'].apply(count_serious_violations)
df['minor_violation_count'] = df['Violations'].apply(count_minor_violations)
return df
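# A minimal alternative, not from the original notebook: pandas' vectorized
# str.count produces the same three counts without per-row helper functions.
def get_violations_vectorized(df):
    df = df.copy()
    s = df['Violations'].astype(str)
    df['critical_violation_count'] = s.str.count('CRITICAL VIOLATION')
    df['serious_violation_count'] = s.str.count('SERIOUS VIOLATION')
    df['minor_violation_count'] = s.str.count('MINOR VIOLATION')
    return df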
train_total = get_violations(train_total)
train_total.head()
# Splitting train_total data into training and validation sets and getting accuracy score with the engineered violation information
X_train_cheat, X_val_cheat, y_train, y_val = train_test_split(train_total.drop('Fail', axis = 1), train_total['Fail'], train_size = .8, test_size = .2, stratify = train_total['Fail'])
cheat_class_pipeline = make_pipeline(ce.OrdinalEncoder()
,IterativeImputer()
,RandomForestClassifier())
cheat_class_pipeline.fit(X_train_cheat, y_train)
print('Accuracy score: ', cheat_class_pipeline.score(X_val_cheat,y_val))
# getting my cheat ROC AUC score
y_pred_proba = cheat_class_pipeline.predict_proba(X_val_cheat)[:,1]
print('Validation ROC AUC Score: ', roc_auc_score(y_val, y_pred_proba))
test_total = get_violations(test_total)
y_test = test_total['Fail']
X_test_cheat = test_total.drop('Fail', axis = 1)
y_pred_proba = cheat_class_pipeline.predict_proba(X_test_cheat)[:,1]
print('Test ROC AUC Score: ', roc_auc_score(y_test,y_pred_proba))
train_total.Violations[0]
train_total.Violations.describe()
###Output
_____no_output_____
###Markdown
Part 4 4.1. Make _either_ a Partial Dependence Plot _or_ a Shapley Values Force Plot, for either model.Partial Dependence Plot: 1 feature in isolation or 2 features in interaction.Shapley Values Force Plot: explain an individual prediction.
###Code
# getting Shapley plots
row = X_val_stores.iloc[[0]]
!pip install --upgrade shap
import shap
model = pipeline2.named_steps['gradientboostingregressor']
clean_pipeline = make_pipeline(
ce.OrdinalEncoder()
)
clean_pipeline.fit(X_train_stores)
explainer = shap.TreeExplainer(model)
row_processed = clean_pipeline.transform(row)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
shap.force_plot(
base_value = explainer.expected_value,
shap_values = shap_values,
features = row
)
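# Added extension, not in the original: SHAP values for a small sample of rows,
# summarized across features rather than explaining a single prediction.
sample = clean_pipeline.transform(X_val_stores.iloc[:100])
shap.summary_plot(explainer.shap_values(sample), sample)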
###Output
Requirement already up-to-date: shap in /usr/local/lib/python3.6/dist-packages (0.29.3)
###Markdown
STRETCH GOAL: Make _both_ a Partial Dependence Plot _and_ a Shapley Values Force Plot.
###Code
# creating partial dependence plot
!pip install --upgrade pdpbox
from pdpbox import pdp
feature = 'Inspection Type'
model = class_pipeline.named_steps['randomforestclassifier']
class_clean_pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    IterativeImputer()
)
X_train_cleaned = class_clean_pipeline.fit_transform(X_train)
X_train_cleaned = pd.DataFrame(X_train_cleaned)
X_train_cleaned.columns = X_train.columns
# Pull the ordinal encoder's category-to-code mapping for the chosen feature,
# so the PDP x-axis can be labeled with category names instead of integer codes
for item in class_clean_pipeline.named_steps['ordinalencoder'].mapping:
    if item['col'] == feature:
        feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
model_features = X_train.columns
pdp_dist = pdp.pdp_isolate(model = model, dataset = X_train_cleaned, model_features = model_features, feature = feature)
pdp.pdp_plot(pdp_dist,feature);
plt.xticks(category_codes, category_names, rotation = 'vertical')
X_train.columns
###Output
_____no_output_____
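###Markdown
The cell above covers the Partial Dependence Plot half of the stretch goal. Below is a minimal sketch of the Shapley Values Force Plot half, assuming the `model`, `class_clean_pipeline`, and `X_train_cleaned` objects defined in that cell; for a scikit-learn `RandomForestClassifier`, `shap.TreeExplainer` returns one set of SHAP values per class, so index 1 picks the positive class.
###Code
# Hedged sketch: explain one (arbitrary) training row with a SHAP force plot
import shap
row = X_train_cleaned.iloc[[0]]
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[1],  # index 1 = the positive class of this classifier
    shap_values=shap_values[1],
    features=row
)
###Output
_____no_output_____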
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
#Global Imports
import shap
import eli5
import category_encoders as ce
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline  # Pipeline isn't needed here; make_pipeline builds the pipelines below
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from eli5.sklearn import PermutationImportance
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make all four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
#'Violations' is a probable candidate for leakage: its contents aren't known until after the inspection
#First step: gain a basic understanding of the dataset
train.head(5)
#Share of inspections that failed, a naive baseline to compare the model against
majority_class_baseline = train.Fail.sum()/len(train)
majority_class_baseline
#Separate target from features
y_train = train.Fail
X_train = train.drop(columns=['Fail'])
y_test = test.Fail
X_test = test.drop(columns=['Fail'])
numeric = train.select_dtypes(include= "number").columns
categorical = train.select_dtypes(exclude = "number").columns
categorical
X_train[categorical].nunique()
def wrangle(X):
    #Keep a small set of categorical features and cast them to pandas Categorical dtype
X = X.copy()
features = ['Facility Type', 'Risk', 'City', 'Inspection Type']
X = X[features]
for feature in features:
X[feature] = pd.Categorical(X[feature])
return X
X_train = wrangle(X_train[categorical])
X_test = wrangle(X_test[categorical])
#debug
X_train.head()
#Train/Val split on train, since we already have a test set
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.33, random_state=42)
processor = make_pipeline(
SimpleImputer(strategy='most_frequent'),
ce.OneHotEncoder(use_cat_names=True)
)
X_train_processed = processor.fit_transform(X_train)
X_test_processed = processor.transform(X_test)
X_val_processed = processor.transform(X_val)
#processor.named_steps
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
#Fit XGBoost on X_train
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
#Initialize model
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
#Fit model
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
y_val_pred_proba = model.predict_proba(X_val_processed)[:, 1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_val_pred_proba))
#Score the fitted model on the held-out test set
y_pred_proba = model.predict_proba(X_test_processed)[:, 1]
print('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))
###Output
Test ROC AUC: 0.6838040029624484
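###Markdown
Because the model above was fit with an `eval_set` and `eval_metric='auc'`, its `evals_result()` method holds the AUC recorded at every boosting round. A minimal sketch (assuming the fitted `model` and matplotlib) that plots the train vs. validation curves and shows where early stopping kicked in:
###Code
# Hedged sketch: plot the per-round AUC captured during training
import matplotlib.pyplot as plt
results = model.evals_result()
rounds = range(len(results['validation_0']['auc']))
plt.plot(rounds, results['validation_0']['auc'], label='train AUC')
plt.plot(rounds, results['validation_1']['auc'], label='validation AUC')
plt.xlabel('boosting round')
plt.ylabel('AUC')
plt.legend();
###Output
_____no_output_____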
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=2,
random_state=42
)
permuter.fit(X_val_processed, y_val)
feature_names = X_val_processed.columns.tolist()
eli5.show_weights(
permuter,
top=None,
feature_names = feature_names
)
#Choose a row to explain the prediction for
row = X_train.iloc[[1]]
row
explainer = shap.TreeExplainer(model)
row_process = processor.transform(row)
shap_values = explainer.shap_values(row_process)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row_process
)
###Output
_____no_output_____
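###Markdown
The cell above covers Permutation Importances and a Shapley force plot. A two-feature Partial Dependence interaction is another of the listed types; here is a minimal sketch assuming the fitted `model` and the one-hot-encoded `X_val_processed` from the earlier cells (the encoded column names depend on `use_cat_names`, so this simply takes the first two):
###Code
# Hedged sketch: 2-feature PDP interaction on the encoded validation set
from pdpbox.pdp import pdp_interact, pdp_interact_plot
two_features = X_val_processed.columns.tolist()[:2]  # any pair of encoded columns; picking named ones gives a clearer plot
interaction = pdp_interact(
    model=model,
    dataset=X_val_processed,
    model_features=X_val_processed.columns.tolist(),
    features=two_features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=two_features);
###Output
_____no_output_____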
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make all four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
train.head()
# Check for nulls
train.isnull().sum()
# Explore features
for col in sorted(train.columns):
if train[col].nunique() < 12:
sns.catplot(x=col, y='Fail', data = train, kind = 'bar', color = 'grey')
plt.show()
# Wrangle the data for train and test
def engineer_features(X):
    # Convert 'Inspection Date' to datetime
    X['Inspection Date'] = pd.to_datetime(X['Inspection Date'], infer_datetime_format=True)
    # Extract date components from 'Inspection Date', then drop it along with high-cardinality or redundant columns
X['year_inspection'] = X['Inspection Date'].dt.year
X['month_inspection'] = X['Inspection Date'].dt.month
X['day_inspection'] = X['Inspection Date'].dt.day
X = X.drop(columns='Inspection Date')
X = X.drop(columns='AKA Name')
X = X.drop(columns='Location')
X = X.drop(columns='City')
X = X.drop(columns='State')
return X
train = engineer_features(train)
test = engineer_features(test)
print(train.shape)
train.head()
###Output
(51916, 15)
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
# Split training data into training and validation sets 80/20
train, val = train_test_split(train, train_size=0.80, test_size=0.20, random_state=42)
print(train.shape, val.shape, test.shape)
# Encode and fit a Random Forest Model - Optimization done at end and value used here
target = 'Fail'
features = train.columns.drop(target)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, max_depth=2, random_state=42, verbose=1)
)
# Get validation score
pipeline.fit(X_train, y_train)
print ('Validation Accuracy', pipeline.score(X_val, y_val))
from sklearn.metrics import roc_auc_score
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
# Plot ROC curve
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val==1, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
y_pred_proba = pipeline.predict_proba(X_test)[:, 1]
roc_auc_score(y_test, y_pred_proba)
# from sklearn.ensemble import RandomForestRegressor
# from sklearn.model_selection import RandomizedSearchCV
# # Number of trees in random forest
# n_estimators = [int(x) for x in np.linspace(start = 200, stop = 500, num = 10)]
# # Number of features to consider at every split
# max_features = ['auto', 'sqrt']
# # Maximum number of levels in tree
# max_depth = [int(x) for x in np.linspace(10, 50, num = 11)]
# max_depth.append(None)
# # Minimum number of samples required to split a node
# min_samples_split = [2, 5, 10]
# # Minimum number of samples required at each leaf node
# min_samples_leaf = [1, 2, 4]
# # Method of selecting samples for training each tree
# bootstrap = [True, False]
# # Create the random grid
# random_grid = {'n_estimators': n_estimators,
# 'max_features': max_features,
# 'max_depth': max_depth,
# 'min_samples_split': min_samples_split,
# 'min_samples_leaf': min_samples_leaf,
# 'bootstrap': bootstrap}
# print(random_grid)
# pipeline = make_pipeline (
# ce.OrdinalEncoder(),
# SimpleImputer(strategy='mean'),
# RandomizedSearchCV(estimator = RandomForestRegressor(),
# param_distributions = random_grid,
# n_iter = 5,
# verbose=2,
# random_state=42,
# n_jobs = -1)
# )
# pipeline.fit(X_train, y_train)
# pd.set_option('display.max_rows', 200)
# model = pipeline.named_steps['randomizedsearchcv']
# best = pd.Series(model.best_params_)
# print(best)
###Output
_____no_output_____
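###Markdown
The 80/20 split above is the three-way-split option mentioned in the challenge; cross-validation is the other. A minimal sketch (assuming the `pipeline`, `X_train`, and `y_train` from the cell above) that estimates ROC AUC with 5-fold cross-validation:
###Code
# Hedged sketch: cross-validated ROC AUC as an alternative to the single validation split
from sklearn.model_selection import cross_val_score
cv_auc = cross_val_score(pipeline, X_train, y_train, scoring='roc_auc', cv=5, n_jobs=-1)
print('Cross-validated ROC AUC:', cv_auc.mean(), '+/-', cv_auc.std())
###Output
_____no_output_____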
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
import eli5
from eli5.sklearn import PermutationImportance
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)  # transform only; refitting on the validation set would re-learn different encodings
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=2,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
eli5.show_weights(
permuter,
top=None,
feature_names = feature_names
)
import shap
row = X_test.iloc[[1]]
explainer = shap.TreeExplainer(model)
row_process = transformers.transform(row)  # the fitted preprocessing pipeline here is named `transformers`
shap_values = explainer.shap_values(row_process)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[0],  # index 0 explains the "pass" class; use index 1 for "Fail"
    shap_values=shap_values[0],
features=row
)
###Output
_____no_output_____
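###Markdown
The cell above covers Permutation Importances and a Shapley force plot for this model; a 1-feature Partial Dependence Plot is a third of the listed types. A minimal sketch assuming the fitted `model`, the `transformers` pipeline, and `X_val` from the cells above (pdpbox expects a DataFrame with named columns, so the transformed array is wrapped back into one):
###Code
# Hedged sketch: 1-feature PDP for the engineered year_inspection column
import pandas as pd
from pdpbox.pdp import pdp_isolate, pdp_plot
X_val_df = pd.DataFrame(transformers.transform(X_val), columns=X_val.columns)
isolated = pdp_isolate(
    model=model,
    dataset=X_val_df,
    model_features=X_val.columns.tolist(),
    feature='year_inspection'
)
pdp_plot(isolated, feature_name='year_inspection');
###Output
_____no_output_____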
###Markdown
Shap Plot
###Code
import shap
#GOING TO PUT THE TWO TOGETHER LATER
#This currently only works for the train set explanations:
def shap_plot(df, row_number, pass_or_fail):
    row = df.iloc[[row_number]]
    row_processed = processor.transform(row)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(row_processed)
    shap.initjs()
    column = pass_or_fail
    prediction = 'Pass' if column == 0 else 'Fail'
    print(f"This Shap plot shows the probability of {prediction}")
    return shap.force_plot(
        base_value = explainer.expected_value[column],
        shap_values = shap_values[column],  # shap returns values for both classes; pick the requested one
        features = row,
        link='logit'
    )
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X_val_processed[0])
# shap_values
shap_plot(X_val,250,1)
#First plot told me that inspection id seems to have a pretty big impact on the model
#I'm going to get rid of the column
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.head()
train['Violations_list'] = train['Violations'].astype(str).apply(lambda x : x.split('|'))
#Return True if any recorded violation in the list is flagged as critical/serious
def check_seri_crit(l):
for x in l:
if('CRITICAL VIOLATION' in x or
'SERIOUS VIOLATION' in x or
'SERIOUS' in x or
'PRIORITY FOUNDATION VIOLATION' in x):
return True
return False
train['Crit_Violations'] = [check_seri_crit(x) for x in train['Violations_list']]
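# Quick check to support the leakage discussion in the markdown cell below (a sketch;
# assumes 'Fail' is the 0/1 target): if this critical/serious-violation flag alone
# nearly separates passes from failures, then 'Violations' leaks the inspection outcome.
from sklearn.metrics import roc_auc_score
print(pd.crosstab(train['Crit_Violations'], train['Fail'], normalize='index'))
print('AUC using only Crit_Violations:', roc_auc_score(train['Fail'], train['Crit_Violations'].astype(int)))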
###Output
_____no_output_____
###Markdown
The `Violations` column contains information that can be used to determine the `Fail` column (the target). If a row contains 'Critical violation', 'Serious violation', 'Serious', or 'Priority Foundation Violation' in the `Violations` column, it will fail, which makes `Violations` a highly accurate proxy for `Fail`. Including it in the training features would therefore be an information leak. The `Violations` feature shouldn't be used in a real-world model to predict future inspections: the violation text is only recorded during the inspection itself, so it isn't available at prediction time, and a model trained with it would rely almost entirely on that one feature.
###Code
features = ['Facility Type', 'Inspection Type', 'Latitude', 'Longitude', 'Risk']
target = 'Fail'
import category_encoders as ce
from sklearn.impute import SimpleImputer
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
X_train
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
XGBClassifier(
n_estimators=200,
max_depth=5,
learning_rate=0.5,
n_jobs=-1,
random_state=0,
verbosity= 1,
verbose = True)
)
pipeline.fit(X_train, y_train)
scores = cross_val_score(pipeline, X_train, y_train, scoring='accuracy', cv=5)  # the challenge metric is ROC AUC; scoring='roc_auc' would match the score reported below
scores
from sklearn.metrics import roc_auc_score
y_pred_proba = pipeline.predict_proba(X_test)[:, -1]
print('AUC_ROC score:\t'+str(roc_auc_score(y_test, y_pred_proba)))
###Output
AUC_ROC score: 0.7093273753466314
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
# Get feature importances
xgb = pipeline.named_steps['xgbclassifier']
importances = pd.Series(xgb.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
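# The bars above are XGBoost's built-in (gain-based) importances. Permutation importance,
# one of the listed visualization types, measures importance differently. A hedged sketch,
# assuming the fitted `xgb` model and the ordinal-encoded X_test / y_test from earlier cells:
import eli5
from eli5.sklearn import PermutationImportance
from IPython.display import display
permuter = PermutationImportance(xgb, scoring='roc_auc', n_iter=2, random_state=0)
permuter.fit(X_test, y_test)
display(eli5.show_weights(permuter, top=None, feature_names=X_test.columns.tolist()))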
from pdpbox.pdp import pdp_isolate, pdp_plot
features_0 = 'Inspection Type'
isolated = pdp_isolate(
model=pipeline,
dataset=X_train,
model_features=X_train.columns,
feature=features_0
)
pdp_plot(isolated, feature_name=features_0)
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features_1 = ['Inspection Type', 'Facility Type']
interact = pdp_interact(
model=pipeline,
dataset=X_train,
model_features=X_train.columns,
features=features_1
)
pdp_interact_plot(interact, plot_type='grid', feature_names=features_1)
import shap
row = X_test.iloc[0]
model = pipeline.named_steps['xgbclassifier']
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row,
link='logit' # For classification, this shows predicted probabilities
)
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import seaborn as sns
y_pred = pipeline.predict(X_test)
labels = unique_labels(y_test)
columns =[f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
df_cm = pd.DataFrame(confusion_matrix(y_test, y_pred), columns = columns, index = index)
sns.heatmap(df_cm, annot=True, fmt='d')  # fmt='d' keeps cell annotations as integer counts rather than scientific notation
# Leftover helper from a different (pet-adoption) project; the `thepipeline` it references is not defined in this notebook
def predict(animal_type, gender, fixed_Fixed,
age, color, breed, season_arrived):
df= pd.DataFrame(
columns=['animal_type','gender','fixed_fixed', 'age', 'color', 'breed', 'season_arrived'],
data=[[animal_type, gender, fixed_Fixed, age, color, breed, season_arrived]]
)
y_pred=thepipeline.predict(df)[0]
if y_pred == 'Adopted':
y_pred_proba = thepipeline.predict_proba(df)[0][0]
return f'{y_pred_proba*100:.0f}% chance of {y_pred}'
else:
y_pred_proba = thepipeline.predict_proba(df)[0][1]
return f'{y_pred_proba*100:.0f}% chance of {y_pred}'
x = 1
if x == 1:
print("a")
###Output
a
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
Requirement already satisfied: imageio>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (2.4.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->shap) (2.3)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.7.0)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (2.1.3)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (0.7.5)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (0.8.1)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.3.3)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (4.4.0)
Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (41.2.0)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython->shap) (1.0.18)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib->shap) (1.12.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.3.0->scikit-image->shap) (0.46)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython->shap) (0.6.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython->shap) (0.2.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython->shap) (0.1.7)
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make all four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
# first look
train.head()
# create function to convert to DT
def to_DT(df):
# prevent warning
df = df.copy()
# convert to date-time
df['Inspection Date'] = pd.to_datetime(df['Inspection Date'], infer_datetime_format=True)
return df
# apply to_DT
train = to_DT(train)
test = to_DT(test)
# see train
train.head()
def wrangle(df):
# prevent warning
df = df.copy()
# Drop some columns
df = df.drop(columns='State') # Constant
# Extract components from date_recorded, then drop the original column
df['year_recorded'] = df['Inspection Date'].dt.year
df['month_recorded'] = df['Inspection Date'].dt.month
df['day_recorded'] = df['Inspection Date'].dt.day
df = df.drop(columns='Inspection Date')
return df
# wrangle train and test
train = wrangle(train)
test = wrangle(test)
train.shape, test.shape
# I think some inspections resulted in multiple violations
train['Violations'].value_counts().head()
# move each violation into a separate column
split = train['Violations'].str.split(pat = "|", expand=True)
split.head()
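# Hedged sketch (my own addition, not used in the model below): the number of violations per
# inspection could be engineered by counting the "|" separators. Note that 'Violations' is
# written up as part of the inspection outcome, so any feature built from it risks leaking
# the target.
violation_count = train['Violations'].str.count(r'\|') + 1
violation_count.describe()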
###Output
_____no_output_____
###Markdown
I don't really have time to see this through, but I would like to engineer a feature counting violations. I am also sure that some violations result in an automatic fail. I just looked up the documentation, and it looks like there are 45 different possible violations: violations 1-14 are critical and 15-29 are serious.
###Code
from sklearn.model_selection import train_test_split
# 80/20 train test split
train, val = train_test_split(train, test_size=.20, stratify=train['Fail'],
random_state=11)
# confirm size
train.shape, val.shape
# create target
target = 'Fail'
# create X_features matrix and y_target vector for train
X_train = train.drop(columns=target)
y_train = train[target]
# create X_features matrix and y_target vector for val
X_val = val.drop(columns=target)
y_val = val[target]
# create X_features matrix and y_target vector for test
X_test = test.drop(columns=target)
y_test = test[target]
# imports for pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
# Make pipeline!
RF = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(random_state=11, n_jobs=-1)
)
# Fit on train, score on val
RF.fit(X_train, y_train)
y_pred = RF.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.7317026194144838
###Markdown
I expected to see leakage here, but with an accuracy that low I don't see where the leakage would come from.
###Code
from sklearn.metrics import roc_auc_score
# base ROC Score
y_pred_proba = RF.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
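# Hedged leakage hunt (my own addition): scoring each feature on its own is a quick, rough way
# to look for the leaky column the assignment hints at. A single feature that scores very high
# by itself (the 'Violations' text is the usual suspect once it's parsed) shouldn't be trusted
# for predicting future inspections.
single_feature_auc = {}
for col in X_train.columns:
    one_col_pipe = make_pipeline(
        ce.OrdinalEncoder(),
        SimpleImputer(strategy='mean'),
        RandomForestClassifier(n_estimators=50, random_state=11, n_jobs=-1)
    )
    one_col_pipe.fit(X_train[[col]], y_train)
    proba = one_col_pipe.predict_proba(X_val[[col]])[:, 1]
    single_feature_auc[col] = roc_auc_score(y_val, proba)
sorted(single_feature_auc.items(), key=lambda kv: kv[1], reverse=True)[:5]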
###Output
_____no_output_____
###Markdown
Again, since I have not excluded any features, I was expecting the ROC AUC to be above 0.90 due to leakage. Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
# Hyperparameter tuning
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
# make pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=11)
)
# set parameter ranges
param_distributions = {
'simpleimputer__strategy': ['mean', 'median'],
'randomforestclassifier__n_estimators': randint(50, 500),
'randomforestclassifier__max_depth': [5, 10, 15, 20, None],
'randomforestclassifier__max_features': uniform(0, 1),
}
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=10,
cv=3,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Accuracy', search.best_score_)
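# Hedged aside (my own addition): RandomizedSearchCV refits the winning pipeline on the full
# training data by default, so the tuned model can also be pulled out directly instead of
# re-typing the best hyperparameters in the next step.
best_pipeline = search.best_estimator_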
# run on test
# Make pipeline!
hyper_RF = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(max_depth=10, max_features=0.3278342663656679,
n_estimators=426, random_state=11, n_jobs=-1)
)
# Fit on train, score on val
hyper_RF.fit(X_train, y_train)
y_pred = hyper_RF.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.7437403697996918
###Markdown
Not sure why this doesn't match the hyperparameter search score. Most likely `search.best_score_` is the mean cross-validation accuracy on the training folds, while this number is accuracy on the held-out validation split.
###Code
# run on test score
y_pred = hyper_RF.predict(X_test)
print('Test Accuracy', accuracy_score(y_test, y_pred))
# Test ROC Score
y_pred_proba = hyper_RF.predict_proba(X_test)[:, 1]
AUC_ROC = roc_auc_score(y_test, y_pred_proba)
print('ROC AUC Score: ', AUC_ROC)
###Output
ROC AUC Score: 0.6153379842551363
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
# Get feature importances
rand_f = RF.named_steps['randomforestclassifier']
importances = pd.Series(rand_f.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 15
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
# transform and set model for eli5
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean')
)
# apply transformation pipeline for X_train and X_val
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.fit_transform(X_val)
model = RandomForestClassifier(max_depth=10, max_features=0.3278342663656679, n_estimators=426, random_state=11, n_jobs=-1)
model.fit(X_train_transformed, y_train)
import eli5
from eli5.sklearn import PermutationImportance
# instantiate permuter
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=2,
random_state=11
)
# fit permuter
permuter.fit(X_val_transformed, y_val)
feature_names = X_val.columns.tolist()
# show weights
eli5.show_weights(
permuter,
top=None,
feature_names = feature_names
)
# Remove features with zero or less feature importance
minimum_importance = 0
mask = permuter.feature_importances_ > minimum_importance
features = X_train.columns[mask]
X_train = X_train[features]
X_train.shape
# apply to validation as well
X_val = X_val[features]
X_val.shape
# build new pipeline and use features without 0 weight features
RF_reformed = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(max_depth=10,max_features=0.3278342663656679,
n_estimators=426, random_state=11, n_jobs=-1)
)
# score
RF_reformed.fit(X_train, y_train)
print ('Validation Accuracy', RF_reformed.score(X_val, y_val))
# rerun ROC score
y_pred_proba = RF_reformed.predict_proba(X_val)[:, 1]
AUC_ROC = roc_auc_score(y_val, y_pred_proba)
print('ROC AUC Score: ', AUC_ROC)
###Output
ROC AUC Score: 0.61748520872502
###Markdown
Hmm, it is a little worse than before dropping the zero-importance features.
###Code
# single feature partial dependency plot
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'License #'
isolated = pdp_isolate(
model = RF_reformed,
dataset = X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature);
# multifeature partial dependency plots
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features = ['License #', 'Inspection ID']
interaction = pdp_interact(
model=RF_reformed,
dataset=X_val,
model_features=X_val.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
train.head(35)
n = train["Violations"].str.split(" - ", n = 1, expand = True)
train.drop(columns =["Violations"], inplace = True)
train['Violations'] = n[0]
train['Violations'].value_counts()
s = train['Violations'].str.split("|", n = 1, expand = True)
train['Violations'] = s[0]
train.head(1)
n = test["Violations"].str.split(" - ", n = 1, expand = True)
test.drop(columns =["Violations"], inplace = True)
test['Violations'] = n[0]
test['Facility Type'].value_counts()
s = test['Violations'].str.split("|", n = 1, expand = True)
test['Violations'] = s[0]
train.head(1)
target = 'Fail'
features = ['Facility Type', 'Risk', 'Inspection Type', 'Violations']
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['Fail'], random_state=42)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer()
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
X_val_transformed = pd.DataFrame(X_val_transformed, columns=X_val.columns)
rf = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
rf.fit(X_train_transformed, y_train)
print('Validation Accuracy', rf.score(X_val_transformed, y_val))
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
from sklearn.metrics import roc_auc_score
X_test_processed = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_processed)[:, class_index]
print(f'Test ROC AUC')
print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better
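# Hedged follow-up (my own addition): the near-perfect test score above leans on 'Violations',
# which is written up as part of the inspection outcome itself, so a model that uses it is
# effectively reading the answer. That is the leakage this challenge asks about. Re-scoring
# without it gives a more realistic estimate for predicting future inspections.
features_no_leak = [f for f in features if f != 'Violations']
processor_nl = make_pipeline(ce.OrdinalEncoder(), SimpleImputer(strategy='median'))
X_train_nl = processor_nl.fit_transform(X_train[features_no_leak])
X_val_nl = processor_nl.transform(X_val[features_no_leak])
model_nl = XGBClassifier(n_estimators=1000, n_jobs=-1)
model_nl.fit(X_train_nl, y_train, eval_set=[(X_train_nl, y_train), (X_val_nl, y_val)],
             eval_metric='auc', early_stopping_rounds=10, verbose=False)
X_test_nl = processor_nl.transform(X_test[features_no_leak])
print('Test ROC AUC without Violations:',
      roc_auc_score(y_test, model_nl.predict_proba(X_test_nl)[:, class_index]))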
###Output
Test ROC AUC
0.9896730291381426
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
%matplotlib inline
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='Risk'
encoder = transformers.named_steps['ordinalencoder']
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
isolated = pdp_isolate(
model=rf,
dataset=X_val_transformed,
model_features=X_val.columns,
feature=feature,
cust_grid_points=category_codes
)
fig, axes = pdp_plot(isolated, feature_name=feature,
plot_lines=True, frac_to_plot=0.01)
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features = ['Risk', 'Inspection Type']
years_grid = [0, 5, 10, 15, 20, 25, 30]
interaction = pdp_interact(
model=rf,
dataset=X_val_transformed,
model_features=X_val.columns,
features=features,
cust_grid_points=[category_codes, years_grid]
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
from sklearn.metrics import accuracy_score
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import seaborn as sns
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
train = train.dropna()
test = test.dropna()
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
import pandas_profiling
profile_report = train.profile_report(
check_correlation_pearson=False,
correlations={
'pearson': False,
'spearman': False,
'kendall': False,
'phi_k': False,
'cramers': False,
'recoded': False,
},
plot={'histogram': {'bayesian_blocks_bins': False}},
)
profile_report
train.dropna()
test.dropna()
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, test_size=0.2)
train.shape, val.shape, test.shape
# reset_index returns a new frame, so assign it back
train = train.reset_index(drop=True)
test = test.reset_index(drop=True)
val = val.reset_index(drop=True)
val.head()
# The status_group column is the target
target = 'Fail'
# Get a dataframe with all train columns except the target & id
#train_features = train.drop(columns=[target, 'Address', 'AKA_Name', 'DBA_Name', 'Inspection_Date', 'License_#' ], axis=1)
train_features = train.drop(columns=[target, 'Address'], axis=1)
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
print(features)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
# Try Random Forest
from sklearn.ensemble import RandomForestClassifier
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators = 500, random_state = 42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
#Improve accuracy of RF model using XGBoost
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=500, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
#ROC AUC score
from sklearn.metrics import roc_auc_score
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=500, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
X_test_processed = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_processed)[:, class_index]
print(f'Test ROC AUC for class {class_index}:')
print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better
###Output
_____no_output_____
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
#Visualization - 1
import eli5
from eli5.sklearn import PermutationImportance
# Random Forest outside of Pipeline for ELI5
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = transformers.fit_transform(X_train)
X_val_transformed = transformers.transform(X_val)
model = RandomForestClassifier(n_estimators=500, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
# Permuter
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
#visualize
feature_names = X_val.columns.tolist()
eli5.show_weights(
permuter,
top=None, # show permutation importances for all features
feature_names=feature_names
)
#To improve the scores, remove columns not important as shown by Permuter
minimum_importance = 0
mask = permuter.feature_importances_ > minimum_importance
features = X_train.columns[mask]
X_train = X_train[features]
X_test = X_test[features]
X_val = X_val[features]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=500, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
#Improve accuracy of RF model using XGBoost (after removing features)
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=500, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
#ROC AUC score (after removing features)
from sklearn.metrics import roc_auc_score
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=500, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
X_test_processed = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_processed)[:, class_index]
print(f'Test ROC AUC for class {class_index}:')
print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better
#Visualization 2
from pdpbox.pdp import pdp_interact, pdp_interact_plot
# Model, outside of a pipeline
train = train.dropna()
test = test.dropna()
X = train[features]
y = train[target]
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=500, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
#PDP with one feature
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'Inspection Type'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
pdp.pdp_plot(pdp_dist, feature)
# Automatically change the xticks labels
plt.xticks(category_codes, category_names);
#PDP with two features
features = ['Inspection Type', 'Inspection ID']
interaction = pdp_interact(
model=model,
dataset=X_encoded,
model_features=X_encoded.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
import seaborn as sns
pdp = interaction.pdp.pivot_table(
values='preds',
columns=features[0], # First feature on x axis
index=features[1] # Next feature on y axis
)[::-1] # Reverse the index order so y axis is ascending
pdp = pdp.rename(columns=dict(zip(category_codes, category_names)))
plt.figure(figsize=(10,8))
sns.heatmap(pdp, annot=True, fmt='.2f', cmap='viridis')
plt.title('Partial Dependence Inspection, based on Insp. Type & Insp. ID');
X_test.tail(20)
# Shaply Plots
import shap
row = X_test.iloc[[17296]]
explainer = shap.TreeExplainer(model)
row_processed = processor.transform(row)
shap_values = explainer.shap_values(row_processed)
shap_values
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value[0],
shap_values=shap_values[0],
features=row,
#link='logit' # For classification, this shows predicted probabilities
)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge To demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.To earn a score of "3", also do all the stretch goals.You are permitted and encouraged to do as much data exploration as you want. Part 1, Confusion Matrix- 1.1. Calculate accuracy- 1.2. Calculate precision- 1.3. Calculate recall Part 2, Log Transformation- 2.1. Log-transform the target- 2.2. Plot the target's distribution, before and after the transformation Part 3, ROC AUC- 3.1. Fit classification model- 3.2. Get ROC AUC score Part 4, Model interpretation visualizations- 4.1. Make _either_ a Partial Dependence Plot _or_ a Shapley Values Force Plot, for either model. Stretch Goals- Get a lower validation error than the example regression model provided in Part 2.- Find and explain leakage in the classification problem.- Make _both_ a Partial Dependence Plot _and_ a Shapley Values Force Plot.
###Code
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install packages in Colab
!pip install --upgrade category_encoders eli5 pandas-profiling pdpbox plotly shap
# Check category_encoders version
import category_encoders as ce
from distutils.version import StrictVersion
assert StrictVersion(ce.__version__) >= StrictVersion('2.0.0')
###Output
_____no_output_____
###Markdown
Part 1, Confusion MatrixImagine this is the confusion matrix for a binary classification model. Use the confusion matrix to calculate the model's accuracy, precision, and recall.

|                     | Predicted Negative | Predicted Positive |
|---------------------|--------------------|--------------------|
| **Actual Negative** | 85                 | 58                 |
| **Actual Positive** | 8                  | 36                 |

 1.1. Calculate accuracy
###Code
TP = 36
TN = 85
Total = 85 + 58 + 8 + 36
accuracy = (TP+TN)/Total
print('The accuracy of the confusion matrix is', accuracy)
###Output
_____no_output_____
###Markdown
1.2. Calculate precision
###Code
FP = 58
precision = TP/(TP + FP)  # precision = true positives / all predicted positives
print('The precision of the confusion matrix is', precision)
###Output
_____no_output_____
###Markdown
1.3. Calculate recall
###Code
tot_pos = 8 + 36
recall = TP/tot_pos
print('The recall of the confusion matrix is', recall)
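# Hedged cross-check (my own addition): reconstructing the matrix as label arrays lets
# scikit-learn confirm the hand calculations above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
y_true = np.array([0] * (85 + 58) + [1] * (8 + 36))
y_hat = np.array([0] * 85 + [1] * 58 + [0] * 8 + [1] * 36)
print('accuracy :', accuracy_score(y_true, y_hat))
print('precision:', precision_score(y_true, y_hat))
print('recall   :', recall_score(y_true, y_hat))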
###Output
_____no_output_____
###Markdown
Part 2, Log TransformationThis part uses real-world sales data from a German drugstore chain, from Jan 2, 2013 — July 31, 2015.There are three dataframes:- **train**: historical sales data for 120 stores- **val**: historical sales data for 40 different stores- **test**: historical sales data for 40 different stores Run this cell to load the data
###Code
import pandas as pd
train = pd.read_csv('https://drive.google.com/uc?export=download&id=1YWiyOhY_BiECf-vO8_KrknsHd75HqTfs')
val = pd.read_csv('https://drive.google.com/uc?export=download&id=1Azi1KBv63GdzEn2M0x3eYRvekSaLFnPt')
test = pd.read_csv('https://drive.google.com/uc?export=download&id=1Ab7mg_Vt_bRL7ObiTLPLHbU3sTiVPzc3')
assert train.shape == (94080, 18)
assert val.shape == (31360, 18)
assert test.shape == (31360, 18)
###Output
_____no_output_____
###Markdown
2.1. Log-transform the target, for the train, validation, and test sets.
###Code
target = 'Sales'
y_train = train[target]
y_val = val[target]
y_test = test[target]
# Complete this code cell
import numpy as np
y_train_log = np.log1p(y_train)
y_val_log = np.log1p(y_val)
y_test_log = np.log1p(y_test)
###Output
_____no_output_____
###Markdown
2.2. Plot the distribution of the train set target, before and after the transformation.
###Code
# Here's the before
import seaborn as sns
sns.distplot(train[target]);
# Here's the after
train['log'] = y_train_log
sns.distplot(train['log']);
###Output
_____no_output_____
###Markdown
STRETCH GOAL: Get a lower validation error than this example regression model Can you improve on this validation error? Make any changes and use any tools or techniques you want.Data Dictionary:- **Store** - a unique Id for each store- **Year**, **Month**, **Day**, **DayOfWeek** - The date, from Jan 2, 2013 — July 31, 2015.- **Sales** - the units of inventory sold on a given date (this is the target)- **Customers** - the number of customers on a given date- **Promo** - indicates whether a store is running a promo on that day- **SchoolHoliday** - indicates the closure of public schools- **StoreType** - differentiates between 4 different store models: a, b, c, d- **Assortment** - describes an assortment level: a = basic, b = extra, c = extended- **CompetitionDistance** - distance in meters to the nearest competitor store- **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened- **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating- **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2- **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store(The train, validation, and test sets do _not_ have different date ranges. But they _do_ have different store ids. This problem is _not_ about forecasting future sales from past sales. This problem is about predicting sales at unknown stores, from sales at known stores.)
###Code
import category_encoders as ce
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
# Assign to X matrix
features = train.columns.drop([target,'log'])
X_train = train[features]
X_val = val[features]
X_test = test[features]
# Define a pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
RandomForestRegressor(n_estimators=10, random_state=42, n_jobs=-1)
)
# Fit on train set, with log-transformed target
pipeline.fit(X_train, y_train_log)
# Predict for validation set
y_pred_log = pipeline.predict(X_val)
# Convert prediction's units, from log-sales to sales
y_pred = np.expm1(y_pred_log)
# Get validation mean absolute error
mae = mean_absolute_error(y_val, y_pred)
print(f'Validation Mean Absolute Error: +/− {mae:.0f} sales, on average')
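# Hedged sketch (my own addition, not the change the author actually made): one simple lever
# for lowering the validation error is more trees in the forest; the value below is illustrative.
pipeline_more_trees = make_pipeline(
    ce.OrdinalEncoder(),
    RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline_more_trees.fit(X_train, y_train_log)
mae_more_trees = mean_absolute_error(y_val, np.expm1(pipeline_more_trees.predict(X_val)))
print(f'Validation Mean Absolute Error with 100 trees: +/− {mae_more_trees:.0f} sales')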
###Output
_____no_output_____
###Markdown
BeforeVMAE: +/- 861 sales AfterVMAE: +/- 839 sales Part 3, ROC AUCFor this part, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failed.The target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to load the data
###Code
import pandas as pd
train = pd.read_csv('https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5')
test = pd.read_csv('https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a')
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
3.1. Fit classification model.You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.Fit a model with the train set. Use cross-validation, or use a three-way split (by randomly splitting the train set into train and validation sets).
###Code
from sklearn.model_selection import train_test_split
train, val = train_test_split(train,train_size=0.8,test_size=0.2,
stratify=train['Fail'], random_state = 42)
train.describe()
train.describe(exclude='number')
train.isna().sum()
test.isna().sum()
drop = ['Fail','AKA Name','Violations','Facility Type']
X_train = train.drop(columns=drop)
y_train = train['Fail']
X_val = val.drop(columns=drop)
y_val = val['Fail']
X_test = test.drop(columns=drop)
y_test = test['Fail']
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
import category_encoders as ce
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(n_estimators=100,random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train);
print('Validation score:', pipeline.score(X_val,y_val))
###Output
Validation score: 0.75115562403698
###Markdown
3.2. Get ROC AUC score. Use your model to predict probabilities that food establishments failed inspections.Get your Validation ROC AUC score. (Multiple times, if you try multiple iterations.)Get your Test ROC AUC score. (One time, at the end.)
###Code
from sklearn.metrics import roc_auc_score
y_val_pred_proba = pipeline.predict_proba(X_val)[:,1]
roc_auc_score(y_val, y_val_pred_proba)
y_test_pred_proba = pipeline.predict_proba(X_test)[:,1]
roc_auc_score(y_test, y_test_pred_proba)
###Output
_____no_output_____
###Markdown
STRETCH GOAL: Find and explain leakageThe dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections.You should be able to get an ROC AUC test score > 0.65 without using the feature with leakage. Is it the `Fail` column? You shouldn't use it when predicting future inspections because that's the target we're trying to predict. Part 4 4.1. Make _either_ a Partial Dependence Plot _or_ a Shapley Values Force Plot, for either model.Partial Dependence Plot: 1 feature in isolation or 2 features in interaction.Shapley Values Force Plot: explain an individual prediction.
###Code
from xgboost import XGBClassifier
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer()
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
X_test_random = X_test.sample(n=1)
print(X_test_random)
row = X_test.iloc[[15352]]
y_test.iloc[[15352]]
import shap
explainer = shap.TreeExplainer(model)
row_processed = processor.transform(row)
shap_values = explainer.shap_values(row_processed)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row
)
###Output
_____no_output_____
###Markdown
STRETCH GOAL: Make _both_ a Partial Dependence Plot _and_ a Shapley Values Force Plot.
###Code
from xgboost import XGBRegressor
new_pipe = make_pipeline(
    ce.OrdinalEncoder(),  # re-enabled: the raw X frames still contain string columns
    XGBRegressor(n_estimators=100, objective='reg:squarederror', n_jobs=-1)
)
new_pipe.fit(X_train, y_train)  # the pipeline must be fit before pdp_isolate can use it below
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
import numpy as np
X_val['License #'] = X_val['License #'].fillna(0)  # replace missing license numbers with 0
X_val.isna().sum()
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'License #'
isolated = pdp_isolate(
model=new_pipe,
dataset=X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍔 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
_train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert _train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
_train.head()
print(_train.columns.to_list())
target = 'Fail'
features = ['DBA Name', 'Facility Type', 'Risk', 'Inspection Type', 'Violations', 'Zip']
_train['Inspection Date'].value_counts()
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.** Fit a model with the train set.
###Code
from sklearn.model_selection import train_test_split, cross_val_score
train, validate = train_test_split(_train)
train.head()
X_train = train[features]
y_train = train[target]
X_validate = validate[features]
y_validate = validate[target]
X_test = test[features]
y_test = test[target]
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import sklearn
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(missing_values=np.NaN, strategy='most_frequent')
)
X_train_trans = pd.DataFrame(transformers.fit_transform(X_train, y_train), index=X_train.index, columns=X_train.columns)
X_validate_trans = pd.DataFrame(transformers.transform(X_validate), index=X_validate.index, columns=X_train_trans.columns)
model = RandomForestClassifier(n_estimators=100, max_depth=7, min_samples_leaf=2)
model.fit(X_train_trans, y_train)
###Output
_____no_output_____
###Markdown
Use your model to predict probabilities for the test set. Get an ROC AUC test score >= 0.60.
###Code
probas = model.predict_proba(X_validate_trans)[:,-1]
y_pred = model.predict(X_validate_trans)
roc_auc_score(y_validate, probas)
scores = cross_val_score(model, X_train_trans, y_train, cv=15, scoring='roc_auc')
scores.mean()
###Output
_____no_output_____
###Markdown
Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- [X] Permutation Importances- [X] Partial Dependence Plot, 1 feature isolation- [X] Partial Dependence Plot, 2 features interaction- [ ] Shapley Values Permutation Importances
###Code
import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(model, random_state=42)
perm.fit(X_train_trans, y_train);
eli5.show_weights(perm, feature_names=X_train_trans.columns.str.replace(' ', '_').to_list())
###Output
_____no_output_____
###Markdown
Partial Feature Dependence: Isolation
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot, pdp_interact, pdp_interact_plot
import matplotlib.pyplot as plt
X_validate_trans.columns
feature_isolate = 'Inspection Type'
print(X_validate_trans[feature_isolate].nunique())
X_validate_trans[feature_isolate].value_counts().sort_index()
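# Hedged addition: the PDP calls below reference a model named `boost` that is never defined in
# this notebook. A gradient-boosted classifier fit on the same transformed training data is
# assumed here so those cells can run; the exact model the author intended is unknown.
from xgboost import XGBClassifier
boost = XGBClassifier(n_estimators=100, n_jobs=-1, random_state=42)
boost.fit(X_train_trans, y_train)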
isolator = pdp_isolate(
model=boost,
dataset=X_validate_trans,
model_features=X_validate_trans.columns,
feature=feature_isolate
)
pdp_plot(isolator, feature_name=feature_isolate, plot_lines=True, frac_to_plot=300); # plotted extra lines to see some variation
###Output
_____no_output_____
###Markdown
Partial Feature Dependence: Interaction
###Code
features_isolate = ['Risk', 'Violations']
interactor = pdp_interact(
boost,
X_validate_trans,
X_validate_trans.columns,
features_isolate
)
pdp_interact_plot(interactor, features_isolate, plot_type='grid');
###Output
_____no_output_____ |
deployments/yellowstone.pangeo.io/image/machine-learning.ipynb | ###Markdown
Dask for Machine Learning Dask integrates well with machine learning libraries like [scikit-learn](http://scikit-learn.org/).[Dask-ML](http://dask-ml.readthedocs.io/en/latest/index.html) implements scalable machine learning algorithms that are compatible with scikit-learn.
###Code
from dask_kubernetes import KubeCluster
cluster = KubeCluster(n_workers=10)
cluster
from dask.distributed import Client, progress
c = Client(cluster)
c
###Output
_____no_output_____
###Markdown
Distributed TrainingScikit-learn uses [joblib](http://joblib.readthedocs.io/) for single-machine parallelism. This lets you train most estimators (anything that accepts an `n_jobs` parameter) using all the cores of your laptop or workstation.Dask registers a joblib backend. This lets you train those estimators using all the cores of your *cluster*, by changing one line of code. This is most useful for training large models on medium-sized datasets. You may have a large model when searching over many hyper-parameters, or when using an ensemble method with many individual estimators. For too small datasets, training times will typically be small enough that cluster-wide parallelism isn't helpful. For too large datasets (larger than a single machine's memory), the scikit-learn estimators may not be able to cope (see below).
###Code
import dask_ml.joblib  # register the distributed backend
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
import pandas as pd
###Output
_____no_output_____
###Markdown
We'll use scikit-learn to create a pair of small random arrays, one for the features `X`, and one for the target `y`.
###Code
X, y = make_classification(n_samples=1000, random_state=0)
X[:5]
###Output
_____no_output_____
###Markdown
We'll fit a [Support Vector Classifier](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html), using [grid search](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to find the best combination of hyperparameters.
###Code
param_grid = {"C": [0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0],
"kernel": ['rbf', 'poly', 'sigmoid'],
"shrinking": [True, False]}
grid_search = GridSearchCV(SVC(gamma='auto', random_state=0, probability=True),
param_grid=param_grid,
return_train_score=False,
iid=True,
n_jobs=-1)
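# Small aside (my own addition): this grid spans 8 C values x 3 kernels x 2 shrinking settings,
# i.e. 48 candidate parameter combinations, each of which will be cross-validated.
from itertools import product
n_candidates = len(list(product(*param_grid.values())))
n_candidates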
###Output
_____no_output_____
###Markdown
To fit that normally, we'd call```pythongrid_search.fit(X, y)```To fit it using the cluster, we just need to use a context manager provided by joblib.We'll pre-scatter the data to each worker, which can help with performance.
###Code
from sklearn.externals import joblib
with joblib.parallel_backend('dask', scatter=[X, y]):
grid_search.fit(X, y)
###Output
_____no_output_____
###Markdown
We fit 48 different models, one for each hyper-parameter combination in `param_grid`, distributed across the cluster. At this point, we have a regular scikit-learn model, which can be used for prediction, scoring, etc.
###Code
pd.DataFrame(grid_search.cv_results_).head()
grid_search.predict(X)[:5]
grid_search.score(X, y)
###Output
_____no_output_____
###Markdown
For more on training scikit-learn models with distributed joblib, see the [dask-ml documentation](http://dask-ml.readthedocs.io/en/latest/joblib.html). Training on Large DatasetsMost estimators in scikit-learn are designed to work on in-memory arrays. Training with larger datasets may require different algorithms.All of the algorithms implemented in Dask-ML work well on larger than memory datasets, which you might store in a [dask array](http://dask.pydata.org/en/latest/array.html) or [dataframe](http://dask.pydata.org/en/latest/dataframe.html).
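As a minimal illustration (not from the original notebook) of how a dask array describes a larger-than-memory computation lazily and only materializes the pieces that are actually requested:

```python
import dask.array as da

# Roughly 8 GB of float64 described as 100 chunks of shape (10_000, 1_000); nothing is allocated yet.
x = da.random.random((100_000, 10_000), chunks=(10_000, 1_000))
col_means = x.mean(axis=0)        # lazy: this only builds a task graph
print(col_means[:5].compute())    # chunks are generated and reduced in parallel only here
```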
###Code
%matplotlib inline
import dask_ml.datasets
import dask_ml.cluster
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
In this example, we'll use `dask_ml.datasets.make_blobs` to generate some random *dask* arrays.
###Code
X, y = dask_ml.datasets.make_blobs(n_samples=10000000,
chunks=1000000,
random_state=0,
centers=3)
X = X.persist()
X
###Output
_____no_output_____
###Markdown
We'll use the k-means implemented in Dask-ML to cluster the points. It uses the `k-means||` (read: "k-means parallel") initialization algorithm, which scales better than `k-means++`. All of the computation, both during and after initialization, can be done in parallel.
###Code
km = dask_ml.cluster.KMeans(n_clusters=3, init_max_iter=2, oversampling_factor=10)
km.fit(X)
###Output
_____no_output_____
###Markdown
We'll plot a sample of points, colored by the cluster each falls into.
###Code
fig, ax = plt.subplots()
ax.scatter(X[::10000, 0], X[::10000, 1], marker='.', c=km.labels_[::10000],
cmap='viridis', alpha=0.25);
###Output
_____no_output_____ |
Bob/Data/Untitled-Copy1.ipynb | ###Markdown
Prepare Data- Clean Up N/As - History and Future Lag (own Function)
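To make the windowing functions defined below easier to follow, here is a tiny hypothetical illustration (toy values, not the notebook's data) of turning a series into (past window, next value) pairs with a fixed lag:

```python
import numpy as np

series = np.arange(10, dtype=float)   # toy series: 0, 1, ..., 9
pastlag, futurelag = 3, 1
past, future = [], []
for i in range(pastlag, len(series) - futurelag + 1):
    past.append(series[i - pastlag:i])      # the last `pastlag` observations
    future.append(series[i:i + futurelag])  # the value(s) to be predicted
print(np.asarray(past)[:3])    # [[0. 1. 2.] [1. 2. 3.] [2. 3. 4.]]
print(np.asarray(future)[:3])  # [[3.] [4.] [5.]]
```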
###Code
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

data = pd.read_csv('data.csv')
data['MU'][:10] = None
data['MU'][15] = None
data
data = data.interpolate(limit_direction='both', limit_area='inside')
data = data.fillna(0)
data
scalers = {}  # keep one fitted scaler per column so values can be inverse-transformed later
for c in data.columns:
    if c != 'Date':
        sc = MinMaxScaler()
        scalers[c] = sc.fit(data[[c]])
        data[c] = sc.transform(data[[c]])
def transformData(data, pastlag, futurelag = 1, validation_span = 16, arraylength = None):
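    # Builds (past window, future value) pairs for every non-Date column:
    #   pastlag         - number of past observations that form one input window
    #   futurelag       - how many steps ahead to predict (each step becomes its own sample)
    #   validation_span - number of trailing positions held out as the validation set
    #   arraylength     - fixed length the past window is zero-padded to (defaults to pastlag)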
if arraylength is None:
arraylength = pastlag
cols = []
past_train = []
future_train = []
past_validate = []
future_validate = []
for c in data.columns:
if c != 'Date':
cols.append(c)
for c in cols:
ar = np.asarray(data[c])
l = len(ar)
iv = (l-futurelag) - validation_span
#iv = pastlag + int(((l-futurelag)-pastlag)*(1-validation_percentage))
for i in range(pastlag,len(ar)-futurelag + 1):
if i <= iv:
for j in range(futurelag):
if i-arraylength < 0:
p_ar = ar[0:i]
p_ar = np.pad(p_ar,(arraylength - len(p_ar),0),'constant')
else:
p_ar = ar[i-arraylength:i]
past_train.append(p_ar)
#future_train.append(ar[i:i+futurelag])
future_train.append(ar[i+j:i+j+1])
else:
for j in range(futurelag):
if i-arraylength < 0:
p_ar = ar[0:i]
p_ar = np.pad(p_ar,(arraylength - len(p_ar),0),'constant')
else:
p_ar = ar[i-arraylength:i]
past_validate.append(p_ar)
future_validate.append(ar[i+j:i+j+1])
#future_validate.append(ar[i:i+futurelag])
return np.asarray(past_train), np.asarray(future_train), np.asarray(past_validate), np.asarray(future_validate)
def transformDataByCols(data, pastlag, targetCol, ExoCols = [], futurelag = 1, validation_span = 16, arraylength = None):
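    # Same windowing idea as transformData, but returns one input array per column
    # (targetCol first, then each column in ExoCols) so they can feed a multi-input model;
    # only targetCol contributes the future targets, and all futurelag steps are
    # predicted jointly per window instead of one step per sample.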
if arraylength is None:
arraylength = pastlag
past_train = []
future_train = []
past_validate = []
future_validate = []
cols = [targetCol] + ExoCols
for c in cols:
p_t_c = []
f_t_c = []
p_v_c = []
f_v_c = []
ar = np.asarray(data[c])
l = len(ar)
iv = (l-futurelag) - validation_span
#iv = pastlag + int(((l-futurelag)-pastlag)*(1-validation_percentage))
for i in range(pastlag,len(ar)-futurelag+1):
if i <= iv:
if i-arraylength < 0:
p_ar = ar[0:i]
p_ar = np.pad(p_ar,(arraylength - len(p_ar),0),'constant')
else:
p_ar = ar[i-arraylength:i]
p_t_c.append(p_ar)
f_t_c.append(ar[i:i+futurelag])
else:
if i-arraylength < 0:
p_ar = ar[0:i]
p_ar = np.pad(p_ar,(arraylength - len(p_ar),0),'constant')
else:
p_ar = ar[i-arraylength:i]
p_v_c.append(p_ar)
f_v_c.append(ar[i:i+futurelag])
past_train.append(np.expand_dims(np.asarray(p_t_c),-1))
past_validate.append(np.expand_dims(np.asarray(p_v_c),-1))
if c == targetCol:
future_train = np.asarray(f_t_c)
future_validate = np.asarray(f_v_c)
return past_train, future_train, past_validate, future_validate
from sklearn.metrics import roc_auc_score
from keras.callbacks import Callback
def RMSEperLag(x_true,x_pred):
return np.sqrt(np.mean((x_true - x_pred)**2,axis= 0))
class IntervalEvaluation(Callback):
def __init__(self, validation_data=(), interval=10):
super(Callback, self).__init__()
self.interval = interval
self.X_val, self.y_val = validation_data
def on_epoch_end(self, epoch, logs={}):
if epoch % self.interval == 0:
y_pred = self.model.predict(self.X_val, verbose=0)
score = RMSEperLag(self.y_val, y_pred)
for s in score:
print(s)
#print("interval evaluation - epoch: {:d} - score: {:.6f}".format(epoch, score))
class LRRestart(Callback):
    def __init__(self, maxLR, maxEpoch, patience, minLR = 0.1e-5):
        super(Callback, self).__init__()
        self.maxLR = maxLR
        self.maxEpoch = maxEpoch
        self.patience = patience
        self.minLR = minLR
        self.restart = True
        self.lastRestartEpoch = 1
        self.best_val_loss = None
        self.wait = 0
    def schedule(self, epoch):
        # linear decay from maxLR towards minLR over maxEpoch epochs since the last restart
        reductionRate = ((self.maxLR - self.minLR)/self.maxEpoch) / self.maxLR
        lr = self.maxLR - (epoch - self.lastRestartEpoch) * max(reductionRate*self.maxLR, self.minLR)
        return lr
    def on_train_begin(self, logs={}):
        self.wait = 0
    def on_epoch_begin(self, epoch, logs=None):
        if self.restart or (epoch - self.lastRestartEpoch) > self.maxEpoch:
            self.lastRestartEpoch = epoch
            self.restart = False
        lr = self.schedule(epoch)
        K.set_value(self.model.optimizer.lr, lr)
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        logs['lr'] = K.get_value(self.model.optimizer.lr)
        self.current_val_loss = logs.get('val_loss')
        if self.best_val_loss is None:
            self.best_val_loss = self.current_val_loss
        if self.current_val_loss < self.best_val_loss:
            self.best_val_loss = self.current_val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.restart = True
                #self.model.stop_training = True
class UnfreezeLayer(Callback):
    def __init__(self, layerNames = [], unfreezeSchedule = []):
        super(Callback, self).__init__()
        self.layerNames = layerNames
        self.unfreezeSchedule = unfreezeSchedule
    def on_epoch_begin(self, epoch, logs=None):
        # once the scheduled epoch for entry i is reached, unfreeze every layer of the
        # shared g_lstm sub-model (defined further below) whose name matches layerNames[i]
        for i, u in enumerate(self.unfreezeSchedule):
            if u == epoch:
                for ln in [l.name for l in g_lstm.layers if self.layerNames[i] + '.' in l.name]:
                    g_lstm.get_layer(ln).trainable = True
from keras.layers.core import Lambda
from keras import backend as K
def PermaDropout(rate):
return Lambda(lambda x: K.dropout(x, level=rate))
def expand_dims(x):
return K.expand_dims(x,1)
def expand_dims_output_shape(input_shape):
return (input_shape[0],1,input_shape[1])
def expand_dims2(x):
return K.expand_dims(x,2)
def expand_dims_output_shape2(input_shape):
return (input_shape[0],1,1)
past_t, future_t, past_v, future_v = transformData(data,pastlag = 90, futurelag = 1, arraylength = 90)
###Output
_____no_output_____
###Markdown
General LSTM Model- define the model- train the model on all data- save the trained weights. Reference pipeline kept from an R keras example: `layer_conv_1d(filters=64, kernel_size=4, activation="relu", input_shape=c(lookback, dim(dm)[[-1]])) %>% layer_max_pooling_1d(pool_size=4) %>% layer_flatten() %>% layer_dense(units=lookback * dim(dm)[[-1]], activation="relu") %>% layer_dropout(rate=0.2) %>% layer_dense(units=1, activation="linear")`
###Code
#### DECODER MODEL
from keras.models import Model, Sequential
from keras.layers import (Input, Dense, LSTM, Dropout, Flatten, concatenate,
                          Conv1D, MaxPooling1D)

decoder = Sequential()  # placeholder; re-assigned to a functional Model at the end of this block
dec_input = Input((90,1))
#dec_batchNorm = BatchNormalization()(dec_input)
dec_batchNorm = dec_input
### Conv
dec_conv = Conv1D(8,3,activation="relu")(dec_batchNorm)
dec_pool = MaxPooling1D(2)(dec_conv)
dec_conv2 = Conv1D(4,5,activation="relu")(dec_pool)
dec_pool2 = MaxPooling1D(4)(dec_conv)
#dec_pool2 = SeqSelfAttention(attention_activation='sigmoid')(dec_pool2)
#dec_lstm_conv = LSTM(10,dropout = 0.5 ,return_sequences = False)(dec_pool2)
dec_conv_flat = Flatten()(dec_pool2)
#dec_conv_flat = Lambda(lambda x: K.batch_flatten(x))(dec_pool2)
dec_conv_flat = Dropout(0.2)(dec_conv_flat)
#dec_conv_flat = dec_lstm_conv
### LSTM
#dec_attention = SeqSelfAttention(attention_activation='sigmoid')(dec_batchNorm)
dec_lstm = LSTM(10,dropout = 0.5 ,return_sequences = False)(dec_batchNorm)
#dec_attention = SeqSelfAttention(attention_activation='sigmoid')(dec_lstm)
#dec_lstm_flat = Flatten()(dec_attention)
dec_lstm_flat = dec_lstm
dec_output = concatenate([dec_conv_flat, dec_lstm_flat],axis = -1)
#dec_output = dec_lstm_flat
#dec_output = dec_conv_flat
decoder = Model(dec_input,dec_output)
g_lstm = Sequential()
g_lstm.add(decoder)
#g_lstm.add(Dense(90*4, activation="relu"))
#g_lstm.add(Dropout(0.2))
#g_lstm.add(Dense(30, name = 'glstm_dense'))
g_lstm.add(Dense(1))
g_lstm.summary()
g_lstm.compile(loss = 'mse',optimizer = 'adam')
ival = IntervalEvaluation(validation_data =(np.expand_dims(past_v,-1),future_v), interval=1)
lrre = LRRestart(maxLR = .5, maxEpoch = 10, patience = 5)
g_lstm.fit(np.expand_dims(past_t,-1), future_t, batch_size = 200, epochs=100, validation_data = (np.expand_dims(past_v,-1),future_v),
#callbacks=[lrre],
verbose = 1)
g_lstm.save('glstm.h5')
predA = decoder.predict(np.expand_dims(past_t,-1))
###Output
_____no_output_____
###Markdown
Estimate with Exogenous Variables- Embedding of each TS with the general LSTM- Multi-head attention over all TS- Estimate the full future timespan

Train the model first with a frozen LSTM layer and unfreeze it later in training (a minimal freeze/unfreeze sketch follows below).
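Below is a minimal, self-contained sketch (toy model and data, not the notebook's own objects) of the freeze-then-unfreeze pattern described above; note that a recompile is needed after toggling `trainable`:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

base = Sequential([Dense(8, activation='relu', input_shape=(4,))], name='base')
head = Sequential([base, Dense(1)], name='head')
x_toy, y_toy = np.random.rand(32, 4), np.random.rand(32, 1)

base.trainable = False                       # phase 1: only the new head is trained
head.compile(loss='mse', optimizer='adam')
head.fit(x_toy, y_toy, epochs=1, verbose=0)

base.trainable = True                        # phase 2: unfreeze the shared part
head.compile(loss='mse', optimizer='adam')   # recompile so the change takes effect
head.fit(x_toy, y_toy, epochs=1, verbose=0)
```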
###Code
EXO = ["BAC", "MU","AKS","CLF","SKX","PBR","NFX","MS","CTL"]
past_t, future_t, past_v, future_v = transformDataByCols(data,targetCol = "ET", ExoCols = EXO, pastlag = 90, futurelag = 10, arraylength =90)
def attention_3d_block(inputs,TIME_STEPS, SINGLE_ATTENTION_VECTOR ):
# inputs.shape = (batch_size, time_steps, input_dim)
input_dim = 186  # int(inputs.shape[2])
a = Permute((2, 1))(inputs)
#a = Reshape((input_dim, TIME_STEPS))(a) # this line is not useful. It's just to know which dimension is what.
a = Dense(TIME_STEPS, activation='softmax')(a)
if SINGLE_ATTENTION_VECTOR:
a = Lambda(lambda x: K.mean(x, axis=1))(a)
a = RepeatVector(input_dim)(a)
a_probs = Permute((2, 1))(a)
output_attention_mul = Multiply()([inputs, a_probs])#, name='attention_mul', mode='mul')
return output_attention_mul
from keras.models import Model
from keras.layers import Input, Dense, LSTM, Dropout, Flatten, concatenate
from keras_self_attention import SeqSelfAttention  # provides SeqSelfAttention used below (pip install keras-self-attention)
STEPS = 2
in_in = []
out_in = []
for i in range(len(EXO)+1):
inp = Input((90,1))
in_in.append(inp)
inpp = inp
out_pp = []
for i in range(STEPS):
out_temp = g_lstm(inpp)
inpp = concatenate ([inpp[:, 1:],Lambda(expand_dims2, expand_dims_output_shape2)(out_temp)], axis=1)
out_pp.append(out_temp)
a = concatenate(out_pp,axis=-1)
a= Lambda(expand_dims, expand_dims_output_shape)(a)
out_in.append(a)
out_dec=concatenate(out_in,axis=1)
#out = SeqSelfAttention(attention_activation='sigmoid')(out_dec)
o2 = Flatten()(out_dec)
o2 = Dense(STEPS)(o2)
model = Model(inputs = in_in, outputs = [o2])
from keras.models import Model
from keras.layers import Dense ,LSTM,concatenate
in_in = []
out_in = []
#bb = BatchNormalization()
decoder.trainable = False
#a_layer = SeqSelfAttention(attention_activation='sigmoid', name = 'att.0', weights = glstm_a_weights)
#a_layer.trainable = False
for i in range(len(EXO)+1):
inp = Input((90,1))
in_in.append(inp)
a = decoder(inp)
#a = concatenate([a,Flatten()(inp)],axis=-1)
a= Lambda(expand_dims, expand_dims_output_shape)(a)
out_in.append(a)
if len(EXO) >= 1:
out_dec=concatenate(out_in,axis=1)
else:
out_dec = out_in
#out_dec = BatchNormalization()(out_dec)
out_ar = []
#out_dec = MultiHeadAttention(head_num=5)(out_dec)
#out = attention_3d_block(out_dec, len(EXO) + 1, False)
out = SeqSelfAttention(attention_activation='sigmoid')(out_dec)
'''
for j in range(16):
#out = attention_3d_block(out_dec, len(EXO) + 1, True)
#out = SeqSelfAttention(attention_activation='sigmoid')(out_dec)
#out = MultiHeadAttention(
# head_num=5)(out_dec)
out= Dropout(0.2)(out_dec)
out = Flatten()(out)
out = Dense(20, activation = "relu")(out)
out= Dropout(0.2)(out)
#out = Dense(30)(out)
out = Dense(1)(out)
out_ar.append(out)
oo = concatenate(out_ar,axis = -1)
'''
#o2 = attention_3d_block(out_dec, len(EXO), False)
o2 = Flatten()(out_dec)
o2 = Dense(10)(o2)
#ooo = Add()([oo,o2])
model = Model(inputs = in_in, outputs = [o2])
def weighted_mse(yTrue,yPred):
ones = K.ones_like(yTrue[0,:])
idx = K.cumsum(ones)
return K.mean((1/idx)*K.square(yTrue- yPred))
model.compile(loss = 'mse',optimizer = 'adam')
#ival = IntervalEvaluation(validation_data =(past_v,future_v), interval=1)
lrre = LRRestart(maxLR = .5, maxEpoch = 10, patience = 5)
unfreeze = UnfreezeLayer(['lstm'], [20])
model.fit(past_t, future_t, batch_size = 32, epochs=100, validation_data = (past_v,future_v))#, callbacks = [lrre,unfreeze])
model.summary()
predA = model.predict(past_v)
%matplotlib inline
I = -1  # index of the validation sample to plot
import matplotlib.pyplot as plt
past= np.squeeze(past_v[0][I])
future= future_v[I]
past_x =[i for i in range(len(past))]
future_x =[i for i in range(len(past),len(past)+len(future))]
pred = predA[I]
plt.plot(past_x, past, color='blue', label='history')
plt.plot(future_x, future, color='g', label='actual')
plt.plot(future_x, pred, color='orange', label='prediction')
plt.xlabel('Time step')
plt.ylabel('Scaled value')
plt.title('Validation sample: history vs. predicted future')
plt.legend()
plt.show()
past_t, future_t, past_v, future_v = transformDataByCols(data,targetCol = "IWM", ExoCols = [], pastlag = 30)
t_lstm = Sequential()
t_lstm.add(LSTM(20, dropout=0.2, name = 'glstm'))
t_lstm.add(Dense(1, name = 'glstm_dense'))
t_lstm.compile(loss = 'mse',optimizer = 'adam')
t_lstm.fit(past_t, future_t, batch_size = 10, epochs=100, validation_data = (past_v,future_v), verbose = 2)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/text_classification/labs/word_embeddings.ipynb | ###Markdown
Word Embeddings **Learning Objectives**You will learn:1. How to use Embedding layer1. How to create a classification model1. Compile and train the model1. How to retrieve the trained word embeddings, save them to disk and visualize it. Introduction This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below). Representing text as numbersMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodingsAs a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique numberA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This appoach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).There are two downsides to this approach, however:* The integer-encoding is arbitrary (it does not capture any relationship between words).* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddingsWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". 
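Before moving on, here is a small illustrative sketch (not part of the lab) that makes the one-hot and integer encodings described above concrete for "The cat sat on the mat":

```python
import numpy as np

vocab = ['cat', 'mat', 'on', 'sat', 'the']                 # unique words of the sentence
word_to_index = {w: i + 1 for i, w in enumerate(vocab)}    # integer encoding, 1-based

sentence = ['the', 'cat', 'sat', 'on', 'the', 'mat']
integer_encoded = [word_to_index[w] for w in sentence]     # dense: [5, 1, 4, 3, 5, 2]

one_hot = np.zeros((len(sentence), len(vocab)))            # sparse: mostly zeros
for row, idx in enumerate(integer_encoded):
    one_hot[row, idx - 1] = 1.0

print(integer_encoded)
print(one_hot)
```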
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.Each learning objective will correspond to a __TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb) for reference. Setup
###Code
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0-dev20210114
###Markdown
Download the IMDb DatasetYou will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the [Loading text tutorial](../load_data/text.ipynb). Download the dataset using Keras file utility and take a look at the directories.
###Code
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
###Output
Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Markdown
Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
###Code
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
###Output
_____no_output_____
###Markdown
The `train` directory also has additional folders which should be removed before creating training dataset.
###Code
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
###Output
_____no_output_____
###Markdown
Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification). Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
###Code
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
###Output
Found 25000 files belonging to 2 classes.
Using 20000 files for training.
###Markdown
Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
###Code
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
###Output
0 b"Oh My God! Please, for the love of all that is holy, Do Not Watch This Movie! It it 82 minutes of my life I will never get back. Sure, I could have stopped watching half way through. But I thought it might get better. It Didn't. Anyone who actually enjoyed this movie is one seriously sick and twisted individual. No wonder us Australians/New Zealanders have a terrible reputation when it comes to making movies. Everything about this movie is horrible, from the acting to the editing. I don't even normally write reviews on here, but in this case I'll make an exception. I only wish someone had of warned me before I hired this catastrophe"
1 b'This movie is SOOOO funny!!! The acting is WONDERFUL, the Ramones are sexy, the jokes are subtle, and the plot is just what every high schooler dreams of doing to his/her school. I absolutely loved the soundtrack as well as the carefully placed cynicism. If you like monty python, You will love this film. This movie is a tad bit "grease"esk (without all the annoying songs). The songs that are sung are likable; you might even find yourself singing these songs once the movie is through. This musical ranks number two in musicals to me (second next to the blues brothers). But please, do not think of it as a musical per say; seeing as how the songs are so likable, it is hard to tell a carefully choreographed scene is taking place. I think of this movie as more of a comedy with undertones of romance. You will be reminded of what it was like to be a rebellious teenager; needless to say, you will be reminiscing of your old high school days after seeing this film. Highly recommended for both the family (since it is a very youthful but also for adults since there are many jokes that are funnier with age and experience.'
0 b"Alex D. Linz replaces Macaulay Culkin as the central figure in the third movie in the Home Alone empire. Four industrial spies acquire a missile guidance system computer chip and smuggle it through an airport inside a remote controlled toy car. Because of baggage confusion, grouchy Mrs. Hess (Marian Seldes) gets the car. She gives it to her neighbor, Alex (Linz), just before the spies turn up. The spies rent a house in order to burglarize each house in the neighborhood until they locate the car. Home alone with the chicken pox, Alex calls 911 each time he spots a theft in progress, but the spies always manage to elude the police while Alex is accused of making prank calls. The spies finally turn their attentions toward Alex, unaware that he has rigged devices to cleverly booby-trap his entire house. Home Alone 3 wasn't horrible, but probably shouldn't have been made, you can't just replace Macauley Culkin, Joe Pesci, or Daniel Stern. Home Alone 3 had some funny parts, but I don't like when characters are changed in a movie series, view at own risk."
0 b"There's a good movie lurking here, but this isn't it. The basic idea is good: to explore the moral issues that would face a group of young survivors of the apocalypse. But the logic is so muddled that it's impossible to get involved.<br /><br />For example, our four heroes are (understandably) paranoid about catching the mysterious airborne contagion that's wiped out virtually all of mankind. Yet they wear surgical masks some times, not others. Some times they're fanatical about wiping down with bleach any area touched by an infected person. Other times, they seem completely unconcerned.<br /><br />Worse, after apparently surviving some weeks or months in this new kill-or-be-killed world, these people constantly behave like total newbs. They don't bother accumulating proper equipment, or food. They're forever running out of fuel in the middle of nowhere. They don't take elementary precautions when meeting strangers. And after wading through the rotting corpses of the entire human race, they're as squeamish as sheltered debutantes. You have to constantly wonder how they could have survived this long... and even if they did, why anyone would want to make a movie about them.<br /><br />So when these dweebs stop to agonize over the moral dimensions of their actions, it's impossible to take their soul-searching seriously. Their actions would first have to make some kind of minimal sense.<br /><br />On top of all this, we must contend with the dubious acting abilities of Chris Pine. His portrayal of an arrogant young James T Kirk might have seemed shrewd, when viewed in isolation. But in Carriers he plays on exactly that same note: arrogant and boneheaded. It's impossible not to suspect that this constitutes his entire dramatic range.<br /><br />On the positive side, the film *looks* excellent. It's got an over-sharp, saturated look that really suits the southwestern US locale. But that can't save the truly feeble writing nor the paper-thin (and annoying) characters. Even if you're a fan of the end-of-the-world genre, you should save yourself the agony of watching Carriers."
0 b'I saw this movie at an actual movie theater (probably the $2.00 one) with my cousin and uncle. We were around 11 and 12, I guess, and really into scary movies. I remember being so excited to see it because my cool uncle let us pick the movie (and we probably never got to do that again!) and sooo disappointed afterwards!! Just boring and not scary. The only redeeming thing I can remember was Corky Pigeon from Silver Spoons, and that wasn\'t all that great, just someone I recognized. I\'ve seen bad movies before and this one has always stuck out in my mind as the worst. This was from what I can recall, one of the most boring, non-scary, waste of our collective $6, and a waste of film. I have read some of the reviews that say it is worth a watch and I say, "Too each his own", but I wouldn\'t even bother. Not even so bad it\'s good.'
###Markdown
Configure the dataset for performanceThese are two important methods you should use when loading data to make sure that I/O does not become blocking.`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.`.prefetch()` overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Using the Embedding layerKeras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
###Code
# Embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
###Output
_____no_output_____
###Markdown
When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
###Code
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
###Output
_____no_output_____
###Markdown
For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (batch of 32 sequences of length 10) or `(64, 15)` (batch of 64 sequences of length 15).The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`
###Code
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
###Output
_____no_output_____
###Markdown
When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The [Text Classification with an RNN](text_classification_rnn.ipynb) tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
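As a quick aside before the preprocessing code, here is an illustration (not part of the lab) of how the pooling step mentioned above turns a batch of embedded sequences into fixed-length vectors:

```python
import tensorflow as tf

# A hypothetical batch: 2 sequences, 3 tokens each, embedded in 5 dimensions.
embedded = tf.random.uniform((2, 3, 5))
pooled = tf.keras.layers.GlobalAveragePooling1D()(embedded)
print(pooled.shape)   # (2, 5): one fixed-length vector per example
```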
###Code
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
###Output
_____no_output_____
###Markdown
Create a classification modelUse the [Keras Sequential API](../../guide/keras) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.* The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built it's vocabulary by calling `adapt` on `text_ds`. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding tranformed strings into the Embedding layer.* The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.* The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.* The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.* The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](../../guide/keras/masking_and_padding).
###Code
embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(16, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile and train the model Create a `tf.keras.callbacks.TensorBoard`.
###Code
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
###Output
_____no_output_____
###Markdown
Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
###Output
Epoch 1/10
###Markdown
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, 100) 0
_________________________________________________________________
embedding (Embedding) (None, 100, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
###Markdown
Retrieve the trained word embeddings and save them to diskNext, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`. Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
###Code
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
###Output
_____no_output_____
###Markdown
Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
###Code
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
###Output
_____no_output_____
###Markdown
Two files will be created: `vectors.tsv` and `metadata.tsv`. Download both files.
###Code
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
###Output
_____no_output_____
###Markdown
Word Embeddings **Learning Objectives**You will learn:1. How to use Embedding layer1. How to create a classification model1. Compile and train the model1. How to retrieve the trained word embeddings, save them to disk and visualize it. Introduction This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below). Representing text as numbersMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodingsAs a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique numberA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).There are two downsides to this approach, however:* The integer-encoding is arbitrary (it does not capture any relationship between words).* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddingsWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". 
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.Each learning objective will correspond to a __TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb) for reference. Setup
###Code
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0-dev20210114
###Markdown
Download the IMDb DatasetYou will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the [Loading text tutorial](../load_data/text.ipynb). Download the dataset using Keras file utility and take a look at the directories.
###Code
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
###Output
Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Markdown
Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
###Code
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
###Output
_____no_output_____
###Markdown
The `train` directory also has additional folders which should be removed before creating training dataset.
###Code
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
###Output
_____no_output_____
###Markdown
Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification). Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
###Code
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
###Output
Found 25000 files belonging to 2 classes.
Using 20000 files for training.
###Markdown
Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
###Code
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
###Output
0 b"Oh My God! Please, for the love of all that is holy, Do Not Watch This Movie! It it 82 minutes of my life I will never get back. Sure, I could have stopped watching half way through. But I thought it might get better. It Didn't. Anyone who actually enjoyed this movie is one seriously sick and twisted individual. No wonder us Australians/New Zealanders have a terrible reputation when it comes to making movies. Everything about this movie is horrible, from the acting to the editing. I don't even normally write reviews on here, but in this case I'll make an exception. I only wish someone had of warned me before I hired this catastrophe"
1 b'This movie is SOOOO funny!!! The acting is WONDERFUL, the Ramones are sexy, the jokes are subtle, and the plot is just what every high schooler dreams of doing to his/her school. I absolutely loved the soundtrack as well as the carefully placed cynicism. If you like monty python, You will love this film. This movie is a tad bit "grease"esk (without all the annoying songs). The songs that are sung are likable; you might even find yourself singing these songs once the movie is through. This musical ranks number two in musicals to me (second next to the blues brothers). But please, do not think of it as a musical per say; seeing as how the songs are so likable, it is hard to tell a carefully choreographed scene is taking place. I think of this movie as more of a comedy with undertones of romance. You will be reminded of what it was like to be a rebellious teenager; needless to say, you will be reminiscing of your old high school days after seeing this film. Highly recommended for both the family (since it is a very youthful but also for adults since there are many jokes that are funnier with age and experience.'
0 b"Alex D. Linz replaces Macaulay Culkin as the central figure in the third movie in the Home Alone empire. Four industrial spies acquire a missile guidance system computer chip and smuggle it through an airport inside a remote controlled toy car. Because of baggage confusion, grouchy Mrs. Hess (Marian Seldes) gets the car. She gives it to her neighbor, Alex (Linz), just before the spies turn up. The spies rent a house in order to burglarize each house in the neighborhood until they locate the car. Home alone with the chicken pox, Alex calls 911 each time he spots a theft in progress, but the spies always manage to elude the police while Alex is accused of making prank calls. The spies finally turn their attentions toward Alex, unaware that he has rigged devices to cleverly booby-trap his entire house. Home Alone 3 wasn't horrible, but probably shouldn't have been made, you can't just replace Macauley Culkin, Joe Pesci, or Daniel Stern. Home Alone 3 had some funny parts, but I don't like when characters are changed in a movie series, view at own risk."
0 b"There's a good movie lurking here, but this isn't it. The basic idea is good: to explore the moral issues that would face a group of young survivors of the apocalypse. But the logic is so muddled that it's impossible to get involved.<br /><br />For example, our four heroes are (understandably) paranoid about catching the mysterious airborne contagion that's wiped out virtually all of mankind. Yet they wear surgical masks some times, not others. Some times they're fanatical about wiping down with bleach any area touched by an infected person. Other times, they seem completely unconcerned.<br /><br />Worse, after apparently surviving some weeks or months in this new kill-or-be-killed world, these people constantly behave like total newbs. They don't bother accumulating proper equipment, or food. They're forever running out of fuel in the middle of nowhere. They don't take elementary precautions when meeting strangers. And after wading through the rotting corpses of the entire human race, they're as squeamish as sheltered debutantes. You have to constantly wonder how they could have survived this long... and even if they did, why anyone would want to make a movie about them.<br /><br />So when these dweebs stop to agonize over the moral dimensions of their actions, it's impossible to take their soul-searching seriously. Their actions would first have to make some kind of minimal sense.<br /><br />On top of all this, we must contend with the dubious acting abilities of Chris Pine. His portrayal of an arrogant young James T Kirk might have seemed shrewd, when viewed in isolation. But in Carriers he plays on exactly that same note: arrogant and boneheaded. It's impossible not to suspect that this constitutes his entire dramatic range.<br /><br />On the positive side, the film *looks* excellent. It's got an over-sharp, saturated look that really suits the southwestern US locale. But that can't save the truly feeble writing nor the paper-thin (and annoying) characters. Even if you're a fan of the end-of-the-world genre, you should save yourself the agony of watching Carriers."
0 b'I saw this movie at an actual movie theater (probably the $2.00 one) with my cousin and uncle. We were around 11 and 12, I guess, and really into scary movies. I remember being so excited to see it because my cool uncle let us pick the movie (and we probably never got to do that again!) and sooo disappointed afterwards!! Just boring and not scary. The only redeeming thing I can remember was Corky Pigeon from Silver Spoons, and that wasn\'t all that great, just someone I recognized. I\'ve seen bad movies before and this one has always stuck out in my mind as the worst. This was from what I can recall, one of the most boring, non-scary, waste of our collective $6, and a waste of film. I have read some of the reviews that say it is worth a watch and I say, "Too each his own", but I wouldn\'t even bother. Not even so bad it\'s good.'
###Markdown
Configure the dataset for performanceThese are two important methods you should use when loading data to make sure that I/O does not become blocking.`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.`.prefetch()` overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Using the Embedding layerKeras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
###Code
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO: Your code goes here
###Output
_____no_output_____
###Markdown
When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
###Code
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
###Output
_____no_output_____
###Markdown
For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (batch of 32 sequences of length 10) or `(64, 15)` (batch of 64 sequences of length 15).The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`
###Code
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
###Output
_____no_output_____
###Markdown
When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The [Text Classification with an RNN](text_classification_rnn.ipynb) tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
###Code
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
###Output
_____no_output_____
###Markdown
Create a classification modelUse the [Keras Sequential API](../../guide/keras) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.* The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built it's vocabulary by calling `adapt` on `text_ds`. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.* The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.* The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.* The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.* The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](../../guide/keras/masking_and_padding).
###Code
embedding_dim=16
# TODO: Your code goes here
###Output
_____no_output_____
###Markdown
Compile and train the model Create a `tf.keras.callbacks.TensorBoard`.
###Code
# TODO: Your code goes here
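# One possible completion of the TODO above (a sketch): write TensorBoard logs
# to a time-stamped subdirectory of logs/, which the tensorboard command below reads.
log_dir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)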
###Output
_____no_output_____
###Markdown
Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
###Code
# TODO: Your code goes here
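# One possible completion of the TODO above (a sketch): compile with the Adam
# optimizer and binary cross-entropy; from_logits=True matches a final Dense(1)
# layer that has no sigmoid activation.
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])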
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
###Output
Epoch 1/10
###Markdown
With this approach, the model reaches a validation accuracy of around 84% (note that the model is overfitting, since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look at the model summary to learn more about each layer of the model.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, 100) 0
_________________________________________________________________
embedding (Embedding) (None, 100, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
###Markdown
Visualize the model metrics in TensorBoard.
###Code
!tensorboard --bind_all --port=8081 --logdir logs
###Output
_____no_output_____
###Markdown
Run the following command in **Cloud Shell**: `gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081` Make sure to replace <instance-zone>, <notebook-instance-name>, and <project-id>. In Cloud Shell, click **Web Preview** > **Change Port** and insert port number **8081**. Click **Change and Preview** to open the TensorBoard. ![embeddings_classifier_accuracy.png](assets/embeddings_classifier_accuracy.png) **To quit the TensorBoard, click Kernel > Interrupt kernel**. Retrieve the trained word embeddings and save them to diskNext, retrieve the word embeddings learned during training. The embeddings are the weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`. Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
###Code
# TODO: Your code goes here
# One possible completion (a sketch): the embedding weights and the vocabulary.
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
###Output
_____no_output_____
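###Markdown
As a quick check (a sketch, assuming the cells above ran), the weights matrix should have shape `(vocab_size, embedding_dimension)`, i.e. `(10000, 16)` here, and the vocabulary length should line up with the first dimension of the weights matrix.
###Code
# Confirm the embedding weights and vocabulary have the expected sizes.
print(weights.shape)
print(len(vocab))
###Output
_____no_output_____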
###Markdown
Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab-separated format: a file of vectors (containing the embeddings), and a file of metadata (containing the words).
###Code
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
###Output
_____no_output_____
###Markdown
Two files will be created: `vectors.tsv` and `metadata.tsv`. Download both files.
###Code
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
###Output
_____no_output_____
###Markdown
Word Embeddings **Learning Objectives**You will learn:1. How to use the Embedding layer1. How to create a classification model1. How to compile and train the model1. How to retrieve the trained word embeddings, save them to disk, and visualize them. Introduction This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below). ![img](assets/embedding.jpg) Representing text as numbersMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodingsAs a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.![img](assets/one-hot.png)To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique numberA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).There are two downsides to this approach, however:* The integer-encoding is arbitrary (it does not capture any relationship between words).* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddingsWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.![img](assets/embedding2.png)Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as a "lookup table". 
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.Each learning objective will correspond to a __TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb) for reference. Setup
###Code
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
This notebook uses TF 2.x. Please check your TensorFlow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
Download the IMDb DatasetYou will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) throughout the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the [Loading text tutorial](https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/load_data/text.ipynb). Download the dataset using the Keras file utility and take a look at the directories.
###Code
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
###Output
Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Markdown
Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
###Code
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
###Output
_____no_output_____
###Markdown
The `train` directory also has additional folders which should be removed before creating the training dataset.
###Code
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
###Output
_____no_output_____
###Markdown
Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification). Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
###Code
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
###Output
Found 25000 files belonging to 2 classes.
Using 20000 files for training.
###Markdown
Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
###Code
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
###Output
0 b"Oh My God! Please, for the love of all that is holy, Do Not Watch This Movie! It it 82 minutes of my life I will never get back. Sure, I could have stopped watching half way through. But I thought it might get better. It Didn't. Anyone who actually enjoyed this movie is one seriously sick and twisted individual. No wonder us Australians/New Zealanders have a terrible reputation when it comes to making movies. Everything about this movie is horrible, from the acting to the editing. I don't even normally write reviews on here, but in this case I'll make an exception. I only wish someone had of warned me before I hired this catastrophe"
1 b'This movie is SOOOO funny!!! The acting is WONDERFUL, the Ramones are sexy, the jokes are subtle, and the plot is just what every high schooler dreams of doing to his/her school. I absolutely loved the soundtrack as well as the carefully placed cynicism. If you like monty python, You will love this film. This movie is a tad bit "grease"esk (without all the annoying songs). The songs that are sung are likable; you might even find yourself singing these songs once the movie is through. This musical ranks number two in musicals to me (second next to the blues brothers). But please, do not think of it as a musical per say; seeing as how the songs are so likable, it is hard to tell a carefully choreographed scene is taking place. I think of this movie as more of a comedy with undertones of romance. You will be reminded of what it was like to be a rebellious teenager; needless to say, you will be reminiscing of your old high school days after seeing this film. Highly recommended for both the family (since it is a very youthful but also for adults since there are many jokes that are funnier with age and experience.'
0 b"Alex D. Linz replaces Macaulay Culkin as the central figure in the third movie in the Home Alone empire. Four industrial spies acquire a missile guidance system computer chip and smuggle it through an airport inside a remote controlled toy car. Because of baggage confusion, grouchy Mrs. Hess (Marian Seldes) gets the car. She gives it to her neighbor, Alex (Linz), just before the spies turn up. The spies rent a house in order to burglarize each house in the neighborhood until they locate the car. Home alone with the chicken pox, Alex calls 911 each time he spots a theft in progress, but the spies always manage to elude the police while Alex is accused of making prank calls. The spies finally turn their attentions toward Alex, unaware that he has rigged devices to cleverly booby-trap his entire house. Home Alone 3 wasn't horrible, but probably shouldn't have been made, you can't just replace Macauley Culkin, Joe Pesci, or Daniel Stern. Home Alone 3 had some funny parts, but I don't like when characters are changed in a movie series, view at own risk."
0 b"There's a good movie lurking here, but this isn't it. The basic idea is good: to explore the moral issues that would face a group of young survivors of the apocalypse. But the logic is so muddled that it's impossible to get involved.<br /><br />For example, our four heroes are (understandably) paranoid about catching the mysterious airborne contagion that's wiped out virtually all of mankind. Yet they wear surgical masks some times, not others. Some times they're fanatical about wiping down with bleach any area touched by an infected person. Other times, they seem completely unconcerned.<br /><br />Worse, after apparently surviving some weeks or months in this new kill-or-be-killed world, these people constantly behave like total newbs. They don't bother accumulating proper equipment, or food. They're forever running out of fuel in the middle of nowhere. They don't take elementary precautions when meeting strangers. And after wading through the rotting corpses of the entire human race, they're as squeamish as sheltered debutantes. You have to constantly wonder how they could have survived this long... and even if they did, why anyone would want to make a movie about them.<br /><br />So when these dweebs stop to agonize over the moral dimensions of their actions, it's impossible to take their soul-searching seriously. Their actions would first have to make some kind of minimal sense.<br /><br />On top of all this, we must contend with the dubious acting abilities of Chris Pine. His portrayal of an arrogant young James T Kirk might have seemed shrewd, when viewed in isolation. But in Carriers he plays on exactly that same note: arrogant and boneheaded. It's impossible not to suspect that this constitutes his entire dramatic range.<br /><br />On the positive side, the film *looks* excellent. It's got an over-sharp, saturated look that really suits the southwestern US locale. But that can't save the truly feeble writing nor the paper-thin (and annoying) characters. Even if you're a fan of the end-of-the-world genre, you should save yourself the agony of watching Carriers."
0 b'I saw this movie at an actual movie theater (probably the $2.00 one) with my cousin and uncle. We were around 11 and 12, I guess, and really into scary movies. I remember being so excited to see it because my cool uncle let us pick the movie (and we probably never got to do that again!) and sooo disappointed afterwards!! Just boring and not scary. The only redeeming thing I can remember was Corky Pigeon from Silver Spoons, and that wasn\'t all that great, just someone I recognized. I\'ve seen bad movies before and this one has always stuck out in my mind as the worst. This was from what I can recall, one of the most boring, non-scary, waste of our collective $6, and a waste of film. I have read some of the reviews that say it is worth a watch and I say, "Too each his own", but I wouldn\'t even bother. Not even so bad it\'s good.'
###Markdown
Configure the dataset for performanceThese are two important methods you should use when loading data to make sure that I/O does not become blocking.`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.`.prefetch()` overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Using the Embedding layerKeras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
###Code
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO: Your code goes here
###Output
_____no_output_____
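###Markdown
If you want to check your work, one possible completion of the cell above (matching the linked solution notebook) is shown below. The cells that follow assume a layer named `embedding_layer` has been created.
###Code
# One possible completion: embed a 1,000-word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
###Output
_____no_output_____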
###Markdown
When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
###Code
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
###Output
_____no_output_____
###Markdown
For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (a batch of 32 sequences of length 10) or `(64, 15)` (a batch of 64 sequences of length 15). The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`.
###Code
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
###Output
_____no_output_____
###Markdown
When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The [Text Classification with an RNN](text_classification_rnn.ipynb) tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
###Code
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
###Output
_____no_output_____
###Markdown
Create a classification modelUse the [Keras Sequential API](https://www.tensorflow.org/guide/keras/) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.* The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built it's vocabulary by calling `adapt` on `text_ds`. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.* The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.* The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.* The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.* The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](https://www.tensorflow.org/guide/keras/masking_and_padding).
###Code
embedding_dim=16
# TODO: Your code goes here
###Output
_____no_output_____
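###Markdown
For reference, one possible model definition that follows the bullet points above (and matches the linked solution notebook) is sketched below. Naming the embedding layer `"embedding"` matters later, when the trained weights are retrieved by layer name.
###Code
# One possible completion of the model cell above.
model = Sequential([
    vectorize_layer,
    Embedding(vocab_size, embedding_dim, name="embedding"),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1)
])
###Output
_____no_output_____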
###Markdown
Compile and train the model Create a `tf.keras.callbacks.TensorBoard`.
###Code
# TODO: Your code goes here
###Output
_____no_output_____
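###Markdown
One possible completion of the callback cell, matching the linked solution notebook: it writes logs to a local `logs` directory, which is the same directory the TensorBoard command further down points at.
###Code
# One possible completion: log training metrics to ./logs for TensorBoard.
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
###Output
_____no_output_____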
###Markdown
Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
###Code
# TODO: Your code goes here
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
###Output
Epoch 1/10
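###Markdown
The compile step left as a TODO above can be filled in as follows (one possible completion, matching the linked solution notebook). `from_logits=True` is needed because the final `Dense(1)` layer has no sigmoid activation, so the model outputs raw logits.
###Code
# One possible completion of the TODO at the top of the previous cell (must run before model.fit).
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
###Output
_____no_output_____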
###Markdown
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting, since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, 100) 0
_________________________________________________________________
embedding (Embedding) (None, 100, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
###Markdown
Visualize the model metrics in TensorBoard.
###Code
!tensorboard --bind_all --port=8081 --logdir logs
###Output
_____no_output_____
###Markdown
Run the following command in **Cloud Shell**: `gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081` Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id> with your own values. In Cloud Shell, click **Web Preview** > **Change Port** and insert port number **8081**. Click **Change and Preview** to open the TensorBoard. ![embeddings_classifier_accuracy.png](assets/embeddings_classifier_accuracy.png) **To quit the TensorBoard, click Kernel > Interrupt kernel**. Retrieve the trained word embeddings and save them to diskNext, retrieve the word embeddings learned during training. The embeddings are the weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`. Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
###Code
weights = # TODO: Your code goes here
vocab = # TODO: Your code goes here
###Output
_____no_output_____
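###Markdown
One possible completion, matching the linked solution notebook: the embedding weights come from the layer named `embedding`, and the vocabulary comes from the `TextVectorization` layer, index-aligned with the weight rows.
###Code
# One possible completion of the cell above.
weights = model.get_layer('embedding').get_weights()[0]  # shape: (vocab_size, embedding_dim)
vocab = vectorize_layer.get_vocabulary()                 # list of tokens, index-aligned with weights
###Output
_____no_output_____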
###Markdown
Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
###Code
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
###Output
_____no_output_____
###Markdown
Two files will be created: `vectors.tsv` and `metadata.tsv`. Download both files.
###Code
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
###Output
_____no_output_____
###Markdown
Word Embeddings **Learning Objectives**You will learn:1. How to use the Embedding layer1. How to create a classification model1. How to compile and train the model1. How to retrieve the trained word embeddings, save them to disk, and visualize them. Introduction This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below). Representing text as numbersMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodingsAs a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram. To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique numberA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full). There are two downsides to this approach, however:* The integer-encoding is arbitrary (it does not capture any relationship between words).* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddingsWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn. Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as a "lookup table". 
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.Each learning objective will correspond to a __TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb) for reference. Setup
###Code
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
This notebook uses TF 2.x. Please check your TensorFlow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0-dev20210114
###Markdown
Download the IMDb DatasetYou will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) throughout this tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the [Loading text tutorial](../load_data/text.ipynb). Download the dataset using the Keras file utility and take a look at the directories.
###Code
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
###Output
Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Markdown
Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
###Code
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
###Output
_____no_output_____
###Markdown
The `train` directory also has additional folders, which should be removed before creating the training dataset.
###Code
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
###Output
_____no_output_____
###Markdown
Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification). Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
###Code
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
###Output
Found 25000 files belonging to 2 classes.
Using 20000 files for training.
###Markdown
Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
###Code
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
###Output
0 b"Oh My God! Please, for the love of all that is holy, Do Not Watch This Movie! It it 82 minutes of my life I will never get back. Sure, I could have stopped watching half way through. But I thought it might get better. It Didn't. Anyone who actually enjoyed this movie is one seriously sick and twisted individual. No wonder us Australians/New Zealanders have a terrible reputation when it comes to making movies. Everything about this movie is horrible, from the acting to the editing. I don't even normally write reviews on here, but in this case I'll make an exception. I only wish someone had of warned me before I hired this catastrophe"
1 b'This movie is SOOOO funny!!! The acting is WONDERFUL, the Ramones are sexy, the jokes are subtle, and the plot is just what every high schooler dreams of doing to his/her school. I absolutely loved the soundtrack as well as the carefully placed cynicism. If you like monty python, You will love this film. This movie is a tad bit "grease"esk (without all the annoying songs). The songs that are sung are likable; you might even find yourself singing these songs once the movie is through. This musical ranks number two in musicals to me (second next to the blues brothers). But please, do not think of it as a musical per say; seeing as how the songs are so likable, it is hard to tell a carefully choreographed scene is taking place. I think of this movie as more of a comedy with undertones of romance. You will be reminded of what it was like to be a rebellious teenager; needless to say, you will be reminiscing of your old high school days after seeing this film. Highly recommended for both the family (since it is a very youthful but also for adults since there are many jokes that are funnier with age and experience.'
0 b"Alex D. Linz replaces Macaulay Culkin as the central figure in the third movie in the Home Alone empire. Four industrial spies acquire a missile guidance system computer chip and smuggle it through an airport inside a remote controlled toy car. Because of baggage confusion, grouchy Mrs. Hess (Marian Seldes) gets the car. She gives it to her neighbor, Alex (Linz), just before the spies turn up. The spies rent a house in order to burglarize each house in the neighborhood until they locate the car. Home alone with the chicken pox, Alex calls 911 each time he spots a theft in progress, but the spies always manage to elude the police while Alex is accused of making prank calls. The spies finally turn their attentions toward Alex, unaware that he has rigged devices to cleverly booby-trap his entire house. Home Alone 3 wasn't horrible, but probably shouldn't have been made, you can't just replace Macauley Culkin, Joe Pesci, or Daniel Stern. Home Alone 3 had some funny parts, but I don't like when characters are changed in a movie series, view at own risk."
0 b"There's a good movie lurking here, but this isn't it. The basic idea is good: to explore the moral issues that would face a group of young survivors of the apocalypse. But the logic is so muddled that it's impossible to get involved.<br /><br />For example, our four heroes are (understandably) paranoid about catching the mysterious airborne contagion that's wiped out virtually all of mankind. Yet they wear surgical masks some times, not others. Some times they're fanatical about wiping down with bleach any area touched by an infected person. Other times, they seem completely unconcerned.<br /><br />Worse, after apparently surviving some weeks or months in this new kill-or-be-killed world, these people constantly behave like total newbs. They don't bother accumulating proper equipment, or food. They're forever running out of fuel in the middle of nowhere. They don't take elementary precautions when meeting strangers. And after wading through the rotting corpses of the entire human race, they're as squeamish as sheltered debutantes. You have to constantly wonder how they could have survived this long... and even if they did, why anyone would want to make a movie about them.<br /><br />So when these dweebs stop to agonize over the moral dimensions of their actions, it's impossible to take their soul-searching seriously. Their actions would first have to make some kind of minimal sense.<br /><br />On top of all this, we must contend with the dubious acting abilities of Chris Pine. His portrayal of an arrogant young James T Kirk might have seemed shrewd, when viewed in isolation. But in Carriers he plays on exactly that same note: arrogant and boneheaded. It's impossible not to suspect that this constitutes his entire dramatic range.<br /><br />On the positive side, the film *looks* excellent. It's got an over-sharp, saturated look that really suits the southwestern US locale. But that can't save the truly feeble writing nor the paper-thin (and annoying) characters. Even if you're a fan of the end-of-the-world genre, you should save yourself the agony of watching Carriers."
0 b'I saw this movie at an actual movie theater (probably the $2.00 one) with my cousin and uncle. We were around 11 and 12, I guess, and really into scary movies. I remember being so excited to see it because my cool uncle let us pick the movie (and we probably never got to do that again!) and sooo disappointed afterwards!! Just boring and not scary. The only redeeming thing I can remember was Corky Pigeon from Silver Spoons, and that wasn\'t all that great, just someone I recognized. I\'ve seen bad movies before and this one has always stuck out in my mind as the worst. This was from what I can recall, one of the most boring, non-scary, waste of our collective $6, and a waste of film. I have read some of the reviews that say it is worth a watch and I say, "Too each his own", but I wouldn\'t even bother. Not even so bad it\'s good.'
###Markdown
Configure the dataset for performanceThese are two important methods you should use when loading data to make sure that I/O does not become blocking.`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.`.prefetch()` overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Using the Embedding layerKeras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
###Code
# Embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)
###Output
_____no_output_____
###Markdown
When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
###Code
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
###Output
_____no_output_____
###Markdown
For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (a batch of 32 sequences of length 10) or `(64, 15)` (a batch of 64 sequences of length 15). The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`.
###Code
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
###Output
_____no_output_____
###Markdown
When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The [Text Classification with an RNN](text_classification_rnn.ipynb) tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
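As a quick aside before building the preprocessing pipeline, the pooling idea mentioned above can be sketched in isolation. This reuses the `(2, 3, 5)` tensor `result` produced two cells earlier, so it is illustrative rather than part of the pipeline.
###Code
# Average over the sequence axis: (2, 3, 5) -> (2, 5), one fixed-length vector per example.
pooled = tf.keras.layers.GlobalAveragePooling1D()(result)
pooled.shape
###Output
_____no_output_____
###Markdown
The classification model below applies exactly this kind of pooling between the Embedding layer and the Dense layers. Now, on to the preprocessing itself: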
###Code
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
###Output
_____no_output_____
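###Markdown
To sanity-check the adapted layer, you can run a raw sentence through it directly. The sketch below is an illustrative addition: the sample sentence is an arbitrary choice, and the exact integer ids depend on the vocabulary learned from your copy of the data.
###Code
# Illustrative: map a raw string to padded integer ids using the adapted layer.
sample = tf.constant(["the movie was great"])
print(vectorize_layer(sample))  # shape (1, 100): word ids followed by zero padding
###Output
_____no_output_____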
###Markdown
Create a classification modelUse the [Keras Sequential API](../../guide/keras) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.* The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built its vocabulary by calling `adapt` on `text_ds`. Now `vectorize_layer` can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.* The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.* The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.* The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.* The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](../../guide/keras/masking_and_padding).
###Code
embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(16, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile and train the model Create a `tf.keras.callbacks.TensorBoard`.
###Code
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
###Output
_____no_output_____
###Markdown
Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
###Output
Epoch 1/10
###Markdown
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting, since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, 100) 0
_________________________________________________________________
embedding (Embedding) (None, 100, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
###Markdown
Visualize the model metrics in TensorBoard.
###Code
!tensorboard --bind_all --port=8081 --logdir logs
###Output
_____no_output_____
###Markdown
Run the following command in **Cloud Shell**: `gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081` Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id> with your own values. In Cloud Shell, click **Web Preview** > **Change Port** and insert port number **8081**. Click **Change and Preview** to open the TensorBoard. ![embeddings_classifier_accuracy.png](assets/embeddings_classifier_accuracy.png) **To quit the TensorBoard, click Kernel > Interrupt kernel**. Retrieve the trained word embeddings and save them to diskNext, retrieve the word embeddings learned during training. The embeddings are the weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`. Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
###Code
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
###Output
_____no_output_____
###Markdown
Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
###Code
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
###Output
_____no_output_____
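###Markdown
Besides uploading to the Embedding Projector, you can inspect the learned space directly in the notebook. The sketch below is an addition to the lab: it ranks vocabulary words by cosine similarity to a query word. The helper name and the query word ("great") are illustrative choices, results will vary between training runs, and the query must be a word that made it into the 10,000-token vocabulary.
###Code
import numpy as np
def nearest_neighbors(query, k=5):
    """Return the k vocabulary words whose embeddings are closest to `query` by cosine similarity."""
    idx = vocab.index(query)                                   # raises ValueError if query is out of vocabulary
    normed = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    best = np.argsort(-sims)[1:k + 1]                          # skip the query word itself
    return [(vocab[i], float(sims[i])) for i in best]
print(nearest_neighbors('great'))
###Output
_____no_output_____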
###Markdown
Two files will be created: `vectors.tsv` and `metadata.tsv`. Download both files.
###Code
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
###Output
_____no_output_____
###Markdown
Word Embeddings **Learning Objectives**You will learn:1. How to use Embedding layer1. How to create a classification model1. Compile and train the model1. How to retrieve the trained word embeddings, save them to disk and visualize it. Introduction This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the [Embedding Projector](http://projector.tensorflow.org) (shown in the image below). ![img](assets/embedding.jpg) Representing text as numbersMachine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so. One-hot encodingsAs a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.![img](assets/one-hot.png)To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero. Encode each word with a unique numberA second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).There are two downsides to this approach, however:* The integer-encoding is arbitrary (it does not capture any relationship between words).* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddingsWord embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.![img](assets/embedding2.png)Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". 
After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.Each learning objective will correspond to a __TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word_embeddings.ipynb) for reference. Setup
###Code
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
This notebook uses TF 2.x. Please check your TensorFlow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.6.0
###Markdown
Download the IMDb DatasetYou will use the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/) throughout this tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the [Loading text tutorial](https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/load_data/text.ipynb). Download the dataset using the Keras file utility and take a look at the directories.
###Code
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
###Output
Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Markdown
Take a look at the `train/` directory. It has `pos` and `neg` folders with movie reviews labelled as positive and negative respectively. You will use reviews from `pos` and `neg` folders to train a binary classification model.
###Code
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
###Output
_____no_output_____
###Markdown
The `train` directory also has additional folders, which should be removed before creating the training dataset.
###Code
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
###Output
_____no_output_____
###Markdown
Next, create a `tf.data.Dataset` using `tf.keras.preprocessing.text_dataset_from_directory`. You can read more about using this utility in this [text classification tutorial](https://www.tensorflow.org/tutorials/keras/text_classification). Use the `train` directory to create both train and validation datasets with a split of 20% for validation.
###Code
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
###Output
Found 25000 files belonging to 2 classes.
Using 20000 files for training.
###Markdown
Take a look at a few movie reviews and their labels `(1: positive, 0: negative)` from the train dataset.
###Code
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
###Output
0 b"Oh My God! Please, for the love of all that is holy, Do Not Watch This Movie! It it 82 minutes of my life I will never get back. Sure, I could have stopped watching half way through. But I thought it might get better. It Didn't. Anyone who actually enjoyed this movie is one seriously sick and twisted individual. No wonder us Australians/New Zealanders have a terrible reputation when it comes to making movies. Everything about this movie is horrible, from the acting to the editing. I don't even normally write reviews on here, but in this case I'll make an exception. I only wish someone had of warned me before I hired this catastrophe"
1 b'This movie is SOOOO funny!!! The acting is WONDERFUL, the Ramones are sexy, the jokes are subtle, and the plot is just what every high schooler dreams of doing to his/her school. I absolutely loved the soundtrack as well as the carefully placed cynicism. If you like monty python, You will love this film. This movie is a tad bit "grease"esk (without all the annoying songs). The songs that are sung are likable; you might even find yourself singing these songs once the movie is through. This musical ranks number two in musicals to me (second next to the blues brothers). But please, do not think of it as a musical per say; seeing as how the songs are so likable, it is hard to tell a carefully choreographed scene is taking place. I think of this movie as more of a comedy with undertones of romance. You will be reminded of what it was like to be a rebellious teenager; needless to say, you will be reminiscing of your old high school days after seeing this film. Highly recommended for both the family (since it is a very youthful but also for adults since there are many jokes that are funnier with age and experience.'
0 b"Alex D. Linz replaces Macaulay Culkin as the central figure in the third movie in the Home Alone empire. Four industrial spies acquire a missile guidance system computer chip and smuggle it through an airport inside a remote controlled toy car. Because of baggage confusion, grouchy Mrs. Hess (Marian Seldes) gets the car. She gives it to her neighbor, Alex (Linz), just before the spies turn up. The spies rent a house in order to burglarize each house in the neighborhood until they locate the car. Home alone with the chicken pox, Alex calls 911 each time he spots a theft in progress, but the spies always manage to elude the police while Alex is accused of making prank calls. The spies finally turn their attentions toward Alex, unaware that he has rigged devices to cleverly booby-trap his entire house. Home Alone 3 wasn't horrible, but probably shouldn't have been made, you can't just replace Macauley Culkin, Joe Pesci, or Daniel Stern. Home Alone 3 had some funny parts, but I don't like when characters are changed in a movie series, view at own risk."
0 b"There's a good movie lurking here, but this isn't it. The basic idea is good: to explore the moral issues that would face a group of young survivors of the apocalypse. But the logic is so muddled that it's impossible to get involved.<br /><br />For example, our four heroes are (understandably) paranoid about catching the mysterious airborne contagion that's wiped out virtually all of mankind. Yet they wear surgical masks some times, not others. Some times they're fanatical about wiping down with bleach any area touched by an infected person. Other times, they seem completely unconcerned.<br /><br />Worse, after apparently surviving some weeks or months in this new kill-or-be-killed world, these people constantly behave like total newbs. They don't bother accumulating proper equipment, or food. They're forever running out of fuel in the middle of nowhere. They don't take elementary precautions when meeting strangers. And after wading through the rotting corpses of the entire human race, they're as squeamish as sheltered debutantes. You have to constantly wonder how they could have survived this long... and even if they did, why anyone would want to make a movie about them.<br /><br />So when these dweebs stop to agonize over the moral dimensions of their actions, it's impossible to take their soul-searching seriously. Their actions would first have to make some kind of minimal sense.<br /><br />On top of all this, we must contend with the dubious acting abilities of Chris Pine. His portrayal of an arrogant young James T Kirk might have seemed shrewd, when viewed in isolation. But in Carriers he plays on exactly that same note: arrogant and boneheaded. It's impossible not to suspect that this constitutes his entire dramatic range.<br /><br />On the positive side, the film *looks* excellent. It's got an over-sharp, saturated look that really suits the southwestern US locale. But that can't save the truly feeble writing nor the paper-thin (and annoying) characters. Even if you're a fan of the end-of-the-world genre, you should save yourself the agony of watching Carriers."
0 b'I saw this movie at an actual movie theater (probably the $2.00 one) with my cousin and uncle. We were around 11 and 12, I guess, and really into scary movies. I remember being so excited to see it because my cool uncle let us pick the movie (and we probably never got to do that again!) and sooo disappointed afterwards!! Just boring and not scary. The only redeeming thing I can remember was Corky Pigeon from Silver Spoons, and that wasn\'t all that great, just someone I recognized. I\'ve seen bad movies before and this one has always stuck out in my mind as the worst. This was from what I can recall, one of the most boring, non-scary, waste of our collective $6, and a waste of film. I have read some of the reviews that say it is worth a watch and I say, "Too each his own", but I wouldn\'t even bother. Not even so bad it\'s good.'
###Markdown
Configure the dataset for performanceThese are two important methods you should use when loading data to make sure that I/O does not become blocking.`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.`.prefetch()` overlaps data preprocessing and model execution while training. You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
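###Markdown
If you want to confirm what the cached pipeline yields, the quick inspection below (an addition to the lab, not a required step) prints the element structure and the shape of one batch.
###Code
# Illustrative: each element of the pipeline is a (text batch, label batch) pair.
print(train_ds.element_spec)
for texts, labels in train_ds.take(1):
    print(texts.shape, labels.shape)  # (1024,) strings and (1024,) integer labels for a full batch
###Output
_____no_output_____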
###Markdown
Using the Embedding layerKeras makes it easy to use word embeddings. Take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
###Code
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO: Your code goes here
###Output
_____no_output_____
###Markdown
When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
###Code
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
###Output
_____no_output_____
###Markdown
For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes `(32, 10)` (a batch of 32 sequences of length 10) or `(64, 15)` (a batch of 64 sequences of length 15). The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a `(2, 3)` input batch and the output is `(2, 3, N)`.
###Code
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
###Output
_____no_output_____
###Markdown
When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape `(samples, sequence_length, embedding_dimensionality)`. To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The [Text Classification with an RNN](text_classification_rnn.ipynb) tutorial is a good next step. Text preprocessing Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial.
###Code
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
###Output
_____no_output_____
###Markdown
Create a classification model. Use the [Keras Sequential API](https://www.tensorflow.org/guide/keras/) to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.* The [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) layer transforms strings into vocabulary indices. You have already initialized `vectorize_layer` as a TextVectorization layer and built its vocabulary by calling `adapt` on `text_ds`. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.* The [`Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.* The [`GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.* The fixed-length output vector is piped through a fully-connected ([`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer with 16 hidden units.* The last layer is densely connected with a single output node. Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the [masking and padding guide](https://www.tensorflow.org/guide/keras/masking_and_padding).
###Code
embedding_dim=16
# TODO: Your code goes here
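# A minimal sketch of one possible model, consistent with the layer summary
# shown later in this notebook (the layer order and the name "embedding" are
# assumptions drawn from that summary):
model = tf.keras.Sequential([
    vectorize_layer,
    tf.keras.layers.Embedding(vocab_size, embedding_dim, name="embedding"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)
])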
###Output
_____no_output_____
###Markdown
Compile and train the model Create a `tf.keras.callbacks.TensorBoard`.
###Code
# TODO: Your code goes here
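# One possible solution (sketch): the log directory "logs" is chosen to match
# the `--logdir logs` flag used in the TensorBoard cell below.
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")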
###Output
_____no_output_____
###Markdown
Compile and train the model using the `Adam` optimizer and `BinaryCrossentropy` loss.
###Code
# TODO: Your code goes here
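# One possible compile step (sketch): Adam optimizer and BinaryCrossentropy
# loss as described above; from_logits=True is an assumption, since the last
# Dense layer has no sigmoid activation.
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])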
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
###Output
Epoch 1/10
###Markdown
With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher). Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer. You can look into the model summary to learn more about each layer of the model.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, 100) 0
_________________________________________________________________
embedding (Embedding) (None, 100, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
###Markdown
Visualize the model metrics in TensorBoard.
###Code
!tensorboard --bind_all --port=8081 --logdir logs
###Output
_____no_output_____
###Markdown
Run the following command in **Cloud Shell**: gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081 Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id>. In Cloud Shell, click **Web Preview** > **Change Port** and insert port number **8081**. Click **Change and Preview** to open the TensorBoard. ![embeddings_classifier_accuracy.png](assets/embeddings_classifier_accuracy.png) **To quit the TensorBoard, click Kernel > Interrupt kernel**. Retrieve the trained word embeddings and save them to disk. Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape `(vocab_size, embedding_dimension)`. Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
###Code
# TODO: Your code goes here (one possible solution sketched below, per the hints above)
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
###Output
_____no_output_____
###Markdown
Write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
###Code
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
###Output
_____no_output_____
###Markdown
Two files will be created, `vectors.tsv` and `metadata.tsv`. Download both files.
###Code
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
###Output
_____no_output_____ |
python/pillow-example/pillow-rabbit.ipynb | ###Markdown
![Image of rabbit_half](rabbit_half.jpg)
###Code
# Resize the image to 512x512, disregarding aspect ratio
small_size = [ 512, 512 ]
small_img = img.resize(small_size)
small_img.save('rabbit_512x512.jpg')
###Output
_____no_output_____
###Markdown
![Image of rabbit_512x512](rabbit_512x512.jpg)
###Code
# Make a thumbnail, keeping the same aspect ratio
# where the length and width are 534 pixels max
max_size = (534, 534)
small_img = img.copy()
small_img.thumbnail(max_size)
small_img.save('rabbit_thumb.jpg')
###Output
_____no_output_____
###Markdown
![Image of rabbit_thumb](rabbit_thumb.jpg)
###Code
# Use the smaller image from here on out
img = small_img
# Crop the image
# upper left x,y; lower right x,y
box = (0, 160, 356, 460)
small_img = img.crop(box)
small_img.save('rabbit_crop.jpg')
###Output
_____no_output_____
###Markdown
![Image of rabbit_crop](rabbit_crop.jpg)
###Code
# Add an IBM i watermark to the rabbit
position = ( \
(img.width - logo.width - 5), \
(img.height - logo.height - 5))
marked_image = img.copy()
marked_image.paste(logo, position, logo)
marked_image.save('rabbit_watermarked.jpg')
###Output
_____no_output_____ |
Lessons/Lesson16_Pandas-Subsetting-I.ipynb | ###Markdown
Subsetting Pandas DataFrames I. You now know how to read external datasets into `pandas`. Let's put those skills to use and read in the `tips` dataset again:
###Code
# import the pandas package
import pandas as pd
# set the path
path = 'https://raw.githubusercontent.com/GWC-DCMB/curriculum-notebooks/master/'
# load tips
tips = pd.read_csv(path + 'SampleData/tips.csv')
###Output
_____no_output_____
###Markdown
Take a look again at the beginning of the `tips` `DataFrame`:
###Code
# view the beginning of tips
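# One possible answer (sketch):
tips.head()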
###Output
_____no_output_____
###Markdown
What if we decided we didn't want to keep all of the data recorded in this dataset? To do that, we need to learn how to `subset` `DataFrames`. Subsetting means taking a dataset and pulling out a small portion of it that we're interested in. First, we'll look at a single column (you can use `head` to keep the printed result short):
###Code
# subset one column
###Output
_____no_output_____
###Markdown
We use the square brackets [ ] after the name of the `DataFrame` to tell `pandas` that we want to look at one of the columns. We put the name of the column in quotes to tell `pandas` exactly which column we want to look at. Try subsetting the `total_bill` column:
###Code
# subset the total_bill column
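# One possible answer (sketch):
tips['total_bill'].head()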
###Output
_____no_output_____
###Markdown
`pandas` simply showed us the result of subsetting the column, but it didn't save the result anywhere. Try saving the `total_bill` column to a new variable, `bills`:
###Code
# save the total_bill column to a variable
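# One possible answer (sketch), using the variable name suggested above:
bills = tips['total_bill']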
###Output
_____no_output_____
###Markdown
We can also pull out multiple columns at a time to create a new `DataFrame`. If we were only interested in the `total_bill` and `tip`, we can subset them like this:
###Code
# subset the columns total_bill and tip
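# One possible answer (sketch):
tips[['total_bill', 'tip']].head(10)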
###Output
_____no_output_____
###Markdown
Does that look familiar? Instead of putting a single string between the square brackets, we put a whole list of strings -- you can tell it's a list by the second set of square brackets. You can also create the list of columns you're interested in and subset the dataframe in two separate steps. This code works exactly the same as what we just did above.
###Code
columns = ['total_bill', 'tip']
tips[columns].head(10)
###Output
_____no_output_____
###Markdown
Now you try: subset the columns `total_bill`, `tip`, and `time` and save the result to a variable called `tips_subset`:
###Code
# subset three columns and save to a new variable
# take a look at the beginning of the new DataFrame
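# One possible answer (sketch), using the variable name suggested above:
tips_subset = tips[['total_bill', 'tip', 'time']]
tips_subset.head()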
###Output
_____no_output_____
###Markdown
Now we've learned how to subset columns. How do we subset rows? We use a `method` of `DataFrame` called `iloc`. When you see `iloc`, think "index location" -- because we want to get the location where the row is a certain index. Let's try it:
###Code
# subset a row
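# One possible answer (sketch), matching the index-1 row discussed below:
tips.iloc[1]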
###Output
_____no_output_____
###Markdown
That showed us the row with an index of 1. Similarly to subsetting columns, we can also subset multiple rows:
###Code
# subset multiple rows
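# One possible answer (sketch), giving the rows with index 0, 1, or 2:
tips.iloc[[0, 1, 2]]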
###Output
_____no_output_____
###Markdown
That gave us a smaller `DataFrame` where the rows have an index of 0, 1, or 2. We can do the same thing with slicing syntax:
###Code
# subset the first three rows
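# One possible answer (sketch), using slicing syntax:
tips.iloc[0:3]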
###Output
_____no_output_____
###Markdown
Notice that this does the same thing as calling `head` with a value of 3:
###Code
# use head
###Output
_____no_output_____
###Markdown
What if we want to grab some rows in the middle of the `DataFrame`? Try subsetting the 100th through 105th row. Hint: Don't forget that counting starts at `0` in Python!
###Code
# subset the 100th row through the 105th row
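# One possible answer (sketch): counting from 0, the 100th row sits at
# position 99, so this slice is one reasonable interpretation:
tips.iloc[99:105]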
###Output
_____no_output_____ |
examples/sp500-components-timeseries/sp500-components-timeseries.ipynb | ###Markdown
S&P 500 Components Time Series: Get time series of all S&P 500 components
###Code
from datetime import datetime
import pandas as pd
import pinkfish as pf
#pd.options.mode.chained_assignment = None # default='warn'
#pd.set_option('display.max_rows', 600)
# -*- encoding: utf-8 -*-
%matplotlib inline
#%%javascript
#IPython.OutputArea.prototype._should_scroll = function(lines) {
# return false;
#}
###Output
_____no_output_____
###Markdown
Current S&P 500 symbols. See my SP500 project that generates the sp500.csv file.
###Code
filename = 'sp500.csv'
symbols = pd.read_csv(filename)
symbols = sorted(list(symbols['Symbol']))
print(symbols)
###Output
['A', 'AAL', 'AAP', 'AAPL', 'ABBV', 'ABC', 'ABMD', 'ABT', 'ACN', 'ADBE', 'ADI', 'ADM', 'ADP', 'ADS', 'ADSK', 'AEE', 'AEP', 'AES', 'AFL', 'AGN', 'AIG', 'AIV', 'AIZ', 'AJG', 'AKAM', 'ALB', 'ALGN', 'ALK', 'ALL', 'ALLE', 'ALXN', 'AMAT', 'AMCR', 'AMD', 'AME', 'AMGN', 'AMP', 'AMT', 'AMZN', 'ANET', 'ANSS', 'ANTM', 'AON', 'AOS', 'APA', 'APD', 'APH', 'APTV', 'ARE', 'ATO', 'ATVI', 'AVB', 'AVGO', 'AVY', 'AWK', 'AXP', 'AZO', 'BA', 'BAC', 'BAX', 'BBY', 'BDX', 'BEN', 'BF.B', 'BIIB', 'BK', 'BKNG', 'BKR', 'BLK', 'BLL', 'BMY', 'BR', 'BRK.B', 'BSX', 'BWA', 'BXP', 'C', 'CAG', 'CAH', 'CARR', 'CAT', 'CB', 'CBOE', 'CBRE', 'CCI', 'CCL', 'CDNS', 'CDW', 'CE', 'CERN', 'CF', 'CFG', 'CHD', 'CHRW', 'CHTR', 'CI', 'CINF', 'CL', 'CLX', 'CMA', 'CMCSA', 'CME', 'CMG', 'CMI', 'CMS', 'CNC', 'CNP', 'COF', 'COG', 'COO', 'COP', 'COST', 'COTY', 'CPB', 'CPRI', 'CPRT', 'CRM', 'CSCO', 'CSX', 'CTAS', 'CTL', 'CTSH', 'CTVA', 'CTXS', 'CVS', 'CVX', 'CXO', 'D', 'DAL', 'DD', 'DE', 'DFS', 'DG', 'DGX', 'DHI', 'DHR', 'DIS', 'DISCA', 'DISCK', 'DISH', 'DLR', 'DLTR', 'DOV', 'DOW', 'DRE', 'DRI', 'DTE', 'DUK', 'DVA', 'DVN', 'DXC', 'EA', 'EBAY', 'ECL', 'ED', 'EFX', 'EIX', 'EL', 'EMN', 'EMR', 'EOG', 'EQIX', 'EQR', 'ES', 'ESS', 'ETFC', 'ETN', 'ETR', 'EVRG', 'EW', 'EXC', 'EXPD', 'EXPE', 'EXR', 'F', 'FANG', 'FAST', 'FB', 'FBHS', 'FCX', 'FDX', 'FE', 'FFIV', 'FIS', 'FISV', 'FITB', 'FLIR', 'FLS', 'FLT', 'FMC', 'FOX', 'FOXA', 'FRC', 'FRT', 'FTI', 'FTNT', 'FTV', 'GD', 'GE', 'GILD', 'GIS', 'GL', 'GLW', 'GM', 'GOOG', 'GOOGL', 'GPC', 'GPN', 'GPS', 'GRMN', 'GS', 'GWW', 'HAL', 'HAS', 'HBAN', 'HBI', 'HCA', 'HD', 'HES', 'HFC', 'HIG', 'HII', 'HLT', 'HOG', 'HOLX', 'HON', 'HP', 'HPE', 'HPQ', 'HRB', 'HRL', 'HSIC', 'HST', 'HSY', 'HUM', 'HWM', 'IBM', 'ICE', 'IDXX', 'IEX', 'IFF', 'ILMN', 'INCY', 'INFO', 'INTC', 'INTU', 'IP', 'IPG', 'IPGP', 'IQV', 'IR', 'IRM', 'ISRG', 'IT', 'ITW', 'IVZ', 'J', 'JBHT', 'JCI', 'JKHY', 'JNJ', 'JNPR', 'JPM', 'JWN', 'K', 'KEY', 'KEYS', 'KHC', 'KIM', 'KLAC', 'KMB', 'KMI', 'KMX', 'KO', 'KR', 'KSS', 'KSU', 'L', 'LB', 'LDOS', 'LEG', 'LEN', 'LH', 'LHX', 'LIN', 'LKQ', 'LLY', 'LMT', 'LNC', 'LNT', 'LOW', 'LRCX', 'LUV', 'LVS', 'LW', 'LYB', 'LYV', 'MA', 'MAA', 'MAR', 'MAS', 'MCD', 'MCHP', 'MCK', 'MCO', 'MDLZ', 'MDT', 'MET', 'MGM', 'MHK', 'MKC', 'MKTX', 'MLM', 'MMC', 'MMM', 'MNST', 'MO', 'MOS', 'MPC', 'MRK', 'MRO', 'MS', 'MSCI', 'MSFT', 'MSI', 'MTB', 'MTD', 'MU', 'MXIM', 'MYL', 'NBL', 'NCLH', 'NDAQ', 'NEE', 'NEM', 'NFLX', 'NI', 'NKE', 'NLOK', 'NLSN', 'NOC', 'NOV', 'NOW', 'NRG', 'NSC', 'NTAP', 'NTRS', 'NUE', 'NVDA', 'NVR', 'NWL', 'NWS', 'NWSA', 'O', 'ODFL', 'OKE', 'OMC', 'ORCL', 'ORLY', 'OTIS', 'OXY', 'PAYC', 'PAYX', 'PBCT', 'PCAR', 'PEAK', 'PEG', 'PEP', 'PFE', 'PFG', 'PG', 'PGR', 'PH', 'PHM', 'PKG', 'PKI', 'PLD', 'PM', 'PNC', 'PNR', 'PNW', 'PPG', 'PPL', 'PRGO', 'PRU', 'PSA', 'PSX', 'PVH', 'PWR', 'PXD', 'PYPL', 'QCOM', 'QRVO', 'RCL', 'RE', 'REG', 'REGN', 'RF', 'RHI', 'RJF', 'RL', 'RMD', 'ROK', 'ROL', 'ROP', 'ROST', 'RSG', 'RTX', 'SBAC', 'SBUX', 'SCHW', 'SEE', 'SHW', 'SIVB', 'SJM', 'SLB', 'SLG', 'SNA', 'SNPS', 'SO', 'SPG', 'SPGI', 'SRE', 'STE', 'STT', 'STX', 'STZ', 'SWK', 'SWKS', 'SYF', 'SYK', 'SYY', 'T', 'TAP', 'TDG', 'TEL', 'TFC', 'TFX', 'TGT', 'TIF', 'TJX', 'TMO', 'TMUS', 'TPR', 'TROW', 'TRV', 'TSCO', 'TSN', 'TT', 'TTWO', 'TWTR', 'TXN', 'TXT', 'UA', 'UAA', 'UAL', 'UDR', 'UHS', 'ULTA', 'UNH', 'UNM', 'UNP', 'UPS', 'URI', 'USB', 'V', 'VAR', 'VFC', 'VIAC', 'VLO', 'VMC', 'VNO', 'VRSK', 'VRSN', 'VRTX', 'VTR', 'VZ', 'WAB', 'WAT', 'WBA', 'WDC', 'WEC', 'WELL', 'WFC', 'WHR', 'WLTW', 'WM', 'WMB', 'WMT', 'WRB', 'WRK', 'WU', 'WY', 'WYNN', 'XEL', 'XLNX', 'XOM', 
'XRAY', 'XRX', 'XYL', 'YUM', 'ZBH', 'ZBRA', 'ZION', 'ZTS']
###Markdown
Create cache directory for current sp500 symbol timeseries
###Code
now = datetime.now()
dt_string = now.strftime('%m-%d-%Y') # mm-dd-YYYY
dir_name = 'sp500-components-{}'.format(dt_string)
###Output
_____no_output_____
###Markdown
Update time series for the symbols below. Time series will be fetched for any symbols not already cached.
###Code
pf.update_cache_symbols(symbols=symbols, dir_name=dir_name,from_year=2018)
###Output
updating symbols:
A AAL AAP AAPL ABBV ABC ABMD ABT ACN ADBE ADI
ADM ADP ADS ADSK AEE AEP AES AFL AGN AIG
AIV AIZ AJG AKAM ALB ALGN ALK ALL ALLE ALXN
AMAT AMCR AMD AME AMGN AMP AMT AMZN ANET ANSS
ANTM AON AOS APA APD APH APTV ARE ATO ATVI
AVB AVGO AVY AWK AXP AZO BA BAC BAX BBY
BDX BEN BF.B BIIB BK BKNG BKR BLK BLL BMY
BR BRK.B BSX BWA BXP C CAG CAH CARR CAT
CB CBOE CBRE CCI CCL CDNS CDW CE CERN CF
CFG CHD CHRW CHTR CI CINF CL CLX CMA CMCSA
CME CMG CMI CMS CNC CNP COF COG COO COP
COST COTY CPB CPRI CPRT CRM CSCO CSX CTAS CTL
CTSH CTVA CTXS CVS CVX CXO D DAL DD DE
DFS DG DGX DHI DHR DIS DISCA DISCK DISH DLR
DLTR DOV DOW DRE DRI DTE DUK DVA DVN DXC
EA EBAY ECL ED EFX EIX EL EMN EMR EOG
EQIX EQR ES ESS ETFC ETN ETR EVRG EW EXC
EXPD EXPE EXR F FANG FAST FB FBHS FCX FDX
FE FFIV FIS FISV FITB FLIR FLS FLT FMC FOX
FOXA FRC FRT FTI FTNT FTV GD GE GILD GIS
GL GLW GM GOOG GOOGL GPC GPN GPS GRMN GS
GWW HAL HAS HBAN HBI HCA HD HES HFC HIG
HII HLT HOG HOLX HON HP HPE HPQ HRB HRL
HSIC HST HSY HUM HWM IBM ICE IDXX IEX IFF
ILMN INCY INFO INTC INTU IP IPG IPGP IQV IR
IRM ISRG IT ITW IVZ J JBHT JCI JKHY JNJ
JNPR JPM JWN K KEY KEYS KHC KIM KLAC KMB
KMI KMX KO KR KSS KSU L LB LDOS LEG
LEN LH LHX LIN LKQ LLY LMT LNC LNT LOW
LRCX LUV LVS LW LYB LYV MA MAA MAR MAS
MCD MCHP MCK MCO MDLZ MDT MET MGM MHK MKC
MKTX MLM MMC MMM MNST MO MOS MPC MRK MRO
MS MSCI MSFT MSI MTB MTD MU MXIM MYL NBL
NCLH NDAQ NEE NEM NFLX NI NKE NLOK NLSN NOC
NOV NOW NRG NSC NTAP NTRS NUE NVDA NVR NWL
NWS NWSA O ODFL OKE OMC ORCL ORLY OTIS OXY
PAYC PAYX PBCT PCAR PEAK PEG PEP PFE PFG PG
PGR PH PHM PKG PKI PLD PM PNC PNR PNW
PPG PPL PRGO PRU PSA PSX PVH PWR PXD PYPL
QCOM QRVO RCL RE REG REGN RF RHI RJF RL
RMD ROK ROL ROP ROST RSG RTX SBAC SBUX SCHW
SEE SHW SIVB SJM SLB SLG SNA SNPS SO SPG
SPGI SRE STE STT STX STZ SWK SWKS SYF SYK
SYY T TAP TDG TEL TFC TFX TGT TIF TJX
TMO TMUS TPR TROW TRV TSCO TSN TT TTWO TWTR
TXN TXT UA UAA UAL UDR UHS ULTA UNH UNM
UNP UPS URI USB V VAR VFC VIAC VLO VMC
VNO VRSK VRSN VRTX VTR VZ WAB WAT WBA WDC
WEC WELL WFC WHR WLTW WM WMB WMT WRB WRK
WU WY WYNN XEL XLNX XOM XRAY XRX XYL YUM
ZBH ZBRA ZION ZTS
|
Data Analysis/01.Data Analysis Process/05.Drawing Conclusions.ipynb | ###Markdown
Which store has the highest total sales for the final month of data?
###Code
# total sales for the last month
# Last month starts from Index 196 till end
df.iloc[196:, 1:].sum()
###Output
_____no_output_____
###Markdown
As per the findings, Store A is the top store, with the largest sales for the last month. Which store makes the most sales on average?
###Code
# average sales
df.iloc[:, 1:].mean()
###Output
_____no_output_____
###Markdown
As per the findings, Store B made the most sales on average. Which store sells the most during the week of March 13th, 2016?
###Code
# sales on march 13, 2016
df.head()
df[df["week"] == "2016-03-13"]
###Output
_____no_output_____
###Markdown
As per the findings, Store D sold the most during the week of March 13, 2016. In what week does store C make its worst sales?
###Code
# worst week for store C
df.sort_values(by=["storeC"], ascending=True)[["week", "storeC"]]
# worst week for store C
df[df["storeC"] == df["storeC"].min()][["week", "storeC"]]
###Output
_____no_output_____
###Markdown
As per the findings, the week of 6 July 2014 was the worst sales week for Store C. Which store has the most sales in the latest 3-month period?
###Code
# total sales during most recent 3 month period
df.tail()
last_three_months = df[df['week'] >= '2017-12-01']
last_three_months
last_three_months.iloc[:, 1:].sum()
###Output
_____no_output_____ |
examples/01-filter/cell-centers.ipynb | ###Markdown
Extract Cell Centers {cell_centers_example}: Extract the coordinates of the centers of all cells/faces in a mesh. Here we use `pyvista.DataSetFilters.cell_centers`.
###Code
import pyvista as pv
from pyvista import examples
###Output
_____no_output_____
###Markdown
First let's fetch the centers of a mesh with 2D geometries (a surface)
###Code
mesh = examples.download_teapot()
cpos = [
(6.192871661244108, 5.687542355343226, -4.95345468836544),
(0.48853358141600634, 1.2019347531215714, 0.1656178278582367),
(-0.40642070472687936, 0.8621356761976646, 0.30256286387543047),
]
centers = mesh.cell_centers()
p = pv.Plotter()
p.add_mesh(mesh, show_edges=True, line_width=1)
p.add_mesh(centers, color="r", point_size=8.0, render_points_as_spheres=True)
p.show(cpos=cpos)
###Output
_____no_output_____
###Markdown
We can also do this for full 3D meshes.
###Code
grid = examples.download_letter_a()
cpos = [
(2.704583323659036, 0.7822568412034183, 1.7251126717482546),
(3.543391913452799, 0.31117673768140197, 0.16407006760146028),
(0.1481171795711516, 0.96599698246102, -0.2119224645762945),
]
centers = grid.cell_centers()
p = pv.Plotter()
p.add_mesh(grid, show_edges=True, opacity=0.5, line_width=1)
p.add_mesh(centers, color="r", point_size=8.0, render_points_as_spheres=True)
p.show(cpos=cpos)
p = pv.Plotter()
p.add_mesh(grid.extract_all_edges(), color="k", line_width=1)
p.add_mesh(centers, color="r", point_size=8.0, render_points_as_spheres=True)
p.show(cpos=cpos)
###Output
_____no_output_____ |
Semantic_Group/New_variables.ipynb | ###Markdown
Columns used to classify comments
**Sentiment comment variables:**
+ c_rating3
+ c_rating
**Comment related:**
+ c_text : contains the comment text
+ c_rating : evaluation of the comment (positive, problematic, negative...)
+ c_ratingCivile : is it a respectful comment (respectful/disrespectful)
+ c_ratingPosNeg : positive or negative attitude with respect to the post
+ c_category : topic of the comment (muslim refugees, muslims...)
###Code
database = pd.read_csv("database/id_lemmas.csv",index_col = 0 , sep=',', engine='python')
database['text_nlp'] = database.apply(lambda row: word_tokenize(row['text_nlp']), axis = 1)
database
attributes_to_keep = ['c_rating3', 'c_rating']
database_comments = pd.read_csv("database/com_liwc.csv", sep='\t', engine='python')
database_comments.head(2)
database_comments_attr = database_comments[attributes_to_keep]
#free memory
del database_comments
database_comments_attr.shape
###Output
_____no_output_____
###Markdown
Keep only the comments whose index is still present in the cleaned database, since some of them might have been dropped during the text cleaning.
###Code
database_comments_attr = database_comments_attr.iloc[database.index,:]
print("Shape of the database:", database_comments_attr.shape)
database_comments_attr.head()
###Output
Shape of the database: (75775, 2)
###Markdown
C_rating3. Using only c_rating3 as the column of attributes to be kept.
###Code
list_columns = list(database_comments_attr.c_rating3.unique())
list_columns.insert(0, 'word')
df_word_c_rating3 = pd.DataFrame(columns = list_columns)
df_word_c_rating3
database_attr = database.join(database_comments_attr)
database_attr.head()
unique_words = {}
words_hate = {}
words_neg = {}
words_pos = {}
for index, row in database_attr.iterrows():
if(len(row['text_nlp'])>0):
for single_word in row['text_nlp']:
unique_words.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
unique_words[single_word] += 1
if row['c_rating3'] == 'probl-hate':
words_hate.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_hate[single_word] += 1
elif row['c_rating3'] == 'positivo':
words_pos.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_pos[single_word] += 1
elif row['c_rating3'] == 'negativo':
words_neg.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_neg[single_word] += 1
###Output
_____no_output_____
###Markdown
For every word present in the unique-words dictionary, take the number of occurrences of that word in each type of comment.
###Code
for word in unique_words.keys():
row_dict = {'word' : word,
'probl-hate': words_hate.get(word),
 'positivo' : words_pos.get(word),
 'negativo' : words_neg.get(word)}
df_word_c_rating3 = df_word_c_rating3.append(row_dict, ignore_index=True )
df_word_c_rating3 = df_word_c_rating3.fillna(0)
df_word_c_rating3
df_word_c_rating3.to_csv('words_with_ratings3.csv', index = False)
###Output
_____no_output_____
###Markdown
C_rating. Using only c_rating as the column of attributes to be kept.
###Code
list_columns = list(database_comments_attr.c_rating.unique())
list_columns.insert(0, 'word')
df_word_c_rating = pd.DataFrame(columns = list_columns)
df_word_c_rating
database_attr = database.join(database_comments_attr)
database_attr.head()
unique_words = {}
words_hate = {}
words_ambig = {}
words_prob = {}
words_neg = {}
words_pos = {}
for index, row in database_attr.iterrows():
if(len(row['text_nlp'])>0):
for single_word in row['text_nlp']:
unique_words.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
unique_words[single_word] += 1
if row['c_rating'] == 'hate':
words_hate.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_hate[single_word] += 1
elif row['c_rating'] == 'positivo':
words_pos.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_pos[single_word] += 1
elif row['c_rating'] == 'negativo':
words_neg.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_neg[single_word] += 1
elif row['c_rating'] == 'ambiguo':
words_ambig.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_ambig[single_word] += 1
elif row['c_rating'] == 'problematico':
words_prob.setdefault(single_word, 0) ## setdefault() method returns the value of a key (if the key is in dictionary)
words_prob[single_word] += 1
for word in unique_words.keys():
row_dict = {'word' : word,
'problematico': words_prob.get(word),
 'positivo' : words_pos.get(word),
 'negativo' : words_neg.get(word),
 'hate' : words_hate.get(word),
'ambiguo' : words_ambig.get(word)}
df_word_c_rating = df_word_c_rating.append(row_dict, ignore_index=True )
df_word_c_rating = df_word_c_rating.fillna(0)
df_word_c_rating
df_word_c_rating.to_csv('words_with_ratings5.csv', index = False)
###Output
_____no_output_____ |
bug-report.ipynb | ###Markdown
**Step 1: Please select a GPU runtime**![](https://i.imgur.com/RUIixAQ.png) ![image.png](https://i.imgur.com/zKc74pP.png) **Step 2: Install TVM by running the following block.**We have pre-compiled a tvm build for your convenience.![](https://i.imgur.com/k9U5WCB.png)
###Code
# Let's first install TVM!
# This TVM git tag is: 3b8715df7ea9263b71e30888f1aa112bd8cfcfdc
# which is prior to our 1st detected bug.
!pip install wget
import os
import wget
# Your Python should be Python 3.7
pyversion = os.popen('python3 --version').read()
print('Your python version is ', pyversion)
def install_tvm(pyv: int):
return wget.download(
'https://github.com/Tzer-AnonBot/tzer/releases/download/tvm-0.8.dev1040/' +
'tlcpack_nightly-0.8.dev1040+g3b8715df7-cp{}-cp{}'.format(pyv, pyv) +
'm-manylinux_2_17_x86_64.manylinux2014_x86_64.whl')
whl_name = None
if '3.8' in pyversion:
# TVM for Python 3.8
whl_name = install_tvm(38)
elif '3.7' in pyversion:
# TVM for Python 3.7
whl_name = install_tvm(37)
elif '3.6' in pyversion:
# TVM for Python 3.6
whl_name = install_tvm(36)
else:
print('Please make sure you have Python 3.6,7,8. Actually, 3.7+ is recommended.')
if whl_name:
os.system('python3 -m pip install ' + whl_name)
import tvm
print('Successfully installed TVM!')
else:
print('Failed to install tvm...')
###Output
Requirement already satisfied: wget in /usr/local/lib/python3.7/dist-packages (3.2)
Your python version is Python 3.7.12
Successfully installed TVM!
###Markdown
**Step 3: Click the buttons to run the bugs!**![](https://i.imgur.com/OG7YPlK.png) Notes: Bug symptom. For bugs whose symptom is a crash (e.g., Bug 1), you will see:![](https://i.imgur.com/iRuFd1H.png) Bug 1
###Code
# Crash. You will see "Your session crashed for an unknown reason" after running this bug.
import tvm
from tvm import tir
v = tir.Cast('bool', tvm.runtime.convert("a"))
body = tir.stmt.While(v, body=tir.Evaluate(tir.const(0)))
func = tir.PrimFunc(params={}, body=body)
mod = tvm.lower(func)
nopt_mod = tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 2 & Bug 3
###Code
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
b = tir.Var("b", "handle")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1)
buffer = tir.buffer.decl_buffer((1,))
buffer_map = {b: buffer}
store = tir.Store(buffer.data, tir.const(1), tir.const(1))
attr_stmt = tir.AttrStmt(iter_var, "coproc_uop_scope", tir.const(1), store)
f = tir.PrimFunc({a, b}, body=attr_stmt, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
b = tir.Var("b", "handle")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1)
buffer = tir.buffer.decl_buffer((1,))
buffer_map = {b: buffer}
store = tir.Store(buffer.data, tir.const(1), tir.const(1))
attr_stmt = tir.AttrStmt(iter_var, "compute_scope", tir.const(1), store)
f = tir.PrimFunc({a, b}, body=attr_stmt, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 4
###Code
import tvm
print(tvm.tir.Shuffle([1],[1]).dtype)
###Output
_____no_output_____
###Markdown
Bug 5 & Bug 6 & Bug 7 & Bug 8
###Code
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.Div(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.Mod(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.FloorDiv(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.FloorMod(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
###Output
_____no_output_____
###Markdown
Bug 9
###Code
import tvm
from tvm import relay
from tvm.relay.testing import create_workload
simple_net = relay.nn.conv2d(
data=relay.var("data", relay.TensorType((1, 3, 224, 224), "float32")),
weight=relay.var("weight"),
kernel_size=(5, 5),
channels=3,
padding=(1, 1),
)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
mod, _ = create_workload(simple_net)
old_mod = mod
with tvm.transform.PassContext(opt_level=4):
with tvm.target.Target("llvm"):
seq = tvm.transform.Sequential(passes=[relay.transform.ToBasicBlockNormalForm()], opt_level=4)
new_mod = seq(mod)
assert old_mod.astext() == mod.astext()
assert old_mod.astext() != new_mod.astext()
###Output
_____no_output_____
###Markdown
Bug 10
###Code
from tvm import tir
hash(tir.StringImm("s"))
###Output
_____no_output_____
###Markdown
Bug 11
###Code
import tvm
import numpy as np
from tvm import tir, te
n = te.size_var("n")
m = te.size_var("m")
A = te.placeholder((n, n), name="A", dtype="int32")
T = te.compute((m, m), lambda i, j: A[i][j])
s = te.create_schedule(T.op)
ir_m = tvm.lower(s, [A, T])
inputs = [tvm.nd.array(np.random.uniform(0, 100, size=(32, 32)).astype("int32"))]
output = tvm.nd.empty((32, 32), "int32")
with tvm.transform.PassContext(opt_level=4):
opt = tvm.transform.Sequential(
[tir.transform.DecorateDeviceScope()]
)
mod = opt(ir_m)
opt_execute = tvm.build(mod, [*inputs, output], tvm.target.Target("llvm"))
opt_execute(*[inputs[0], output])
###Output
_____no_output_____
###Markdown
Bug 12
###Code
import tvm
from tvm import tir
tvm.build(tir.PrimFunc([], tir.Evaluate(tir.ret(tir.const(0)))))
###Output
_____no_output_____
###Markdown
Bug 13
###Code
from tvm import tir
tir.Var(name=1, dtype='int')
###Output
_____no_output_____
###Markdown
Bug 14
###Code
from tvm import tir
print({tir.const(1), tir.const(True)})
###Output
_____no_output_____
###Markdown
Bug 15 & Bug 16
###Code
from tvm import tir
import tvm
zero = tir.const(0)
nop = tir.Evaluate(zero)
v = tir.Var("i1", "int32")
for_stmt = tir.For(v, zero, zero, tir.ForKind.SERIAL, nop)
load = tir.Evaluate(tir.Load("int32", v, zero))
seq = tir.SeqStmt([for_stmt, for_stmt, load])
func = tir.PrimFunc([], seq)
mod = tvm.IRModule({"main": func})
mod = tir.transform.InjectVirtualThread()(
mod
) # Use pass InjectVirtualThread to invoke ConvertSSA
from tvm import tir
import tvm
zero = tir.const(0)
nop = tir.Evaluate(zero)
v = tir.Var("i1", "int32")
for_stmt = tir.For(v, zero, zero, tir.ForKind.SERIAL, nop)
store = tir.Store(v, zero, zero)
seq = tir.SeqStmt([for_stmt, for_stmt, store])
func = tir.PrimFunc([], seq)
mod = tvm.IRModule({"main": func})
mod = tir.transform.InjectVirtualThread()(
mod
) # Use pass InjectVirtualThread to invoke ConvertSSA
###Output
_____no_output_____
###Markdown
Bug 17 & Bug 18
###Code
import tvm
array = tvm.runtime.convert([1, 2, 3])
print(array.type_key)
print(array.test_key)
import tvm
from tvm import te
a = te.var("a")
b = te.var("b")
amap = tvm.runtime.convert({a: 2, b: 3})
print(amap.type_key)
print(amap.test_key)
###Output
Map
###Markdown
Bug 19
###Code
import tvm
from tvm import tir
var = tir.Var('a',dtype='int32')
buf = tir.decl_buffer((1,), name='buf')
buf_load = tir.expr.BufferLoad(buffer=buf, indices=tvm.runtime.convert([0]))
buf_load_stmt = tir.stmt.Evaluate(buf_load)
for_loop = tir.stmt.For(loop_var=var, kind=1, min_val=1, extent=buf_load, body=buf_load_stmt)
buf_func = tir.PrimFunc(params={}, body=for_loop)
tvm.lower(buf_func)
###Output
_____no_output_____
###Markdown
Bug 20 & Bug 21 & Bug 22
###Code
# API Misuse in 3 of previous tutorials (bring_your_own_datatypes.py, from_keras.py, from_onnx.py )
# We only show one motivating example here.
import tvm
import tvm.relay as relay
from tvm.relay import testing
from tvm import IRModule
import time
shape = (1, 3, 100, 100)
def example():
return testing.squeezenet.get_workload(batch_size=1, num_classes=100, image_shape=shape[1:], dtype='float32')
data = relay.var("data", relay.TensorType(shape, "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(5, 5), channels=32, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
def error_usage(): # Used in previous tutorial
mod, params = example()
target = tvm.target.Target('llvm')
dev = tvm.cpu()
with tvm.transform.PassContext(opt_level=4):
executor = relay.build_module.create_executor("vm", mod, dev, target)
evaluated = executor.evaluate()
t0 = time.time()
tvm_out = evaluated(
tvm.nd.empty(shape=shape,
device=dev,
dtype='float32'), **params)
print(f'Elapsed time by API-Misuse case: {time.time() - t0}')
def good_usage(): # After correction.
mod, params = example()
target = tvm.target.Target('llvm')
dev = tvm.cpu()
with tvm.transform.PassContext(opt_level=4):
mod = IRModule.from_expr(relay.build_module.bind_params_by_name(mod["main"], params))
executor = relay.build_module.create_executor("vm", mod, dev, target).evaluate()
t0 = time.time()
tvm_out = executor(
tvm.nd.empty(shape=shape,
device=dev,
dtype='float32'), **params)
print(f'Elapsed time by correct case: {time.time() - t0}')
if __name__ == '__main__':
error_usage()
good_usage()
###Output
Elapsed time by API-Misuse case: 0.22153639793395996
Elapsed time by correct case: 0.18967914581298828
###Markdown
Bug 23
###Code
# Imcompatible passes (but actually independent) introduced by inconsistency.
import tvm
import tvm.testing
from tvm import relay
from tvm.relay import testing
data = relay.var("data", relay.TensorType((1, 3, 64, 64), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=3, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
module, params = testing.create_workload(simple_net)
# Apply some simple passes to legalize the IR.
with tvm.transform.PassContext(opt_level=0):
module, params = relay.optimize(module, tvm.testing.enabled_targets()[0][0], params)
seq = tvm.transform.Sequential([relay.transform.AnnotateSpans(), relay.transform.DefuseOps()])
with tvm.transform.PassContext(opt_level=3):
module = seq(module)
###Output
...100%, 0.47 MB, 1596 KB/s, 0 seconds passed
###Markdown
Bug 24 & Bug 25 & Bug 26
###Code
import tvm
tvm.tir.expr.Call(None, None, None, None)
import tvm
tvm.tir.generic.add(None, None)
import tvm
tvm.tir.stmt.Allocate(None, None, None, None, None, None)
###Output
_____no_output_____
###Markdown
Bug 27
###Code
# CuDNN context error. The script ended with a hang and segfault.
# This bug is reproducible on CentOS 7. Other platform might not be able to reproduce this.
!uname -a # Linux of Google Colab is not CentOS 7 and cannot reproduce this bug.
# The output of this program on CentOS 7
"""
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:245: CUDNN Found 8 fwd algorithms, choosing CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 0) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM - time: 0.06144 ms, Memory: 0
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 1) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM - time: 0.104448 ms, Memory: 304000
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 2) CUDNN_CONVOLUTION_FWD_ALGO_GEMM - time: 0.110592 ms, Memory: 5419008
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 3) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD - time: 0.146432 ms, Memory: 18176
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 4) CUDNN_CONVOLUTION_FWD_ALGO_FFT - time: 0.916384 ms, Memory: 26949312
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 5) CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING - time: 1.10106 ms, Memory: 374272
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 6) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED - time: 1.79712 ms, Memory: 137288448
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 7) CUDNN_CONVOLUTION_FWD_ALGO_DIRECT - time: -1 ms, Memory: 0
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
[1] 134780 segmentation fault (core dumped) python3 test.py
"""
content = """
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
out_channels = 16
batch_size = 1
data = relay.var("data", relay.TensorType((batch_size, 3, 224, 224), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=out_channels, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
def func():
data = np.zeros((1, 3, 224, 224))
mod, params = example()
target = tvm.target.Target('cuda -libs=cudnn')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=3):
executor = relay.build_module.create_executor("graph", mod, dev, target)
tvm_out = executor.evaluate()(tvm.nd.array(data.astype('float32')), **params)
"""
with open('test.py', 'w') as f:
f.write(content)
!python3 test.py
###Output
Linux 26e8f5ac1921 5.4.104+ #1 SMP Sat Jun 5 09:50:34 PDT 2021 x86_64 x86_64 x86_64 GNU/Linux
###Markdown
Bug 28 & 29
###Code
# There are 2 bugs here.
# 1. One is about OOM.
# 2. Another is the incorrect exception. (The exception should be OOM not "device type = 0")
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
out_channels = 32
data = relay.var("data", relay.TensorType((relay.Any(), 3, 224, 224), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=out_channels, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
if __name__ == '__main__':
mod, params = example()
# compile the model
target = tvm.target.Target('cuda')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=3):
executor = relay.build_module.create_executor("vm", mod, dev, target)
for i in range(100):
print(f'Running batch size = {i}') # Should be OOM error, but a later exception received.
tvm_out = executor.evaluate()(tvm.nd.empty(shape=(i, 3, 224, 224), device=dev, dtype='float32'), **params)
###Output
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (any_dim, 3, 224, 224), 'float32'), ('TENSOR', (32, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
###Markdown
Bug 30
###Code
# We tested tvm across various runtimes (vm, graph, debug, etc.)
# We found debug module fails for 0-batch input (viable for other runtime types)
# We observed CUDA internal error using cuda-gdb.
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
data = relay.var("data", relay.TensorType((relay.Any(), 3, 128, 128), "float32"))
simple_net = relay.nn.conv2d(
data=data, weight=relay.var("weight"), kernel_size=(3, 3), channels=8, padding=(1, 1)
)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
if __name__ == '__main__':
data = np.zeros((0, 3, 128, 128))
mod, params = example()
target = tvm.target.Target('cuda')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=2):
executor = relay.build_module.create_executor("debug", mod, dev, target)
tvm_out = executor.evaluate()(tvm.nd.array(data.astype('float32')), **params)
###Output
Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (any_dim, 3, 128, 128), 'float32'), ('TENSOR', (8, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
###Markdown
Bug 31
###Code
from tvm import tir
import tvm
a = tir.Broadcast(tir.const(1), 2)
v = tir.Var('i1', 'int32')
stmt = tir.Store(v, a, a, None)
func = tir.PrimFunc([v], stmt)
tvm.build(func)
###Output
_____no_output_____
###Markdown
Bug 32
###Code
import tvm
buf = tvm.tir.buffer.decl_buffer((1,))
value = tvm.tir.IntImm('int32', 1)
i = tvm.tir.IntImm('int32x1', 1)
index = tvm.tir.Shuffle([i, i], [i])
s = tvm.tir.Store(buf.data, value, index)
f = tvm.tir.PrimFunc({buf.data}, s)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 33
###Code
import tvm
v = tvm.tir.Var('v', 'float32')
value = tvm.tir.isnan(v)
op = value.op
buf = tvm.tir.buffer.decl_buffer((1,))
value = tvm.tir.Call('int32', op, [0])
s = tvm.tir.Store(buf.data, value, 0)
f = tvm.tir.PrimFunc({buf.data}, s)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 34
###Code
import tvm
from tvm import tir
var = tir.Var(name='v', dtype='int32')
buf = tir.decl_buffer((1,), name='buf')
buf_load = tir.expr.BufferLoad(buffer=buf, indices=tvm.runtime.convert([0]))
then_case = tir.Store(buffer_var=var,value=buf_load,index=tvm.runtime.convert(0))
for_body = then_case
for_stmt = tir.For(loop_var=var, min_val=0, extent=0, kind=1,body=for_body)
y = tir.IfThenElse(then_case=then_case,else_case=for_stmt,condition=tvm.runtime.convert(False))
f=tir.PrimFunc(body=y,params=[var])
mod = tvm.IRModule({'main':f})
mod = tir.transform.PlanAndUpdateBufferAllocationLocation()(mod)
mod = tir.transform.CompactBufferAllocation()(mod)
mod = tir.transform.LowerMatchBuffer()(mod)
###Output
_____no_output_____
###Markdown
Bug 35
###Code
import tvm
from tvm import tir
# import os
# print(os.getpid())
# input()
v = tir.Broadcast(0, 8)
index = tir.Ramp(72,1,8)
buf = tir.buffer.decl_buffer((1, 0))
store = tir.Store(buf.data, v, index)
loop_var = tir.Var('v', 'int32')
for_loop = tir.For(loop_var, 0, 4, tir.ForKind.VECTORIZED, store)
f = tir.PrimFunc({buf.data}, for_loop)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 36
###Code
# This bug occurs in debug mode
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1, "vthread")
attr_stmt = tir.AttrStmt(iter_var, "virtual_thread",tir.op.floormod(tir.ret(tir.IntImm('int32',0)), 3), tir.Evaluate(tir.const(0)))
f = tir.PrimFunc({a}, body=attr_stmt)#, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 37
###Code
import tvm
from tvm import tir
s_v = tir.Var('buf', 'handle')
buf = tir.buffer.decl_buffer((1, 0))
store = tir.Store(buf.data, tir.IntImm('int32', 0), 0, tvm.runtime.convert(32))
f = tir.PrimFunc({s_v}, store, buffer_map={s_v:buf})
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 38
###Code
import tvm
from tvm import tir
v = tir.Var('v', 'int32')
s = tir.Select(tir.Cast('bool',v), 150, 1)
let_stmt = tir.LetStmt(v, s, tir.Evaluate(v))
f = tir.PrimFunc({v},let_stmt)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 39
###Code
import tvm
from tvm import tir
expr = tir.Div(tir.FloorDiv(tir.Cast('int32', 2.95148e+28), tir.atan(tir.IntImm('int32',1))), tir.FloorMod(303, 32))
stmt = tir.Evaluate(expr)
func = tir.PrimFunc({},stmt)
tvm.build(func)
###Output
_____no_output_____
###Markdown
Bug 40
###Code
import tvm
from tvm import tir
var = tir.Var('var', 'int32')
false_value = tir.Cast('int32', tir.acosh(tir.Cast('float32', var)))
value = tir.Select(var > 0, tir.const(0), false_value)
let_stmt = tir.LetStmt(var, value, tir.Evaluate(var))
f = tir.PrimFunc({}, let_stmt)
tvm.build(f)
###Output
_____no_output_____
###Markdown
**Step 1: Please select a GPU runtime**![](https://i.imgur.com/RUIixAQ.png) ![image.png](https://i.imgur.com/zKc74pP.png) **Step 2: Install TVM by running the following block.**We have pre-compiled a tvm build for your convenience.![](https://i.imgur.com/k9U5WCB.png)
###Code
# Let's first install TVM!
# This TVM git tag is: 3b8715df7ea9263b71e30888f1aa112bd8cfcfdc
# which is prior to our 1st detected bug.
!pip install wget
import os
import wget
# Your Python should be Python 3.7
pyversion = os.popen('python3 --version').read()
print('Your python version is ', pyversion)
def install_tvm(pyv: int):
return wget.download(
'https://github.com/ise-uiuc/tzer/releases/download/tvm-0.8.dev1040/' +
'tlcpack_nightly-0.8.dev1040+g3b8715df7-cp{}-cp{}'.format(pyv, pyv) +
'm-manylinux_2_17_x86_64.manylinux2014_x86_64.whl')
whl_name = None
if '3.8' in pyversion:
# TVM for Python 3.8
whl_name = install_tvm(38)
elif '3.7' in pyversion:
# TVM for Python 3.7
whl_name = install_tvm(37)
elif '3.6' in pyversion:
# TVM for Python 3.6
whl_name = install_tvm(36)
else:
print('Please make sure you have Python 3.6,7,8. Actually, 3.7+ is recommended.')
if whl_name:
os.system('python3 -m pip install ' + whl_name)
import tvm
print('Successfully installed TVM!')
else:
print('Failed to install tvm...')
###Output
Requirement already satisfied: wget in /usr/local/lib/python3.7/dist-packages (3.2)
Your python version is Python 3.7.12
Successfully installed TVM!
###Markdown
**Step 3: Click the buttons to run the bugs!**![](https://i.imgur.com/OG7YPlK.png) Notes: Bug symptom. For bugs whose symptom is a crash (e.g., Bug 1), you will see:![](https://i.imgur.com/iRuFd1H.png) Bug 1
###Code
# Crash. You will see "Your session crashed for an unknown reason" after running this bug.
import tvm
from tvm import tir
v = tir.Cast('bool', tvm.runtime.convert("a"))
body = tir.stmt.While(v, body=tir.Evaluate(tir.const(0)))
func = tir.PrimFunc(params={}, body=body)
mod = tvm.lower(func)
nopt_mod = tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 2 & Bug 3
###Code
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
b = tir.Var("b", "handle")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1)
buffer = tir.buffer.decl_buffer((1,))
buffer_map = {b: buffer}
store = tir.Store(buffer.data, tir.const(1), tir.const(1))
attr_stmt = tir.AttrStmt(iter_var, "coproc_uop_scope", tir.const(1), store)
f = tir.PrimFunc({a, b}, body=attr_stmt, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
b = tir.Var("b", "handle")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1)
buffer = tir.buffer.decl_buffer((1,))
buffer_map = {b: buffer}
store = tir.Store(buffer.data, tir.const(1), tir.const(1))
attr_stmt = tir.AttrStmt(iter_var, "compute_scope", tir.const(1), store)
f = tir.PrimFunc({a, b}, body=attr_stmt, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 4
###Code
import tvm
print(tvm.tir.Shuffle([1],[1]).dtype)
###Output
_____no_output_____
###Markdown
Bug 5 & Bug 6 & Bug 7 & Bug 8
###Code
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.Div(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.Mod(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.FloorDiv(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
import tvm
analyzer = tvm.arith.Analyzer()
analyzer.rewrite_simplify(tvm.tir.FloorMod(tvm.tir.Ramp(1,1,2), tvm.tir.Broadcast(0, 2)))
###Output
_____no_output_____
###Markdown
Bug 9
###Code
import tvm
from tvm import relay
from tvm.relay.testing import create_workload
simple_net = relay.nn.conv2d(
data=relay.var("data", relay.TensorType((1, 3, 224, 224), "float32")),
weight=relay.var("weight"),
kernel_size=(5, 5),
channels=3,
padding=(1, 1),
)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
mod, _ = create_workload(simple_net)
old_mod = mod
with tvm.transform.PassContext(opt_level=4):
with tvm.target.Target("llvm"):
seq = tvm.transform.Sequential(passes=[relay.transform.ToBasicBlockNormalForm()], opt_level=4)
new_mod = seq(mod)
assert old_mod.astext() == mod.astext()
assert old_mod.astext() != new_mod.astext()
###Output
_____no_output_____
###Markdown
Bug 10
###Code
from tvm import tir
hash(tir.StringImm("s"))
###Output
_____no_output_____
###Markdown
Bug 11
###Code
import tvm
import numpy as np
from tvm import tir, te
n = te.size_var("n")
m = te.size_var("m")
A = te.placeholder((n, n), name="A", dtype="int32")
T = te.compute((m, m), lambda i, j: A[i][j])
s = te.create_schedule(T.op)
ir_m = tvm.lower(s, [A, T])
inputs = [tvm.nd.array(np.random.uniform(0, 100, size=(32, 32)).astype("int32"))]
output = tvm.nd.empty((32, 32), "int32")
with tvm.transform.PassContext(opt_level=4):
opt = tvm.transform.Sequential(
[tir.transform.DecorateDeviceScope()]
)
mod = opt(ir_m)
opt_execute = tvm.build(mod, [*inputs, output], tvm.target.Target("llvm"))
opt_execute(*[inputs[0], output])
###Output
_____no_output_____
###Markdown
Bug 12
###Code
import tvm
from tvm import tir
tvm.build(tir.PrimFunc([], tir.Evaluate(tir.ret(tir.const(0)))))
###Output
_____no_output_____
###Markdown
Bug 13
###Code
from tvm import tir
tir.Var(name=1, dtype='int')
###Output
_____no_output_____
###Markdown
Bug 14
###Code
from tvm import tir
print({tir.const(1), tir.const(True)})
###Output
_____no_output_____
###Markdown
Bug 15 & Bug 16
###Code
from tvm import tir
import tvm
zero = tir.const(0)
nop = tir.Evaluate(zero)
v = tir.Var("i1", "int32")
for_stmt = tir.For(v, zero, zero, tir.ForKind.SERIAL, nop)
load = tir.Evaluate(tir.Load("int32", v, zero))
seq = tir.SeqStmt([for_stmt, for_stmt, load])
func = tir.PrimFunc([], seq)
mod = tvm.IRModule({"main": func})
mod = tir.transform.InjectVirtualThread()(
mod
) # Use pass InjectVirtualThread to invoke ConvertSSA
from tvm import tir
import tvm
zero = tir.const(0)
nop = tir.Evaluate(zero)
v = tir.Var("i1", "int32")
for_stmt = tir.For(v, zero, zero, tir.ForKind.SERIAL, nop)
store = tir.Store(v, zero, zero)
seq = tir.SeqStmt([for_stmt, for_stmt, store])
func = tir.PrimFunc([], seq)
mod = tvm.IRModule({"main": func})
mod = tir.transform.InjectVirtualThread()(
mod
) # Use pass InjectVirtualThread to invoke ConvertSSA
###Output
_____no_output_____
###Markdown
Bug 17 & Bug 18
###Code
import tvm
array = tvm.runtime.convert([1, 2, 3])
print(array.type_key)
print(array.test_key)
import tvm
from tvm import te
a = te.var("a")
b = te.var("b")
amap = tvm.runtime.convert({a: 2, b: 3})
print(amap.type_key)
print(amap.test_key)
###Output
Map
###Markdown
Bug 19
###Code
import tvm
from tvm import tir
var = tir.Var('a',dtype='int32')
buf = tir.decl_buffer((1,), name='buf')
buf_load = tir.expr.BufferLoad(buffer=buf, indices=tvm.runtime.convert([0]))
buf_load_stmt = tir.stmt.Evaluate(buf_load)
for_loop = tir.stmt.For(loop_var=var, kind=1, min_val=1, extent=buf_load, body=buf_load_stmt)
buf_func = tir.PrimFunc(params={}, body=for_loop)
tvm.lower(buf_func)
###Output
_____no_output_____
###Markdown
Bug 20 & Bug 21 & Bug 22
###Code
# API misuse in 3 previous tutorials (bring_your_own_datatypes.py, from_keras.py, from_onnx.py)
# We only show one motivating example here.
import tvm
import tvm.relay as relay
from tvm.relay import testing
from tvm import IRModule
import time
shape = (1, 3, 100, 100)
def example():
return testing.squeezenet.get_workload(batch_size=1, num_classes=100, image_shape=shape[1:], dtype='float32')
data = relay.var("data", relay.TensorType(shape, "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(5, 5), channels=32, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
def error_usage(): # Used in previous tutorial
mod, params = example()
target = tvm.target.Target('llvm')
dev = tvm.cpu()
with tvm.transform.PassContext(opt_level=4):
executor = relay.build_module.create_executor("vm", mod, dev, target)
evaluated = executor.evaluate()
t0 = time.time()
tvm_out = evaluated(
tvm.nd.empty(shape=shape,
device=dev,
dtype='float32'), **params)
print(f'Elapsed time by API-Misuse case: {time.time() - t0}')
def good_usage(): # After correction.
mod, params = example()
target = tvm.target.Target('llvm')
dev = tvm.cpu()
with tvm.transform.PassContext(opt_level=4):
mod = IRModule.from_expr(relay.build_module.bind_params_by_name(mod["main"], params))
executor = relay.build_module.create_executor("vm", mod, dev, target).evaluate()
t0 = time.time()
tvm_out = executor(
tvm.nd.empty(shape=shape,
device=dev,
dtype='float32'), **params)
print(f'Elapsed time by correct case: {time.time() - t0}')
if __name__ == '__main__':
error_usage()
good_usage()
###Output
Elapsed time by API-Misuse case: 0.22153639793395996
Elapsed time by correct case: 0.18967914581298828
###Markdown
Bug 23
###Code
# Incompatible passes (though actually independent) introduced by an inconsistency.
import tvm
import tvm.testing
from tvm import relay
from tvm.relay import testing
data = relay.var("data", relay.TensorType((1, 3, 64, 64), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=3, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
module, params = testing.create_workload(simple_net)
# Apply some simple passes to legalize the IR.
with tvm.transform.PassContext(opt_level=0):
module, params = relay.optimize(module, tvm.testing.enabled_targets()[0][0], params)
seq = tvm.transform.Sequential([relay.transform.AnnotateSpans(), relay.transform.DefuseOps()])
with tvm.transform.PassContext(opt_level=3):
module = seq(module)
###Output
...100%, 0.47 MB, 1596 KB/s, 0 seconds passed
###Markdown
Bug 24 & Bug 25 & Bug 26
###Code
import tvm
tvm.tir.expr.Call(None, None, None, None)
import tvm
tvm.tir.generic.add(None, None)
import tvm
tvm.tir.stmt.Allocate(None, None, None, None, None, None)
###Output
_____no_output_____
###Markdown
Bug 27
###Code
# CuDNN context error. The script ended with a hang and segfault.
# This bug is reproducible on CentOS 7; other platforms might not be able to reproduce it.
!uname -a # The Linux image used by Google Colab is not CentOS 7 and cannot reproduce this bug.
# The output of this program on CentOS 7
"""
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:245: CUDNN Found 8 fwd algorithms, choosing CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 0) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM - time: 0.06144 ms, Memory: 0
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 1) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM - time: 0.104448 ms, Memory: 304000
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 2) CUDNN_CONVOLUTION_FWD_ALGO_GEMM - time: 0.110592 ms, Memory: 5419008
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 3) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD - time: 0.146432 ms, Memory: 18176
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 4) CUDNN_CONVOLUTION_FWD_ALGO_FFT - time: 0.916384 ms, Memory: 26949312
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 5) CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING - time: 1.10106 ms, Memory: 374272
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 6) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED - time: 1.79712 ms, Memory: 137288448
[03:09:31] ../src/runtime/contrib/cudnn/conv_forward.cc:248: 7) CUDNN_CONVOLUTION_FWD_ALGO_DIRECT - time: -1 ms, Memory: 0
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
[1] 134780 segmentation fault (core dumped) python3 test.py
"""
content = """
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
out_channels = 16
batch_size = 1
data = relay.var("data", relay.TensorType((batch_size, 3, 224, 224), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=out_channels, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
def func():
data = np.zeros((1, 3, 224, 224))
mod, params = example()
target = tvm.target.Target('cuda -libs=cudnn')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=3):
executor = relay.build_module.create_executor("graph", mod, dev, target)
tvm_out = executor.evaluate()(tvm.nd.array(data.astype('float32')), **params)
"""
with open('test.py', 'w') as f:
f.write(content)
!python3 test.py
###Output
Linux 26e8f5ac1921 5.4.104+ #1 SMP Sat Jun 5 09:50:34 PDT 2021 x86_64 x86_64 x86_64 GNU/Linux
###Markdown
Bug 28 & 29
###Code
# There are 2 bugs here:
# 1. One is an out-of-memory (OOM) failure.
# 2. The other is an incorrect exception. (The exception should report OOM, not "device type = 0".)
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
out_channels = 32
data = relay.var("data", relay.TensorType((relay.Any(), 3, 224, 224), "float32"))
weight = relay.var("weight")
bn_gamma = relay.var("bn_gamma")
bn_beta = relay.var("bn_beta")
bn_mmean = relay.var("bn_mean")
bn_mvar = relay.var("bn_var")
simple_net = relay.nn.conv2d(
data=data, weight=weight, kernel_size=(3, 3), channels=out_channels, padding=(1, 1)
)
simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
simple_net = relay.nn.relu(simple_net)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
if __name__ == '__main__':
mod, params = example()
# compile the model
target = tvm.target.Target('cuda')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=3):
executor = relay.build_module.create_executor("vm", mod, dev, target)
for i in range(100):
print(f'Running batch size = {i}') # Should be OOM error, but a later exception received.
tvm_out = executor.evaluate()(tvm.nd.empty(shape=(i, 3, 224, 224), device=dev, dtype='float32'), **params)
###Output
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (any_dim, 3, 224, 224), 'float32'), ('TENSOR', (32, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
###Markdown
Bug 30
###Code
# We tested TVM across various runtimes (vm, graph, debug, etc.).
# We found that the debug runtime fails for 0-batch input (which works with the other runtime types).
# We observed a CUDA internal error using cuda-gdb.
import tvm
import tvm.relay as relay
import numpy as np
from tvm.relay import testing
def example():
data = relay.var("data", relay.TensorType((relay.Any(), 3, 128, 128), "float32"))
simple_net = relay.nn.conv2d(
data=data, weight=relay.var("weight"), kernel_size=(3, 3), channels=8, padding=(1, 1)
)
simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)
return testing.create_workload(simple_net)
if __name__ == '__main__':
data = np.zeros((0, 3, 128, 128))
mod, params = example()
target = tvm.target.Target('cuda')
dev = tvm.cuda()
with tvm.transform.PassContext(opt_level=2):
executor = relay.build_module.create_executor("debug", mod, dev, target)
tvm_out = executor.evaluate()(tvm.nd.array(data.astype('float32')), **params)
###Output
Cannot find config for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload=('conv2d_nchw.cuda', ('TENSOR', (any_dim, 3, 128, 128), 'float32'), ('TENSOR', (8, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
###Markdown
Bug 31
###Code
from tvm import tir
import tvm
a = tir.Broadcast(tir.const(1), 2)
v = tir.Var('i1', 'int32')
stmt = tir.Store(v, a, a, None)
func = tir.PrimFunc([v], stmt)
tvm.build(func)
###Output
_____no_output_____
###Markdown
Bug 32
###Code
import tvm
buf = tvm.tir.buffer.decl_buffer((1,))
value = tvm.tir.IntImm('int32', 1)
i = tvm.tir.IntImm('int32x1', 1)
index = tvm.tir.Shuffle([i, i], [i])
s = tvm.tir.Store(buf.data, value, index)
f = tvm.tir.PrimFunc({buf.data}, s)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 33
###Code
import tvm
v = tvm.tir.Var('v', 'float32')
value = tvm.tir.isnan(v)
op = value.op
buf = tvm.tir.buffer.decl_buffer((1,))
value = tvm.tir.Call('int32', op, [0])
s = tvm.tir.Store(buf.data, value, 0)
f = tvm.tir.PrimFunc({buf.data}, s)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 34
###Code
import tvm
from tvm import tir
var = tir.Var(name='v', dtype='int32')
buf = tir.decl_buffer((1,), name='buf')
buf_load = tir.expr.BufferLoad(buffer=buf, indices=tvm.runtime.convert([0]))
then_case = tir.Store(buffer_var=var,value=buf_load,index=tvm.runtime.convert(0))
for_body = then_case
for_stmt = tir.For(loop_var=var, min_val=0, extent=0, kind=1,body=for_body)
y = tir.IfThenElse(then_case=then_case,else_case=for_stmt,condition=tvm.runtime.convert(False))
f=tir.PrimFunc(body=y,params=[var])
mod = tvm.IRModule({'main':f})
mod = tir.transform.PlanAndUpdateBufferAllocationLocation()(mod)
mod = tir.transform.CompactBufferAllocation()(mod)
mod = tir.transform.LowerMatchBuffer()(mod)
###Output
_____no_output_____
###Markdown
Bug 35
###Code
import tvm
from tvm import tir
# import os
# print(os.getpid())
# input()
v = tir.Broadcast(0, 8)
index = tir.Ramp(72,1,8)
buf = tir.buffer.decl_buffer((1, 0))
store = tir.Store(buf.data, v, index)
loop_var = tir.Var('v', 'int32')
for_loop = tir.For(loop_var, 0, 4, tir.ForKind.VECTORIZED, store)
f = tir.PrimFunc({buf.data}, for_loop)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 36
###Code
# This bug occurs in debug mode
import tvm
from tvm import ir, tir
a = tir.Var("a", "int32")
iter_var = tir.IterVar(ir.Range(0,1 ), a, 1, "vthread")
attr_stmt = tir.AttrStmt(iter_var, "virtual_thread",tir.op.floormod(tir.ret(tir.IntImm('int32',0)), 3), tir.Evaluate(tir.const(0)))
f = tir.PrimFunc({a}, body=attr_stmt)#, buffer_map=buffer_map)
mod = tvm.lower(f)
tvm.build(mod)
###Output
_____no_output_____
###Markdown
Bug 37
###Code
import tvm
from tvm import tir
s_v = tir.Var('buf', 'handle')
buf = tir.buffer.decl_buffer((1, 0))
store = tir.Store(buf.data, tir.IntImm('int32', 0), 0, tvm.runtime.convert(32))
f = tir.PrimFunc({s_v}, store, buffer_map={s_v:buf})
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 38
###Code
import tvm
from tvm import tir
v = tir.Var('v', 'int32')
s = tir.Select(tir.Cast('bool',v), 150, 1)
let_stmt = tir.LetStmt(v, s, tir.Evaluate(v))
f = tir.PrimFunc({v},let_stmt)
tvm.build(f)
###Output
_____no_output_____
###Markdown
Bug 39
###Code
import tvm
from tvm import tir
expr = tir.Div(tir.FloorDiv(tir.Cast('int32', 2.95148e+28), tir.atan(tir.IntImm('int32',1))), tir.FloorMod(303, 32))
stmt = tir.Evaluate(expr)
func = tir.PrimFunc({},stmt)
tvm.build(func)
###Output
_____no_output_____
###Markdown
Bug 40
###Code
import tvm
from tvm import tir
var = tir.Var('var', 'int32')
false_value = tir.Cast('int32', tir.acosh(tir.Cast('float32', var)))
value = tir.Select(var > 0, tir.const(0), false_value)
let_stmt = tir.LetStmt(var, value, tir.Evaluate(var))
f = tir.PrimFunc({}, let_stmt)
tvm.build(f)
###Output
_____no_output_____ |
_notebooks/2022-01-21-graph-benchmarks.ipynb | ###Markdown
Graph Benchmarks Imports
###Code
import os
import numpy as np
import pandas as pd
import networkx as nx
from scipy.io import mmread
import matplotlib.pyplot as plt
from collections import Counter
%matplotlib inline
default_edge_color = 'gray'
default_node_color = '#407cc9'
enhanced_node_color = '#f5b042'
enhanced_edge_color = '#cc2f04'
output_dir = "/content"
###Output
_____no_output_____
###Markdown
Plot utils
###Code
def draw_graph(G, node_names={}, filename=None, node_size=50, layout = None):
pos_nodes = nx.spring_layout(G) if layout is None else layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=node_size, edge_color='gray')
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif')
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
if filename:
plt.savefig(os.path.join(output_dir, filename), format="png")
# draw enhanced path on the graph
def draw_enhanced_path(G, path_to_enhance, node_names={}, filename=None, layout=None):
    path_edges = list(zip(path_to_enhance, path_to_enhance[1:]))  # use the function argument, not an undefined `path`
pos_nodes = nx.spring_layout(G) if layout is None else layout(G)
plt.figure(figsize=(5,5),dpi=300)
pos_nodes = nx.spring_layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=50, edge_color='gray')
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif')
nx.draw_networkx_edges(G,pos_nodes,edgelist=path_edges, edge_color='#cc2f04', style='dashed', width=2.0)
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
if filename:
plt.savefig(os.path.join(output_dir, filename), format="png")
def get_random_node(graph):
return np.random.choice(graph.nodes)
###Output
_____no_output_____
###Markdown
Simple Example of Graphs
###Code
complete = nx.complete_graph(n=7)
lollipop = nx.lollipop_graph(m=7, n=3)
barbell = nx.barbell_graph(m1=7, m2=4)
plt.figure(figsize=(15,6))
plt.subplot(1,3,1)
draw_graph(complete)
plt.title("Complete")
plt.subplot(1,3,2)
plt.title("Lollipop")
draw_graph(lollipop)
plt.subplot(1,3,3)
plt.title("Barbell")
draw_graph(barbell)
plt.savefig(os.path.join(output_dir, "SimpleGraphs.png"))
###Output
_____no_output_____
###Markdown
We compose simple graphs into one
###Code
complete = nx.relabel_nodes(nx.complete_graph(n=7), lambda x: x + 0)
lollipop = nx.relabel_nodes(nx.lollipop_graph(m=7, n=3), lambda x: x+100)
barbell = nx.relabel_nodes(nx.barbell_graph(m1=7, m2=4), lambda x: x+200)
allGraphs = nx.compose_all([complete, barbell, lollipop])
allGraphs.add_edge(get_random_node(lollipop), get_random_node(lollipop))
allGraphs.add_edge(get_random_node(complete), get_random_node(barbell))
draw_graph(allGraphs, layout=nx.kamada_kawai_layout)
###Output
_____no_output_____
###Markdown
Barabasi-Albert Model In the following we create and analyse some simple graphs generated by the Barabasi-Albert model
###Code
BA_graph_small = nx.extended_barabasi_albert_graph(n=20,m=1,p=0,q=0)
draw_graph(BA_graph_small, layout=nx.circular_layout)
###Output
_____no_output_____
###Markdown
We analyse a large Barabasi-Albert graph to investigate the model's ability to generate a power-law distribution for the node degrees
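As a rough quantitative check (not part of the original notebook), the exponent of the degree distribution can be estimated with a linear fit on a log-log scale. The sketch below builds its own, smaller graph; a naive log-log fit is known to be biased, so treat the number as indicative only.

```python
# Sketch: estimate the power-law exponent of a Barabasi-Albert degree distribution.
import numpy as np
import networkx as nx
from collections import Counter

g = nx.barabasi_albert_graph(n=10000, m=1, seed=0)      # smaller graph than the one below
degree_counts = Counter(dict(g.degree()).values())
k = np.array(sorted(degree_counts))                      # observed degrees
pk = np.array([degree_counts[d] for d in k]) / g.number_of_nodes()

# Fit log P(k) ~ -gamma * log k; for the BA model we expect gamma close to 3.
slope, intercept = np.polyfit(np.log(k), np.log(pk), 1)
print(f"estimated exponent: {-slope:.2f}")
```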
###Code
n = 1E5
bag = nx.extended_barabasi_albert_graph(n,m=1,p=0,q=0)
degree = dict(nx.degree(bag)).values()
bins = np.round(np.logspace(np.log10(min(degree)), np.log10(max(degree)), 10))
cnt = Counter(np.digitize(np.array(list(degree)), bins))
plt.figure(figsize=(15,6))
plt.subplot(1,2,1)
draw_graph(BA_graph_small, layout=nx.circular_layout)
plt.subplot(1,2,2)
x, y = list(zip(*[(bins[k-1], v/n) for k, v in cnt.items()]))
plt.plot(x, y, 'o'); plt.xscale("log"); plt.yscale("log")
plt.xlabel("Degree k")
plt.ylabel("P(k)")
plt.savefig(os.path.join(output_dir, "Barabasi_Albert.png"))
plt.figure(figsize=(15, 6))
plt.hist(degree, bins=bins)
plt.xscale("log")
plt.yscale("log")
###Output
_____no_output_____
###Markdown
Other simple graph Benchmarks
###Code
graph = nx.florentine_families_graph()
nx.draw_kamada_kawai(graph, with_labels=True, node_size=20, font_size=14)
plt.savefig("Florentine.png")
###Output
_____no_output_____
###Markdown
Benchmarks from the Network Data Repository This dataset (and others) can be downloaded from http://networkrepository.com/. The datasets are generally in the MTX file format. In particular, the dataset presented here is taken from the collaboration network of Arxiv Astro Physics, which can be downloaded from http://networkrepository.com/ca-AstroPh.php. Some of the files that can be downloaded from that source are somewhat non-standard and need small fixes.> Note: Please make sure the header of the file has the following: `%%MatrixMarket matrix coordinate pattern symmetric`, with a double %.
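If shell tools such as `tail` and `sed` are not available, the same header fix can be done in plain Python. This is only a sketch equivalent to the shell commands used in the next cell, and it assumes the file has already been downloaded and unzipped.

```python
# Sketch: rewrite the MatrixMarket header in pure Python (equivalent to tail + sed below).
header = "%%MatrixMarket matrix coordinate pattern symmetric\n"
with open("ca-AstroPh.mtx") as src:
    lines = src.readlines()
with open("ca-AstroPh-mod.mtx", "w") as dst:
    dst.write(header)          # correct double-% header
    dst.writelines(lines[1:])  # keep everything after the original header line
```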
###Code
!wget https://nrvis.com/download/data/ca/ca-AstroPh.zip
!unzip ca-AstroPh.zip
!head ca-AstroPh.mtx
!tail -n +2 ca-AstroPh.mtx > ca-AstroPh-mod.mtx
!sed -i -e '1i%%MatrixMarket matrix coordinate pattern symmetric\' ca-AstroPh-mod.mtx
!head ca-AstroPh-mod.mtx
file = "ca-AstroPh-mod.mtx"
adj_matrix = mmread(file)
graph = nx.from_scipy_sparse_matrix(adj_matrix)
degrees = dict(nx.degree(graph))
ci = nx.clustering(graph)
centrality = nx.centrality.eigenvector_centrality(graph)
stats = pd.DataFrame({
"centrality": centrality,
"C_i": ci,
"degree": degrees
})
stats.head()
###Output
_____no_output_____
###Markdown
Here we provide some simple analysis of the DataFrame we generated to see correlations between centrality, clustering coefficient and degree.
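To complement the scatter plots with a single number per pair of columns, a rank correlation can be computed directly from the DataFrame. This is a small sketch that assumes the `stats` DataFrame built above.

```python
# Sketch: Spearman rank correlations between centrality, clustering coefficient and degree.
print(stats[["centrality", "C_i", "degree"]].corr(method="spearman"))
```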
###Code
plt.plot(stats["centrality"], stats["degree"], 'o')
plt.xscale("log")
plt.yscale("log")
plt.plot(stats["centrality"], stats["C_i"], 'o')
plt.xscale("log")
plt.yscale("log")
###Output
_____no_output_____
###Markdown
Ego-network Here we plot the ego-network of the most-connected node, which has id 6933. However, even this network looks a bit messy since it has hundreds of nodes. We therefore sample the neighbours randomly or based on centrality/clustering coefficient in order to plot a relevant subgraph.
###Code
neighbors = [n for n in nx.neighbors(graph, 6933)]
sampling = 0.1
nTop = round(len(neighbors)*sampling)
idx = {
"random": stats.loc[neighbors].sort_index().index[:nTop],
"centrality": stats.loc[neighbors].sort_values("centrality", ascending=False).index[:nTop],
"C_i": stats.loc[neighbors].sort_values("C_i", ascending=False).index[:nTop]
}
def plotSubgraph(graph, indices, center = 6933):
draw_graph(
nx.subgraph(graph, list(indices) + [center]),
layout = nx.kamada_kawai_layout
)
plt.figure(figsize=(15,6))
for ith, title in enumerate(["random", "centrality", "C_i"]):
plt.subplot(1,3,ith+1)
plotSubgraph(graph, idx[title])
plt.title(title)
plt.savefig(os.path.join(output_dir, "PhAstro"))
###Output
_____no_output_____
###Markdown
Data to Gephi Alternatively, we can also export the data from networkx in order to plot and analyse it using the Gephi software.
###Code
nx.write_gexf(graph, 'ca-AstroPh.gexf')
###Output
_____no_output_____ |
aerial-cactus-identification/notebook/explore.ipynb | ###Markdown
preprocessing
###Code
def preprocessing(img):
img = Image.open(img)
# img_grey = img.convert('L')
return img
im = preprocessing(os.path.join(train_image_path, train_table.loc[1, "id"]))
np.array(im).shape
###Output
_____no_output_____
###Markdown
modeling
###Code
train_table = train_table.sample(frac=1)
X, y = train_table["id"].values, train_table["has_cactus"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
np.unique(y_train, return_counts=True)
np.unique(y_test, return_counts=True)
class CactusDataset(Dataset):
def __init__(self, X_train, X_test, y_train, y_test):
self.dataset = {
"train": (X_train, y_train, len(y_train)),
"test": (X_test, y_test, len(y_test))
}
self.set_split(split="train")
def set_split(self, split="train"):
self.data_x, self.data_y, self.length = self.dataset[split]
def preprocessing(self, filename):
path = os.path.join(train_image_path, filename)
img = Image.open(path)
im_arr = np.array(img) / 255
im_arr = im_arr.transpose(2,0,1)
return im_arr
def __getitem__(self, idx):
x = torch.Tensor(self.preprocessing(self.data_x[idx]))
y = torch.Tensor([self.data_y[idx]])
return x, y
def __len__(self):
return self.length
class Classifier(nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.network = nn.Sequential(
nn.Conv2d(3, 16, 3),
nn.ReLU(),
nn.Conv2d(16, 128, 3),
nn.ReLU(),
nn.Conv2d(128, 256, 3),
nn.ReLU(),
nn.Conv2d(256, 256, 3),
nn.ReLU(),
nn.Conv2d(256, 1, 3),
nn.ReLU(),
nn.Flatten(),
nn.Linear(484, 1),
)
def forward(self, input_):
out = self.network(input_)
return out
dataset = CactusDataset(X_train, X_test, y_train, y_test)
model = Classifier().to(device)
optimizer = optim.Adam(model.parameters(), lr = 1e-4)
criterion = nn.BCELoss()
## total number of trainable parameters (weights + biases)
num_param = sum(p.numel() for p in model.parameters())
print(f"Total number of parameters: {num_param:,}")
def compute_accuracy(y, out):
out_indicies = (out > 0.5).long()
y = y.long()
n_correct = torch.eq(y, out_indicies).sum().item()
accuracy = n_correct / y.shape[0]
return accuracy * 100
data_sample = DataLoader(dataset, batch_size = 2)
for data in data_sample:
break
for epoch in range(1, 101):
running_loss = 0
running_loss_v = 0
running_acc = 0
running_acc_v = 0
dataset.set_split("train")
data_gen = DataLoader(dataset, batch_size=1024, shuffle=True)
model.train()
for batch_index, (x, y) in enumerate(data_gen, 1):
optimizer.zero_grad()
x = x.to(device)
y = y.to(device)
out_logit = model(x)
out = torch.sigmoid(out_logit)
loss = criterion(out, y)
loss_train = loss.item()
running_loss += (loss_train - running_loss) / batch_index
accuracy = compute_accuracy(y, out)
running_acc += (accuracy - running_acc) / batch_index
loss.backward()
optimizer.step()
dataset.set_split("test")
data_gen = DataLoader(dataset, batch_size=1024)
model.eval()
for batch_index, (x, y) in enumerate(data_gen, 1):
x = x.to(device)
y = y.to(device)
with torch.no_grad():
out = model(x)
out = torch.sigmoid(out)
loss = criterion(out, y)
loss_val = loss.item()
running_loss_v += (loss_val - running_loss_v) / batch_index
accuracy = compute_accuracy(y, out)
running_acc_v += (accuracy - running_acc_v) / batch_index
print(f'epoch: {epoch}')
print(f'\ttrain loss: {running_loss:.2f} | accuracy: {running_acc:.2f}')
print(f'\tval loss: {running_loss_v:.2f} | accuracy: {running_acc_v:.2f}')
###Output
epoch: 1
train loss: 0.63 | accuracy: 74.94
val loss: 0.56 | accuracy: 76.15
epoch: 2
train loss: 0.58 | accuracy: 74.94
val loss: 0.56 | accuracy: 76.15
epoch: 3
train loss: 0.57 | accuracy: 75.00
val loss: 0.55 | accuracy: 76.15
epoch: 4
train loss: 0.55 | accuracy: 74.91
val loss: 0.53 | accuracy: 76.15
epoch: 5
train loss: 0.53 | accuracy: 74.97
val loss: 0.49 | accuracy: 76.15
epoch: 6
train loss: 0.47 | accuracy: 74.98
val loss: 0.41 | accuracy: 76.15
epoch: 7
train loss: 0.39 | accuracy: 75.11
val loss: 0.35 | accuracy: 76.41
epoch: 8
train loss: 0.35 | accuracy: 75.69
val loss: 0.32 | accuracy: 76.86
epoch: 9
train loss: 0.33 | accuracy: 75.85
val loss: 0.31 | accuracy: 76.86
epoch: 10
train loss: 0.32 | accuracy: 75.87
val loss: 0.31 | accuracy: 76.93
epoch: 11
train loss: 0.32 | accuracy: 76.08
val loss: 0.31 | accuracy: 77.02
epoch: 12
train loss: 0.31 | accuracy: 76.11
val loss: 0.30 | accuracy: 77.18
epoch: 13
train loss: 0.31 | accuracy: 76.09
val loss: 0.29 | accuracy: 77.11
epoch: 14
train loss: 0.30 | accuracy: 76.12
val loss: 0.30 | accuracy: 77.22
epoch: 15
train loss: 0.31 | accuracy: 76.07
val loss: 0.29 | accuracy: 77.60
epoch: 16
train loss: 0.30 | accuracy: 76.43
val loss: 0.28 | accuracy: 77.69
epoch: 17
train loss: 0.29 | accuracy: 76.41
val loss: 0.28 | accuracy: 77.67
epoch: 18
train loss: 0.28 | accuracy: 76.42
val loss: 0.28 | accuracy: 77.34
epoch: 19
train loss: 0.29 | accuracy: 76.34
val loss: 0.27 | accuracy: 77.86
epoch: 20
train loss: 0.28 | accuracy: 76.53
val loss: 0.26 | accuracy: 77.76
epoch: 21
train loss: 0.28 | accuracy: 76.63
val loss: 0.26 | accuracy: 77.62
epoch: 22
train loss: 0.27 | accuracy: 76.88
val loss: 0.26 | accuracy: 77.70
epoch: 23
train loss: 0.27 | accuracy: 76.85
val loss: 0.25 | accuracy: 77.93
epoch: 24
train loss: 0.27 | accuracy: 76.81
val loss: 0.26 | accuracy: 78.00
epoch: 25
train loss: 0.26 | accuracy: 77.07
val loss: 0.25 | accuracy: 78.37
epoch: 26
train loss: 0.25 | accuracy: 77.25
val loss: 0.25 | accuracy: 78.82
epoch: 27
train loss: 0.25 | accuracy: 77.71
val loss: 0.25 | accuracy: 89.78
epoch: 28
train loss: 0.25 | accuracy: 88.03
val loss: 0.26 | accuracy: 83.67
epoch: 29
train loss: 0.25 | accuracy: 88.61
val loss: 0.24 | accuracy: 88.33
epoch: 30
train loss: 0.25 | accuracy: 88.30
val loss: 0.24 | accuracy: 91.19
epoch: 31
train loss: 0.24 | accuracy: 89.44
val loss: 0.24 | accuracy: 91.09
epoch: 32
train loss: 0.24 | accuracy: 89.58
val loss: 0.24 | accuracy: 91.32
epoch: 33
train loss: 0.24 | accuracy: 89.60
val loss: 0.23 | accuracy: 88.00
epoch: 34
train loss: 0.24 | accuracy: 89.67
val loss: 0.24 | accuracy: 92.74
epoch: 35
train loss: 0.23 | accuracy: 90.32
val loss: 0.23 | accuracy: 88.40
epoch: 36
train loss: 0.24 | accuracy: 90.58
val loss: 0.24 | accuracy: 94.25
epoch: 37
train loss: 0.24 | accuracy: 91.16
val loss: 0.23 | accuracy: 93.73
epoch: 38
train loss: 0.24 | accuracy: 91.78
val loss: 0.22 | accuracy: 92.27
epoch: 39
train loss: 0.23 | accuracy: 91.87
val loss: 0.23 | accuracy: 93.66
epoch: 40
train loss: 0.23 | accuracy: 91.51
val loss: 0.22 | accuracy: 92.86
epoch: 41
train loss: 0.23 | accuracy: 91.56
val loss: 0.22 | accuracy: 91.52
epoch: 42
train loss: 0.23 | accuracy: 91.85
val loss: 0.22 | accuracy: 90.20
epoch: 43
train loss: 0.23 | accuracy: 92.03
val loss: 0.22 | accuracy: 90.71
epoch: 44
train loss: 0.22 | accuracy: 92.23
val loss: 0.22 | accuracy: 94.09
epoch: 45
train loss: 0.22 | accuracy: 92.30
val loss: 0.22 | accuracy: 94.64
epoch: 46
train loss: 0.22 | accuracy: 92.35
val loss: 0.21 | accuracy: 92.05
epoch: 47
train loss: 0.22 | accuracy: 92.61
val loss: 0.21 | accuracy: 94.01
epoch: 48
train loss: 0.22 | accuracy: 93.08
val loss: 0.21 | accuracy: 92.79
epoch: 49
train loss: 0.22 | accuracy: 92.89
val loss: 0.21 | accuracy: 93.87
epoch: 50
train loss: 0.22 | accuracy: 93.16
val loss: 0.22 | accuracy: 89.85
epoch: 51
train loss: 0.22 | accuracy: 93.08
val loss: 0.22 | accuracy: 89.67
epoch: 52
train loss: 0.22 | accuracy: 93.25
val loss: 0.21 | accuracy: 91.03
epoch: 53
train loss: 0.22 | accuracy: 93.26
val loss: 0.22 | accuracy: 96.44
epoch: 54
train loss: 0.22 | accuracy: 93.86
val loss: 0.21 | accuracy: 94.87
epoch: 55
train loss: 0.21 | accuracy: 93.93
val loss: 0.21 | accuracy: 93.50
epoch: 56
train loss: 0.22 | accuracy: 93.60
val loss: 0.21 | accuracy: 94.92
epoch: 57
train loss: 0.22 | accuracy: 94.35
val loss: 0.21 | accuracy: 91.81
epoch: 58
train loss: 0.22 | accuracy: 94.12
val loss: 0.20 | accuracy: 94.37
epoch: 59
train loss: 0.22 | accuracy: 94.09
val loss: 0.22 | accuracy: 96.50
epoch: 60
train loss: 0.22 | accuracy: 94.15
val loss: 0.21 | accuracy: 95.53
epoch: 61
train loss: 0.21 | accuracy: 94.34
val loss: 0.20 | accuracy: 92.42
epoch: 62
train loss: 0.21 | accuracy: 93.99
val loss: 0.20 | accuracy: 95.20
epoch: 63
train loss: 0.21 | accuracy: 94.14
val loss: 0.20 | accuracy: 95.93
epoch: 64
train loss: 0.21 | accuracy: 94.35
val loss: 0.20 | accuracy: 95.25
epoch: 65
train loss: 0.21 | accuracy: 94.40
val loss: 0.20 | accuracy: 93.89
epoch: 66
train loss: 0.21 | accuracy: 94.53
val loss: 0.20 | accuracy: 96.01
epoch: 67
train loss: 0.21 | accuracy: 94.84
val loss: 0.21 | accuracy: 96.64
epoch: 68
train loss: 0.21 | accuracy: 94.29
val loss: 0.20 | accuracy: 96.15
epoch: 69
train loss: 0.20 | accuracy: 94.86
val loss: 0.20 | accuracy: 93.77
epoch: 70
train loss: 0.21 | accuracy: 94.77
val loss: 0.20 | accuracy: 96.51
epoch: 71
train loss: 0.21 | accuracy: 94.76
val loss: 0.20 | accuracy: 96.45
epoch: 72
train loss: 0.21 | accuracy: 94.97
val loss: 0.20 | accuracy: 96.63
epoch: 73
train loss: 0.20 | accuracy: 95.10
val loss: 0.20 | accuracy: 96.82
epoch: 74
train loss: 0.21 | accuracy: 95.17
val loss: 0.20 | accuracy: 95.42
epoch: 75
train loss: 0.20 | accuracy: 95.22
val loss: 0.20 | accuracy: 95.24
epoch: 76
train loss: 0.20 | accuracy: 94.93
val loss: 0.20 | accuracy: 96.48
epoch: 77
train loss: 0.20 | accuracy: 95.21
val loss: 0.20 | accuracy: 96.96
epoch: 78
train loss: 0.21 | accuracy: 95.04
val loss: 0.21 | accuracy: 96.84
epoch: 79
train loss: 0.21 | accuracy: 95.60
val loss: 0.19 | accuracy: 95.66
epoch: 80
train loss: 0.20 | accuracy: 95.36
val loss: 0.19 | accuracy: 94.70
epoch: 81
train loss: 0.20 | accuracy: 95.17
val loss: 0.19 | accuracy: 95.65
epoch: 82
train loss: 0.20 | accuracy: 95.51
val loss: 0.20 | accuracy: 96.88
epoch: 83
train loss: 0.20 | accuracy: 95.57
val loss: 0.20 | accuracy: 96.62
epoch: 84
train loss: 0.20 | accuracy: 95.38
val loss: 0.19 | accuracy: 95.09
epoch: 85
train loss: 0.20 | accuracy: 95.65
val loss: 0.20 | accuracy: 93.90
epoch: 86
train loss: 0.20 | accuracy: 95.79
val loss: 0.19 | accuracy: 94.55
epoch: 87
train loss: 0.20 | accuracy: 95.50
val loss: 0.19 | accuracy: 96.92
epoch: 88
train loss: 0.20 | accuracy: 95.82
val loss: 0.20 | accuracy: 97.12
epoch: 89
train loss: 0.20 | accuracy: 95.65
val loss: 0.20 | accuracy: 97.05
epoch: 90
train loss: 0.20 | accuracy: 96.01
val loss: 0.19 | accuracy: 95.18
epoch: 91
train loss: 0.19 | accuracy: 95.83
val loss: 0.19 | accuracy: 96.86
epoch: 92
train loss: 0.19 | accuracy: 95.82
val loss: 0.19 | accuracy: 95.75
epoch: 93
train loss: 0.19 | accuracy: 95.66
val loss: 0.19 | accuracy: 96.81
epoch: 94
train loss: 0.20 | accuracy: 95.81
val loss: 0.21 | accuracy: 97.39
epoch: 95
train loss: 0.20 | accuracy: 96.15
val loss: 0.19 | accuracy: 96.80
epoch: 96
train loss: 0.19 | accuracy: 96.24
val loss: 0.19 | accuracy: 95.89
epoch: 97
train loss: 0.19 | accuracy: 95.77
val loss: 0.19 | accuracy: 96.27
epoch: 98
train loss: 0.19 | accuracy: 96.20
val loss: 0.19 | accuracy: 94.70
epoch: 99
train loss: 0.19 | accuracy: 96.25
val loss: 0.19 | accuracy: 95.09
epoch: 100
train loss: 0.19 | accuracy: 96.16
val loss: 0.19 | accuracy: 97.39
|
LSTM/LSTM - Test Model.ipynb | ###Markdown
LSTM - Test Model We run the tests starting from a trained LSTM model. Loading the dataframe
###Code
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import Dataset, DataLoader, WeightedRandomSampler
import torch.nn as nn
import numpy as np
import pandas as pd
import sys
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from sklearn.metrics import confusion_matrix, classification_report, f1_score, accuracy_score
from torch.utils.tensorboard import SummaryWriter
from scipy.stats import mode
import os
import shutil
import glob
main_folder = "../data"
folder = "./runs/LSTM_1"
try:
shutil.rmtree(folder, ignore_errors=True)
except:
pass
try:
os.mkdir(folder)
except:
pass
writer = SummaryWriter(folder)
###Output
_____no_output_____
###Markdown
Train - Validation - Test
###Code
class2idx = {
"No_action": 0,
"Prendi": 1,
"Rilascia": 2,
"Premi": 3
}
idx2class = {v: k for k, v in class2idx.items()}
def create_set(folder_set):
csv:list = []
for file in glob.glob(folder_set + "/*.csv"):
csv.append(file)
data = []
target = []
for fcsv in csv:
data_video = pd.read_csv(fcsv, usecols = [i for i in range(156)]).to_numpy()
target_video = pd.read_csv(fcsv, usecols = ["TARGET"])
target_video["TARGET"].replace(class2idx, inplace=True)
data.append(data_video)
target.append(target_video.to_numpy())
return (np.array(data, dtype=object), np.array(target, dtype=object))
folder_set = [[main_folder + "/train_set", main_folder + "/train.csv"], [main_folder + "/test_set", main_folder + "/test.csv"], [main_folder + "/val_set", main_folder + "/val.csv"]]
train_array, train_label_array = create_set(folder_set[0][0])
test_array, test_label_array = create_set(folder_set[1][0])
val_array, val_label_array = create_set(folder_set[2][0])
###Output
_____no_output_____
###Markdown
Model parameters
###Code
class ClassifierDataset(Dataset):
def __init__(self, array, label, index_data, window):
self.index_data = index_data
self.array = array
self.label = label
self.window = window
def __getitem__(self, index):
file_index = self.index_data[index][0].tolist()
i = self.index_data[index][1]
j = self.index_data[index][2]
if i == j:
x_data = [self.array[file_index][i]] * self.window
y_data = self.label[file_index][i]
X = torch.from_numpy(np.array(x_data).astype(float)).float()
Y = torch.from_numpy(np.array([y_data]).astype(int)).long()
return X, Y
elif j-i < self.window-1:
x_data = [self.array[file_index][i]] * (self.window-j)
x_data.extend(self.array[file_index][(i+1):(j+1)])
x_data = np.array(x_data).astype(float)
else:
x_data = self.array[file_index][i:(j+1)]
k = j-1
while len(x_data) > self.window:
x_data = self.array[file_index][i:(k+1)]
k -= 1
X = torch.from_numpy(x_data).float()
y_data = self.label[file_index][i:(j+1)]
y_mode = mode(y_data)[0][0]
Y = torch.from_numpy(np.array([y_mode])).long()
return X, Y
def __len__ (self):
return len(self.index_data)
def create_dataset(array_data, label_data, window):
X = []
files_num = len(array_data)
for index, array, label in zip(range(files_num), array_data, label_data):
l = array.shape[0]
i = 0
while l - i >= window:
j = i + window
index_window = np.array([index, i, j]).astype(int)
X.append(index_window)
i += window
if i < l:
w = l - i
i = i - w - window
j = i + window
index_window = np.array([index, i, j]).astype(int)
X.append(index_window)
X_data = torch.from_numpy(np.array(X).astype(int))
return ClassifierDataset(array_data, label_data, X_data, window)
def create_dataset2(array_data, label_data, window):
X = []
files_num = len(array_data)
for index, array, label in zip(range(files_num), array_data, label_data):
l = array.shape[0]
i = -1
w = [0] * window
while i < l:
i += 1
w.pop(0)
w.append(i)
index_window = np.array([index, w[0], i]).astype(int)
X.append(index_window)
X_data = torch.from_numpy(np.array(X).astype(int))
return ClassifierDataset(array_data, label_data, X_data, window)
EPOCHS = 200
window = 182
BATCH_SIZE = 64
LEARNING_RATE = 0.005217779658903428
NUM_LAYER = 1
NUM_HIDDEN = 32
NUM_FEATURES = 156
NUM_CLASSES = 4
index_name = 2
esperimento = 1
model_name = "../data/modelli_senza_overlap/000" + str(index_name) + "_mymodel.pt"
train_dataset = create_dataset(train_array, train_label_array, window)
test_dataset = create_dataset2(test_array, test_label_array, window)
val_dataset = create_dataset2(val_array, val_label_array, window)
train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, drop_last=True)
val_loader = DataLoader(dataset=val_dataset, batch_size=1, shuffle=False)
test_loader = DataLoader(dataset=test_dataset, batch_size=1, shuffle=False)
###Output
_____no_output_____
###Markdown
LSTM Neural Network
###Code
class LSTM(nn.Module):
def __init__(self, input_size, window, output_size, hidden_layer_size, num_layers):
super(LSTM, self).__init__()
self.num_layers = num_layers
self.hidden_size = hidden_layer_size
self.output_size = output_size
self.lstm = nn.LSTM(input_size, hidden_layer_size, num_layers)
self.regressor = nn.Linear(hidden_layer_size, output_size)
def forward(self, x, hidden=None):
if hidden is not None:
h0 = hidden[0]
c0 = hidden [1]
else:
h0 = torch.zeros(self.num_layers, x.size()[0], self.hidden_size).to(device)
c0 = torch.zeros(self.num_layers, x.size()[0], self.hidden_size).to(device)
e = x.view(x.size(1), x.size(0), x.size(2))
h, hn = self.lstm(e, (h0, c0))
h = h.view(h.size(1), h.size(0), h.size(2))
h = h[:,-1,:]
h = self.regressor(h)
return h, hn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
def categoryFromOutput(output):
top_n, top_i = output.topk(1)
category_i = top_i[0].item()
return LABELS[category_i], category_i
def accuracy(y_pred, y_true):
return accuracy_score(y_true, y_pred) * 100
def f1_s(y_pred, y_true):
return f1_score(y_true, y_pred, average=None, zero_division=1, labels=[0,1,2,3])
###Output
_____no_output_____
###Markdown
Test the model
###Code
LABELS = [
"No_action",
"Prendi",
"Rilascia",
"Premi"
]
class2idx = {
"No_action": 0,
"Prendi": 1,
"Rilascia": 2,
"Premi": 3
}
idx2class = {v: k for k, v in class2idx.items()}
model = LSTM(input_size=NUM_FEATURES, window=window, output_size=NUM_CLASSES, hidden_layer_size=NUM_HIDDEN, num_layers=NUM_LAYER)
model.to(device)
model.load_state_dict(torch.load(model_name))
print(model)
y_pred_list = []
y_true_list = []
with torch.no_grad():
model.eval()
for X_batch, y_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred, _ = model(X_batch)
guess, guess_i = categoryFromOutput(y_test_pred)
y_pred_list.append(guess_i)
y_true_list.append(y_batch[0][0].tolist()[0])
Y_t = y_true_list
Y_p = y_pred_list
###Output
_____no_output_____
###Markdown
Confusion Matrix We create a DataFrame from the confusion matrix and plot it as a heatmap using the Seaborn library.
###Code
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_true_list, y_pred_list, normalize='true')).rename(columns=idx2class, index=idx2class)
sns.heatmap(confusion_matrix_df, annot=True)
plt.savefig('Esperimento' + str(esperimento) + "_1.pdf", format='pdf')
print(classification_report(y_true_list, y_pred_list))
class2idx = {
"No_action": 0,
"Action": 1
}
idx2class = {v: k for k, v in class2idx.items()}
real2class = {
0:0,
1:1,
2:1,
3:1
}
y_pred_list = [(real2class[c]) for c in Y_p]
y_true_list = [(real2class[c]) for c in Y_t]
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_true_list, y_pred_list, normalize='true')).rename(columns=idx2class, index=idx2class)
sns.heatmap(confusion_matrix_df, annot=True)
plt.savefig('Esperimento' + str(esperimento) + "_2.pdf", format='pdf')
print(classification_report(y_true_list, y_pred_list))
class2idx = {
"No_action": 0,
"P/R": 1,
"Premi": 2
}
idx2class = {v: k for k, v in class2idx.items()}
real2class = {
0:0,
1:1,
2:1,
3:2
}
y_pred_list = [(real2class[c]) for c in Y_p]
y_true_list = [(real2class[c]) for c in Y_t]
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_true_list, y_pred_list, normalize='true')).rename(columns=idx2class, index=idx2class)
sns.heatmap(confusion_matrix_df, annot=True)
plt.savefig('Esperimento' + str(esperimento) + "_3.pdf", format='pdf')
print(classification_report(y_true_list, y_pred_list))
class2idx = {
"No_action": 0,
"P/P": 1,
"Rilascia": 2
}
idx2class = {v: k for k, v in class2idx.items()}
real2class = {
0:0,
1:1,
2:2,
3:1
}
y_pred_list = [(real2class[c]) for c in Y_p]
y_true_list = [(real2class[c]) for c in Y_t]
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_true_list, y_pred_list, normalize='true')).rename(columns=idx2class, index=idx2class)
sns.heatmap(confusion_matrix_df, annot=True)
print(classification_report(y_true_list, y_pred_list))
class2idx = {
"No_action": 0,
"R/P": 1,
"Prendi": 2
}
idx2class = {v: k for k, v in class2idx.items()}
real2class = {
0:0,
1:2,
2:1,
3:1
}
y_pred_list = [(real2class[c]) for c in Y_p]
y_true_list = [(real2class[c]) for c in Y_t]
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_true_list, y_pred_list, normalize='true')).rename(columns=idx2class, index=idx2class)
sns.heatmap(confusion_matrix_df, annot=True)
print(classification_report(y_true_list, y_pred_list))
###Output
_____no_output_____ |
c2-machine-learning-data-lifecycle-in-production/week2/C2W2_Assignment.ipynb | ###Markdown
Week 2 Assignment: Feature Engineering For this week's assignment, you will build a data pipeline using [Tensorflow Extended (TFX)](https://www.tensorflow.org/tfx) to prepare features from the [Metro Interstate Traffic Volume dataset](https://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume). Try to only use the documentation and code hints to accomplish the tasks but feel free to review the 2nd ungraded lab this week in case you get stuck.Upon completion, you will have:* created an InteractiveContext to run TFX components interactively* used the TFX ExampleGen component to split your dataset into training and evaluation datasets* generated the statistics and the schema of your dataset using TFX StatisticsGen and SchemaGen components* validated the evaluation dataset statistics using TFX ExampleValidator* performed feature engineering using the TFX Transform componentLet's begin! Table of Contents- [1 - Setup](1) - [1.1 - Imports](1-1) - [1.2 - Define Paths](1-2) - [1.3 - Preview the Dataset](1-3) - [1.4 - Create the InteractiveContext](1-4)- [2 - Run TFX components interactively](2) - [2.1 - ExampleGen](2-1) - [Exercise 1 - ExampleGen](ex-1) - [Exercise 2 - get_records()](ex-2) - [2.2 - StatisticsGen](2-2) - [Exercise 3 - StatisticsGen](ex-3) - [2.3 - SchemaGen](2-3) - [Exercise 4 - SchemaGen](ex-4) - [2.4 - ExampleValidator](2-4) - [Exercise 5 - ExampleValidator](ex-5) - [2.5 - Transform](2-5) - [Exercise 6 - preprocessing_fn()](ex-6) - [Exercise 7 - Transform](ex-7) 1 - SetupAs usual, you will first need to import the necessary packages. For reference, the lab environment uses *TensorFlow version: 2.6* and *TFX version: 1.3*. 1.1 Imports
###Code
import os
import tensorflow as tf
from tfx import v1 as tfx
import tensorflow_transform.beam as tft_beam
from google.protobuf.json_format import MessageToDict
from tensorflow_transform.tf_metadata import dataset_metadata, schema_utils
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
import tempfile
import pprint
import warnings
pp = pprint.PrettyPrinter()
# ignore tf warning messages
tf.get_logger().setLevel('ERROR')
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
1.2 - Define pathsYou will define a few global variables to indicate paths in the local workspace.
###Code
# location of the pipeline metadata store
_pipeline_root = './pipeline'
# directory of the raw data files
_data_root = './data'
# path to the raw training data
_data_filepath = os.path.join(_data_root, 'metro_traffic_volume.csv')
###Output
_____no_output_____
###Markdown
1.3 - Preview the datasetThe [Metro Interstate Traffic Volume dataset](https://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume) contains hourly traffic volume of a road in Minnesota from 2012-2018. With this data, you can develop a model for predicting the traffic volume given the date, time, and weather conditions. The attributes are:* **holiday** - US National holidays plus regional holiday, Minnesota State Fair* **temp** - Average temp in Kelvin* **rain_1h** - Amount in mm of rain that occurred in the hour* **snow_1h** - Amount in mm of snow that occurred in the hour* **clouds_all** - Percentage of cloud cover* **weather_main** - Short textual description of the current weather* **weather_description** - Longer textual description of the current weather* **date_time** - DateTime Hour of the data collected in local CST time* **traffic_volume** - Numeric Hourly I-94 ATR 301 reported westbound traffic volume* **month** - taken from date_time* **day** - taken from date_time* **day_of_week** - taken from date_time* **hour** - taken from date_time*Disclaimer: We added the last four attributes shown above (i.e. month, day, day_of_week, hour) to the original dataset to increase the features you can transform later.* Take a quick look at the first few rows of the CSV file.
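For illustration only (the CSV already contains these columns): the four derived attributes can be computed from `date_time` with pandas. This is a sketch, assuming pandas is available in the lab environment.

```python
# Sketch: deriving month/day/day_of_week/hour from the date_time column.
import pandas as pd

df = pd.read_csv(_data_filepath, parse_dates=["date_time"])
df["month"] = df["date_time"].dt.month
df["day"] = df["date_time"].dt.day
df["day_of_week"] = df["date_time"].dt.dayofweek  # Monday = 0
df["hour"] = df["date_time"].dt.hour
```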
###Code
# Preview the dataset
!head {_data_filepath}
###Output
holiday,temp,rain_1h,snow_1h,clouds_all,weather_main,weather_description,date_time,traffic_volume,month,day,day_of_week,hour
None,288.28,0.0,0.0,40,Clouds,scattered clouds,2012-10-02 09:00:00,5545,10,2,1,9
None,289.36,0.0,0.0,75,Clouds,broken clouds,2012-10-02 10:00:00,4516,10,2,1,10
None,289.58,0.0,0.0,90,Clouds,overcast clouds,2012-10-02 11:00:00,4767,10,2,1,11
None,290.13,0.0,0.0,90,Clouds,overcast clouds,2012-10-02 12:00:00,5026,10,2,1,12
None,291.14,0.0,0.0,75,Clouds,broken clouds,2012-10-02 13:00:00,4918,10,2,1,13
None,291.72,0.0,0.0,1,Clear,sky is clear,2012-10-02 14:00:00,5181,10,2,1,14
None,293.17,0.0,0.0,1,Clear,sky is clear,2012-10-02 15:00:00,5584,10,2,1,15
None,293.86,0.0,0.0,1,Clear,sky is clear,2012-10-02 16:00:00,6015,10,2,1,16
None,294.14,0.0,0.0,20,Clouds,few clouds,2012-10-02 17:00:00,5791,10,2,1,17
###Markdown
1.4 - Create the InteractiveContextYou will need to initialize the `InteractiveContext` to enable running the TFX components interactively. As before, you will let it create the metadata store in the `_pipeline_root` directory. You can safely ignore the warning about the missing metadata config file.
###Code
# Declare the InteractiveContext and use a local sqlite file as the metadata store.
# You can ignore the warning about the missing metadata config file
context = InteractiveContext(pipeline_root=_pipeline_root)
###Output
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at ./pipeline/metadata.sqlite.
###Markdown
2 - Run TFX components interactivelyIn the following exercises, you will create the data pipeline components one-by-one, run each of them, and visualize their output artifacts. Recall that we refer to the outputs of pipeline components as *artifacts* and these can be inputs to the next stage of the pipeline. 2.1 - ExampleGenThe pipeline starts with the [ExampleGen](https://www.tensorflow.org/tfx/guide/examplegen) component. It will:* split the data into training and evaluation sets (by default: 2/3 train, 1/3 eval).* convert each data row into `tf.train.Example` format. This [protocol buffer](https://developers.google.com/protocol-buffers) is designed for Tensorflow operations and is used by the TFX components.* compress and save the data collection under the `_pipeline_root` directory for other components to access. These examples are stored in `TFRecord` format. This optimizes read and write operations within Tensorflow especially if you have a large collection of data. Exercise 1: ExampleGenFill out the code below to ingest the data from the CSV file stored in the `_data_root` directory.
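Not required for the exercise, but for reference: the default 2/3–1/3 split can be customized through an `output_config`. The sketch below assumes the `tfx.proto` helpers exported by TFX 1.x.

```python
# Sketch: a 4:1 train/eval split instead of the default 2:1 hash buckets.
output_config = tfx.proto.Output(
    split_config=tfx.proto.SplitConfig(splits=[
        tfx.proto.SplitConfig.Split(name='train', hash_buckets=4),
        tfx.proto.SplitConfig.Split(name='eval', hash_buckets=1),
    ])
)
custom_example_gen = tfx.components.CsvExampleGen(
    input_base=_data_root, output_config=output_config)
```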
###Code
### START CODE HERE
# Instantiate ExampleGen with the input CSV dataset
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
# Run the component using the InteractiveContext instance
context.run(example_gen)
### END CODE HERE
###Output
_____no_output_____
###Markdown
You should see the output cell of the `InteractiveContext` above showing the metadata associated with the component execution. You can expand the items under `.component.outputs` and see that an `Examples` artifact for the train and eval splits is created in `metro_traffic_pipeline/CsvExampleGen/examples/{execution_id}`. You can also check that programmatically with the following snippet. You can focus on the `try` block. The `except` and `else` blocks are needed mainly for grading. `context.run()` yields no operation when executed in a non-interactive environment (such as the grader script that runs outside of this notebook). In such scenarios, the URI must be set manually to avoid errors.
###Code
try:
# get the artifact object
artifact = example_gen.outputs['examples'].get()[0]
# print split names and uri
print(f'split names: {artifact.split_names}')
print(f'artifact uri: {artifact.uri}')
# for grading since context.run() does not work outside the notebook
except IndexError:
print("context.run() was no-op")
examples_path = './pipeline/CsvExampleGen/examples'
dir_id = os.listdir(examples_path)[0]
artifact_uri = f'{examples_path}/{dir_id}'
else:
artifact_uri = artifact.uri
###Output
split names: ["train", "eval"]
artifact uri: ./pipeline/CsvExampleGen/examples/6
###Markdown
The ingested data has been saved to the directory specified by the artifact Uniform Resource Identifier (URI). As a sanity check, you can take a look at some of the training examples. This requires working with Tensorflow data types, particularly `tf.train.Example` and `TFRecord` (you can read more about them [here](https://www.tensorflow.org/tutorials/load_data/tfrecord)). Let's first load the `TFRecord` into a variable:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(artifact_uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
###Output
_____no_output_____
###Markdown
Exercise 2: get_records()Complete the helper function below to return a specified number of examples.*Hints: You may find the [MessageToDict](https://googleapis.dev/python/protobuf/latest/google/protobuf/json_format.html#google.protobuf.json_format.MessageToDict) helper function and tf.train.Example's [ParseFromString()](https://googleapis.dev/python/protobuf/latest/google/protobuf/message.html#google.protobuf.message.Message.ParseFromString) method useful here. You can also refer [here](https://www.tensorflow.org/tutorials/load_data/tfrecord) for a refresher on TFRecord and tf.train.Example()*
###Code
def get_records(dataset, num_records):
'''Extracts records from the given dataset.
Args:
dataset (TFRecordDataset): dataset saved by ExampleGen
num_records (int): number of records to preview
'''
# initialize an empty list
records = []
### START CODE HERE
# Use the `take()` method to specify how many records to get
for tfrecord in dataset.take(num_records):
# Get the numpy property of the tensor
serialized_example = tfrecord.numpy()
# Initialize a `tf.train.Example()` to read the serialized data
example = tf.train.Example()
# Read the example data (output is a protocol buffer message)
example.ParseFromString(serialized_example)
        # convert the protocol buffer message to a Python dictionary
example_dict = (MessageToDict(example))
# append to the records list
records.append(example_dict)
### END CODE HERE
return records
# Get 3 records from the dataset
sample_records = get_records(dataset, 3)
# Print the output
pp.pprint(sample_records)
###Output
[{'features': {'feature': {'clouds_all': {'int64List': {'value': ['40']}},
'date_time': {'bytesList': {'value': ['MjAxMi0xMC0wMiAwOTowMDowMA==']}},
'day': {'int64List': {'value': ['2']}},
'day_of_week': {'int64List': {'value': ['1']}},
'holiday': {'bytesList': {'value': ['Tm9uZQ==']}},
'hour': {'int64List': {'value': ['9']}},
'month': {'int64List': {'value': ['10']}},
'rain_1h': {'floatList': {'value': [0.0]}},
'snow_1h': {'floatList': {'value': [0.0]}},
'temp': {'floatList': {'value': [288.28]}},
'traffic_volume': {'int64List': {'value': ['5545']}},
'weather_description': {'bytesList': {'value': ['c2NhdHRlcmVkIGNsb3Vkcw==']}},
'weather_main': {'bytesList': {'value': ['Q2xvdWRz']}}}}},
{'features': {'feature': {'clouds_all': {'int64List': {'value': ['75']}},
'date_time': {'bytesList': {'value': ['MjAxMi0xMC0wMiAxMDowMDowMA==']}},
'day': {'int64List': {'value': ['2']}},
'day_of_week': {'int64List': {'value': ['1']}},
'holiday': {'bytesList': {'value': ['Tm9uZQ==']}},
'hour': {'int64List': {'value': ['10']}},
'month': {'int64List': {'value': ['10']}},
'rain_1h': {'floatList': {'value': [0.0]}},
'snow_1h': {'floatList': {'value': [0.0]}},
'temp': {'floatList': {'value': [289.36]}},
'traffic_volume': {'int64List': {'value': ['4516']}},
'weather_description': {'bytesList': {'value': ['YnJva2VuIGNsb3Vkcw==']}},
'weather_main': {'bytesList': {'value': ['Q2xvdWRz']}}}}},
{'features': {'feature': {'clouds_all': {'int64List': {'value': ['90']}},
'date_time': {'bytesList': {'value': ['MjAxMi0xMC0wMiAxMTowMDowMA==']}},
'day': {'int64List': {'value': ['2']}},
'day_of_week': {'int64List': {'value': ['1']}},
'holiday': {'bytesList': {'value': ['Tm9uZQ==']}},
'hour': {'int64List': {'value': ['11']}},
'month': {'int64List': {'value': ['10']}},
'rain_1h': {'floatList': {'value': [0.0]}},
'snow_1h': {'floatList': {'value': [0.0]}},
'temp': {'floatList': {'value': [289.58]}},
'traffic_volume': {'int64List': {'value': ['4767']}},
'weather_description': {'bytesList': {'value': ['b3ZlcmNhc3QgY2xvdWRz']}},
'weather_main': {'bytesList': {'value': ['Q2xvdWRz']}}}}}]
###Markdown
You should see three of the examples printed above. Now that `ExampleGen` has finished ingesting the data, the next step is data analysis. 2.2 - StatisticsGenThe [StatisticsGen](https://www.tensorflow.org/tfx/guide/statsgen) component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.`StatisticsGen` takes as input the dataset ingested using `CsvExampleGen`. Exercise 3: StatisticsGenFill the code below to generate statistics from the output examples of `CsvExampleGen`.
###Code
### START CODE HERE
# Instantiate StatisticsGen with the ExampleGen ingested dataset
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# Run the component
context.run(statistics_gen)
### END CODE HERE
# Plot the statistics generated
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
2.3 - SchemaGenThe [SchemaGen](https://www.tensorflow.org/tfx/guide/schemagen) component also uses TFDV to generate a schema based on your data statistics. As you've learned previously, a schema defines the expected bounds, types, and properties of the features in your dataset.`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default. Exercise 4: SchemaGen
###Code
### START CODE HERE
# Instantiate SchemaGen with the output statistics from the StatisticsGen
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
)
# Run the component
context.run(schema_gen)
### END CODE HERE
###Output
_____no_output_____
###Markdown
If all went well, you can now visualize the generated schema as a table.
###Code
# Visualize the output
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
Each attribute in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.This schema will be used to detect anomalies in the next step. 2.4 - ExampleValidatorThe [ExampleValidator](https://www.tensorflow.org/tfx/guide/exampleval) component detects anomalies in your data based on the generated schema from the previous step. Like the previous two components, it also uses TFDV under the hood. `ExampleValidator` will take as input the statistics from `StatisticsGen` and the schema from `SchemaGen`. By default, it compares the statistics from the evaluation split to the schema from the training split. Exercise 5: ExampleValidatorFill the code below to detect anomalies in your datasets.
###Code
### START CODE HERE
# Instantiate ExampleValidator with the statistics and schema from the previous steps
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
# Run the component
context.run(example_validator)
### END CODE HERE
###Output
_____no_output_____
###Markdown
As with the previous steps, you can visualize the anomalies as a table.
###Code
# Visualize the output
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
###Markdown
If anomalies are detected, you should examine how to handle them. For example, you can relax distribution constraints or modify the domain of some features. You've had some practice with this last week when you used TFDV and you can also do that here. For this particular case, there should be no anomalies detected and we can proceed to the next step. 2.5 - TransformIn this section, you will use the [Transform](https://www.tensorflow.org/tfx/guide/transform) component to perform feature engineering.`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module containing the preprocessing function.The component expects an external module for your Transform code, so you need to use the magic command `%%writefile` to save the file to disk. We have defined a few constants that group the data's attributes according to the transforms you will perform later. This file will also be saved locally.
###Code
# Set the constants module filename
_traffic_constants_module_file = 'traffic_constants.py'
%%writefile {_traffic_constants_module_file}
# Features to be scaled to the z-score
DENSE_FLOAT_FEATURE_KEYS = ['temp', 'snow_1h']
# Features to bucketize
BUCKET_FEATURE_KEYS = ['rain_1h']
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = {'rain_1h': 3}
# Feature to scale from 0 to 1
RANGE_FEATURE_KEYS = ['clouds_all']
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
# Features with string data types that will be converted to indices
VOCAB_FEATURE_KEYS = [
'holiday',
'weather_main',
'weather_description'
]
# Features with int data type that will be kept as is
CATEGORICAL_FEATURE_KEYS = [
'hour', 'day', 'day_of_week', 'month'
]
# Feature to predict
VOLUME_KEY = 'traffic_volume'
def transformed_name(key):
return key + '_xf'
###Output
Overwriting traffic_constants.py
###Markdown
Exercise 6 Next, you will fill out the transform module. As mentioned, this will also be saved to disk. Specifically, you will complete the `preprocessing_fn` which defines the transformations. See the code comments for instructions and refer to the [tft module documentation](https://www.tensorflow.org/tfx/transform/api_docs/python/tft) to look up which function to use for a given group of keys.For the label (i.e. `VOLUME_KEY`), you will transform it to indicate if it is greater than the mean of the entire dataset.
###Code
# Set the transform module filename
_traffic_transform_module_file = 'traffic_transform.py'
%%writefile {_traffic_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import traffic_constants
# Unpack the contents of the constants module
_DENSE_FLOAT_FEATURE_KEYS = traffic_constants.DENSE_FLOAT_FEATURE_KEYS
_RANGE_FEATURE_KEYS = traffic_constants.RANGE_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = traffic_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = traffic_constants.VOCAB_SIZE
_OOV_SIZE = traffic_constants.OOV_SIZE
_CATEGORICAL_FEATURE_KEYS = traffic_constants.CATEGORICAL_FEATURE_KEYS
_BUCKET_FEATURE_KEYS = traffic_constants.BUCKET_FEATURE_KEYS
_FEATURE_BUCKET_COUNT = traffic_constants.FEATURE_BUCKET_COUNT
_VOLUME_KEY = traffic_constants.VOLUME_KEY
_transformed_name = traffic_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
### START CODE HERE
# Scale these features to the z-score.
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Scale these features to the z-score.
outputs[_transformed_name(key)] = tft.scale_to_z_score(inputs[key])
# Scale these feature/s from 0 to 1
for key in _RANGE_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.scale_to_0_1(inputs[key])
# Transform the strings into indices
# hint: use the VOCAB_SIZE and OOV_SIZE to define the top_k and num_oov parameters
for key in _VOCAB_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(inputs[key])
# Bucketize the feature
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(inputs[key], _FEATURE_BUCKET_COUNT[key])
# Keep as is. No tft function needed.
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = inputs[key]
# Use `tf.cast` to cast the label key to float32 and fill in the missing values.
traffic_volume = tf.cast(inputs[_VOLUME_KEY], tf.float32)
# Create a feature that shows if the traffic volume is greater than the mean and cast to an int
outputs[_transformed_name(_VOLUME_KEY)] = tf.cast(
        # Use `tf.greater` to check if the traffic volume in a row is greater than the mean of the entire traffic volume column
tf.greater(traffic_volume, tft.mean(tf.cast(inputs[_VOLUME_KEY], tf.float32))),
tf.int64)
### END CODE HERE
return outputs
# Test your preprocessing_fn
import traffic_transform
from testing_values import feature_description, raw_data
# NOTE: These next two lines are for reloading your traffic_transform module in case you need to
# update your initial solution and re-run this cell. Please do not remove them especially if you
# have revised your solution. Else, your changes will not be detected.
import importlib
importlib.reload(traffic_transform)
raw_data_metadata = dataset_metadata.DatasetMetadata(schema_utils.schema_from_feature_spec(feature_description))
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transformed_dataset, _ = (
(raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset(traffic_transform.preprocessing_fn))
transformed_data, transformed_metadata = transformed_dataset
# Test that the transformed data matches the expected output
transformed_data
###Output
_____no_output_____
###Markdown
**Expected Output:**```[{'clouds_all_xf': 1.0, 'day_of_week_xf': 4, 'day_xf': 8, 'holiday_xf': 0, 'hour_xf': 15, 'month_xf': 1, 'rain_1h_xf': 2, 'snow_1h_xf': 0.0, 'temp_xf': 0.0, 'traffic_volume_xf': 0, 'weather_description_xf': 0, 'weather_main_xf': 0}]```
###Code
# Test that the transformed metadata's schema matches the expected output
MessageToDict(transformed_metadata.schema)
###Output
_____no_output_____
###Markdown
**Expected Output:**```{'feature': [{'name': 'clouds_all_xf', 'type': 'FLOAT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'day_of_week_xf', 'type': 'INT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'day_xf', 'type': 'INT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'holiday_xf', 'type': 'INT', 'intDomain': {'isCategorical': True}, 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'hour_xf', 'type': 'INT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'month_xf', 'type': 'INT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'rain_1h_xf', 'type': 'INT', 'intDomain': {'isCategorical': True}, 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'snow_1h_xf', 'type': 'FLOAT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'temp_xf', 'type': 'FLOAT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'traffic_volume_xf', 'type': 'INT', 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'weather_description_xf', 'type': 'INT', 'intDomain': {'isCategorical': True}, 'presence': {'minFraction': 1.0}, 'shape': {}}, {'name': 'weather_main_xf', 'type': 'INT', 'intDomain': {'isCategorical': True}, 'presence': {'minFraction': 1.0}, 'shape': {}}]}``` Exercise 7With the transform module defined, complete the code below to perform feature engineering on the raw data.
###Code
### START CODE HERE
# Instantiate the Transform component
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_traffic_transform_module_file))
# Run the component.
# The `enable_cache` flag is disabled in case you need to update your transform module file.
context.run(transform, enable_cache=False)
### END CODE HERE
###Output
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType], int] instead.
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_2/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_2/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType], int] instead.
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.8 interpreter.
###Markdown
You should see the output cell from `InteractiveContext` above showing the three artifacts in `.component.outputs`:* `transform_graph` is the graph that performs the preprocessing operations. This will be included during training and serving to ensure consistent transformations of incoming data.* `transformed_examples` points to the preprocessed training and evaluation data.* `updated_analyzer_cache` points to stored calculations from previous runs. The `transform_graph` artifact URI should point to a directory containing:* The `metadata` subdirectory containing the schema of the original data.* The `transformed_metadata` subdirectory containing the schema of the preprocessed data.* The `transform_fn` subdirectory containing the actual preprocessing graph.Again, for grading purposes, we inserted an `except` and `else` below to handle checking the output outside the notebook environment.
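Downstream components such as the Trainer consume the `transform_graph` through the `tensorflow_transform` library. The snippet below is only a minimal sketch of how that artifact could be inspected, assuming the run above succeeded and using the standard `tft.TFTransformOutput` helper:
```
import tensorflow_transform as tft

# Sketch: wrap the transform_graph artifact directory produced above
tf_transform_output = tft.TFTransformOutput(transform.outputs['transform_graph'].get()[0].uri)

# Feature spec of the preprocessed data (the *_xf features defined in preprocessing_fn)
print(tf_transform_output.transformed_feature_spec())
```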
###Code
try:
# Get the uri of the transform graph
transform_graph_uri = transform.outputs['transform_graph'].get()[0].uri
except IndexError:
print("context.run() was no-op")
transform_path = './pipeline/Transform/transformed_examples'
dir_id = os.listdir(transform_path)[0]
transform_graph_uri = f'{transform_path}/{dir_id}'
else:
# List the subdirectories under the uri
os.listdir(transform_graph_uri)
###Output
_____no_output_____
###Markdown
Lastly, you can also take a look at a few of the transformed examples.
###Code
try:
# Get the URI of the output artifact representing the transformed examples
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')
except IndexError:
print("context.run() was no-op")
train_uri = os.path.join(transform_graph_uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
transformed_dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Get 3 records from the dataset
sample_records_xf = get_records(transformed_dataset, 3)
# Print the output
pp.pprint(sample_records_xf)
###Output
[{'features': {'feature': {'clouds_all_xf': {'floatList': {'value': [0.4]}},
'day_of_week_xf': {'int64List': {'value': ['1']}},
'day_xf': {'int64List': {'value': ['2']}},
'holiday_xf': {'int64List': {'value': ['0']}},
'hour_xf': {'int64List': {'value': ['9']}},
'month_xf': {'int64List': {'value': ['10']}},
'rain_1h_xf': {'int64List': {'value': ['2']}},
'snow_1h_xf': {'floatList': {'value': [-0.027424404]}},
'temp_xf': {'floatList': {'value': [0.5336853]}},
'traffic_volume_xf': {'int64List': {'value': ['1']}},
'weather_description_xf': {'int64List': {'value': ['4']}},
'weather_main_xf': {'int64List': {'value': ['0']}}}}},
{'features': {'feature': {'clouds_all_xf': {'floatList': {'value': [0.75]}},
'day_of_week_xf': {'int64List': {'value': ['1']}},
'day_xf': {'int64List': {'value': ['2']}},
'holiday_xf': {'int64List': {'value': ['0']}},
'hour_xf': {'int64List': {'value': ['10']}},
'month_xf': {'int64List': {'value': ['10']}},
'rain_1h_xf': {'int64List': {'value': ['2']}},
'snow_1h_xf': {'floatList': {'value': [-0.027424404]}},
'temp_xf': {'floatList': {'value': [0.61569786]}},
'traffic_volume_xf': {'int64List': {'value': ['1']}},
'weather_description_xf': {'int64List': {'value': ['3']}},
'weather_main_xf': {'int64List': {'value': ['0']}}}}},
{'features': {'feature': {'clouds_all_xf': {'floatList': {'value': [0.9]}},
'day_of_week_xf': {'int64List': {'value': ['1']}},
'day_xf': {'int64List': {'value': ['2']}},
'holiday_xf': {'int64List': {'value': ['0']}},
'hour_xf': {'int64List': {'value': ['11']}},
'month_xf': {'int64List': {'value': ['10']}},
'rain_1h_xf': {'int64List': {'value': ['2']}},
'snow_1h_xf': {'floatList': {'value': [-0.027424404]}},
'temp_xf': {'floatList': {'value': [0.63240445]}},
'traffic_volume_xf': {'int64List': {'value': ['1']}},
'weather_description_xf': {'int64List': {'value': ['2']}},
'weather_main_xf': {'int64List': {'value': ['0']}}}}}]
|
Adaptive Sensitive Reweighting/Adaptive_Sensitive_Reweightening_Bank-dataset.ipynb | ###Markdown
1. Load and preprocess the datasets* In the article it was mentioned that all categorical features were encoded using a one-hot scheme, whereas all numeric attributes were normalized by dividing by their mean value.
###Code
def encode_and_bind(original_dataframe, feature_to_encode):
"""
To obtain dummy features from original feature `feature_to_encode`,
add them to original dataframe `original_dataframe`,
and drop original feature from it.
"""
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]], drop_first=True)
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
###Output
_____no_output_____
###Markdown
1.1 Bank dataset`data_bank` - the [Bank marketing dataset](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing) comprises 41188 samples with 20 features and a label `y` with two possible values, `yes` and `no`. In the article, ages less than 25 or greater than 60 years were considered *sensitive*.Let's perform the following preprocessing steps:1. In the feature `y`, replace `no` values with `0` and `yes` with `1`.2. Replace values of the feature `age` with `0` if the age of a client lies in the $[25, 60]$ interval and with `1` otherwise.3. Divide the following numerical features by their mean values: `duration`, `campaign`, `pdays`, `previous`, `emp.var.rate`, `cons.price.idx`, `cons.conf.idx`, `euribor3m`, `nr.employed`.4. Encode the following categorical features with the one-hot scheme: `job`, `marital`, `education`, `default`, `housing`, `loan`, `contact`, `month`, `day_of_week`, `poutcome`. After preprocessing, we will perform five 70:30 random splits to obtain training and test data, as described in the article. Each split corresponds to one experiment.
###Code
data_bank = pd.read_csv("bank-additional-full.csv", sep=';')
# Replace values in 'y'
data_bank.loc[data_bank['y'] == 'yes', 'y'] = 1
data_bank.loc[data_bank['y'] == 'no', 'y'] = 0
# Replace values in 'age'
data_bank.loc[(data_bank['age'] < 25) | (data_bank['age'] > 60), 'age'] = 1
data_bank.loc[(data_bank['age'] >= 25) & (data_bank['age'] <= 60), 'age'] = 0
# Normalize all numerical features by dividing with mean value
num_features_bank = ["duration", "campaign", "pdays", "previous", "emp.var.rate",
"cons.price.idx", "cons.conf.idx", "euribor3m", "nr.employed"]
for feature in num_features_bank:
mean = data_bank[feature].mean()
data_bank[feature] = data_bank[feature] / mean
# Make dummy features for all categorical features
cat_features_bank = ["job", "marital", "education", "default", "housing",
"loan", "contact", "month", "day_of_week", "poutcome"]
for feature in cat_features_bank:
data_bank = encode_and_bind(data_bank, feature)
###Output
_____no_output_____
###Markdown
2. Adaptive Sensitive Reweighting (ASR) + CULEP model 2.1 Adaptive Sensitive Reweighting (`ReweightedClassifier`)
###Code
class ReweightedClassifier:
def __init__(self, baze_clf, alpha, beta, params = {}):
"""
Input:
baze_clf - object from sklearn with methods .fit(sample_weight=), .predict(), .predict_proba()
alpha - list of alphas for sensitive and non-sensitive objects [alpha, alpha']
        beta - list of betas for sensitive and non-sensitive objects [beta, beta']
params - **kwargs compatible with baze_clf
"""
self.baze_clf = baze_clf
self.model = None
self.alpha = np.array(alpha)
self.alphas = None
self.beta = np.array(beta)
self.betas = None
self.weights = None
self.prev_weights = None
self.params = params
def reweight_dataset(self, length, error, minority_idx):
"""
This function recalculates values of weights and saves their previous values
"""
if self.alphas is None or self.betas is None:
# If alpha_0, alpha_1 or beta_0, beta_1 are not defined,
# then define alpha_0 and beta_0 to every object from non-sensitive class,
# and alpha_1 and beta_1 to every object from sensitive class (minority).
self.alphas = np.ones(length) * self.alpha[0]
self.betas = np.ones(length) * self.beta[0]
self.alphas[minority_idx] = self.alpha[1]
self.betas[minority_idx] = self.beta[1]
# w_i_prev <- w_i for all i in dataset
self.prev_weights = self.weights.copy()
        # w_i = alpha_i * L_{beta_i} (P'(y_pred_i != y_true_i))
        #       + (1 - alpha_i) * L_{beta_i} (-P'(y_pred_i != y_true_i)),
# where
# L_{beta_i} (x) = exp(beta_i * x)
self.weights = self.alphas * np.exp(self.betas * error) \
+ (1 - self.alphas) * np.exp(- self.betas * error)
def pRule(self, prediction, minority_idx):
"""
This function calculates
| P(y_pred_i = 1 | i in S) P(y_pred_i = 1 | i not in S) |
pRule = min { ---------------------------- , ---------------------------- }
| P(y_pred_i = 1 | i not in S) P(y_pred_i = 1 | i in S) |
S - the group of sensitive objects
---------
Input:
prediction - labels ({0,1}) of a sample for which pRule is calculated
minority_idx - indexes of objects from a sensitive group
"""
# majority indexes = set of all indexes / set of minority indexes,
# where set of all indexes = all numbers from 0 to size of sample (=len(prediction))
majority_idx = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx)
# minority = P(y_pred_i = 1 | i in minority)
# majority = P(y_pred_i = 1 | i in majority)
minority = prediction[minority_idx].mean()
majority = prediction[list(majority_idx)].mean()
minority = np.clip(minority, 1e-10, 1 - 1e-10)
majority = np.clip(majority, 1e-10, 1 - 1e-10)
return min(minority/majority, majority/minority)
def fit(self, X_train, y_train, X_test, y_test, minority_idx, verbose=True, max_iter=30):
# Initialize equal weights w_i = 1
self.weights = np.ones(len(y_train))
self.prev_weights = np.zeros(len(y_train))
# Lists for saving metrics
accuracys = []
pRules = []
differences = []
accuracy_plus_prule = []
# Adaptive Sensitive Reweighting
iteration = 0
while ((self.prev_weights - self.weights) ** 2).mean() > 10**(-6) and iteration < max_iter:
iteration += 1
            # Train the classifier on X_train using the current sample weights w_i
self.model = self.baze_clf(**self.params)
self.model.fit(X_train, y_train,
sample_weight = self.weights)
            # Use the classifier to obtain P'(y_pred_i != y_true_i) (here it is called 'error')
prediction_proba = self.model.predict_proba(X_train)[:, 1]
error = prediction_proba - y_train
# Update weights
self.reweight_dataset(len(y_train), error, minority_idx)
# Get metrics on X_train
prediction = self.model.predict(X_train)
accuracys.append(accuracy_score(prediction, y_train))
pRules.append(self.pRule(prediction, minority_idx))
accuracy_plus_prule.append(accuracys[-1] + pRules[-1])
differences.append(((self.prev_weights - self.weights)**2).mean()**0.5)
# Visualize metrics if it's needed
if verbose:
display.clear_output(True)
fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(16, 7))
metrics = [accuracys, pRules, accuracy_plus_prule, differences]
metrics_names = ["Accuracy score", "pRule", "Accuracy + pRule", "Mean of weight edits"]
for name, metric, ax in zip(metrics_names, metrics, axes.flat):
ax.plot(metric, label='train')
ax.set_title(f'{name}, iteration {iteration}')
ax.legend()
if name == "Mean of weight edits":
ax.set_yscale('log')
plt.show()
return accuracys, pRules, accuracy_plus_prule, differences
def predict(self, X):
return self.model.predict(X)
def predict_proba(self, X):
return self.model.predict_proba(X)
def get_metrics_test(self, X_test, y_test, minority_idx_test):
"""
Obtain pRule and accuracy for trained model
"""
# Obtain predictions on X_test to calculate metrics
prediction_test = self.model.predict(X_test)
# Get metrics on test
accuracy_test = accuracy_score(prediction_test, y_test)
pRule_test = self.pRule(prediction_test, minority_idx_test)
return accuracy_test, pRule_test
###Output
_____no_output_____
###Markdown
2.2. Optimizing CULEP parameters (`train_model`)In the article, the CULEP parameters are $\alpha, \alpha', \beta, \beta'$. The authors searched for the optimal hyperparameters in the space $\left( \alpha, \alpha', \beta, \beta' \right) \in \left[ 0, 1 \right] ^2 \times \left[ 0, 3 \right] ^2$ using the DIvided RECTangles (DIRECT) method. Each combination of parameters is evaluated with a full run of the Adaptive Sensitive Reweighting algorithm on the training set. After optimization of the objective function (for the Bank dataset it is `accuracy + pRule`), we get the optimal hyperparameters $\alpha, \alpha', \beta, \beta'$. The model trained on the training set with these hyperparameters then makes predictions on the test set, and the obtained metrics (accuracy and pRule) are reported.The optimization of the objective function is highly time-consuming: one experiment for this dataset takes on average **160 minutes**. In the article it was proposed to repeat the whole process for 5 different random splits into train and test sets. To be able to keep track of the process, each split will be started in its own cell (instead of a loop). Each split corresponds to one experiment.
###Code
def prep_train_model(X_train, y_train, X_test, y_test, minority_idx):
def train_model(a):
"""
Function of 4 variables (a[0], a[1], a[2], a[3]) that will be minimized by DIRECT method.
a[0], a[1] = alpha, alpha'
a[2], a[3] = beta, beta'
"""
model = ReweightedClassifier(LogisticRegression, [a[0], a[1]], [a[2], a[3]], params = {"max_iter": 4000, 'solver':'liblinear'})
_, _, accuracy_plus_prule, _ = model.fit(X_train, y_train, X_test, y_test, minority_idx)
# We'll maximize [acc + pRule] which we get at the last iteration of Adaptive Sensitive Reweighting
return - accuracy_plus_prule[-1]
return train_model # return function for optimization
###Output
_____no_output_____
###Markdown
3. ExperimentsIn order to make all experiments independent from each other, all necessary variables will have name endings either `_1`, `_2`, `_3`, `_4` or `_5`. 3.1. Experiment 1 1) Obtain a split for the experiment.
###Code
# Split on train and test
labels_bank = data_bank["y"]
features_bank = data_bank.drop(columns=["y"])
X_train_1, X_test_1, y_train_1, y_test_1 = train_test_split(features_bank, labels_bank,
test_size=0.3, random_state=1)
y_train_1 = y_train_1.astype(int).values
y_test_1 = y_test_1.astype(int).values
# Obtain indexes of sensitive class
minority_idx_1 = X_train_1.reset_index(drop=True).index.values[X_train_1["age"] == 1]
minority_idx_test_1 = X_test_1.reset_index(drop=True).index.values[X_test_1["age"] == 1]
###Output
_____no_output_____
###Markdown
2) Perform ASR+CULEP.
###Code
objective_1 = prep_train_model(X_train_1, y_train_1, X_test_1, y_test_1, minority_idx_1)
start = time.time()
my_res_1 = minimize(objective_1, bounds=[[0.0, 1.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]], maxT=80, maxf=320)
stop = time.time()
print(f"Elapsed time: {stop - start} s")
print(f"Elapsed time: {(stop - start) // 60} min {(stop - start) % 60} s")
print(my_res_1)
###Output
_____no_output_____
###Markdown
3) Get necessary metrics on test set (for Bank dataset the metrics are accuracy and pRule).
###Code
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
a_1 = my_res_1.x
model_1 = ReweightedClassifier(LogisticRegression, [a_1[0], a_1[1]], [a_1[2], a_1[3]], params = {"max_iter": 4000, 'solver':'liblinear'})
# Train model on X_train
model_1.fit(X_train_1, y_train_1, X_test_1, y_test_1, minority_idx_1, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test_1, pRule_test_1 = model_1.get_metrics_test(X_test_1, y_test_1, minority_idx_test_1)
print('ASR+CULEP for X_test')
print(f"prule = {pRule_test_1:.6}, accuracy = {accuracy_test_1:.6}")
print(f"prule + accuracy = {(pRule_test_1 + accuracy_test_1):.6}")
###Output
ASR+CULEP for X_test
prule = 0.985286, accuracy = 0.899328
prule + accuracy = 1.88461
###Markdown
4) For the same split train simple Logistic Regression (without ASR+CULEP) on the train set. Then obtain necessary metrics on the test set.
###Code
# Fit LogisticRegression on X_train
model_simple = LogisticRegression(max_iter = 4000, solver='liblinear')
model_simple.fit(X_train_1, y_train_1)
# Get predictions for X_test
prediction = model_simple.predict(X_test_1)
# Obtain indexes for sensitive and non-sensitive groups
majority_idx_test_1 = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx_test_1)
minority = prediction[minority_idx_test_1].mean()
majority = prediction[list(majority_idx_test_1)].mean()
# Calculate metrics on X_test
prule_simple = min(minority/majority, majority/minority)
accuracy_simple = accuracy_score(prediction, y_test_1)
print('Without ASR+CULEP for X_test')
print(f"prule = {prule_simple:.6}, accuracy = {accuracy_simple:.6}")
print(f"prule + accuracy = {(prule_simple + accuracy_simple):.6}")
###Output
Without ASR+CULEP for X_test
prule = 0.223585, accuracy = 0.90912
prule + accuracy = 1.13271
###Markdown
3.2. Experiment 2 1) Obtain a split for the experiment.
###Code
# Split on train and test
labels_bank = data_bank["y"]
features_bank = data_bank.drop(columns=["y"])
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(features_bank, labels_bank, test_size=0.3, random_state=2)
y_train_2 = y_train_2.astype(int).values
y_test_2 = y_test_2.astype(int).values
# Obtain indexes of sensitive class
minority_idx_2 = X_train_2.reset_index(drop=True).index.values[X_train_2["age"] == 1]
minority_idx_test_2 = X_test_2.reset_index(drop=True).index.values[X_test_2["age"] == 1]
###Output
_____no_output_____
###Markdown
2) Perform ASR+CULEP.
###Code
objective_2 = prep_train_model(X_train_2, y_train_2, X_test_2, y_test_2, minority_idx_2)
start = time.time()
my_res_2 = minimize(objective_2, bounds=[[0.0, 1.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]], maxT=80, maxf=320)
stop = time.time()
print(f"Elapsed time: {stop - start} s")
print(f"Elapsed time: {(stop - start) // 60} min {(stop - start) % 60} s")
print(my_res_2)
###Output
_____no_output_____
###Markdown
3) Get necessary metrics on test set (for Bank dataset the metrics are accuracy and pRule).
###Code
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
a_2 = my_res_2.x
model_2 = ReweightedClassifier(LogisticRegression, [a_2[0], a_2[1]], [a_2[2], a_2[3]], params = {'solver':'liblinear'})
# Train model on X_train
model_2.fit(X_train_2, y_train_2, X_test_2, y_test_2, minority_idx_2, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test_2, pRule_test_2 = model_2.get_metrics_test(X_test_2, y_test_2, minority_idx_test_2)
print('ASR+CULEP for X_test')
print(f"prule = {pRule_test_2:.6}, accuracy = {accuracy_test_2:.6}")
print(f"prule + accuracy = {(pRule_test_2 + accuracy_test_2):.6}")
###Output
ASR+CULEP for X_test
prule = 0.881633, accuracy = 0.904427
prule + accuracy = 1.78606
###Markdown
4) For the same split train simple Logistic Regression (without ASR+CULEP) on the train set. Then obtain necessary metrics on the test set.
###Code
# Fit LogisticRegression on X_train
model_simple = LogisticRegression(max_iter=4000, solver='liblinear')
model_simple.fit(X_train_2, y_train_2)
# Get predictions for X_test
prediction = model_simple.predict(X_test_2)
# Obtain indexes for sensitive and non-sensitive groups
majority_idx_test_2 = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx_test_2)
minority = prediction[minority_idx_test_2].mean()
majority = prediction[list(majority_idx_test_2)].mean()
# Calculate metrics on X_test
prule_simple = min(minority/majority, majority/minority)
accuracy_simple = accuracy_score(prediction, y_test_2)
print('Without ASR+CULEP for X_test')
print(f"prule = {prule_simple:.6}, accuracy = {accuracy_simple:.6}")
print(f"prule + accuracy = {(prule_simple + accuracy_simple):.6}")
###Output
Without ASR+CULEP for X_test
prule = 0.208027, accuracy = 0.908878
prule + accuracy = 1.11691
###Markdown
3.3. Experiment 3 1) Obtain a split for the experiment.
###Code
# Split on train and test
labels_bank = data_bank["y"]
features_bank = data_bank.drop(columns=["y"])
X_train_3, X_test_3, y_train_3, y_test_3 = train_test_split(features_bank, labels_bank, test_size=0.3, random_state=3)
y_train_3 = y_train_3.astype(int).values
y_test_3 = y_test_3.astype(int).values
# Obtain indexes of sensitive class
minority_idx_3 = X_train_3.reset_index(drop=True).index.values[X_train_3["age"] == 1]
minority_idx_test_3 = X_test_3.reset_index(drop=True).index.values[X_test_3["age"] == 1]
###Output
_____no_output_____
###Markdown
2) Perform ASR+CULEP.
###Code
objective_3 = prep_train_model(X_train_3, y_train_3, X_test_3, y_test_3, minority_idx_3)
start = time.time()
my_res_3 = minimize(objective_3, bounds=[[0.0, 1.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]], maxT=80, maxf=320)
stop = time.time()
print(f"Elapsed time: {stop - start} s")
print(f"Elapsed time: {(stop - start) // 60} min {(stop - start) % 60} s")
print(my_res_3)
###Output
_____no_output_____
###Markdown
3) Get necessary metrics on test set (for Bank dataset the metrics are accuracy and pRule).
###Code
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
a_3 = my_res_3.x
model_3 = ReweightedClassifier(LogisticRegression, [a_3[0], a_3[1]], [a_3[2], a_3[3]], params = {'solver':'liblinear'})
# Train model on X_train
model_3.fit(X_train_3, y_train_3, X_test_3, y_test_3, minority_idx_3, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test_3, pRule_test_3 = model_3.get_metrics_test(X_test_3, y_test_3, minority_idx_test_3)
print('ASR+CULEP for X_test')
print(f"prule = {pRule_test_3:.6}, accuracy = {accuracy_test_3:.6}")
print(f"prule + accuracy = {(pRule_test_3 + accuracy_test_3):.6}")
###Output
ASR+CULEP for X_test
prule = 0.983714, accuracy = 0.89682
prule + accuracy = 1.88053
###Markdown
4) For the same split train simple Logistic Regression (without ASR+CULEP) on the train set. Then obtain necessary metrics on the test set.
###Code
# Fit LogisticRegression on X_train
model_simple = LogisticRegression(max_iter=4000, solver='liblinear')
model_simple.fit(X_train_3, y_train_3)
# Get predictions for X_test
prediction = model_simple.predict(X_test_3)
# Obtain indexes for sensitive and non-sensitive groups
majority_idx_test_3 = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx_test_3)
minority = prediction[minority_idx_test_3].mean()
majority = prediction[list(majority_idx_test_3)].mean()
# Calculate metrics on X_test
prule_simple = min(minority/majority, majority/minority)
accuracy_simple = accuracy_score(prediction, y_test_3)
print('Without ASR+CULEP for X_test')
print(f"prule = {prule_simple:.6}, accuracy = {accuracy_simple:.6}")
print(f"prule + accuracy = {(prule_simple + accuracy_simple):.6}")
###Output
Without ASR+CULEP for X_test
prule = 0.231843, accuracy = 0.90734
prule + accuracy = 1.13918
###Markdown
3.4. Experiment 4 1) Obtain a split for the experiment.
###Code
# Split on train and test
labels_bank = data_bank["y"]
features_bank = data_bank.drop(columns=["y"])
X_train_4, X_test_4, y_train_4, y_test_4 = train_test_split(features_bank, labels_bank, test_size=0.3, random_state=4)
y_train_4 = y_train_4.astype(int).values
y_test_4 = y_test_4.astype(int).values
# Obtain indexes of sensitive class
minority_idx_4 = X_train_4.reset_index(drop=True).index.values[X_train_4["age"] == 1]
minority_idx_test_4 = X_test_4.reset_index(drop=True).index.values[X_test_4["age"] == 1]
###Output
_____no_output_____
###Markdown
2) Perform ASR+CULEP.
###Code
objective_4 = prep_train_model(X_train_4, y_train_4, X_test_4, y_test_4, minority_idx_4)
start = time.time()
my_res_4 = minimize(objective_4, bounds=[[0.0, 1.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]], maxT=80, maxf=320)
stop = time.time()
print(f"Elapsed time: {stop - start} s")
print(f"Elapsed time: {(stop - start) // 60} min {(stop - start) % 60} s")
print(my_res_4)
###Output
_____no_output_____
###Markdown
3) Get necessary metrics on test set (for Bank dataset the metrics are accuracy and pRule).
###Code
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
a_4 = my_res_4.x
model_4 = ReweightedClassifier(LogisticRegression, [a_4[0], a_4[1]], [a_4[2], a_4[3]], params = {'solver':'liblinear'})
# Train model on X_train
model_4.fit(X_train_4, y_train_4, X_test_4, y_test_4, minority_idx_4, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test_4, pRule_test_4 = model_4.get_metrics_test(X_test_4, y_test_4, minority_idx_test_4)
print('ASR+CULEP for X_test')
print(f"prule = {pRule_test_4:.6}, accuracy = {accuracy_test_4:.6}")
print(f"prule + accuracy = {(pRule_test_4 + accuracy_test_4):.6}")
###Output
ASR+CULEP for X_test
prule = 0.945559, accuracy = 0.89949
prule + accuracy = 1.84505
###Markdown
4) For the same split train simple Logistic Regression (without ASR+CULEP) on the train set. Then obtain necessary metrics on the test set.
###Code
# Fit LogisticRegression on X_train
model_simple = LogisticRegression(max_iter=4000, solver='liblinear')
model_simple.fit(X_train_4, y_train_4)
# Get predictions for X_test
prediction = model_simple.predict(X_test_4)
# Obtain indexes for sensitive and non-sensitive groups
majority_idx_test_4 = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx_test_4)
minority = prediction[minority_idx_test_4].mean()
majority = prediction[list(majority_idx_test_4)].mean()
# Calculate metrics on X_test
prule_simple = min(minority/majority, majority/minority)
accuracy_simple = accuracy_score(prediction, y_test_4)
print('Without ASR+CULEP for X_test')
print(f"prule = {prule_simple:.6}, accuracy = {accuracy_simple:.6}")
print(f"prule + accuracy = {(prule_simple + accuracy_simple):.6}")
###Output
Without ASR+CULEP for X_test
prule = 0.206042, accuracy = 0.907016
prule + accuracy = 1.11306
###Markdown
3.5. Experiment 5 1) Obtain a split for the experiment.
###Code
# Split on train and test
labels_bank = data_bank["y"]
features_bank = data_bank.drop(columns=["y"])
X_train_5, X_test_5, y_train_5, y_test_5 = train_test_split(features_bank, labels_bank, test_size=0.3, random_state=5)
y_train_5 = y_train_5.astype(int).values
y_test_5 = y_test_5.astype(int).values
# Obtain indexes of sensitive class
minority_idx_5 = X_train_5.reset_index(drop=True).index.values[X_train_5["age"] == 1]
minority_idx_test_5 = X_test_5.reset_index(drop=True).index.values[X_test_5["age"] == 1]
###Output
_____no_output_____
###Markdown
2) Perform ASR+CULEP.
###Code
objective_5 = prep_train_model(X_train_5, y_train_5, X_test_5, y_test_5, minority_idx_5)
start = time.time()
my_res_5 = minimize(objective_5, bounds=[[0.0, 1.0], [0.0, 1.0], [0.0, 3.0], [0.0, 3.0]], maxT=80, maxf=320)
stop = time.time()
print(f"Elapsed time: {stop - start} s")
print(f"Elapsed time: {(stop - start) // 60} min {(stop - start) % 60} s")
print(my_res_5)
###Output
_____no_output_____
###Markdown
3) Get necessary metrics on test set (for Bank dataset the metrics are accuracy and pRule).
###Code
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
a_5 = my_res_5.x
model_5 = ReweightedClassifier(LogisticRegression, [a_5[0], a_5[1]], [a_5[2], a_5[3]], params = {'solver':'liblinear'})
# Train model on X_train
model_5.fit(X_train_5, y_train_5, X_test_5, y_test_5, minority_idx_5, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test_5, pRule_test_5 = model_5.get_metrics_test(X_test_5, y_test_5, minority_idx_test_5)
print('ASR+CULEP for X_test')
print(f"prule = {pRule_test_5:.6}, accuracy = {accuracy_test_5:.6}")
print(f"prule + accuracy = {(pRule_test_5 + accuracy_test_5):.6}")
###Output
ASR+CULEP for X_test
prule = 0.720326, accuracy = 0.901109
prule + accuracy = 1.62143
###Markdown
4) For the same split train simple Logistic Regression (without ASR+CULEP) on the train set. Then obtain necessary metrics on the test set.
###Code
# Fit LogisticRegression on X_train
model_simple = LogisticRegression(max_iter=4000, solver='liblinear')
model_simple.fit(X_train_5, y_train_5)
# Get predictions for X_test
prediction = model_simple.predict(X_test_5)
# Obtain indexes for sensitive and non-sensitive groups
majority_idx_test_5 = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx_test_5)
minority = prediction[minority_idx_test_5].mean()
majority = prediction[list(majority_idx_test_5)].mean()
# Calculate metrics on X_test
prule_simple = min(minority/majority, majority/minority)
accuracy_simple = accuracy_score(prediction, y_test_5)
print('Without ASR+CULEP for X_test')
print(f"prule = {prule_simple:.6}, accuracy = {accuracy_simple:.6}")
print(f"prule + accuracy = {(prule_simple + accuracy_simple):.6}")
###Output
Without ASR+CULEP for X_test
prule = 0.203835, accuracy = 0.906207
prule + accuracy = 1.11004
###Markdown
Results
###Code
results = {'prule': [], 'accuracy': [], 'a': []}
results['accuracy'] = [accuracy_test_1, accuracy_test_2, accuracy_test_3, accuracy_test_4, accuracy_test_5]
results['prule'] = [pRule_test_1, pRule_test_2, pRule_test_3, pRule_test_4, pRule_test_5]
results['a'] = [a_1, a_2, a_3, a_4, a_5]
pd.DataFrame(results).to_csv("./results/bank_results.csv")
###Output
_____no_output_____ |
group2/Centralized_model_LSTM.ipynb | ###Markdown
LSTM implementation for the centralized model
###Code
# Dataset - 2019
# Imputation tech - KNN for both air pollutants and meteorological data
# Evaluation metric - MAE while training and SMAPE metric for validating test data
# Negative values were not replaced
import os
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from numpy import split
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM,GRU,Conv2D,MaxPool1D,Conv1D,MaxPooling1D,AveragePooling1D
from keras.layers import Bidirectional
from keras.layers import TimeDistributed,Dropout,RepeatVector
#Read the dataset based on the station id
path = "Uppsala\\Sem 3\\Dataset\\2014-2018_combined\\18644\\KNN_18644_2015_2018.xlsx"
file_name = os.path.join("C:\\",path)
df_2015_2019_data = pd.read_excel(file_name)
df_2015_2019_data
# Dropping some weather features, adding time features and converting them into one-hot encoded values
df_2015_2019 = df_2015_2019_data.copy()
df_2015_2019 = df_2015_2019.drop(columns=['Relative humidity','Air pressure','Wind speed','Wind direction'])
lstm_df = df_2015_2019.copy()
# Add time related features
lstm_df['Weekday'] = df_2015_2019['Start'].dt.day_name()
#lstm_df['Day_num'] = df_2017_2019['Start'].dt.day
lstm_df['Hour'] = df_2015_2019['Start'].dt.hour
#lstm_df['Quarter'] = df_2017_2019['Start'].dt.quarter
label_encoder_1 = LabelEncoder()
onehot_encoder_1 = OneHotEncoder(sparse=False)
#label_encoder_2 = LabelEncoder()
#onehot_encoder_2 = OneHotEncoder(sparse=False)
integer_encoded = label_encoder_1.fit_transform(lstm_df['Weekday'])
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
lstm_df['Weekday'] = onehot_encoder_1.fit_transform(integer_encoded)
integer_encoded = label_encoder_1.fit_transform(lstm_df['Hour'])
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
lstm_df['Hour'] = onehot_encoder_1.fit_transform(integer_encoded)
# Remove timestamp attribute
date_time = lstm_df.pop('Start')
# Replacing negative values with zeros
#lstm_df.PM10=lstm_df.PM10.mask(lstm_df.PM10.lt(0),0)
# Use first 9 months for training and validation sets for 2019
train_data = lstm_df[0:41616] #lstm_df[0:24072] #train_data = lstm_df[0:6552]
train_df = train_data[0:32856] #train_data[0:35064] #train_data[0:19680] #train_data[0:5088] # Training set - January - August months
val_df = train_data[32856:] #train_data[35064:] #train_data[19680:] #train_data[5088:] # Validation set - September month
# Use Oct-Dec months for Testing set
test_df = lstm_df[41616:] #lstm_df[24072:] #lstm_df[6552:]
num_features = lstm_df.shape[1]
lstm_df.describe().transpose()
# Normalize the datasets using mean and std_deviation
lst = list(df_2015_2019.columns) #list(combined_data.columns)
lst.remove('Start')
print(lst)
train_mean = train_df[lst].mean()
train_std = train_df[lst].std()
training_set = (train_df[lst] - train_mean) / train_std
validation_set = (val_df[lst] - train_mean) / train_std
testing_set = (test_df[lst] - train_mean) / train_std
time_ftrs = set(list(lstm_df.columns)) - set(lst)
time_ftrs = list(time_ftrs)
training_set[time_ftrs] = train_df[time_ftrs]
validation_set[time_ftrs] = val_df[time_ftrs]
testing_set[time_ftrs] = test_df[time_ftrs]
columnsTitles = list(sorted(set(lst), key=lst.index) + sorted(set(time_ftrs), key=time_ftrs.index))
training_set = training_set.reindex(columns=columnsTitles)
validation_set = validation_set.reindex(columns=columnsTitles)
testing_set = testing_set.reindex(columns=columnsTitles)
print(training_set.shape,validation_set.shape,testing_set.shape)
training_set.describe().transpose()
# Visualization of normalized values
df_std = (lstm_df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(lstm_df.keys(), rotation=90)
# Metric - Symmetric Mean Absolute Percentage Error (SMAPE), suggested by Shengui Li
def smape(y_true, y_pred):
return tf.reduce_mean(2 * tf.abs(y_true - y_pred)
/ (tf.abs(y_pred) + tf.abs(y_true)) , axis=-1)
# Group the timestamp dataset into days format
def split_dataset(train,val,test):
# split into days and restructure into windows of daily data
train = array(split(train, len(train)/24))
val = array(split(val, len(val)/24))
test = array(split(test, len(test)/24))
return train,val,test
train, val, test = split_dataset(training_set.values,validation_set.values,testing_set.values)
print("Data format: [Samples,Timesteps,Features]")
print('Training data:',train.shape,'\nValidation data:',val.shape,'\nTesting data:',test.shape)
# convert history into inputs and outputs format of 24 hours history and 24 hours forecast
def to_supervised(train, n_input=24, n_out=24):
# flatten data into timestamps data format
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
X, y = list(), list()
in_start = 0
# step over the entire history one time step at a time
for _ in range(len(data)):
# define the end of the input sequence
in_end = in_start + n_input
out_end = in_end + n_out
# ensure we have enough data for this instance
if out_end <= len(data):
X.append(data[in_start:in_end, :])
y.append(data[in_end:out_end, 0:4]) # Change the 2nd dimension according to the num of features to be predicted
# move along one time step
in_start += 1
return array(X), array(y)
# Build the LSTM model architecture and train the model
def build_model(train,val,n_input,n_out_features):
# prepare the training and validation data by sequencing them as window of 24 hours data
train_x, train_y = to_supervised(train, n_input)
val_x, val_y = to_supervised(val,n_input)
# define the model parameters
verbose, epochs, batch_size = 2, 30, 80
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# define the model architecture
model = Sequential()
# Single feature prediction
#model.add(LSTM(50, activation='relu',input_shape=(n_timesteps, n_features)))
#model.add(Dense(100, activation='relu'))
#model.add(Dense(n_outputs))
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=2,
mode='min')
# Multi features predictions
#model.add(LSTM(50, activation='relu',
#input_shape=(n_timesteps, n_features)))
#model.add(Attention(name='attention_weight'))
#model.add(Dropout(0.50))
#model.add(Dense(100, activation='relu'))
#model.add(Dropout(0.50))
#model.add(Dense(n_outputs*n_out_features))
#model.add(tf.keras.layers.Reshape([n_outputs, n_out_features])) # Reshape the output layer back into [Timesteps,out_featrs]
#CNN + LSTM
#model.add(Conv1D(filters=6, kernel_size=2,
# strides=1,
# padding="same",
# activation="tanh",
# input_shape=[n_timesteps, n_features])),
#model.add(Conv1D(filters=16, kernel_size=2,
# strides=1,padding="valid",
# activation="tanh"))
#model.add(AveragePooling1D()),
#model.add(LSTM(50, activation='tanh'))
#model.add(Attention(name='attention_weight'))
#model.add(Dropout(0.50))
#model.add(Dense(100, activation='tanh'))
#model.add(Dropout(0.50))
#model.add(Dense(n_outputs*n_out_features))
#model.add(tf.keras.layers.Reshape([n_outputs, n_out_features])) # Reshape the output layer back into [Timesteps,out_featrs]
#LSTM model
model.add(LSTM(50, activation='tanh', input_shape=(n_timesteps, n_features)))
model.add(Dropout(0.50))
model.add(Dense(100, activation='tanh'))
model.add(Dropout(0.50))
model.add((Dense(n_outputs*n_out_features)))
model.add(tf.keras.layers.Reshape([n_outputs, n_out_features]))
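    # The Dense layer above emits a flat vector of length n_outputs*n_out_features; Reshape
    # restores it to [n_outputs, n_out_features] so the MAE loss is computed per forecast
    # timestep and per predicted feature.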
#GRU
#model.add(GRU(50, activation='relu',
# input_shape=(n_timesteps, n_features)))
#model.add(Dropout(0.50))
#model.add(Dense(100, activation='relu'))
#model.add(Dropout(0.50))
#model.add(Dense(n_outputs*n_out_features))
#model.add(tf.keras.layers.Reshape([n_outputs, n_out_features]))
#Bi-directional LSTMS
#model.add(Bidirectional(LSTM(50,return_sequences=True,
# input_shape=(n_timesteps, n_features))))
#model.add(Dropout(0.50))
#model.add(Bidirectional(LSTM(50,return_sequences=True)))
#model.add(Dropout(0.50))
#model.add(Bidirectional(LSTM(50,return_sequences=True)))
#model.add(Dropout(0.50))
#model.add(Bidirectional(LSTM(50,return_sequences=True)))
#model.add(Dropout(0.50))
#model.add(TimeDistributed(Dense(n_out_features)))
#model.add(tf.keras.layers.Reshape([n_outputs, n_out_features]))
model.compile(loss='mae', optimizer='adam')
# fit the network
history = model.fit(train_x, train_y, epochs=epochs,
batch_size=batch_size,shuffle=False,
validation_data=(val_x, val_y),
#callbacks=[early_stopping],
verbose=verbose)
# Plot the validation loss and training loss
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.show()
return model
# Make a forecast for next 24 hours
def forecast(model, history, n_input):
# flatten the data into timestamps format
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations i.e last 24 hours for input data
input_x = data[-n_input:, :]
# reshape into [1, n_input, 1]
input_x = input_x.reshape((1, input_x.shape[0], input_x.shape[1]))
# forecast the next day
yhat = model.predict(input_x, verbose=0)
model.save('tf_lstm_model.h5')
return yhat[0]
# Evaluate the model
def evaluate_model(train,val,test, n_input,n_out_features):
# fit model
model = build_model(train,val,n_input,n_out_features)
model.summary()
# history is a list of daily data
history = [x for x in val] #train]
# walk-forward validation over each day
predictions = list()
for i in range(len(test)):
# predict the next day
yhat_sequence = forecast(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next day
history.append(test[i, :])
# evaluate predictions days for each day
predictions = array(predictions)
return predictions
# Main program starts from here - Firstly split the timestamp data into days format
train, val, test = split_dataset(training_set.values,validation_set.values,testing_set.values)
n_input = 24 # Number of previous timestamps needed to predict future data
n_out_features = 4 # features to be predicted [NO2,NOX as NO2, PM10 and PM2.5]
predictions = evaluate_model(train,val,test, n_input,n_out_features)
predictions.shape
###Output
_____no_output_____
###Markdown
Collect the predicted values for the 4 pollutants, denormalize the values and calculate the SMAPE scores
###Code
# print(predictions.shape)
from sklearn.metrics import r2_score,mean_squared_error,mean_absolute_error
# Predictions for NO2, NOX as NO2, PM10 and PM2.5
pred_de_norm = {}
grnd_de_norm = {}
smape_scores = {}
avg_smape_score = 0
df_write = pd.DataFrame(columns=['NO2','NOX as NO2','PM10','PM2.5'])
n_out = 24
for i in range(0,4):
# Denormalize the predicted values
#pred_de_norm[i] = ( predictions[:,:,i] * train_std[i] ) + train_mean[i]
#grnd_de_norm[i] = ( test[:,:n_out,i] * train_std[i] ) + train_mean[i]
pred_de_norm[i] = predictions[:,:,i]
grnd_de_norm[i] = test[:,:n_out,i]
# Reshape into 1D array
grnd_de_norm[i] = grnd_de_norm[i].reshape(grnd_de_norm[i].shape[0]*grnd_de_norm[i].shape[1])
pred_de_norm[i] = pred_de_norm[i].reshape(pred_de_norm[i].shape[0]*pred_de_norm[i].shape[1])
pred_de_norm[i] = pred_de_norm[i].astype('float32')
grnd_de_norm[i] = grnd_de_norm[i].astype('float32')
    #smape_scores[i] = smape(grnd_de_norm[i],pred_de_norm[i])
    # Note: mean absolute error (MAE) is what is actually reported below; the SMAPE call above is
    # left commented out, so the "smape" variable names hold MAE values here.
    smape_scores[i] = mean_absolute_error(grnd_de_norm[i],pred_de_norm[i])
avg_smape_score = avg_smape_score + smape_scores[i]
col = df_write.columns[i]
df_write[col] = pred_de_norm[i]
#df_write.insert(loc=0, column='Start', value=date_time[41616:].values)
avg_smape_score = avg_smape_score / 4
print("\nSmape score for all 4 pollutants:",smape_scores)
print("\nAverage smape score:",avg_smape_score)
# Note: the LSTM built inside build_model() is local to that function and is already saved to
# 'tf_lstm_model.h5' inside forecast(); this line only works if a `model` variable exists in the
# notebook's global scope.
model.save('tf_lstm_model.h5')
# Write the predicted values to a CSV file
df_write.to_csv(r'2019/Combined/8779_output.csv')
###Output
_____no_output_____
###Markdown
Plots for predicted and real values
###Code
plt.figure(num=None, dpi=70, figsize=(20, 6),facecolor='w', edgecolor='k')
plt.plot(pred_de_norm[0], "-b", label="Predicted values")
plt.plot(grnd_de_norm[0], "-r", label="Real values")
plt.legend(loc="upper left")
plt.title('NO2 comparisons', y=0.5, loc='right')
plt.show()
plt.figure(num=None, dpi=70, figsize=(20, 6),facecolor='w', edgecolor='k')
plt.plot(pred_de_norm[1], "-b", label="Predicted values")
plt.plot(grnd_de_norm[1], "-r", label="Real values")
plt.legend(loc="upper left")
plt.title('NOX as NO2 comparisons', y=0.5, loc='right')
plt.show()
plt.figure(num=None, dpi=70, figsize=(20, 6),facecolor='w', edgecolor='k')
plt.plot(pred_de_norm[2], "-b", label="Predicted values")
plt.plot(grnd_de_norm[2], "-r", label="Real values")
plt.legend(loc="upper left")
plt.title('PM10 comparisons', y=0.5, loc='right')
plt.show()
plt.figure(num=None, dpi=70, figsize=(20, 6),facecolor='w', edgecolor='k')
plt.plot(pred_de_norm[3], "-b", label="Predicted values")
plt.plot(grnd_de_norm[3], "-r", label="Real values")
plt.legend(loc="upper left")
plt.title('PM2.5 comparisons', y=0.5, loc='right')
plt.show()
###Output
_____no_output_____ |
notebooks/m1_submission.ipynb | ###Markdown
Milestone 1 - Group 21 (https://github.com/UBC-MDS/DSCI_525_group21)
###Code
# load libraries
import io
import os
import json
import glob
#import intake
import requests
import numpy as np
import pandas as pd
#import xarray as xr
from urllib.request import urlretrieve
#import proplot as pplot
#from joblib import Parallel, delayed
#import warnings
#warnings.filterwarnings("ignore") # ignore some annoying matplotlib warnings
from memory_profiler import memory_usage
import zipfile
# more library loading
%load_ext rpy2.ipython
%load_ext memory_profiler
###Output
_____no_output_____
###Markdown
3. Downloading the data
###Code
# Necessary metadata
article_id = 14096681 # this is the unique identifier of the article on figshare
url = f"https://api.figshare.com/v2/articles/{article_id}"
headers = {"Content-Type": "application/json"}
output_directory = "figshare/"
# metadata output
response = requests.request("GET", url, headers=headers)
data = json.loads(response.text) #
files = data["files"] # we only want the data and readme 'name' key value
%%time
#download readme and data.zip files only
files_to_dl = ["README.md", "data.zip"]
for file in files:
if file["name"] in files_to_dl:
os.makedirs(output_directory, exist_ok=True)
urlretrieve(file["download_url"], output_directory + file["name"])
%%time
#extract zip files to repo
with zipfile.ZipFile(os.path.join(output_directory, "data.zip"), 'r') as f:
f.extractall(output_directory)
###Output
CPU times: user 16.5 s, sys: 1.67 s, total: 18.1 s
Wall time: 19.4 s
###Markdown
4. Combining data CSVs
###Code
%%time
%memit
# Shows time that regular python takes to merge file
# Join all data together
import pandas as pd
use_cols = ["time",'lat_min','lat_max','lon_min', 'lon_max', 'rain (mm/day)']
files = glob.glob('./figshare/*.csv')
df_all = None
for file in files:
filename = os.path.basename(file)
if '_daily_rainfall_NSW.csv' in filename:
print(f"Processing the file {filename}")
model = filename.split('_daily_rainfall_NSW.csv')[0]
df = pd.read_csv(file, usecols=use_cols, index_col=0)
df['model'] = model
if df_all is None:
df_all = df
else:
df_all = df_all.append(df)
# save combined file
df_all.to_csv('./figshare/combined_data.csv')
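# A possible alternative to repeated DataFrame.append (deprecated in newer pandas and re-copies
# the accumulated frame on every call): collect the per-model frames in a list inside the loop
# above, e.g. frames.append(df), and concatenate once at the end with df_all = pd.concat(frames).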
%%sh
#get file size of combined csv
du -sh figshare/combined_data.csv
###Output
5.6G figshare/combined_data.csv
###Markdown
**Observations**Our team members had the following computer specs: | Team Member | Ram | Processor || :------------- | :----------: | -----------: || Cal | 16GB | AMD Ryzen 5 3600 6-core || Justin | 32GB | Intel i5 | | Anita | 12GB | Intel i5 | | Yuan | 8GB | Intel i5 | Below are our processing times and peak memory usage by team member: | Team Member | Processing Time | Peak Memory Usage || :------------- | :---------- | :-----------: || Cal | 1min 13sec | 137 mb || Justin | 1min 21sec | 155 mb || Anita | 3min 34sec | 9050 mb || Yuan | 21 min | 3499 mb |We also used the Pandas default writing method. 5. Load the combined CSV to memory and perform a simple EDA **Performance using Default Pandas Method**
###Code
%%time
%%memit
# time to read/calculate when using default Pandas method
df = pd.read_csv("figshare/combined_data.csv")
print(df["model"].value_counts())
###Output
MPI-ESM1-2-HR 5154240
CMCC-CM2-HR4 3541230
CMCC-CM2-SR5 3541230
CMCC-ESM2 3541230
TaiESM1 3541230
NorESM2-MM 3541230
SAM0-UNICON 3541153
FGOALS-f3-L 3219300
GFDL-CM4 3219300
GFDL-ESM4 3219300
MRI-ESM2-0 3037320
EC-Earth3-Veg-LR 3037320
BCC-CSM2-MR 3035340
MIROC6 2070900
ACCESS-CM2 1932840
ACCESS-ESM1-5 1610700
INM-CM4-8 1609650
INM-CM5-0 1609650
KIOST-ESM 1287720
FGOALS-g3 1287720
MPI-ESM-1-2-HAM 966420
AWI-ESM-1-1-LR 966420
NESM3 966420
MPI-ESM1-2-LR 966420
NorESM2-LM 919800
CanESM5 551880
BCC-ESM1 551880
Name: model, dtype: int64
peak memory: 9494.78 MiB, increment: 9343.00 MiB
CPU times: user 52.1 s, sys: 3.18 s, total: 55.3 s
Wall time: 55.4 s
###Markdown
**Performance when loading in Select Columns only**
###Code
%%time
%%memit
use_cols = ["time", "rain (mm/day)", "model"]
df = pd.read_csv("figshare/combined_data.csv", usecols = use_cols)
print(df["model"].value_counts())
###Output
MPI-ESM1-2-HR 5154240
CMCC-CM2-HR4 3541230
CMCC-CM2-SR5 3541230
CMCC-ESM2 3541230
TaiESM1 3541230
NorESM2-MM 3541230
SAM0-UNICON 3541153
FGOALS-f3-L 3219300
GFDL-CM4 3219300
GFDL-ESM4 3219300
MRI-ESM2-0 3037320
EC-Earth3-Veg-LR 3037320
BCC-CSM2-MR 3035340
MIROC6 2070900
ACCESS-CM2 1932840
ACCESS-ESM1-5 1610700
INM-CM4-8 1609650
INM-CM5-0 1609650
KIOST-ESM 1287720
FGOALS-g3 1287720
MPI-ESM-1-2-HAM 966420
AWI-ESM-1-1-LR 966420
NESM3 966420
MPI-ESM1-2-LR 966420
NorESM2-LM 919800
CanESM5 551880
BCC-ESM1 551880
Name: model, dtype: int64
peak memory: 10063.96 MiB, increment: 3904.76 MiB
CPU times: user 40.3 s, sys: 3.04 s, total: 43.4 s
Wall time: 43.6 s
###Markdown
**Performance when reading file using chunks**
###Code
%%time
%%memit
# 10 million chunk size
counts = pd.Series(dtype=int)
for chunk in pd.read_csv("figshare/combined_data.csv", chunksize=10_000_000):
counts = counts.add(chunk["model"].value_counts(), fill_value=0)
print(counts.astype(int))
%%time
%%memit
# 1 million chunk size
counts = pd.Series(dtype=int)
for chunk in pd.read_csv("figshare/combined_data.csv", chunksize=1_000_000):
counts = counts.add(chunk["model"].value_counts(), fill_value=0)
print(counts.astype(int))
%%time
%%memit
counts = pd.Series(dtype=int)
for chunk in pd.read_csv("figshare/combined_data.csv", chunksize=500_000):
counts = counts.add(chunk["model"].value_counts(), fill_value=0)
print(counts.astype(int))
###Output
ACCESS-CM2 1932840
ACCESS-ESM1-5 1610700
AWI-ESM-1-1-LR 966420
BCC-CSM2-MR 3035340
BCC-ESM1 551880
CMCC-CM2-HR4 3541230
CMCC-CM2-SR5 3541230
CMCC-ESM2 3541230
CanESM5 551880
EC-Earth3-Veg-LR 3037320
FGOALS-f3-L 3219300
FGOALS-g3 1287720
GFDL-CM4 3219300
GFDL-ESM4 3219300
INM-CM4-8 1609650
INM-CM5-0 1609650
KIOST-ESM 1287720
MIROC6 2070900
MPI-ESM-1-2-HAM 966420
MPI-ESM1-2-HR 5154240
MPI-ESM1-2-LR 966420
MRI-ESM2-0 3037320
NESM3 966420
NorESM2-LM 919800
NorESM2-MM 3541230
SAM0-UNICON 3541153
TaiESM1 3541230
dtype: int64
peak memory: 4593.33 MiB, increment: 0.00 MiB
CPU times: user 51.4 s, sys: 1.01 s, total: 52.4 s
Wall time: 52.6 s
###Markdown
**Performance when loading with simpler data types**
###Code
df.dtypes
%%time
%%memit
col_type = {'time':object, 'lat_min':np.float32, 'lat_max':np.float32, 'lon_min': np.float32,
'lon_max': np.float32, 'rain (mm/day)': np.float32, 'model': object}
df2 = pd.read_csv("figshare/combined_data.csv", dtype=col_type)
print(df2["model"].value_counts())
df2.dtypes
###Output
_____no_output_____
###Markdown
**Using Dask**
###Code
%%time
%%memit
import dask.dataframe as dd
# time to read/calculate when using default Dask method
d_df = dd.read_csv("figshare/combined_data.csv")
result = d_df.model.value_counts().compute()
print(result)
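# If memory were a concern, dask.dataframe.read_csv also accepts pandas-style options,
# e.g. (a sketch only, not part of the timing above):
#   d_df = dd.read_csv("figshare/combined_data.csv", usecols=["model"], blocksize="64MB")
#   d_df["model"].value_counts().compute()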
###Output
MPI-ESM1-2-HR 5154240
TaiESM1 3541230
NorESM2-MM 3541230
CMCC-CM2-HR4 3541230
CMCC-CM2-SR5 3541230
CMCC-ESM2 3541230
SAM0-UNICON 3541153
FGOALS-f3-L 3219300
GFDL-CM4 3219300
GFDL-ESM4 3219300
EC-Earth3-Veg-LR 3037320
MRI-ESM2-0 3037320
BCC-CSM2-MR 3035340
MIROC6 2070900
ACCESS-CM2 1932840
ACCESS-ESM1-5 1610700
INM-CM5-0 1609650
INM-CM4-8 1609650
KIOST-ESM 1287720
FGOALS-g3 1287720
MPI-ESM1-2-LR 966420
NESM3 966420
AWI-ESM-1-1-LR 966420
MPI-ESM-1-2-HAM 966420
NorESM2-LM 919800
BCC-ESM1 551880
CanESM5 551880
Name: model, dtype: int64
peak memory: 9467.43 MiB, increment: 2008.24 MiB
CPU times: user 1min 28s, sys: 7.66 s, total: 1min 36s
Wall time: 35.5 s
###Markdown
*observations*For part 5, we tasked each of our team members to investigate each one of the approaches. Please see the summary of results below which were performed on Justin's machine (i5 processor (4 core, 8 threads), 32GB memory).* Default Pandas approach* Changing the Data Type of the columns using float32 instead of float64 for 4 out of 7 columns* Reading in fewer columns (date, rain and model)* Reading in using several chunk sizes (0.5M, 1M and 10M chunks)* Loading with Dask| Approach Taken | Processing Time | Peak Memory Usage | Increment Memory Usage || :------------- | :---------- | :----------- | :-----------: || Default Pandas | 55s | 9495 MB | 9343 MB || Fewer Columns | 43s | 10064 MB | 3905 MB || Loading in 10 M chunk size| 53s | 5706 MB | 1131 MB || Loading in 1.0 M chunk size| 53s | 4640 MB | 16 MB || Loading in 0.5 M chunk size| 52s | 4593 MB | 0 MB || Change Data Type | 54s | 9604 MB | 5014 MB || Dask | 1m 36s | 9467 MB | 2008 MB | The fastest approach appears to be using fewer columns and the slowest approach was Dask. There are probably more advanced tuning in Dask to improve this but we only used the default settings.Note that:* **Peak memory**: peak memory usage of your system (including memory usage of other processes) during the program runtime.* **Increment**: the increment in memory usage relative to the memory usage just before the program is run The lowest peak memory usage is using chunking with 0.5M chunk size and the highest peak memory usage was using fewer columns. The lowest increment memory usage is using chunking with 0.5M chunk size and the highest peak memory usage is using the default Pandas approach. 6. Perform a simple EDA in R
###Code
import pandas as pd
## install the packages https://arrow.apache.org/docs/python/install.html
import pyarrow.dataset as ds
import pyarrow as pa
import pyarrow.parquet as pq
## How to install put instructions https://anaconda.org/conda-forge/rpy2
import rpy2.rinterface
# install this https://pypi.org/project/rpy2-arrow/#description pip install rpy2-arrow
# have to install this as well conda install -c conda-forge r-arrow
import rpy2_arrow.pyarrow_rarrow as pyra
### instruction
import pyarrow.feather as feather
%%R
#just seeing if it's available
library("arrow")
library("dplyr")
%%time
%%memit
## read more on the datasets here https://arrow.apache.org/docs/python/dataset.html
dataset = ds.dataset("figshare/combined_data.csv", format="csv")
## this is of arrow table format
table = dataset.to_table()
%%time
# experiment in writing in feather format
feather.write_feather(table, 'figshare/figshare.feather')
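# The feather file written above can also be read back into a pandas DataFrame directly from
# Python, e.g. (sketch, not timed here):
#   df_feather = feather.read_feather('figshare/figshare.feather')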
%%time
%%R
### here we are showing how much time it took to read the feather file that we wrote in Python
library(arrow)
start_time <- Sys.time()
r_table <- arrow::read_feather("figshare/figshare.feather")
print(class(r_table))
library(dplyr)
result <- r_table %>% count(model)
end_time <- Sys.time()
print(result)
print(end_time - start_time)
###Output
[1] "tbl_df" "tbl" "data.frame"
# A tibble: 27 x 2
   model                   n
   <chr>               <int>
 1 ACCESS-CM2        1932840
 2 ACCESS-ESM1-5     1610700
 3 AWI-ESM-1-1-LR     966420
 4 BCC-CSM2-MR       3035340
 5 BCC-ESM1           551880
 6 CanESM5            551880
 7 CMCC-CM2-HR4      3541230
 8 CMCC-CM2-SR5      3541230
 9 CMCC-ESM2         3541230
10 EC-Earth3-Veg-LR  3037320
# … with 17 more rows
Time difference of 6.922818 secs
CPU times: user 8.92 s, sys: 4.34 s, total: 13.3 s
Wall time: 7.01 s
|
CHD_Prediction_Final.ipynb | ###Markdown
Coronary Heart Disease Prediction
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Sklearn
from sklearn.preprocessing import normalize
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn import tree
from sklearn.naive_bayes import MultinomialNB,GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.calibration import CalibratedClassifierCV
# Evaluation Metrics
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error as mae
from math import sqrt
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support,classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
df=pd.read_csv('framingham.csv')
df.head()
columns = df.columns
columns
df.dtypes
df.hist(figsize=(12,12))
df.describe()
###Output
_____no_output_____
###Markdown
Filling Null Values (NaN - Not a Number) with Attribute Mean
###Code
for i in columns:
df[i] = df[i].fillna(df[i].mean())
df.describe()
###Output
_____no_output_____
###Markdown
Analysing Class Distribution The TenYearCHD attribute indicates whether an individual is at risk of suffering from CHD in the next 10 years: 0 - Not at Risk, 1 - At Risk
###Code
target_count = df['TenYearCHD'].value_counts()
print('Class 0:', target_count[0])
print('Class 1:', target_count[1])
print('Proportion:', round(target_count[0] / target_count[1], 2), ': 1')
target_count.plot(kind='bar', title='Count (target)');
# Class count
count_class_0, count_class_1 = df['TenYearCHD'].value_counts()
# Divide by class
df_class_0 = df[df['TenYearCHD'] == 0]
df_class_1 = df[df['TenYearCHD'] == 1]
df_class_0
df_class_1
###Output
_____no_output_____
###Markdown
Normalizing Class Distribution
###Code
df_class_1_over = df_class_1.sample(count_class_0, replace=True)
df_test_over = pd.concat([df_class_0, df_class_1_over], axis=0)
print('Random over-sampling:')
print(df_test_over['TenYearCHD'].value_counts())
df_test_over['TenYearCHD'].value_counts().plot(kind='bar', title='Count (target)');
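# Random over-sampling of the minority class is done above with DataFrame.sample(replace=True).
# An equivalent, commonly used alternative (sketch, not used in this notebook):
#   from sklearn.utils import resample
#   df_class_1_over = resample(df_class_1, replace=True, n_samples=count_class_0, random_state=0)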
df = df_test_over
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Z-score Normalization on selected numerical attributes
###Code
from scipy.stats import zscore
df['cigsPerDay'] = df[['cigsPerDay']].apply(zscore)
df['totChol'] = df[['totChol']].apply(zscore)
df['sysBP'] = df[['sysBP']].apply(zscore)
df['diaBP'] = df[['diaBP']].apply(zscore)
df['BMI'] = df[['BMI']].apply(zscore)
df['heartRate'] = df[['heartRate']].apply(zscore)
df['glucose'] = df[['glucose']].apply(zscore)
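# scipy.stats.zscore standardizes each selected column as z = (x - mean(x)) / std(x),
# giving each column zero mean and unit variance.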
df
X = df.drop(['TenYearCHD','education'],axis = 1)
Y = df['TenYearCHD']
X.head()
Y.head()
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.25,random_state=0)
print(X_train.shape)
print(X_test.shape)
#Y_train = np.asarray(Y_train).reshape(-1,1)
#Y_test = np.asarray(Y_test).reshape(-1,1)
print(Y_train.shape)
print(Y_test.shape)
###Output
(5394, 14)
(1798, 14)
(5394,)
(1798,)
###Markdown
KNN
###Code
import time
start = time.time()
knn = KNeighborsClassifier(n_neighbors=2).fit(X_train,Y_train)
y_pred=knn.predict(X_test)
end = time.time()
print("Accuracy:",accuracy_score(Y_test, y_pred))
print(end-start)
cm = confusion_matrix(Y_test,y_pred)
print(cm)
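# sklearn's confusion_matrix places true labels on rows and predicted labels on columns, so
# cm[0][0] counts correctly classified class-0 samples; the TP/TN naming below therefore treats
# class 0 ("not at risk") as the positive class.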
TP =cm[0][0]
FN =cm[0][1]
TN = cm[1][1]
FP = cm[1][0]
print("TP: ", TP)
print("TN", TN)
print("FP", FP)
print("FN", FN)
Precision = (TP / (TP+FP))
Recall = (TP / (TP+FN))
print("Accuracy: ", ((TP+TN) / (TP+FP+FN+TN)))
print("Precision", (TP / (TP+FP)))
print("Recall", (TP / (TP+FN)))
print("Specificity", (TN / (TN+FP)) )
print("F1 Score", ((2 * Precision * Recall) / (Precision + Recall)))
print(classification_report(Y_test, y_pred))
df.groupby('TenYearCHD').count()
error = []
for i in range(1, 40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, Y_train)
pred_i = knn.predict(X_test)
error.append(np.mean(pred_i != Y_test))
plt.figure(figsize=(12, 6))
plt.plot(range(1, 40), error, color='red', linestyle='dashed', marker='o',
markerfacecolor='blue', markersize=10)
plt.title('Error Rate K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')
from sklearn.model_selection import cross_val_score
knn = KNeighborsClassifier(n_neighbors=2).fit(X_train,Y_train)
cv_scores = cross_val_score(knn, X, Y, cv=10)
print(cv_scores)
print("\nCross Validation Scores mean:{}".format(np.mean(cv_scores)))
###Output
[ 0.9125 0.89861111 0.91805556 0.92777778 0.92777778 0.91111111
0.92618384 0.91922006 0.92200557 0.90668524]
Cross Validation Scores mean:0.9169928040854224
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='saga',max_iter=10).fit(X_train,Y_train)
y_pred=lr.predict(X_test)
print("Accuracy:",accuracy_score(Y_test, y_pred))
###Output
Accuracy: 0.643492769744
###Markdown
Decision Tree
###Code
start = time.time()
dt = tree.DecisionTreeClassifier().fit(X_train,Y_train)
y_pred=dt.predict(X_test)
end = time.time()
print("Accuracy:",accuracy_score(Y_test, y_pred))
print(end-start)
cm = confusion_matrix(Y_test,y_pred)
print(cm)
TP =cm[0][0]
FN =cm[0][1]
TN = cm[1][1]
FP = cm[1][0]
print("TP: ", TP)
print("TN", TN)
print("FP", FP)
print("FN", FN)
Precision = (TP / (TP+FP))
Recall = (TP / (TP+FN))
print("Accuracy: ", ((TP+TN) / (TP+FP+FN+TN)))
print("Precision", (TP / (TP+FP)))
print("Recall", (TP / (TP+FN)))
print("Specificity", (TN / (TN+FP)) )
print("F1 Score", ((2 * Precision * Recall) / (Precision + Recall)))
print(classification_report(Y_test, y_pred))
###Output
Accuracy: 0.905450500556
0.03830528259277344
[[738 155]
[ 15 890]]
TP: 738
TN 890
FP 15
FN 155
Accuracy: 0.905450500556
Precision 0.980079681275
Recall 0.826427771557
Specificity 0.983425414365
F1 Score 0.896719319563
precision recall f1-score support
0 0.98 0.83 0.90 893
1 0.85 0.98 0.91 905
micro avg 0.91 0.91 0.91 1798
macro avg 0.92 0.90 0.90 1798
weighted avg 0.92 0.91 0.90 1798
###Markdown
Gaussian Naive Bayes
###Code
gnb = GaussianNB()
gnb.fit(X_train,Y_train)
y_pred=gnb.predict(X_test)
print("Accuracy:",accuracy_score(Y_test, y_pred))
###Output
Accuracy: 0.58787541713
###Markdown
Random Forest
###Code
start = time.time()
rf=RandomForestClassifier(n_estimators=10)
rf.fit(X_train,Y_train)
y_pred=rf.predict(X_test)
end = time.time()
print("Accuracy:",accuracy_score(Y_test, y_pred))
print(end-start)
cm = confusion_matrix(Y_test,y_pred)
print(cm)
TP =cm[0][0]
FN =cm[0][1]
TN = cm[1][1]
FP = cm[1][0]
print("TP: ", TP)
print("TN", TN)
print("FP", FP)
print("FN", FN)
Precision = (TP / (TP+FP))
Recall = (TP / (TP+FN))
print("Accuracy: ", ((TP+TN) / (TP+FP+FN+TN)))
print("Precision", (TP / (TP+FP)))
print("Recall", (TP / (TP+FN)))
print("Specificity", (TN / (TN+FP)) )
print("F1 Score", ((2 * Precision * Recall) / (Precision + Recall)))
print(classification_report(Y_test, y_pred))
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(rf, X, Y, cv=10)
print(cv_scores)
print("\nCross Validation Scores mean:{}".format(np.mean(cv_scores)))
###Output
[ 0.9625 0.96388889 0.96805556 0.97916667 0.98472222 0.96666667
0.97214485 0.97771588 0.9735376 0.96239554]
Cross Validation Scores mean:0.9710793871866296
###Markdown
ROC (Receiver Operating Characteristic) Curve
###Code
import sklearn.metrics as metrics
y_pred_knn =knn.predict_proba(X_test)[:,1]
y_pred_rf =rf.predict_proba(X_test)[:,1]
y_pred_dt = dt.predict_proba(X_test)[:,1]
#_pred_lr=lr.predict_proba(X_test)[:,1]
models=[y_pred_rf,y_pred_dt,y_pred_knn]
label=['RF','DT','KNN']
fig1 = plt.figure(figsize=[6,6])
for i in range(3):
fpr, tpr,thresholds= metrics.roc_curve(Y_test,models[i])
#rint('model:',label[i])
#rint('thresholds:',np.round(thresholds,3))
#rint('tpr: ',np.round(tpr,3))
#rint('fpr: ',np.round(fpr,3))
roc_auc = auc(fpr, tpr)
plt.plot(fpr,tpr,lw=2,label = ' %s (AUC = %0.2f)' % (label[i],roc_auc))
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.title('ROC curve ')
plt.xlabel('False positive rate (1-specificity)')
plt.ylabel('True positive rate (sensitivity)')
plt.legend(loc=4,)
from sklearn.model_selection import StratifiedKFold
import matplotlib.patches as patches
from sklearn.metrics import roc_curve,auc
from scipy import interp
cvscores = []
clf=RandomForestClassifier(n_estimators=10)
clf.fit(X_train,Y_train)
cv = StratifiedKFold(n_splits=10)
# plot arrows
fig1 = plt.figure(figsize=[8,8])
#ax1 = fig1.add_subplot(111,aspect = 'equal')
#ax1.add_patch(
# patches.Arrow(0.45,0.5,-0.25,0.25,width=0.3,color='green',alpha = 0.5)
# )
#ax1.add_patch(
# patches.Arrow(0.5,0.45,0.25,-0.25,width=0.3,color='red',alpha = 0.5)
# )
tprs = []
aucs = []
mean_fpr = np.linspace(0,1,100)
i = 1
start = time.time()
for train,test in cv.split(X,Y):
    # Note: the fold indices produced by cv.split are not used below; the classifier is re-fit on
    # the same X_train/X_test split in every iteration, so this loop repeats one evaluation rather
    # than performing true stratified k-fold cross-validation.
    prediction = clf.fit(X_train,Y_train).predict_proba(X_test)
y_pred=clf.predict(X_test)
print("Accuracy:",accuracy_score(Y_test, y_pred))
cvscores.append(accuracy_score(Y_test, y_pred))
fpr, tpr, t = roc_curve(Y_test, prediction[:, 1])
tprs.append(interp(mean_fpr, fpr, tpr))
roc_auc = auc(fpr, tpr)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=2, alpha=0.3, label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
i= i+1
end = time.time()
plt.plot([0,1],[0,1],linestyle = '--',lw = 2,color = 'black')
mean_tpr = np.mean(tprs, axis=0)
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, color='blue',
label=r'Mean ROC (AUC = %0.2f )' % (mean_auc),lw=2, alpha=1)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.legend(loc="lower right")
#plt.text(0.32,0.7,'More accurate area',fontsize = 12)
#plt.text(0.63,0.4,'Less accurate area',fontsize = 12)
plt.show()
print(cvscores)
print("\nCross Validation Scores mean:{}".format(np.mean(cvscores)))
print(end-start)
###Output
Accuracy: 0.954949944383
Accuracy: 0.957730812013
Accuracy: 0.954949944383
Accuracy: 0.950500556174
Accuracy: 0.952169076752
Accuracy: 0.958286985539
Accuracy: 0.959955506118
Accuracy: 0.956618464961
Accuracy: 0.958286985539
Accuracy: 0.959955506118
###Markdown
SVM
###Code
clf=svm.SVC()
clf.fit(X_train,Y_train)
y_pred=clf.predict(X_test)
print("Accuracy:",accuracy_score(Y_test, y_pred))
###Output
/home/divya/anaconda3/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
"avoid this warning.", FutureWarning)
###Markdown
Neural Network (Keras)
###Code
import keras
from keras.models import Sequential
from keras.layers import Dense
print(keras.__version__)
clf = Sequential()
# Use the Keras 2 argument names (kernel_initializer / epochs) instead of the deprecated init / nb_epoch
clf.add(Dense(units = 12, kernel_initializer = 'uniform', activation = 'relu', input_dim = X_train.shape[1]))
clf.add(Dense(units = 8, kernel_initializer = 'uniform', activation = 'relu'))
clf.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
clf.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
clf.fit(X_train,Y_train, batch_size = 20, epochs = 50)
ypred = clf.predict(X_test)
ypred = (ypred > 0.5)
accuracy = accuracy_score(Y_test,ypred)
accuracy
###Output
_____no_output_____ |
courses/udacity_intro_to_tensorflow_for_deep_learning/l06c02_exercise_flowers_with_transfer_learning.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits = ['train[:70%]', 'train[70%:]']
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
split=splits,
with_info=True,
as_supervised=True
)
###Output
Downloading and preparing dataset tf_flowers/3.0.1 (download: 218.21 MiB, generated: 221.83 MiB, total: 440.05 MiB) to /root/tensorflow_datasets/tf_flowers/3.0.1...
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell bellow train this model like any other, by first calling `compile` and then followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is averaged across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports This Colab will require us to use some things which are not yet in official releases of TensorFlow. So below, we're first installing a nightly version of TensorFlow as well as TensorFlow Hub.This will switch your installation of TensorFlow in Colab to this TensorFlow version. Once you are finished with this Colab, you should switch back to the latest stable release of TensorFlow by selecting `Runtime -> Reset all runtimes...` in the menus above. This will reset the Colab environment to its original state.
###Code
!pip install tf-nightly-gpu
!pip install "tensorflow_hub==0.4.0"
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before. The new one is importing tensorflow_hub which was installed above, and which this Colab will make heavy use of.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
tf.enable_eager_execution()
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the docuemntation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, ceate a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
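###Markdown
One way this cell could look (editor's sketch). The handle below is the `tf2-preview/mobilenet_v2/feature_vector` URL from TF Hub; the exact version suffix is an assumption.
###Code
# Sketch: wrap the MobileNet v2 feature vector as a Keras layer.
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3))
###Output
_____no_output_____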
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
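###Markdown
A sketch of the freezing step, assuming `feature_extractor` is the `hub.KerasLayer` from the previous sketch.
###Code
# Sketch: freeze the pre-trained weights so only the new head is trained.
feature_extractor.trainable = False
###Output
_____no_output_____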
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
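###Markdown
A possible classification head (editor's sketch), assuming `feature_extractor` and `num_classes` from the sketches above.
###Code
# Sketch: feature extractor followed by a softmax layer with one unit per flower class.
model = tf.keras.Sequential([
    feature_extractor,
    layers.Dense(num_classes, activation='softmax')
])
model.summary()
###Output
_____no_output_____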
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` and then `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
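###Markdown
One way to compile and train the model (editor's sketch). Since the head above ends in a softmax, the string loss (which defaults to `from_logits=False`) is used here.
###Code
# Sketch: compile with Adam and sparse categorical cross-entropy, then train.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
EPOCHS = 6
history = model.fit(train_batches,
                    epochs=EPOCHS,
                    validation_data=validation_batches)
###Output
_____no_output_____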
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation Graphs.In the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
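###Markdown
A plotting sketch. The history keys assume TF 2.x with `metrics=['accuracy']` (`'accuracy'`/`'val_accuracy'`); older Keras versions recorded `'acc'`/`'val_acc'`.
###Code
# Sketch: pull the curves out of the History object and plot them side by side.
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____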
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
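###Markdown
A minimal sketch using the dataset metadata from `dataset_info`.
###Code
# Sketch: label names as a NumPy array so they can be indexed by predicted ids.
class_names = np.array(dataset_info.features['label'].names)
print(class_names)
###Output
_____no_output_____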
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
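###Markdown
A possible implementation (editor's sketch), assuming `train_batches`, `model` and `class_names` from the sketches above.
###Code
# Sketch: take one batch, predict, and map argmax indices to class names.
image_batch, label_batch = next(iter(train_batches))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
print(predicted_class_names)
###Output
_____no_output_____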
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
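###Markdown
A short sketch of the comparison print-out, using the batch and predictions from the previous sketch.
###Code
# Sketch: true labels next to the predicted indices.
print("Labels:      ", label_batch)
print("Predictions: ", predicted_ids)
###Output
_____no_output_____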
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Perform Transfer Learning with the Inception ModelGo to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
###Code
###Output
_____no_output_____
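###Markdown
A sketch of the Inception v3 variant. The 299x299 input size comes from the exercise text above; the feature-vector URL version suffix is an assumption, and the rest reuses names (`training_set`, `validation_set`, `num_classes`, `num_training_examples`, `BATCH_SIZE`, `EPOCHS`) defined in the earlier sketches.
###Code
# Sketch: rebuild the input pipeline at Inception v3's 299x299 resolution.
IMAGE_RES = 299

def format_image(image, label):
    image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES)) / 255.0
    return image, label

train_batches = training_set.shuffle(num_training_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)

# Swap in the Inception v3 feature vector and retrain a fresh softmax head.
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3), trainable=False)
inception_model = tf.keras.Sequential([
    feature_extractor,
    layers.Dense(num_classes, activation='softmax')
])
inception_model.compile(optimizer='adam',
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
inception_history = inception_model.fit(train_batches,
                                        epochs=EPOCHS,
                                        validation_data=validation_batches)
###Output
_____no_output_____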
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/). Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. Imports This Colab will require us to use some things which are not yet in official releases of TensorFlow. So below, we're first installing a nightly version of TensorFlow as well as TensorFlow Hub. This will switch your installation of TensorFlow in Colab to this TensorFlow version. Once you are finished with this Colab, you should switch back to the latest stable release of TensorFlow by selecting `Runtime -> Reset all runtimes...` in the menus above. This will reset the Colab environment to its original state.
###Code
!pip install tf-nightly-gpu
!pip install "tensorflow_hub==0.4.0"
!pip install -U tensorflow_datasets
###Output
_____no_output_____
###Markdown
Some normal imports we've seen before. The new one is importing tensorflow_hub which was installed above, and which this Colab will make heavy use of.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
tf.enable_eager_execution()
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` and then `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation Graphs.In the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Perform Transfer Learning with the Inception ModelGo to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
###Code
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/). Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` and then `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
num_classes = dataset_info.features['label'].num_classes
num_training_examples = 0
num_validation_examples = 0
for i in training_set:
    num_training_examples += 1
for i in validation_set:
    num_validation_examples += 1
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
Total Number of Classes: 5
Total Number of Training Images: 2569
Total Number of Validation Images: 1101
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
Image 1 shape: (333, 500, 3) label: 2
Image 2 shape: (212, 320, 3) label: 3
Image 3 shape: (240, 320, 3) label: 3
Image 4 shape: (240, 320, 3) label: 4
Image 5 shape: (317, 500, 3) label: 3
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES = 224
def format_image(image, label):
image= tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
# URL = 'https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4'
INCEPTION_URL = 'https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4'
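# Note (editor's comment): the Inception exercise text says Inception v3 takes 299 x 299 images,
# so IMAGE_RES = 299 may suit this feature vector better than the 224 set above.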
feature_extractor = hub.KerasLayer(INCEPTION_URL, input_shape=(IMAGE_RES, IMAGE_RES, 3))
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense( num_classes, activation='softmax')
])
print(model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer_1 (KerasLayer) (None, 2048) 21802784
_________________________________________________________________
dense_1 (Dense) (None, 5) 10245
=================================================================
Total params: 21,813,029
Trainable params: 10,245
Non-trainable params: 21,802,784
_________________________________________________________________
None
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` and then `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
model.compile(
optimizer='adam',
  # The Dense head above already applies a softmax, so the loss is computed
  # on probabilities rather than logits.
  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy']
)
EPOCHS = 6
history = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
###Output
Epoch 1/6
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). Accuracy notes: MobileNet reached 94.59% training / 90.37% validation accuracy; the Inception model reached 93.5% training / 86.74% validation accuracy. TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8,8))
plt.subplot(1,2,1)
plt.plot(epochs_range, acc, label='Training accuracy')
plt.plot(epochs_range, val_acc, label='Validation accuracy')
plt.legend(loc='lower right')
plt.xlabel('Epochs')
plt.ylabel('accuracy')
plt.title('Training and Validation Accuracy')
# row, col, index pos.
plt.subplot(1,2,2)
plt.plot(epochs_range, loss, label= 'Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper left')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names = np.array(dataset_info.features['label'].names)
print(class_names)
###Output
['dandelion' 'daisy' 'tulips' 'sunflowers' 'roses']
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch = next(iter(train_batches))
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
print(predicted_ids)
predicted_class_names = class_names[predicted_ids]
###Output
[1 0 3 1 3 1 2 0 4 4 4 4 1 3 3 4 4 0 4 4 3 2 3 0 4 0 3 1 2 4 2 3]
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print("Labels: ", label_batch)
print("Predictions: ", predicted_ids)
###Output
Labels: tf.Tensor([1 0 3 1 3 1 2 0 4 4 4 4 1 3 3 4 4 0 4 4 3 1 3 0 4 0 3 1 4 4 2 3], shape=(32,), dtype=int64)
Predictions: [1 0 3 1 3 1 2 0 4 4 4 4 1 3 3 4 4 0 4 4 3 2 3 0 4 0 3 1 2 4 2 3]
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/). Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` and then `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use. These models can either be used as is, or they can be used for Transfer Learning. Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs. Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/). Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from the menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits = tfds.load('tf_flowers', with_info=True, split=['train[:70%]', 'train[70%:]'])
(training_set, validation_set), dataset_info = splits
# training_set = tf.data.Dataset.from_tensor_slices([(example.numpy(), label.numpy()) for example, label in training_set])
# validation_set = tf.data.Dataset.from_tensor_slices(list(validation_set))
print(dataset_info)
print(training_set)
print(validation_set)
###Output
tfds.core.DatasetInfo(
name='tf_flowers',
version=3.0.1,
description='A large set of images of flowers',
homepage='https://www.tensorflow.org/tutorials/load_data/images',
features=FeaturesDict({
'image': Image(shape=(None, None, 3), dtype=tf.uint8),
'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=5),
}),
total_num_examples=3670,
splits={
'train': 3670,
},
supervised_keys=('image', 'label'),
citation="""@ONLINE {tfflowers,
author = "The TensorFlow Team",
title = "Flowers",
month = "jan",
year = "2019",
url = "http://download.tensorflow.org/example_images/flower_photos.tgz" }""",
redistribution_info=,
)
<PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>
<PrefetchDataset shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
num_classes = dataset_info.features['label'].num_classes
num_training_examples = len(list(training_set))
num_validation_examples = len(list(validation_set))
# BAD PRACTICE, I could just use the total and percentages like:
num_training_examples == dataset_info.splits.total_num_examples*0.7
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
Total Number of Classes: 5
Total Number of Training Images: 2569
Total Number of Validation Images: 1101
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example['image'].shape, example['label']))
###Output
Image 1 shape: (333, 500, 3) label: 2
Image 2 shape: (212, 320, 3) label: 3
Image 3 shape: (240, 320, 3) label: 3
Image 4 shape: (240, 320, 3) label: 4
Image 5 shape: (317, 500, 3) label: 3
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES = (224, 224)
def separate(example):
return example['image'], example['label']
def format_image(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, IMAGE_RES)
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(separate).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(separate).map(format_image).batch(BATCH_SIZE)
for i, example in enumerate(train_batches.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0][0].shape, example[1][0]))
###Output
Image 1 shape: (224, 224, 3) label: 0
Image 2 shape: (224, 224, 3) label: 1
Image 3 shape: (224, 224, 3) label: 3
Image 4 shape: (224, 224, 3) label: 2
Image 5 shape: (224, 224, 3) label: 0
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
feature_extractor = tf.keras.applications.mobilenet_v2.MobileNetV2(weights='imagenet')
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
###Output
_____no_output_____
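###Markdown
Note that the cell above builds the feature extractor with `tf.keras.applications` rather than TensorFlow Hub. For reference, a minimal sketch of the Hub-based route the TODO describes (the same URL and `hub.KerasLayer` call appear in the reference solution later in this document; `hub_feature_extractor` is just an illustrative name) would be:
###Code
import tensorflow_hub as hub

# MobileNet v2 feature vector from TensorFlow Hub (no classification head), expecting 224x224 RGB inputs
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
hub_feature_extractor = hub.KerasLayer(URL, input_shape=(224, 224, 3))
###Output
_____no_output_____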
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable = False
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
input = tf.keras.layers.Input(IMAGE_RES + (3,))
x = preprocess(input)
x = feature_extractor(x)
output = tf.keras.layers.Dense(5, activation='softmax')(x)
model = tf.keras.Model(input, output)
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS = 6
model.compile(optimizer='adam',
              # the Dense layer above already applies a softmax, so the loss receives probabilities
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
history = model.fit(x=train_batches, epochs=EPOCHS, validation_data=validation_batches)
###Output
Epoch 1/6
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(0, EPOCHS)
###Output
_____no_output_____
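###Markdown
The cell above only extracts the history values; a minimal plotting sketch using those arrays (essentially the same plotting code as the reference solution further down in this document, and assuming `matplotlib.pyplot` is imported as `plt` earlier in this notebook, as it is used in the prediction-plotting cell below) could be:
###Code
# Accuracy and loss curves for training vs. validation
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____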
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names = dataset_info.features['label'].names
class_names
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch = next(iter(validation_batches))
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = tf.math.argmax(predicted_batch, axis=-1)
predicted_class_names = []
for id in predicted_ids:
predicted_class_names.append(class_names[id])
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print(predicted_class_names)
###Output
['tulips', 'dandelion', 'daisy', 'daisy', 'dandelion', 'daisy', 'dandelion', 'dandelion', 'dandelion', 'dandelion', 'dandelion', 'daisy', 'daisy', 'dandelion', 'dandelion', 'daisy', 'dandelion', 'dandelion', 'dandelion', 'tulips', 'dandelion', 'daisy', 'tulips', 'tulips', 'sunflowers', 'dandelion', 'daisy', 'dandelion', 'dandelion', 'dandelion', 'dandelion', 'dandelion']
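###Markdown
The cell above prints only the predicted class names. A small sketch that also prints the true labels and the predicted indices, reusing the `label_batch` and `predicted_ids` tensors defined above (mirroring what the reference solution later in this document does), could be:
###Code
# True labels come from the validation batch; predicted indices from the argmax above
print("Labels:           ", label_batch.numpy())
print("Predicted indices:", predicted_ids.numpy())
###Output
_____no_output_____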
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
tf.enable_eager_execution()
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Perform Transfer Learning with the Inception ModelGo to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
###Code
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
(training_set, validation_set), dataset_info = tfds.load(
'tf_flowers',
split= ['train[:70%]', 'train[70%:]'],
with_info = True,
as_supervised = True
)
###Output
[1mDownloading and preparing dataset tf_flowers/3.0.1 (download: 218.21 MiB, generated: 221.83 MiB, total: 440.05 MiB) to /root/tensorflow_datasets/tf_flowers/3.0.1...[0m
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
num_classes = dataset_info.features['label'].num_classes
num_training_examples = 0
num_validation_examples = 0
for example in training_set:
num_training_examples += 1
for example in validation_set:
num_validation_examples += 1
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
Total Number of Classes: 5
Total Number of Training Images: 2569
Total Number of Validation Images: 1101
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
Image 1 shape: (333, 500, 3) label: 2
Image 2 shape: (212, 320, 3) label: 3
Image 3 shape: (240, 320, 3) label: 3
Image 4 shape: (240, 320, 3) label: 4
Image 5 shape: (317, 500, 3) label: 3
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES = 224
def format_image(image, label):
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(IMAGE_RES, IMAGE_RES, 3))
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor.trainable= False
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(5)
])
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1280) 2257984
_________________________________________________________________
dense_1 (Dense) (None, 5) 6405
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS = 6
model.compile(optimizer = 'adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_batches,
epochs = EPOCHS,
validation_data = validation_batches)
###Output
Epoch 1/6
81/81 [==============================] - 37s 63ms/step - loss: 0.7644 - accuracy: 0.7275 - val_loss: 0.4378 - val_accuracy: 0.8574
Epoch 2/6
81/81 [==============================] - 4s 51ms/step - loss: 0.3745 - accuracy: 0.8793 - val_loss: 0.3577 - val_accuracy: 0.8856
Epoch 3/6
81/81 [==============================] - 4s 51ms/step - loss: 0.2940 - accuracy: 0.9058 - val_loss: 0.3196 - val_accuracy: 0.8983
Epoch 4/6
81/81 [==============================] - 4s 51ms/step - loss: 0.2426 - accuracy: 0.9272 - val_loss: 0.3097 - val_accuracy: 0.8937
Epoch 5/6
81/81 [==============================] - 4s 52ms/step - loss: 0.2127 - accuracy: 0.9381 - val_loss: 0.3008 - val_accuracy: 0.8992
Epoch 6/6
81/81 [==============================] - 4s 51ms/step - loss: 0.1851 - accuracy: 0.9482 - val_loss: 0.2922 - val_accuracy: 0.9028
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names = np.array(dataset_info.features['label'].names)
print(class_names)
###Output
['dandelion' 'daisy' 'tulips' 'sunflowers' 'roses']
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch = next(iter(train_batches))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
print(predicted_class_names)
###Output
['daisy' 'sunflowers' 'daisy' 'daisy' 'dandelion' 'roses' 'dandelion'
'daisy' 'roses' 'sunflowers' 'dandelion' 'sunflowers' 'tulips' 'daisy'
'sunflowers' 'dandelion' 'roses' 'sunflowers' 'tulips' 'sunflowers'
'daisy' 'tulips' 'roses' 'dandelion' 'dandelion' 'sunflowers' 'daisy'
'dandelion' 'sunflowers' 'roses' 'dandelion' 'tulips']
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
###Output
Labels: [1 3 1 1 0 4 0 1 4 3 0 3 2 1 3 0 4 3 2 3 1 2 4 0 0 3 1 0 3 4 0 2]
Predicted labels: [1 3 1 1 0 4 0 1 4 3 0 3 2 1 3 0 4 3 2 3 1 2 4 0 0 3 1 0 3 4 0 2]
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(18,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Perform Transfer Learning with the Inception ModelGo to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
###Code
IMAGE_RES = 299
def format_image(image, label):
image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES))/255.0
return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
input_shape=(IMAGE_RES, IMAGE_RES, 3),
trainable = False)
model = tf.keras.Sequential([feature_extractor,
layers.Dense(5)])
model.summary()
EPOCHS = 6
model.compile(optimizer = 'adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_batches,
epochs = EPOCHS,
validation_data = validation_batches)
###Output
Epoch 1/6
81/81 [==============================] - 25s 228ms/step - loss: 0.7328 - accuracy: 0.7462 - val_loss: 0.4767 - val_accuracy: 0.8474
Epoch 2/6
81/81 [==============================] - 14s 173ms/step - loss: 0.3896 - accuracy: 0.8747 - val_loss: 0.3552 - val_accuracy: 0.8837
Epoch 3/6
81/81 [==============================] - 15s 180ms/step - loss: 0.3051 - accuracy: 0.9062 - val_loss: 0.3368 - val_accuracy: 0.8819
Epoch 4/6
81/81 [==============================] - 15s 179ms/step - loss: 0.2587 - accuracy: 0.9206 - val_loss: 0.3057 - val_accuracy: 0.8937
Epoch 5/6
81/81 [==============================] - 14s 174ms/step - loss: 0.2280 - accuracy: 0.9338 - val_loss: 0.2964 - val_accuracy: 0.8910
Epoch 6/6
81/81 [==============================] - 14s 173ms/step - loss: 0.2005 - accuracy: 0.9385 - val_loss: 0.2943 - val_accuracy: 0.8865
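###Markdown
To make the requested comparison explicit, one option is to keep the two training runs under separate names instead of overwriting `history` (the names `history_mobilenet` and `history_inception` below are illustrative assumptions, not variables defined in this notebook) and then print the final validation accuracies:
###Code
# Assumes the MobileNet v2 and Inception v3 fits were stored under these illustrative names
print('MobileNet v2 final validation accuracy: {:.4f}'.format(history_mobilenet.history['val_accuracy'][-1]))
print('Inception v3 final validation accuracy: {:.4f}'.format(history_inception.history['val_accuracy'][-1]))
###Output
_____no_output_____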
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Run in Google Colab View source on GitHub TensorFlow Hub [TensorFlow Hub](http://tensorflow.org/hub) is an online repository of already trained TensorFlow models that you can use.These models can either be used as is, or they can be used for Transfer Learning.Transfer learning is a process where you take an existing trained model, and extend it to do additional work. This involves leaving the bulk of the model unchanged, while adding and retraining the final layers, in order to get a different set of possible outputs.Here, you can see all the models available in [TensorFlow Module Hub](https://tfhub.dev/).Before starting this Colab, you should reset the Colab environment by selecting `Runtime -> Reset all runtimes...` from menu above. Imports Some normal imports we've seen before. The new one is importing tensorflow_hub which this Colab will make heavy use of.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
TODO: Download the Flowers Dataset using TensorFlow Datasets In the cell below you will download the Flowers dataset using TensorFlow Datasets. If you look at the [TensorFlow Datasets documentation](https://www.tensorflow.org/datasets/datasetstf_flowers) you will see that the name of the Flowers dataset is `tf_flowers`. You can also see that this dataset is only split into a TRAINING set. You will therefore have to use `tfds.splits` to split this training set into a `training_set` and a `validation_set`. Do a `[70, 30]` split such that 70 corresponds to the `training_set` and 30 to the `validation_set`. Then load the `tf_flowers` dataset using `tfds.load`. Make sure the `tfds.load` function uses all the parameters you need, and also make sure it returns the dataset info, so we can retrieve information about the datasets.
###Code
splits =
(training_set, validation_set), dataset_info =
###Output
_____no_output_____
###Markdown
TODO: Print Information about the Flowers DatasetNow that you have downloaded the dataset, use the dataset info to print the number of classes in the dataset, and also write some code that counts how many images we have in the training and validation sets.
###Code
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(num_training_examples))
print('Total Number of Validation Images: {} \n'.format(num_validation_examples))
###Output
_____no_output_____
###Markdown
The images in the Flowers dataset are not all the same size.
###Code
for i, example in enumerate(training_set.take(5)):
print('Image {} shape: {} label: {}'.format(i+1, example[0].shape, example[1]))
###Output
_____no_output_____
###Markdown
TODO: Reformat Images and Create BatchesIn the cell below create a function that reformats all images to the resolution expected by MobileNet v2 (224, 224) and normalizes them. The function should take in an `image` and a `label` as arguments and should return the new `image` and corresponding `label`. Then create training and validation batches of size `32`.
###Code
IMAGE_RES =
def format_image(image, label):
return image, label
BATCH_SIZE =
train_batches =
validation_batches =
###Output
_____no_output_____
###Markdown
Do Simple Transfer Learning with TensorFlow HubLet's now use TensorFlow Hub to do Transfer Learning. Remember, in transfer learning we reuse parts of an already trained model and change the final layer, or several layers, of the model, and then retrain those layers on our own dataset. TODO: Create a Feature ExtractorIn the cell below create a `feature_extractor` using MobileNet v2. Remember that the partial model from TensorFlow Hub (without the final classification layer) is called a feature vector. Go to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) to see a list of available feature vectors. Click on the `tf2-preview/mobilenet_v2/feature_vector`. Read the documentation and get the corresponding `URL` to get the MobileNet v2 feature vector. Finally, create a `feature_extractor` by using `hub.KerasLayer` with the correct `input_shape` parameter.
###Code
URL =
feature_extractor =
###Output
_____no_output_____
###Markdown
TODO: Freeze the Pre-Trained ModelIn the cell below freeze the variables in the feature extractor layer, so that the training only modifies the final classifier layer.
###Code
feature_extractor
###Output
_____no_output_____
###Markdown
TODO: Attach a classification headIn the cell below create a `tf.keras.Sequential` model, and add the pre-trained model and the new classification layer. Remember that the classification layer must have the same number of classes as our Flowers dataset. Finally print a summary of the Sequential model.
###Code
model =
###Output
_____no_output_____
###Markdown
TODO: Train the modelIn the cell below train this model like any other, by first calling `compile` followed by `fit`. Make sure you use the proper parameters when applying both methods. Train the model for only 6 epochs.
###Code
EPOCHS =
history =
###Output
_____no_output_____
###Markdown
You can see we get ~88% validation accuracy with only 6 epochs of training, which is absolutely awesome. This is a huge improvement over the model we created in the previous lesson, where we were able to get ~76% accuracy with 80 epochs of training. The reason for this difference is that MobileNet v2 was carefully designed over a long time by experts, then trained on a massive dataset (ImageNet). TODO: Plot Training and Validation GraphsIn the cell below, plot the training and validation accuracy/loss graphs.
###Code
acc =
val_acc =
loss =
val_loss =
epochs_range =
###Output
_____no_output_____
###Markdown
What is a bit curious here is that validation performance is better than training performance, right from the start to the end of execution.One reason for this is that validation performance is measured at the end of the epoch, but training performance is the average values across the epoch.The bigger reason though is that we're reusing a large part of MobileNet which is already trained on Flower images. TODO: Check PredictionsIn the cell below get the label names from the dataset info and convert them into a NumPy array. Print the array to make sure you have the correct label names.
###Code
class_names =
###Output
_____no_output_____
###Markdown
TODO: Create an Image Batch and Make PredictionsIn the cell below, use the `next()` function to create an `image_batch` and its corresponding `label_batch`. Convert both the `image_batch` and `label_batch` to numpy arrays using the `.numpy()` method. Then use the `.predict()` method to run the image batch through your model and make predictions. Then use the `np.argmax()` function to get the indices of the best prediction for each image. Finally convert the indices of the best predictions to class names.
###Code
image_batch, label_batch =
predicted_batch =
predicted_batch = tf.squeeze(predicted_batch).numpy()
predicted_ids =
predicted_class_names =
###Output
_____no_output_____
###Markdown
TODO: Print True Labels and Predicted IndicesIn the cell below, print the true labels and the indices of predicted labels.
###Code
print()
###Output
_____no_output_____
###Markdown
Plot Model Predictions
###Code
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
###Output
_____no_output_____
###Markdown
TODO: Perform Transfer Learning with the Inception ModelGo to the [TensorFlow Hub documentation](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) and click on `tf2-preview/inception_v3/feature_vector`. This feature vector corresponds to the Inception v3 model. In the cells below, use transfer learning to create a CNN that uses Inception v3 as the pretrained model to classify the images from the Flowers dataset. Note that Inception takes as input images that are 299 x 299 pixels. Compare the accuracy you get with Inception v3 to the accuracy you got with MobileNet v2.
###Code
###Output
_____no_output_____ |
ICA/.ipynb_checkpoints/Independent Component Analysis Lab-zh-checkpoint.ipynb | ###Markdown
Independent Component Analysis LabIn this notebook we will use Independent Component Analysis (ICA) to extract signals from three observations, each of which contains a different mix of the original signals. This is the same problem explained in the ICA video. The DatasetLet's first take a look at the dataset at hand. We have three WAVE files, each of which, as mentioned, is a mix. If you haven't worked with audio files in python before, don't worry; they are really just lists of floats. Let's start by loading the first audio file, **[ICA mix 1.wav](ICA mix 1.wav)** [click to listen to the file]:
###Code
import numpy as np
import wave
# Read the wave file
mix_1_wave = wave.open('ICA mix 1.wav','r')
###Output
_____no_output_____
###Markdown
Let's look at the parameters of the wave file to learn more about the file
###Code
mix_1_wave.getparams()
###Output
_____no_output_____
###Markdown
The file has only one channel (so it is mono). Its frame rate is 44100, which means each second of sound consists of 44100 integers (integers, because the file is in the common PCM 16-bit format). The file has 264515 integers/frames in total, so its duration is:
###Code
264515/44100
###Output
_____no_output_____
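###Markdown
The same duration can be computed directly from the file's metadata instead of hard-coding the frame count (a small sketch using the standard `wave` API):
###Code
# Duration in seconds = number of frames / frames per second
mix_1_wave.getnframes() / mix_1_wave.getframerate()
###Output
_____no_output_____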
###Markdown
Let's extract the frames from the wave file; they will be part of the dataset we run ICA on:
###Code
# Extract Raw Audio from Wav File
signal_1_raw = mix_1_wave.readframes(-1)
signal_1 = np.frombuffer(signal_1_raw, dtype=np.int16)
###Output
_____no_output_____
###Markdown
signal_1 is now a list of integers representing the sound contained in the first file.
###Code
'length: ', len(signal_1) , 'first 100 elements: ',signal_1[:100]
###Output
_____no_output_____
###Markdown
If we plot this array as a line graph, we get the familiar waveform:
###Code
import matplotlib.pyplot as plt
fs = mix_1_wave.getframerate()
timing = np.linspace(0, len(signal_1)/fs, num=len(signal_1))
plt.figure(figsize=(12,2))
plt.title('Recording 1')
plt.plot(timing,signal_1, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
_____no_output_____
###Markdown
We can now load the other two wave files, **[ICA mix 2.wav](ICA mix 2.wav)** and **[ICA mix 3.wav](ICA mix 3.wav)**, in the same way
###Code
mix_2_wave = wave.open('ICA mix 2.wav','r')
#Extract Raw Audio from Wav File
signal_raw_2 = mix_2_wave.readframes(-1)
signal_2 = np.frombuffer(signal_raw_2, dtype=np.int16)
mix_3_wave = wave.open('ICA mix 3.wav','r')
#Extract Raw Audio from Wav File
signal_raw_3 = mix_3_wave.readframes(-1)
signal_3 = np.frombuffer(signal_raw_3, dtype=np.int16)
plt.figure(figsize=(12,2))
plt.title('Recording 2')
plt.plot(timing,signal_2, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
plt.figure(figsize=(12,2))
plt.title('Recording 3')
plt.plot(timing,signal_3, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
_____no_output_____
###Markdown
Now that we have read all three files, we can create the dataset using the [zip](https://docs.python.org/3/library/functions.html#zip) operation.* Create the dataset ```X``` by combining signal_1, signal_2, and signal_3 into a single list
###Code
X = list(zip(signal_1, signal_2, signal_3))
# Let's peak at what X looks like
X[:10]
###Output
_____no_output_____
###Markdown
We are now ready to run ICA to try to retrieve the original signals.* Import sklearn's [FastICA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html) module* Initialize FastICA looking for three components* Run the FastICA algorithm using fit_transform on dataset X
###Code
# TODO: Import FastICA
from sklearn.decomposition import FastICA
# TODO: Initialize FastICA with n_components=3
ica = FastICA(n_components=3)
# TODO: Run the FastICA algorithm using fit_transform on dataset X
ica_result = ica.fit_transform(X)
ica_result.shape
###Output
_____no_output_____
###Markdown
Let's split it into its individual signals and take a look at them
###Code
result_signal_1 = ica_result[:,0]
result_signal_2 = ica_result[:,1]
result_signal_3 = ica_result[:,2]
###Output
_____no_output_____
###Markdown
Let's plot the signals to see what the waveforms look like
###Code
# Plot Independent Component #1
plt.figure(figsize=(12,2))
plt.title('Independent Component #1')
plt.plot(result_signal_1, c="#df8efd")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #2
plt.figure(figsize=(12,2))
plt.title('Independent Component #2')
plt.plot(result_signal_2, c="#87de72")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #3
plt.figure(figsize=(12,2))
plt.title('Independent Component #3')
plt.plot(result_signal_3, c="#f65e97")
plt.ylim(-0.010, 0.010)
plt.show()
###Output
_____no_output_____
###Markdown
Do some of these waveforms look like musical waveforms? The best way to confirm the result is to listen to the resulting files. So let's save them as wave files and verify. Before doing that, we need to:* Convert them to integers (so we can save them as PCM 16-bit wave files), otherwise only some media players would be able to play them* Map the values to the appropriate range for int16 audio, which is between -32768 and +32767. A basic way to do the mapping is to multiply by 32767.* The volume is a bit low, so we can multiply by some value (e.g. 100) to increase the volume
###Code
from scipy.io import wavfile
# Convert to int, map the appropriate range, and increase the volume a little bit
result_signal_1_int = np.int16(result_signal_1*32767*100)
result_signal_2_int = np.int16(result_signal_2*32767*100)
result_signal_3_int = np.int16(result_signal_3*32767*100)
# Write wave files
wavfile.write("result_signal_1.wav", fs, result_signal_1_int)
wavfile.write("result_signal_2.wav", fs, result_signal_2_int)
wavfile.write("result_signal_3.wav", fs, result_signal_3_int)
###Output
_____no_output_____ |
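###Markdown
As a quick check that the files were written in the expected format, they can be reopened with the same `wave` API used earlier (a minimal sketch; `check_wave` is just an illustrative name):
###Code
# Reopen one of the generated files and inspect its parameters (channels, sample width, frame rate, ...)
check_wave = wave.open("result_signal_1.wav", 'r')
print(check_wave.getparams())
check_wave.close()
###Output
_____no_output_____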
Revisiting Numpy.ipynb | ###Markdown
How to get arrays out of numpy:
###Code
first_array = np.array([3,43,123])
###Output
_____no_output_____
###Markdown
* Please create an array with 7 elements in it: Getting a matrix out of a numpy array:
###Code
array_matrix = np.array([[2,34,5], [6,54,3]])
###Output
_____no_output_____
###Markdown
Now checking the dimensions (number of rows and columns) of the matrix.
###Code
array_matrix.shape
###Output
_____no_output_____
###Markdown
* Please create a matrix with 4 rows and 5 columns, and then check its shape with the appropriate method (see the sketch after the next cell). Iterating over a list: use a for loop and print the results
###Code
list1 = [34,5,678,6,5,4]
###Output
_____no_output_____
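###Markdown
A small sketch answering the two prompts above: create a 4 x 5 matrix and check its shape, then iterate over `list1` with a for loop and print each element (`matrix_4x5` is just an illustrative name):
###Code
# A 4 x 5 matrix built from a range and reshaped; .shape confirms the dimensions
matrix_4x5 = np.arange(20).reshape(4, 5)
print(matrix_4x5.shape)

# Plain Python iteration over the list, printing each element
for element in list1:
    print(element)
###Output
_____no_output_____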
###Markdown
Adding a value to each element of a list: in our case, please add 2 to each element. Likewise, in order to add any number to every element of a numpy array, we need to use vectorization:
###Code
array_case = np.array([34,5,678,6,5,4])
array_case + 2
###Output
_____no_output_____
###Markdown
* Please create any random array and add a number to it in the vectorized form: Checking nested looping: putting one condition inside another. In the case below we are getting all the elements from the list that are even
###Code
list1 = [34,5,678,6,5,3]
for i in list1:
if i % 2 == 0:
print(i)
###Output
34
678
6
###Markdown
* Please get the odd numbers from the list below: list1 = [34,5,678,6,5,3] Satisfying a condition on a numpy array, first in one dimension: get the elements that are greater than 15
###Code
num_arr = np.array([34,32,5,4,14,99,145])
num_arr > 15
###Output
_____no_output_____
###Markdown
Now we do not want booleans, but we want actual values:
###Code
num_arr[num_arr > 15]
###Output
_____no_output_____
###Markdown
* Please go through the above code and get the values that are less than 15: Creating a two-dimensional array:
###Code
two_dim = np.array([[1,2,3],[4,5,6],[7,8,9]])
two_dim
two_dim[two_dim > 5]
###Output
_____no_output_____
###Markdown
* Likewise, please get all the values that are less than 5. Slicing over the two dimensions:
###Code
two_dim[0:2,1]
###Output
_____no_output_____ |
code/neural_networks/train_heston_4.ipynb | ###Markdown
Training the Heston model part 4In this notebook we train a neural network for the Heston model for expiries in the range (0.12,0.40].Be aware that the datasets are rather large. Load, split and scale the datasets
###Code
import os, pandas as pd, numpy as np
wd = os.getcwd()
# Load contract grid:
logMoneyness = pd.read_csv(wd + '\\data\\logMoneyness.txt', delimiter=",", header = None).values
expiries = pd.read_csv(wd + '\\data\\expiries.txt', delimiter=",", header = None).values
# Set useful parameters:
nIn = 5
nOut = 325
# Load training data:
data_train = pd.read_csv(wd + '\\data\\training_and_test_data\\heston\\heston_training_data_4.csv', delimiter=",").values
x_train = data_train[:,:nIn]
y_train = data_train[:,nIn:nIn+nOut]
data_train = None
# Load test data:
data_test = pd.read_csv(wd + '\\data\\training_and_test_data\\heston\\heston_test_data_4.csv', delimiter=",").values
x_valid = data_test[:,:nIn]
y_valid = data_test[:,nIn:nIn+nOut]
data_test = None
# Normalise data:
from sklearn.preprocessing import StandardScaler
ub = np.reshape(np.array([25,1,10,0,1]), (1, 5))
lb = np.reshape(np.array([0,0.0025,0,-1,0.0025]), (1, 5))
def myscale(x):
res=np.zeros(nIn)
for i in range(nIn):
res[i]=(x[i] - (ub[0,i] + lb[0,i])*0.5) * 2 / (ub[0,i] - lb[0,i])
return res
def myinverse(x):
res=np.zeros(nIn)
for i in range(nIn):
res[i]=x[i]*(ub[0,i] - lb[0,i]) *0.5 + (ub[0,i] + lb[0,i])*0.5
return res
# Scale inputs:
x_train_mod = np.array([myscale(x) for x in x_train])
x_valid_mod = np.array([myscale(x) for x in x_valid])
# Scale and normalise output:
scale_y = StandardScaler()
y_train_mod = scale_y.fit_transform(y_train)
y_valid_mod = scale_y.transform(y_valid)
###Output
_____no_output_____
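###Markdown
A quick sanity check of the scaling above (a sketch, not part of the original workflow): scaled inputs should lie in [-1, 1], and `myinverse` should undo `myscale` up to floating-point error.
###Code
# Scaled training inputs should be within [-1, 1]
print(x_train_mod.min(), x_train_mod.max())
# Round trip: myinverse(myscale(x)) should recover the original parameters
print(np.max(np.abs(myinverse(myscale(x_train[0])) - x_train[0])))
###Output
_____no_output_____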
###Markdown
Define utility functions
###Code
import keras
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
keras.backend.set_floatx('float64')
def GetNetwork(nIn,nOut,nNodes,nLayers,actFun):
# Description: Creates a neural network of a specified structure
input1 = keras.layers.Input(shape=(nIn,))
layerTmp = keras.layers.Dense(nNodes,activation = actFun)(input1)
for i in range(nLayers-1):
layerTmp = keras.layers.Dense(nNodes,activation = actFun)(layerTmp)
output1 = keras.layers.Dense(nOut,activation = 'linear')(layerTmp)
return(keras.models.Model(inputs=input1, outputs=output1))
def TrainNetwork(nn,batchsize,numEpochs,objFun,optimizer,xTrain,yTrain,xTest,yTest):
# Description: Trains a neural network and returns the network including the history
# of the training process.
nn.compile(loss = objFun, optimizer = optimizer)
history = nn.fit(xTrain, yTrain, batch_size = batchsize,
validation_data = (xTest,yTest),
epochs = numEpochs, verbose = True, shuffle=1)
return nn,history.history['loss'],history.history['val_loss']
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square( y_pred - y_true )))
###Output
_____no_output_____
###Markdown
Define and train neural networkThis section can be skipped! Just go straight to "Load network" and load the already trained model
###Code
# Define model:
model = GetNetwork(nIn,nOut,200,3,'elu')
# Set seed
import random
random.seed(455165)
# Train network
model,loss1,vloss1 = TrainNetwork(model,32,500,root_mean_squared_error,'adam',x_train_mod,y_train_mod,x_valid_mod,y_valid_mod)
model,loss2,vloss2 = TrainNetwork(model,5000,200,root_mean_squared_error,'adam',x_train_mod,y_train_mod,x_valid_mod,y_valid_mod)
###Output
_____no_output_____
###Markdown
Save networkThis section can be skipped! Just go straight to "Load network" and load the already trained model
###Code
# Save model:
model.save(wd + '\\data\\neural_network_weights\\heston\\heston_model_4.h5')
# Save weights (and scalings) in JSON format:
# - You need to install 'json-tricks' first.
# - We need this file for proper import into Matlab, R... etc.
weights_and_more = model.get_weights()
weights_and_more.append(0.5*(ub + lb))
weights_and_more.append(np.power(0.5*(ub - lb),2))
weights_and_more.append(scale_y.mean_)
weights_and_more.append(scale_y.var_)
import codecs, json
for idx, val in enumerate(weights_and_more):
weights_and_more[idx] = weights_and_more[idx].tolist()
json_str = json.dumps(weights_and_more)
text_file = open(wd + "\\data\\neural_network_weights\\heston\\heston_weights_4.json", "w")
text_file.write(json_str)
text_file.close()
###Output
_____no_output_____
###Markdown
Load network
###Code
# Load already trained neural network:
model = keras.models.load_model(wd + '\\data\\neural_network_weights\\heston\\heston_model_4.h5',
custom_objects={'root_mean_squared_error': root_mean_squared_error})
###Output
_____no_output_____
###Markdown
Validate approximation
###Code
# Specify test sample to plot:
sample_ind = 5006
# Print parameters of test sample:
print("Model Parameters (kappa,vbar,eta,rho,v0): ",myinverse(x_valid_mod[sample_ind,:]))
import scipy, matplotlib.pyplot as plt
npts = 25
x_sample = x_valid_mod[sample_ind,:]
y_sample = y_valid_mod[sample_ind,:]
prediction = scale_y.inverse_transform(model.predict(x_valid_mod))
plt.figure(1,figsize=(14,12))
j = -1
for i in range(0,13):
j = j + 1
plt.subplot(4,4,j+1)
plt.plot(logMoneyness[i*npts:(i+1)*npts],y_valid[sample_ind,i*npts:(i+1)*npts],'b',label="True")
plt.plot(logMoneyness[i*npts:(i+1)*npts],prediction[sample_ind,i*npts:(i+1)*npts],'--r',label=" Neural network")
plt.title("Maturity=%1.3f "%expiries[i*npts])
plt.xlabel("log-moneyness")
plt.ylabel("Implied volatility")
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Week 3/Logistic Regression.ipynb | ###Markdown
Logistic Regression
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading the data
###Code
df = pd.read_csv('ChurnData.csv')
df.head()
###Output
_____no_output_____
###Markdown
Data selection and preprocessing
###Code
X = df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip']]
X.head()
y = df['churn'].astype('int')
y.head()
###Output
_____no_output_____
###Markdown
Normalizing the data
###Code
from sklearn import preprocessing
ss_X = preprocessing.StandardScaler()
ss_X.fit(X)
X = ss_X.transform(X)
X[0:5]
###Output
_____no_output_____
###Markdown
Train-test split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
###Output
_____no_output_____
###Markdown
Training the model
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model
###Output
_____no_output_____
###Markdown
Let's build our model using __LogisticRegression__ from the scikit-learn package. This class implements logistic regression and can use different numerical optimizers to find the parameters, including the 'newton-cg', 'lbfgs', 'liblinear', 'sag' and 'saga' solvers. You can find extensive information about the pros and cons of these optimizers in the scikit-learn documentation. The version of logistic regression in scikit-learn supports regularization, a technique used to mitigate overfitting in machine learning models. The __C__ parameter is the __inverse of the regularization strength__ and must be a positive float; smaller values specify stronger regularization. Now let's fit our model on the training set (a short sketch comparing a few values of C follows the fit below):
###Code
model.C = 0.1 # Changing the value of C to make it more regularized
model.solver = 'liblinear'
model
model.fit(X_train, y_train)
###Output
_____no_output_____
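###Markdown
A sketch of the effect of the regularization strength discussed above (the values of C below are illustrative, not part of the original lab): smaller C means stronger regularization and typically smaller coefficient magnitudes.
###Code
# Compare coefficient magnitudes for a few regularization strengths
for c_value in [0.01, 0.1, 1, 10]:
    lr_tmp = LogisticRegression(C=c_value, solver='liblinear')
    lr_tmp.fit(X_train, y_train)
    print(c_value, np.abs(lr_tmp.coef_).sum())
###Output
_____no_output_____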
###Markdown
Prediction
###Code
y_hat = model.predict(X_test)
y_hat[0:5]
###Output
_____no_output_____
###Markdown
**predict_proba** returns estimates for all classes, ordered by the label of classes. So, the first column is the probability of class 0, P(Y=0|X), and the second column is the probability of class 1, P(Y=1|X):
###Code
y_hat_proba = model.predict_proba(X_test)
y_hat_proba[0:5]
###Output
_____no_output_____
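###Markdown
A small check (sketch) of the column ordering described above: the columns of `predict_proba` follow `model.classes_`, so with labels 0 and 1 the second column is the churn probability P(Y=1|X).
###Code
# Columns of predict_proba are ordered according to model.classes_
print(model.classes_)
# Probability of churn (class 1) for the first five test samples
print(y_hat_proba[0:5, 1])
###Output
_____no_output_____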
###Markdown
Evaluation
###Code
from sklearn import metrics
print('Accuracy of the Logistic Model using accuracy_score is %.9f' % metrics.accuracy_score(y_hat, y_test))
###Output
Accuracy of the Logistic Model using accuracy_score is 0.700000000
###Markdown
Jaccard-Index Score
###Code
from sklearn import metrics
print('Accuracy of the Logistic Model using jaccard_similarity_score is %.9f' % metrics.jaccard_similarity_score(y_hat, y_test))
###Output
Accuracy of the Logistic Model using jaccard_similarity_score is 0.700000000
###Markdown
Confusion Matrix A confusion matrix is a way of representing the results in a 2D matrix: rows correspond to the actual classes and columns to the predicted classes, so (Actual No, Predicted No) = a, (Actual No, Predicted Yes) = b, (Actual Yes, Predicted No) = c, (Actual Yes, Predicted Yes) = d.
###Code
from sklearn import metrics
print(metrics.confusion_matrix(y_test, y_hat))
print(metrics.confusion_matrix(y_test, y_hat, labels=[1, 0])) # Tweaking the positions of rows and columns
###Output
[[ 6 5]
[ 7 22]]
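###Markdown
A sketch of how the matrix entries relate to other metrics (scikit-learn's default ordering flattens a binary confusion matrix to tn, fp, fn, tp):
###Code
# Unpack the confusion matrix and derive precision and recall for the positive class
tn, fp, fn, tp = metrics.confusion_matrix(y_test, y_hat).ravel()
print('precision:', tp / (tp + fp))
print('recall:', tp / (tp + fn))
###Output
_____no_output_____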
###Markdown
Log Loss
###Code
from sklearn import metrics
print('Accuracy of the Logistic Model using log_loss is %.9f' % metrics.log_loss(y_test, y_hat_proba))
###Output
Accuracy of the Logistic Model using log_loss is 0.490226788
|
notebooks/2.2_Hadamard_test.ipynb | ###Markdown
2-2. The Hadamard test As the simplest quantum algorithm, consider the following quantum circuit (Figure 1), called the Hadamard test. That is, the first qubit is initialized to $|0\rangle$ and the second and later qubits to the state $|\psi\rangle$; first, a Hadamard gate is applied to the first qubit. Then the controlled unitary operator $\Lambda(U)$ (described below) is applied to the whole system, a Hadamard gate is applied to the first qubit again, and finally the first qubit is measured.![Figure 1](https://github.com/kumagaimasahito/quantum-native-dojo/blob/master/notebooks/figs/2/Hadamard_test.png?raw=1) Here, the controlled unitary operator $\Lambda(U)$ is the unitary operation that does nothing when the first qubit is $|0\rangle$ and applies $U$ when it is $|1\rangle$:$$\Lambda (U) = |0\rangle \langle 0| \otimes I + |1\rangle \langle 1| \otimes U.$$In other words, the computation branches on whether the first qubit is $|0\rangle$ or $|1\rangle$, performing either "do nothing" or "apply $U$". A classical computer cannot execute such conditional branches simultaneously, but a quantum computer can use superposition of states to **execute both branches in parallel**. Let us look at how this Hadamard test works. For simplicity, first consider the case where the quantum state $|\psi \rangle$ is an eigenstate (eigenvector) of the unitary operation (matrix) $U$ with eigenvalue $e^{i \lambda}$:\begin{eqnarray}U|\psi \rangle = e^{i \lambda} |\psi\rangle.\end{eqnarray} Applying the Hadamard operation $H$ to the first qubit gives\begin{eqnarray}\frac{1}{\sqrt{2}} (|0\rangle + |1\rangle) \otimes |\psi \rangle \end{eqnarray}Applying the controlled-$U$ operation then yields **the eigenvalue $e^{i\lambda}$ as a relative phase of the first qubit** (this is called **phase kickback**):\begin{eqnarray}&&\frac{1}{\sqrt{2}} (|0\rangle \otimes |\psi \rangle + |1\rangle \otimes U|\psi \rangle )\\&=&\frac{1}{\sqrt{2}} (|0\rangle \otimes |\psi \rangle +e^{i \lambda} |1\rangle \otimes |\psi \rangle )\\&=&\frac{1}{\sqrt{2}} (|0\rangle +e^{i \lambda} |1\rangle )\otimes |\psi \rangle.\end{eqnarray}Finally, applying the Hadamard operation to the first qubit once more gives\begin{eqnarray}\left(\frac{1+e^{i\lambda}}{2}|0\rangle +\frac{1-e^{i\lambda}}{2} |1\rangle \right)\otimes |\psi \rangle \label{eq01}\end{eqnarray}When the first qubit is measured, the probability of obtaining the outcome $m=0,1$ is\begin{eqnarray}p_{m}=\left|\frac{1+(-1)^m e^{i\lambda}}{2}\right|^2 =\frac{1+(-1)^m \cos \lambda}{2}\end{eqnarray}Since $|\psi \rangle$, $U$, and $\langle \psi |$ are, respectively, a $2^n$-dimensional column vector, a $2^n \times 2^n$ matrix, and a $2^n$-dimensional row vector, computing this Hadamard test naively on a classical computer requires exponentially large memory and an exponential number of operations. On a quantum computer, on the other hand, $m$ is sampled from the probability distribution $p_m$, so if we want to estimate $\cos \lambda$ to within some error $\epsilon$, it suffices to take a number of samples polynomial in $1/\epsilon$. Performing the same computation for a general input, which is not necessarily an eigenvector, the state before measurement is$$ |0\rangle \frac{I+U}{2} |\psi \rangle + |1\rangle \frac{I-U}{2} |\psi \rangle $$and the probabilities of obtaining 0 or 1 are\begin{align}p_0 &= \frac{1+ {\rm Re} \langle \psi | U | \psi \rangle }{2} \\p_1 &= \frac{1- {\rm Re} \langle \psi | U | \psi \rangle }{2} \tag{1}\end{align}In other words, by running the Hadamard test on a quantum computer and taking the sample mean of the measurement results, **we can estimate the value of the unitary matrix $U$ sandwiched between the vector $|\psi \rangle$ and its conjugate**. Computing the same value on a classical computer takes exponential time, since the dimension of the vectors and matrices grows exponentially with the number of qubits $n$. Note that after the first qubit is measured, the state of the second qubit becomes the following, depending on the measurement result $m = 0, 1$ (normalization factors omitted):$$|\psi_0\rangle = \frac{I + U}{2}|\psi\rangle,\quad|\psi_1\rangle = \frac{I - U}{2}|\psi\rangle.$$Now consider the case where $U$ is a single-qubit unitary whose eigenvalues are $\pm 1$. Expanding $|\psi\rangle = c_1|u_1\rangle + c_{-1}|u_{-1}\rangle$ in the eigenvectors $|u_1\rangle$, $|u_{-1}\rangle$ corresponding to the eigenvalues $\pm 1$ and substituting, one finds that the post-measurement states $|\psi_0\rangle$, $|\psi_1\rangle$ are the eigenstates corresponding to the eigenvalues $\pm 1$, respectively. Even when the eigenvalues are not $\pm 1$, feeding the output of the Hadamard test back in as input and repeating drives the state towards an eigenstate of $U$ (interested readers are encouraged to try this, using the example below as a reference). Implementation in SymPy As a concrete example, let us consider the case $U=H$ (the Hadamard gate). Let the ancilla qubit be $|0\rangle$ and let the input $|\psi\rangle$ of the Hadamard test also be $|0\rangle$.
###Code
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit,QubitBra
init_printing() # to display vectors and matrices nicely
from sympy.physics.quantum.gate import X,Y,Z,H,S,T,CNOT,SWAP,CPHASE,CGateS
# Run the following only on Google Colaboratory
from IPython.display import HTML
def setup_mathjax():
display(HTML('''
<script>
if (!window.MathJax && window.google && window.google.colab) {
window.MathJax = {
'tex2jax': {
'inlineMath': [['$', '$'], ['\\(', '\\)']],
'displayMath': [['$$', '$$'], ['\\[', '\\]']],
'processEscapes': true,
'processEnvironments': true,
'skipTags': ['script', 'noscript', 'style', 'textarea', 'code'],
'displayAlign': 'center',
},
'HTML-CSS': {
'styles': {'.MathJax_Display': {'margin': 0}},
'linebreaks': {'automatic': true},
// Disable to prevent OTF font loading, which aren't part of our
// distribution.
'imageFont': null,
},
'messageStyle': 'none'
};
var script = document.createElement("script");
script.src = "https://colab.research.google.com/static/mathjax/MathJax.js?config=TeX-AMS_HTML-full,Safe";
document.head.appendChild(script);
}
</script>
'''))
get_ipython().events.register('pre_run_cell', setup_mathjax)
state = Qubit('00')
###Output
_____no_output_____
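###Markdown
A purely numerical cross-check of the probability formulas above (a sketch using numpy, not part of the original SymPy walkthrough): build $\Lambda(H)$ explicitly for the ancilla $|0\rangle$ and input $|\psi\rangle=|0\rangle$ and verify that $p_0 = (1+{\rm Re}\langle \psi | H | \psi \rangle)/2$.
###Code
import numpy as np
H_mat = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|
# Controlled-U with the first qubit as control: |0><0| (x) I + |1><1| (x) U
Lambda_H = np.kron(P0, I2) + np.kron(P1, H_mat)
psi = np.array([1.0, 0.0])                    # |psi> = |0>
state = np.kron(np.array([1.0, 0.0]), psi)    # |0> (x) |psi>
state = np.kron(H_mat, I2) @ state            # H on the first qubit
state = Lambda_H @ state                      # controlled-H
state = np.kron(H_mat, I2) @ state            # H on the first qubit again
p0 = np.linalg.norm(state[:2])**2             # probability of measuring 0 on the first qubit
print(p0, (1 + psi @ H_mat @ psi) / 2)        # both ~ (1 + 1/sqrt(2)) / 2
###Output
_____no_output_____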
###Markdown
The controlled-H operation can be written using `CGateS()` as
###Code
ctrlH = CGateS(1,H(0))
represent(ctrlH,nqubits=2)
###Output
_____no_output_____
###Markdown
and is displayed as the matrix above. The state before measurement is
###Code
H(1)*ctrlH*H(1)*state
###Output
_____no_output_____
###Markdown
which can be written as above; letting SymPy evaluate it gives
###Code
qapply(H(1)*ctrlH*H(1)*state)
###Output
_____no_output_____
###Markdown
Using SymPy's `measure_partial` function introduced in Chapter 1 to obtain the quantum state and probability when the measurement result of the first qubit is 0, we get
###Code
from sympy.physics.quantum.qubit import measure_all, measure_partial, measure_all_oneshot, measure_partial_oneshot
measured_state_and_probability_zero = measure_partial(qapply(H(1)*ctrlH*H(1)*state),(1,))[0]
simplify(measured_state_and_probability_zero)
###Output
_____no_output_____
###Markdown
Since $\langle 0 | H | 0\rangle = 1/\sqrt{2}$, we see that the measurement probability agrees with equation (1). Furthermore, applying $H$ to the second qubit shows that the post-measurement state is an eigenvector of $H$ (note that SymPy's qubit indices are assigned so that, from the left, the first qubit is 1 and the second is 0).
###Code
measured_state_zero = measured_state_and_probability_zero[0]
simplify(qapply(H(0)*measured_state_zero))
###Output
_____no_output_____
###Markdown
Similarly, when the measurement result 1 is obtained, you can verify that the state is the eigenstate with eigenvalue −1, so give it a try.
###Code
measured_state_one = measure_partial(qapply(H(1)*ctrlH*H(1)*state),(1,))[1][0]
simplify(qapply(H(0)*measured_state_one))
###Output
_____no_output_____ |
notebooks/4. Cluster temporal stability.ipynb | ###Markdown
It's worth remembering that the first PC is mostly driven by standard deviation while all variables contribute to the second. Cluster stability - series temporal stabilityFor the purpose of this notebook, cluster stability is defined as the tendency of series to belong to the same cluster over time. A baseline clustering algorithm and clusters are defined in this section and the series are classified at different times using the same model. Two experiments are run: a cumulative time experiment (i.e. each period starts at day 1), and a rolling experiment. Both experiments are run on timeseries and visualised on the principal components.**NOTE**: This is conducted well aware of [Keogh et al.](http://www.cs.ucr.edu/~eamonn/meaningless.pdf). Results will have to be examined with great care. Training the modelThe clustering model is a TimeSeriesKMeans using [soft DTW](https://arxiv.org/pdf/1703.01541.pdf) as distance metric.
###Code
from tslearn import utils, clustering
%%time
# training can take up to 30 minutes, depending on hardware resources
seed(1)
series = utils.to_time_series_dataset(data.T.values)
km = clustering.TimeSeriesKMeans(n_clusters=15, metric="softdtw", max_iter=10)
km.fit(series)
###Output
571448.112 --> 631432.750 --> 642338.522 --> 646458.893 --> 646657.061 --> 646932.626 --> 647203.437 --> 647448.775 --> 647462.378 --> 647647.453 -->
CPU times: user 16min 39s, sys: 24.5 s, total: 17min 3s
Wall time: 17min 2s
###Markdown
Cumulative time period
###Code
from matplotlib import animation, rc
from IPython.display import HTML
sys.path.append("../../iz4vve/utils")
import tools
cluster_assignments = list()
cumulative = None
for chunk in tqdm.tqdm_notebook(tools.chunker(data, 90)):
if cumulative is None:
cumulative = chunk.values
else:
cumulative = np.concatenate([cumulative, chunk.values])
clusters = km.predict(cumulative.T)
cluster_assignments.append(clusters)
pd.DataFrame(cluster_assignments).plot(legend=False)
plt.title("Series cluster evolution")
plt.show()
changes = pd.DataFrame(cluster_assignments).diff().abs()
print("how many times series change cluster:")
for c in list(changes):
print(f"{c} --> {np.count_nonzero(changes[c].values)}")
cluster_assign = pd.DataFrame(cluster_assignments)
cluster_assign.shape
import ipywidgets as widgets
@widgets.interact(x=range(1, cluster_assign.shape[0]))
def g(x):
_df = cluster_assign.iloc[x]
plt.scatter(*list(zip(*pcaed)), c=_df, cmap="rainbow")
plt.title(f"Cluster at time {x}")
plt.show()
###Output
_____no_output_____
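###Markdown
One way to put a single number on the stability defined above (a sketch, assuming scikit-learn is available): the adjusted Rand index between consecutive assignments, where values close to 1 indicate that the partitions barely change.
###Code
from sklearn.metrics import adjusted_rand_score
# Adjusted Rand index between consecutive cluster assignments (1.0 = identical partitions)
ari = [adjusted_rand_score(cluster_assignments[i - 1], cluster_assignments[i])
       for i in range(1, len(cluster_assignments))]
print(ari)
###Output
_____no_output_____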
###Markdown
The graph above shows some slight changes for cluster assignments over time, but clusters appear overall stable. Rolling time period
###Code
# %%time
cluster_assignments = list()
for chunk in tqdm.tqdm_notebook(tools.chunker(data, 90)):
clusters = km.predict(chunk.T.values)
cluster_assignments.append(clusters)
pd.DataFrame(cluster_assignments).plot(legend=False)
plt.title("Series cluster evolution")
plt.show()
changes = pd.DataFrame(cluster_assignments).diff().abs()
print("how many times series change cluster:")
for c in list(changes):
print(f"{c} --> {np.count_nonzero(changes[c].values)}")
cluster_assign = pd.DataFrame(cluster_assignments)
cluster_assign.shape
import ipywidgets as widgets
@widgets.interact(x=range(1, cluster_assign.shape[0]))
def g(x):
_df = cluster_assign.iloc[x]
plt.scatter(*list(zip(*pcaed)), c=_df, cmap="rainbow")
plt.title(f"Cluster at time {x}")
plt.show()
###Output
_____no_output_____ |
Notebooks/.ipynb_checkpoints/RunMilaXraysClassification-checkpoint.ipynb | ###Markdown
Imports
###Code
import warnings
warnings.filterwarnings(action='ignore')
import tensorflow as tf
from tensorflow import keras
import sklearn
from sklearn.metrics import roc_curve, auc, log_loss, precision_score, f1_score, recall_score, confusion_matrix
from sklearn.model_selection import KFold, StratifiedKFold
import matplotlib as mplb
import matplotlib.pyplot as plt
#plt.style.use('ggplot')
import numpy as np
import pandas as pd
import seaborn as sns
import os
import zipfile
import shutil
import getpass
import requests
from IPython.display import clear_output
from tqdm.notebook import tqdm
import datetime
%load_ext tensorboard
print(f'[INFO] Using tensorflow-gpu {tf.__version__}')
###Output
[INFO] Using tensorflow-gpu 2.3.0
###Markdown
Config
###Code
os.environ['TF_CPP_MIN_LOG_LEVEL'] = "2"
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
seed_val = 2020
# set seed
np.random.seed(seed=seed_val)
tf.random.set_seed(seed=seed_val)
###Output
_____no_output_____
###Markdown
Params
###Code
IMG_SIZE = 224
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
BATCH_SIZE = 64
class_names = ['NEG', 'POS']
base_dir = '../'
train_images_dir = os.path.join(base_dir, 'Datasets/Images', 'train')
val_images_dir = os.path.join(base_dir, 'Datasets/Images', 'val')
test_images_dir = os.path.join(base_dir, 'Datasets/Images', 'test')
train_csv_path = os.path.join(base_dir, 'Datasets/Csv', 'Train.csv')
test_csv_path = os.path.join(base_dir, 'Datasets/Csv', 'Test.csv')
sample_csv_path = os.path.join(base_dir, 'Datasets/Csv', 'Train.csv')
train_df = pd.read_csv(train_csv_path)
test_df = pd.read_csv(test_csv_path)
sample_sub_df = pd.read_csv(sample_csv_path)
train_df.head()
test_df.head()
sample_sub_df.tail()
###Output
_____no_output_____
###Markdown
Datasets & Dataloaders
###Code
image_generator = keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
preprocessing_function=keras.applications.efficientnet.preprocess_input,
rotation_range=33,
brightness_range=[0.3, 1.0],
zoom_range=0.3,
fill_mode='nearest',
horizontal_flip=True,
vertical_flip=True,
#rescale=1./255.0,
validation_split=0.25)
train_generator = image_generator.flow_from_directory(directory=train_images_dir+'/train',
target_size=(IMG_SIZE, IMG_SIZE),
batch_size=BATCH_SIZE,
seed=seed_val,
subset='training')
validation_generator = image_generator.flow_from_directory(directory=train_images_dir+'/train',
target_size=(IMG_SIZE, IMG_SIZE),
batch_size=BATCH_SIZE,
seed=seed_val,
subset='validation')
for imgs, labels in train_generator:
print(f"First image shape : {imgs[0].shape}, label : {labels[0]}")
break
###Output
First image shape : (224, 224, 3), label : [0. 1.]
###Markdown
Visualization
###Code
def show_training_sample(batch_size=BATCH_SIZE):
imgs, labs = next(iter(train_generator))
plt.figure(figsize=(22, 18))
for i in range(min(25, batch_size)):
l, c = 5, 5
img = imgs[i]
label = class_names[tf.argmax(labs[i])]
ax = plt.subplot(l, c, i+1)
plt.imshow(img)
plt.title(label)
plt.axis("off")
###Output
_____no_output_____
###Markdown
show_training_sample()
###Code
arch_name = "EfficientNetB4"
base_arch = getattr(tf.keras.applications, arch_name)
base_model = base_arch(include_top=False, input_shape=IMG_SHAPE)
# freeze trained layers
for layer in base_model.layers:
layer.trainable = False
def build_model(fc_size=2, n_dense_units=512):
inputs = inputs = keras.Input(shape=IMG_SHAPE)
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(units=n_dense_units, activation='relu')(x)
x = keras.layers.Dropout(0.3)(x)
if fc_size > 1:
predictions = keras.layers.Dense(units=fc_size, activation="softmax")(x)
else:
predictions = keras.layers.Dense(units=1, activation="sigmoid")(x)
model = keras.Model(inputs = inputs, outputs=predictions)
return model
model = build_model(fc_size=2, n_dense_units=1024)
model.summary()
###Output
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
efficientnetb4 (Functional) (None, 7, 7, 1792) 17673823
_________________________________________________________________
global_average_pooling2d (Gl (None, 1792) 0
_________________________________________________________________
dense (Dense) (None, 1024) 1836032
_________________________________________________________________
dropout (Dropout) (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 2) 2050
=================================================================
Total params: 19,511,905
Trainable params: 1,838,082
Non-trainable params: 17,673,823
_________________________________________________________________
###Markdown
Training phase
###Code
# training params
# optimizer
lr = 2e-5
optimizer = keras.optimizers.Adam(learning_rate=lr)
# loss
loss_fn = keras.losses.CategoricalCrossentropy()
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['AUC'])
num_epochs = 50
optim_name = optimizer.get_config()['name']
model_name = f'tf_model_x_rays_based_on_{arch_name}_and_{optim_name}.h5'
model_path = os.path.join(base_dir, 'Models', model_name)
# CALLBACKS
auc_ckpt = keras.callbacks.ModelCheckpoint(filepath=model_path,
verbose=1,
monitor='val_auc',
mode='max',
save_best_only=True)
acc_ckpt = keras.callbacks.ModelCheckpoint(filepath=model_path,
verbose=1,
mode='max',
monitor='val_accuracy',
save_best_only=True)
loss_ckpt = keras.callbacks.ModelCheckpoint(filepath=model_path,
verbose=1,
mode='min',
monitor='val_loss',
save_best_only=True)
es = keras.callbacks.EarlyStopping(monitor='val_loss',
patience=20,
verbose=1,
restore_best_weights=True)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_auc',
factor=0.1,
patience=10,
verbose=1,
mode='max',
min_lr=lr)
LOGDIR = os.path.join(base_dir, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = keras.callbacks.TensorBoard(LOGDIR, histogram_freq=1)
# bot config
#bot_callback = botCallback(access_token=access_token)
#plotter = Plotter(access_token)
CALLBACKS = [auc_ckpt, loss_ckpt, es, reduce_lr, tensorboard_callback] #bot_callback, plotter]
print(LOGDIR)
%tensorboard --logdir {LOGDIR}
h = model.fit(train_generator,
validation_data=validation_generator,
epochs=num_epochs,
steps_per_epoch=train_generator.n // BATCH_SIZE,
validation_steps=validation_generator.n // BATCH_SIZE,
callbacks=CALLBACKS)
###Output
Epoch 1/50
1/8 [==>...........................] - ETA: 0s - loss: 0.6422 - auc: 0.6985WARNING:tensorflow:From /home/zeusdric/anaconda3/envs/tf2-gpu/lib/python3.7/site-packages/tensorflow/python/ops/summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
8/8 [==============================] - ETA: 0s - loss: 0.6934 - auc: 0.5639
Epoch 00001: val_auc improved from -inf to 0.59793, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00001: val_loss improved from inf to 0.67910, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 95s 12s/step - loss: 0.6934 - auc: 0.5639 - val_loss: 0.6791 - val_auc: 0.5979
Epoch 2/50
8/8 [==============================] - ETA: 0s - loss: 0.6624 - auc: 0.6456
Epoch 00002: val_auc improved from 0.59793 to 0.67050, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00002: val_loss improved from 0.67910 to 0.65588, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 79s 10s/step - loss: 0.6624 - auc: 0.6456 - val_loss: 0.6559 - val_auc: 0.6705
Epoch 3/50
8/8 [==============================] - ETA: 0s - loss: 0.6505 - auc: 0.6680
Epoch 00003: val_auc improved from 0.67050 to 0.73206, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00003: val_loss improved from 0.65588 to 0.62319, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 75s 9s/step - loss: 0.6505 - auc: 0.6680 - val_loss: 0.6232 - val_auc: 0.7321
Epoch 4/50
8/8 [==============================] - ETA: 0s - loss: 0.6537 - auc: 0.6545
Epoch 00004: val_auc improved from 0.73206 to 0.76651, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00004: val_loss improved from 0.62319 to 0.60894, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 75s 9s/step - loss: 0.6537 - auc: 0.6545 - val_loss: 0.6089 - val_auc: 0.7665
Epoch 5/50
8/8 [==============================] - ETA: 0s - loss: 0.6228 - auc: 0.7188
Epoch 00005: val_auc did not improve from 0.76651
Epoch 00005: val_loss did not improve from 0.60894
8/8 [==============================] - 67s 8s/step - loss: 0.6228 - auc: 0.7188 - val_loss: 0.6105 - val_auc: 0.7513
Epoch 6/50
8/8 [==============================] - ETA: 0s - loss: 0.6236 - auc: 0.7125
Epoch 00006: val_auc improved from 0.76651 to 0.76797, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00006: val_loss improved from 0.60894 to 0.60704, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 76s 10s/step - loss: 0.6236 - auc: 0.7125 - val_loss: 0.6070 - val_auc: 0.7680
Epoch 7/50
8/8 [==============================] - ETA: 0s - loss: 0.5936 - auc: 0.7679
Epoch 00007: val_auc improved from 0.76797 to 0.78375, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00007: val_loss improved from 0.60704 to 0.58755, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 72s 9s/step - loss: 0.5936 - auc: 0.7679 - val_loss: 0.5875 - val_auc: 0.7838
Epoch 8/50
8/8 [==============================] - ETA: 0s - loss: 0.5921 - auc: 0.7702
Epoch 00008: val_auc improved from 0.78375 to 0.80460, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00008: val_loss improved from 0.58755 to 0.56840, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 73s 9s/step - loss: 0.5921 - auc: 0.7702 - val_loss: 0.5684 - val_auc: 0.8046
Epoch 9/50
8/8 [==============================] - ETA: 0s - loss: 0.5965 - auc: 0.7603
Epoch 00009: val_auc improved from 0.80460 to 0.81714, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00009: val_loss improved from 0.56840 to 0.55954, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 76s 9s/step - loss: 0.5965 - auc: 0.7603 - val_loss: 0.5595 - val_auc: 0.8171
Epoch 10/50
8/8 [==============================] - ETA: 0s - loss: 0.5813 - auc: 0.7731
Epoch 00010: val_auc improved from 0.81714 to 0.83807, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00010: val_loss improved from 0.55954 to 0.54760, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 73s 9s/step - loss: 0.5813 - auc: 0.7731 - val_loss: 0.5476 - val_auc: 0.8381
Epoch 11/50
8/8 [==============================] - ETA: 0s - loss: 0.5799 - auc: 0.7748
Epoch 00011: val_auc did not improve from 0.83807
Epoch 00011: val_loss did not improve from 0.54760
8/8 [==============================] - 75s 9s/step - loss: 0.5799 - auc: 0.7748 - val_loss: 0.5915 - val_auc: 0.7609
Epoch 12/50
8/8 [==============================] - ETA: 0s - loss: 0.5616 - auc: 0.8030
Epoch 00012: val_auc did not improve from 0.83807
Epoch 00012: val_loss did not improve from 0.54760
8/8 [==============================] - 78s 10s/step - loss: 0.5616 - auc: 0.8030 - val_loss: 0.5625 - val_auc: 0.8044
Epoch 13/50
8/8 [==============================] - ETA: 0s - loss: 0.5663 - auc: 0.7935
Epoch 00013: val_auc improved from 0.83807 to 0.85486, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00013: val_loss improved from 0.54760 to 0.52061, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 83s 10s/step - loss: 0.5663 - auc: 0.7935 - val_loss: 0.5206 - val_auc: 0.8549
Epoch 14/50
8/8 [==============================] - ETA: 0s - loss: 0.5538 - auc: 0.8124
Epoch 00014: val_auc did not improve from 0.85486
Epoch 00014: val_loss did not improve from 0.52061
8/8 [==============================] - 77s 10s/step - loss: 0.5538 - auc: 0.8124 - val_loss: 0.5441 - val_auc: 0.8215
Epoch 15/50
8/8 [==============================] - ETA: 0s - loss: 0.5785 - auc: 0.7746
Epoch 00015: val_auc did not improve from 0.85486
Epoch 00015: val_loss did not improve from 0.52061
8/8 [==============================] - 84s 10s/step - loss: 0.5785 - auc: 0.7746 - val_loss: 0.5386 - val_auc: 0.8343
Epoch 16/50
8/8 [==============================] - ETA: 0s - loss: 0.5451 - auc: 0.8173
Epoch 00016: val_auc did not improve from 0.85486
Epoch 00016: val_loss did not improve from 0.52061
8/8 [==============================] - 83s 10s/step - loss: 0.5451 - auc: 0.8173 - val_loss: 0.5391 - val_auc: 0.8217
Epoch 17/50
8/8 [==============================] - ETA: 0s - loss: 0.5571 - auc: 0.7966
Epoch 00017: val_auc did not improve from 0.85486
Epoch 00017: val_loss did not improve from 0.52061
8/8 [==============================] - 79s 10s/step - loss: 0.5571 - auc: 0.7966 - val_loss: 0.5721 - val_auc: 0.7830
Epoch 18/50
8/8 [==============================] - ETA: 0s - loss: 0.5687 - auc: 0.7858
Epoch 00018: val_auc did not improve from 0.85486
Epoch 00018: val_loss did not improve from 0.52061
8/8 [==============================] - 81s 10s/step - loss: 0.5687 - auc: 0.7858 - val_loss: 0.5330 - val_auc: 0.8239
Epoch 19/50
8/8 [==============================] - ETA: 0s - loss: 0.5516 - auc: 0.8050
Epoch 00019: val_auc did not improve from 0.85486
Epoch 00019: val_loss did not improve from 0.52061
8/8 [==============================] - 78s 10s/step - loss: 0.5516 - auc: 0.8050 - val_loss: 0.5552 - val_auc: 0.7982
Epoch 20/50
8/8 [==============================] - ETA: 0s - loss: 0.5376 - auc: 0.8173
Epoch 00020: val_auc did not improve from 0.85486
Epoch 00020: val_loss improved from 0.52061 to 0.51275, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 86s 11s/step - loss: 0.5376 - auc: 0.8173 - val_loss: 0.5127 - val_auc: 0.8428
Epoch 21/50
8/8 [==============================] - ETA: 0s - loss: 0.5453 - auc: 0.8063
Epoch 00021: val_auc did not improve from 0.85486
Epoch 00021: val_loss did not improve from 0.51275
8/8 [==============================] - 79s 10s/step - loss: 0.5453 - auc: 0.8063 - val_loss: 0.5609 - val_auc: 0.7899
Epoch 22/50
8/8 [==============================] - ETA: 0s - loss: 0.5445 - auc: 0.8059
Epoch 00022: val_auc did not improve from 0.85486
Epoch 00022: val_loss improved from 0.51275 to 0.50560, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 89s 11s/step - loss: 0.5445 - auc: 0.8059 - val_loss: 0.5056 - val_auc: 0.8536
Epoch 23/50
8/8 [==============================] - ETA: 0s - loss: 0.5698 - auc: 0.7779
Epoch 00023: val_auc did not improve from 0.85486
Epoch 00023: val_loss did not improve from 0.50560
8/8 [==============================] - 78s 10s/step - loss: 0.5698 - auc: 0.7779 - val_loss: 0.5385 - val_auc: 0.8146
Epoch 24/50
8/8 [==============================] - ETA: 0s - loss: 0.5234 - auc: 0.8364
Epoch 00024: val_auc improved from 0.85486 to 0.87396, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00024: val_loss improved from 0.50560 to 0.48089, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 82s 10s/step - loss: 0.5234 - auc: 0.8364 - val_loss: 0.4809 - val_auc: 0.8740
Epoch 25/50
8/8 [==============================] - ETA: 0s - loss: 0.5236 - auc: 0.8258
Epoch 00025: val_auc did not improve from 0.87396
Epoch 00025: val_loss did not improve from 0.48089
8/8 [==============================] - 81s 10s/step - loss: 0.5236 - auc: 0.8258 - val_loss: 0.5136 - val_auc: 0.8475
Epoch 26/50
8/8 [==============================] - ETA: 0s - loss: 0.5063 - auc: 0.8405
Epoch 00026: val_auc did not improve from 0.87396
Epoch 00026: val_loss did not improve from 0.48089
8/8 [==============================] - 79s 10s/step - loss: 0.5063 - auc: 0.8405 - val_loss: 0.5484 - val_auc: 0.8096
Epoch 27/50
8/8 [==============================] - ETA: 0s - loss: 0.5112 - auc: 0.8372
Epoch 00027: val_auc did not improve from 0.87396
Epoch 00027: val_loss did not improve from 0.48089
8/8 [==============================] - 83s 10s/step - loss: 0.5112 - auc: 0.8372 - val_loss: 0.4916 - val_auc: 0.8630
Epoch 28/50
8/8 [==============================] - ETA: 0s - loss: 0.5067 - auc: 0.8424
Epoch 00028: val_auc did not improve from 0.87396
Epoch 00028: val_loss did not improve from 0.48089
8/8 [==============================] - 106s 13s/step - loss: 0.5067 - auc: 0.8424 - val_loss: 0.5102 - val_auc: 0.8377
Epoch 29/50
8/8 [==============================] - ETA: 0s - loss: 0.5178 - auc: 0.8284
Epoch 00029: val_auc did not improve from 0.87396
Epoch 00029: val_loss did not improve from 0.48089
8/8 [==============================] - 103s 13s/step - loss: 0.5178 - auc: 0.8284 - val_loss: 0.5184 - val_auc: 0.8260
Epoch 30/50
8/8 [==============================] - ETA: 0s - loss: 0.5192 - auc: 0.8262
Epoch 00030: val_auc did not improve from 0.87396
Epoch 00030: val_loss did not improve from 0.48089
8/8 [==============================] - 101s 13s/step - loss: 0.5192 - auc: 0.8262 - val_loss: 0.4874 - val_auc: 0.8560
Epoch 31/50
8/8 [==============================] - ETA: 0s - loss: 0.5031 - auc: 0.8458
Epoch 00031: val_auc did not improve from 0.87396
Epoch 00031: val_loss did not improve from 0.48089
8/8 [==============================] - 79s 10s/step - loss: 0.5031 - auc: 0.8458 - val_loss: 0.5492 - val_auc: 0.7961
Epoch 32/50
8/8 [==============================] - ETA: 0s - loss: 0.5215 - auc: 0.8261
Epoch 00032: val_auc did not improve from 0.87396
Epoch 00032: val_loss did not improve from 0.48089
8/8 [==============================] - 85s 11s/step - loss: 0.5215 - auc: 0.8261 - val_loss: 0.5071 - val_auc: 0.8449
Epoch 33/50
8/8 [==============================] - ETA: 0s - loss: 0.5303 - auc: 0.8133
Epoch 00033: val_auc did not improve from 0.87396
Epoch 00033: val_loss did not improve from 0.48089
8/8 [==============================] - 80s 10s/step - loss: 0.5303 - auc: 0.8133 - val_loss: 0.5632 - val_auc: 0.7824
Epoch 34/50
8/8 [==============================] - ETA: 0s - loss: 0.5153 - auc: 0.8333
Epoch 00034: val_auc did not improve from 0.87396
Epoch 00034: val_loss did not improve from 0.48089
8/8 [==============================] - 88s 11s/step - loss: 0.5153 - auc: 0.8333 - val_loss: 0.5099 - val_auc: 0.8394
Epoch 35/50
8/8 [==============================] - ETA: 0s - loss: 0.5211 - auc: 0.8259
Epoch 00035: val_auc did not improve from 0.87396
Epoch 00035: val_loss did not improve from 0.48089
8/8 [==============================] - 87s 11s/step - loss: 0.5211 - auc: 0.8259 - val_loss: 0.5053 - val_auc: 0.8410
Epoch 36/50
8/8 [==============================] - ETA: 0s - loss: 0.5128 - auc: 0.8332
Epoch 00036: val_auc did not improve from 0.87396
Epoch 00036: val_loss did not improve from 0.48089
8/8 [==============================] - 102s 13s/step - loss: 0.5128 - auc: 0.8332 - val_loss: 0.5402 - val_auc: 0.8084
Epoch 37/50
8/8 [==============================] - ETA: 0s - loss: 0.5130 - auc: 0.8323
Epoch 00037: val_auc did not improve from 0.87396
Epoch 00037: val_loss did not improve from 0.48089
8/8 [==============================] - 112s 14s/step - loss: 0.5130 - auc: 0.8323 - val_loss: 0.4830 - val_auc: 0.8686
Epoch 38/50
8/8 [==============================] - ETA: 0s - loss: 0.4780 - auc: 0.8676
Epoch 00038: val_auc did not improve from 0.87396
Epoch 00038: val_loss improved from 0.48089 to 0.46646, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 107s 13s/step - loss: 0.4780 - auc: 0.8676 - val_loss: 0.4665 - val_auc: 0.8735
Epoch 39/50
8/8 [==============================] - ETA: 0s - loss: 0.4921 - auc: 0.8554
Epoch 00039: val_auc did not improve from 0.87396
Epoch 00039: val_loss did not improve from 0.46646
8/8 [==============================] - 100s 13s/step - loss: 0.4921 - auc: 0.8554 - val_loss: 0.4990 - val_auc: 0.8429
Epoch 40/50
8/8 [==============================] - ETA: 0s - loss: 0.4816 - auc: 0.8602
Epoch 00040: val_auc improved from 0.87396 to 0.88257, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
Epoch 00040: val_loss improved from 0.46646 to 0.45996, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 82s 10s/step - loss: 0.4816 - auc: 0.8602 - val_loss: 0.4600 - val_auc: 0.8826
Epoch 41/50
8/8 [==============================] - ETA: 0s - loss: 0.5053 - auc: 0.8371
Epoch 00041: val_auc did not improve from 0.88257
Epoch 00041: val_loss did not improve from 0.45996
8/8 [==============================] - 79s 10s/step - loss: 0.5053 - auc: 0.8371 - val_loss: 0.4605 - val_auc: 0.8760
Epoch 42/50
8/8 [==============================] - ETA: 0s - loss: 0.4725 - auc: 0.8676
Epoch 00042: val_auc did not improve from 0.88257
Epoch 00042: val_loss did not improve from 0.45996
8/8 [==============================] - 84s 10s/step - loss: 0.4725 - auc: 0.8676 - val_loss: 0.4934 - val_auc: 0.8556
Epoch 43/50
8/8 [==============================] - ETA: 0s - loss: 0.5023 - auc: 0.8443
Epoch 00043: val_auc did not improve from 0.88257
Epoch 00043: val_loss improved from 0.45996 to 0.45909, saving model to ../Models/tf_model_x_rays_based_on_EfficientNetB4_and_Adam.h5
8/8 [==============================] - 81s 10s/step - loss: 0.5023 - auc: 0.8443 - val_loss: 0.4591 - val_auc: 0.8716
Epoch 44/50
8/8 [==============================] - ETA: 0s - loss: 0.5100 - auc: 0.8328
Epoch 00044: val_auc did not improve from 0.88257
Epoch 00044: val_loss did not improve from 0.45909
8/8 [==============================] - 72s 9s/step - loss: 0.5100 - auc: 0.8328 - val_loss: 0.5077 - val_auc: 0.8278
Epoch 45/50
8/8 [==============================] - ETA: 0s - loss: 0.4759 - auc: 0.8597
Epoch 00045: val_auc did not improve from 0.88257
Epoch 00045: val_loss did not improve from 0.45909
8/8 [==============================] - 78s 10s/step - loss: 0.4759 - auc: 0.8597 - val_loss: 0.5468 - val_auc: 0.8044
Epoch 46/50
8/8 [==============================] - ETA: 0s - loss: 0.4999 - auc: 0.8388
Epoch 00046: val_auc did not improve from 0.88257
Epoch 00046: val_loss did not improve from 0.45909
8/8 [==============================] - 73s 9s/step - loss: 0.4999 - auc: 0.8388 - val_loss: 0.4731 - val_auc: 0.8632
Epoch 47/50
8/8 [==============================] - ETA: 0s - loss: 0.4964 - auc: 0.8436
Epoch 00047: val_auc did not improve from 0.88257
Epoch 00047: val_loss did not improve from 0.45909
8/8 [==============================] - 76s 9s/step - loss: 0.4964 - auc: 0.8436 - val_loss: 0.5518 - val_auc: 0.7990
Epoch 48/50
8/8 [==============================] - ETA: 0s - loss: 0.4733 - auc: 0.8639
Epoch 00048: val_auc did not improve from 0.88257
Epoch 00048: val_loss did not improve from 0.45909
8/8 [==============================] - 77s 10s/step - loss: 0.4733 - auc: 0.8639 - val_loss: 0.4977 - val_auc: 0.8466
Epoch 49/50
8/8 [==============================] - ETA: 0s - loss: 0.4950 - auc: 0.8454
Epoch 00049: val_auc did not improve from 0.88257
Epoch 00049: val_loss did not improve from 0.45909
8/8 [==============================] - 83s 10s/step - loss: 0.4950 - auc: 0.8454 - val_loss: 0.4931 - val_auc: 0.8483
Epoch 50/50
8/8 [==============================] - ETA: 0s - loss: 0.4992 - auc: 0.8398
Epoch 00050: val_auc did not improve from 0.88257
Epoch 00050: val_loss did not improve from 0.45909
8/8 [==============================] - 80s 10s/step - loss: 0.4992 - auc: 0.8398 - val_loss: 0.5206 - val_auc: 0.8141
###Markdown
Results
###Code
y_hat = model.predict(validation_generator)
y_hat = tf.argmax(y_hat, axis=1).numpy()
y_true = validation_generator.classes
y_true.shape, y_hat.shape
# preds = model(validation_generator)
fpr, tpr, thresholds = roc_curve(y_true, y_hat, pos_label=1)
print(f'[INFO] False positive rate : {fpr}')
print(f'[INFO] True positive rate : {tpr}')
print(f'[INFO] Thresholds : {thresholds}')
metric = auc(x=fpr, y=tpr)
plt.figure(figsize=(20, 8))
plt.plot(fpr, tpr, label=f"AUC score = {metric}")
plt.legend(fontsize = 14)
plt.xlabel('False positive rate', fontsize = 18)
plt.ylabel('True positive rate', fontsize = 18)
plt.xlim(0,1)
plt.ylim(0,1)
plt.title('ROC Curve')
plt.show()
###Output
_____no_output_____
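###Markdown
The curve above is computed from the hard (argmax) labels, which yields only a few ROC points. As a sketch, a smoother curve can be obtained from the predicted probabilities of the positive class; note this assumes the generator order matches `y_true` (for example a validation generator created with shuffle=False), which is an assumption about the intended setup.
###Code
# Sketch: ROC from predicted probabilities of the positive class instead of argmax labels
probs = model.predict(validation_generator)[:, 1]
fpr_p, tpr_p, _ = roc_curve(y_true, probs, pos_label=1)
print('AUC from probabilities:', auc(x=fpr_p, y=tpr_p))
###Output
_____no_output_____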
###Markdown
Scores CV/LB Predictions
###Code
def load_models(cv_models_path = os.path.join(base_dir, 'Models', 'CV_models'), optim_name="Adam"):
models = []
n_folds = 5
try:
for fold_num in range(1, n_folds+1):
m = keras.models.load_model(os.path.join(cv_models_path, f"tf_xrays_model_based_on_{arch_name}_and_{optim_name}_fold_{fold_num}.h5"))
m.trainable = False
models.append(m)
except :
model.trainable = False
models.append(model)
return models
models = load_models(optim_name=optim_name)
len(models)
def test_step(models):
images_test = []
predictions = []
for im in tqdm(os.listdir(os.path.join(test_images_dir, 'test')), desc=f"Predicting on test images "):
images_test.append(im.split('.')[0])
x = keras.preprocessing.image.load_img(os.path.join(test_images_dir, 'test', im), target_size=(IMG_SIZE, IMG_SIZE))
x = keras.preprocessing.image.img_to_array(x)
x = keras.applications.efficientnet.preprocess_input(x)
tmp_preds = []
for model in models:
pred = model.predict(x.reshape(-1, IMG_SIZE, IMG_SIZE, 3))[0][1]# get column 1 of prediction
tmp_preds.append(pred)
predictions.append(np.array(tmp_preds).mean())
return images_test, predictions
images_test, predictions = test_step(models = [model])
assert len(predictions) == len(images_test)
my_file = pd.DataFrame({
'ID': images_test,
'LABEL':predictions
})
my_file
file_name = f"tf_xrays_based_on_{arch_name}_bs_{BATCH_SIZE}_opt_{optim_name}_lr_{lr}_ep_{num_epochs}.csv"
my_file.to_csv(os.path.join(base_dir, 'Submissions', file_name), index=False)
print(f"[INFO] Saved file as {file_name}")
###Output
[INFO] Saved file as tf_xrays_based_on_EfficientNetB4_bs_64_opt_Adam_lr_2e-05_ep_50.csv
|
notebooks/python_machine_learning/.ipynb_checkpoints/Matplotlib-checkpoint.ipynb | ###Markdown
Parts of a Figure
###Code
fig = plt.figure() # an empty figure with no Axes
fig, ax = plt.subplots() # a figure with a single Axes
fig, axs = plt.subplots(2, 2) # a figure with a 2x2 grid of Axes
###Output
_____no_output_____
###Markdown
The object-oriented interface and the pyplot interface There are essentially two ways to use Matplotlib:- Explicitly create figures and axes, and call methods on them (the "object-oriented (OO) style")- Rely on pyplot to automatically create and manage the figures and axes, and use pyplot functions for plotting.
###Code
x = np.linspace(0, 2, 100)
# Using OO-style
# Note that even in the OO-style, we use `pyplot` to create the figure.
fig, ax = plt.subplots() # Create the figure and axes.
ax.plot(x, x, label='linear') # Plot some data on the axes
ax.plot(x, x**2, label='quadratic') # Plot more data on the axes...
ax.plot(x, x**3, label='cubic') # ... and some more.
ax.set_xlabel('x label') # Add an x-label to the axes.
ax.set_ylabel('y label') # Add a y-label to the axes.
ax.set_title('Simple plot') # Add a title to the axes.
ax.legend() # Add a legend.
# Using the pyplot-style
x = np.linspace(0, 2, 100)
plt.plot(x, x, label='linear') # Plot some data on the (implicit) axes.
plt.plot(x, x**2, label='quadratic')
plt.plot(x, x**3, label='cubic')
plt.xlabel('x label')
plt.ylabel('y label')
plt.title('Simple Plot')
plt.legend()
###Output
_____no_output_____ |
Analise_Completa.ipynb | ###Markdown
Data Analysis A telephone company has customers across several different services, the main ones being internet and phone. Analyzing the customer history over the last few years, you noticed that the company has lost more than 26% of its customers.
###Code
#step-by-step plan for the challenge
#observe the problem
#import the dataset and inspect it to see the available information
#figure out what the problem is and clean the data
#check whether the columns are being recognized correctly
#drop the irrelevant columns
#initial analysis
#graphical analysis
#importing the library and the data - dataframe TABELA
import pandas as pd
tabela = pd.read_csv('/content/drive/MyDrive/Analista de Dados/ANALISE/telecom_users.csv')
#churn means cancellation / non-renewal
tabela
#showing how many rows and columns the table has
tabela.shape
#showing which columns I have
tabela.columns
#showing the index
tabela.index
#dropping an irrelevant column - the unnamed column - pass exactly the name of the column you want to drop
# plus the axis parameter, which is the row or column axis,
#to make it clear whether a row or a column is being dropped, where 1 means column and 0 means row
tabela = tabela.drop('Unnamed: 0' , axis=1)
#displaying my dataframe to confirm the column was dropped
tabela
#this shows more detailed information about the table, including whether it has empty values
#it lists each column and how many non-null (filled-in) values it has
#dtype reports the value type: object (text), int (whole number), float (number with decimals)
print(tabela.info())
#here I select the column to fix its type
#column 19 is reported as object even though it holds numbers
#this command takes the values inside the parentheses and converts them to numbers
#the errors parameter allows for errors, and if one is found the value is discarded
tabela['TotalGasto'] = pd.to_numeric(tabela["TotalGasto"], errors='coerce')
#inspecting the table again
print(tabela.info())
#handling empty values - deleting empty rows / deleting empty columns
#with this command you can delete several empty columns, using the how parameter
#and axis - 'all' is the argument to delete only fully empty columns // a NaN in the table means the value was not filled in
#the 'any' argument means deleting anything with at least one empty value
#axis = 0 refers to rows // axis = 1 refers to columns
#first line: dropping irrelevant empty columns
#second line: dropping empty rows from the table
tabela = tabela.dropna(how='all', axis=1)
tabela = tabela.dropna(how='any', axis=0)
#checking that all the counts now match
#and that the empty columns and rows were deleted
print(tabela.info())
#analyzing - how the cancellations look -
#here I evaluate the Churn column to see how many said yes and how many said no to cancelling
#value_counts will count the values - how many said yes and how many said no
print(tabela['Churn'].value_counts())
#viewing it as a percentage
#normalizing the column means computing each value's share of the column
#26% cancelled and 73% did not cancel
print(tabela["Churn"].value_counts(normalize=True))
#26% corresponds to 1587 cancellations
#formatting the values as percentages - .map takes the format you want to see, always inside the parentheses and braces
print(tabela["Churn"].value_counts(normalize=True).map("{:.1%}".format))
#here I show how many are men and how many are women, in counts and in percentages
print(tabela["Genero"].value_counts())
print(tabela["Genero"].value_counts(normalize=True).map("{:.1%}".format))
#compare each column of the table with the cancellation column using charts
#command to import a specific library
#first you build the chart, then display it - the x axis tells which column you want to show in the chart
import plotly.express as px
grafico = px.histogram(tabela, x="MesesComoCliente")
grafico.show()
#a chart that counts values - pass the parameters
#x is the column you want to show from the table
#here it shows how many customers have dependents and how many do not
#histogram is the chart type
#the show method displays the chart
grafico = px.histogram(tabela, x="Dependentes")
grafico.show()
#here I relate the dependents column to the cancellation column
#it shows customers with dependents who cancelled and customers without dependents who also cancelled
grafico = px.histogram(tabela, x="Dependentes", color="Churn")
grafico.show()
#here it shows the relationship between months as a customer and the cancellation column.
#you can see how long after signing up the cancellation happened
#notice that most cancellations happen in the most recent contracts, i.e. in the first few months
grafico = px.histogram(tabela, x="MesesComoCliente", color="Churn")
grafico.show()
#here I relate the contract type to the cancellation column
#notice that most cancellations happen on the monthly contract type
grafico = px.histogram(tabela, x="TipoContrato", color="Churn")
grafico.show()
#here it shows total spending related to the cancellation column
#notice that customers who spend more cancel more
grafico = px.histogram(tabela, x="TotalGasto", color="Churn")
grafico.show()
print(tabela['ValorMensal'])
print(tabela['TotalGasto'])
grafico = px.histogram(tabela, x="ServicoInternet", color="Churn")
grafico.show()
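#added sketch: the same comparison can also be done numerically rather than only with charts,
#e.g. the churn share within each contract type (using the columns already shown above)
print(tabela.groupby("TipoContrato")["Churn"].value_counts(normalize=True).map("{:.1%}".format))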
#Customers on monthly contracts are much more likely to cancel
#we could run promotions to move these customers to the annual contract, where the churn rate is much lower
#Newer customers are cancelling
#notice that the most recent customers (up to about 10 months) and with higher spending are cancelling; try offering a more affordable promotional package
#The customer's first experience with the carrier may be poor
#internet service customers cancel more
#maybe customer acquisition is bringing in unqualified leads
#create incentives for customers to stay longer with the carrier / build loyalty
#here I exported my cleaned table to the local machine as csv
tabela.to_csv('telefonia.csv')
#here I exported my cleaned table as xlsx to take into Power BI
tabela.to_excel('tabela.xlsx')
###Output
_____no_output_____ |
oops/Lesson 2 - Class Variables.ipynb | ###Markdown
In this notebook, we are going to learn why class variables are important, how to access them, and different ways of using them. Class variables are shared among all instances of a class. Let's learn about class variables using a pay-raise variable. Step 1
###Code
class Employee:
def __init__(self, first_name, last_name, salary):
self.first_name = first_name
self.last_name = last_name
self.email = first_name+"."+last_name+"@company.com"
self.salary = salary
def fullname(self):
return "{} {}".format(self.first_name, self.last_name)
def apply_raise(self):
self.salary = int(self.salary * 1.04)
emp1 = Employee("Srikanth","Metlakunta",5000)
emp2 = Employee("Ramesh","Gummi",6000)
print(emp1.salary)
emp1.apply_raise()
print(emp1.salary)
###Output
5000
5200
###Markdown
Step 2
###Code
class Employee:
raise_amount = 1.04
def __init__(self, first_name, last_name, salary):
self.first_name = first_name
self.last_name = last_name
self.email = first_name+"."+last_name+"@company.com"
self.salary = salary
def fullname(self):
return "{} {}".format(self.first_name, self.last_name)
def apply_raise(self):
# self.salary = int(self.salary * Employee.raise_amount)
self.salary = int(self.salary * self.raise_amount)
emp1 = Employee("Srikanth","Metlakunta",5000)
emp2 = Employee("Ramesh","Gummi",6000)
print(emp1.salary)
emp1.apply_raise()
print(emp1.salary)
print(Employee.raise_amount)
print(emp1.raise_amount) #accessing the class variable through an instance
print(emp2.raise_amount) #accessing the class variable through another instance
Employee.__dict__
###Output
_____no_output_____
###Markdown
In the statements below, you can see how changing the class variable affects all instances of the class
###Code
Employee.raise_amount = 1.05
print(Employee.raise_amount)
print(emp1.raise_amount) #accessing the class variable through an instance
print(emp2.raise_amount) #accessing the class variable through another instance
###Output
1.05
1.05
1.05
###Markdown
What if I want to change the raise_amount class variable through an instance instead of the class?
###Code
emp1.raise_amount = 1.05
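# added note: assigning through the instance creates an instance attribute on emp1 that
# shadows the class variable, so only emp1 sees 1.05; Employee and emp2 still read 1.04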
print(Employee.raise_amount)
print(emp1.raise_amount) #accessing the class instance variable
print(emp2.raise_amount) #accessing the class instance variable
###Output
1.04
1.05
1.04
###Markdown
To count the number of employees, we can use a class variable
###Code
class Employee:
raise_amount = 1.04
num_of_emp = 0
def __init__(self, first_name, last_name, salary):
self.first_name = first_name
self.last_name = last_name
self.email = first_name+"."+last_name+"@company.com"
self.salary = salary
Employee.num_of_emp += 1
def fullname(self):
return "{} {}".format(self.first_name, self.last_name)
def apply_raise(self):
# self.salary = int(self.salary * Employee.raise_amount)
self.salary = int(self.salary * self.raise_amount)
print(Employee.num_of_emp)
emp1 = Employee("Srikanth","Metlakunta",5000)
emp2 = Employee("Ramesh","Gummi",6000)
print(Employee.num_of_emp)
###Output
0
2
|
notebooks/tutorial/Lecture_11_2_17_PDAs_and_TMs.ipynb | ###Markdown
The exercises below are described further [HERE](https://www.overleaf.com/read/kzjggwqbfjwd) Tests on PDA
###Code
from jove.SystemImports import *
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_PDA import *
from jove.Def_TM import *
pda_a1_b2 = md2mc('''PDA
I : a, # ; cc# -> I
I : a, c ; ccc -> I
I : b, # ; b# -> I
I : b, b ; bb -> I
I : b, c ; '' -> I
I : a, b ; '' -> SeeIfMore
SeeIfMore : '', b ; '' -> I
SeeIfMore : '', # ; c# -> I
SeeIfMore : '', c ; cc -> I
I : '', #; # -> F
''')
dotObj_pda(pda_a1_b2, FuseEdges = True)
explore_pda("aabbbbbabbba", pda_a1_b2)
a1b2_s = md2mc('''
PDA
!!---------------------------------------------------------------------------
!! This is a PDA that accepts all strings with twice as many b's as a's
!! That is, n_b = 2 * n_a must be satisfied
!! Recall this can happen when n_a = n_b = 0 (trivial case)
!!
!! Acceptance is required to be by empty stack
!! (see a1b2_accept_f.pda for a PDA that accepts by final state)
!!
!! PDA made by markdown will have # on top of stack (TOS)
!!
!! The basic algorithm is to convert a's to two c's
!! Only c's and b's are allowed on the stack
!! But depending on the arrival order, we need to juggle what we put on TOS
!! For details, study the comments below
!!
!!---------------------------------------------------------------------------
I : '', # ; '' -> I !! In case the input string is empty, we satisfy trivially
I : a, # ; cc# -> I !! An a coming in with # on TOS : turn a into two c
I : a, b ; '' -> Try !! An a coming in with a b on TOS: we don't know what lies below b
!! So entry Try state, but after consuming that one b
Try : '', b ; '' -> I !! In Try state we find another b; how handy; consume that also
!! Now we are back in state I
Try : '', c ; cc -> I !! In Try state, we face a c; so we have consumed only one b
!! Express deficit of a 'b' by stacking a c
Try : '', # ; c# -> I !! We face a #; we have again consumed only one b
!! Express deficit of a 'b' by stacking a c
I : a, c ; ccc -> I !! In I, we face 'c' on TOS, so express deficit of two b
!! by stacking two c
I : b, # ; b# -> I !! b input when # is TOS turns into b stacked
I : b, b ; bb -> I !! b input when b is TOS turns into b stacked
I : b, c ; '' -> I !! b and c are even match
!!---------------------------------------------------------------------------
''')
dotObj_pda(a1b2_s, FuseEdges=True)
help(explore_pda)
explore_pda("abb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bab", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bba", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bbaabbbabaabbabbbb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("babaababbbaabbbbbb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("abbaababbbabbbbbba", a1b2_s, acceptance='ACCEPT_S')
pdaDyck = md2mc('''PDA
IF : (, #; (# -> A
A : (, (; (( -> A
A : ), (; '' -> A
A : '',#; # -> IF
''')
DOpdaDyck = dotObj_pda(pdaDyck, FuseEdges=True)
DOpdaDyck
explore_pda("", pdaDyck)
explore_pda("()", pdaDyck)
explore_pda("()()(())", pdaDyck)
explore_pda("()()(()", pdaDyck)
###Output
_____no_output_____
###Markdown
na = nb + nc using pda
###Code
# Parsing an arithmetic expression
pdaE = md2mc('''PDA
!!E -> E+T | T
!!T -> T*F | F
!!F -> 2 | 3 | ~F | (E)
I : '', # ; E# -> M
M : '', E ; E+T -> M
M : '', E ; T -> M
M : '', T ; T*F -> M
M : '', T ; F -> M
M : '', F ; 2 -> M
M : '', F ; 3 -> M
M : '', F ; ~F -> M
M : '', F ; (E) -> M
M : ~, ~ ; '' -> M
M : 2, 2 ; '' -> M
M : 3, 3 ; '' -> M
M : (, ( ; '' -> M
M : ), ) ; '' -> M
M : +, + ; '' -> M
M : *, * ; '' -> M
M : '', # ; # -> F
'''
)
DOpdaE = dotObj_pda(pdaE, FuseEdges=True)
DOpdaE
explore_pda("2+2*3", pdaE, STKMAX=7)
# Parsing an arithmetic expression
pdaEamb = md2mc('''PDA
!!E -> E * E | E + E | ~E | ( E ) | 2 | 3
I : '', # ; E# -> M
M : '', E ; ~E -> M
M : '', E ; E+E -> M
M : '', E ; E*E -> M
M : '', E ; (E) -> M
M : '', E ; 2 -> M
M : '', E ; 3 -> M
M : ~, ~ ; '' -> M
M : 2, 2 ; '' -> M
M : 3, 3 ; '' -> M
M : (, ( ; '' -> M
M : ), ) ; '' -> M
M : +, + ; '' -> M
M : *, * ; '' -> M
M : '', # ; # -> F
'''
)
DOpdaEamb = dotObj_pda(pdaEamb, FuseEdges=True)
DOpdaEamb
explore_pda("3+2*3", pdaEamb, STKMAX=5)
3
eqpda=md2mc('''
PDA
I : a,#;a# | b,#;b# | a,a;aa | b,b;bb -> I
I : a,b;'' | b,a;'' -> I
I : '',#; # -> F
'''
)
dotObj_pda(eqpda, FuseEdges=True)
explore_pda("aaaaaabbbbbb",eqpda)
explore_pda("",eqpda)
explore_pda("bbabaaabaabbbbaa",eqpda)
explore_pda("aaaaaabbbbb",eqpda)
chyr1pda = md2mc('''
PDA
I : a, # ; aa# -> I !! bottom of the stack, push two "a"'s
I : b, # ; b# -> I !! bottom of the stack, push one "b"
I : a, a ; aaa -> I !! another incoming a, put the a old "a" back and push an additional two a's
I : b, b ; bb -> I !! another incoming b, put the old "b" back and push a "b"
I : a, b ; '' -> AB !! since there must be two "a"'s for every "b", remove the first "b"
AB : '', # ; a# -> I !! the bottom of the stack was reach with an additional "a" to add
AB : '', a ; aa -> I !! this really should never happen, but it's here for safety
AB : '', b ; '' -> I !! there was another "b", cancel this one as well
I : b, a ; '' -> I !! cancel the "b" with the "a"
I : '', # ; # -> F !! the "a"'s and "b"'s cancel correctly, all done and accept!
''')
dotObj_pda(chyr1pda, FuseEdges=True)
explore_pda("babbaabbabbb",chyr1pda)
explore_pda("babbab",chyr1pda)
chyr2pda = md2mc('''
PDA
I : a, # ; aa# -> I !! bottom of the stack, push two "a"'s
I : b, # ; b# -> I !! bottom of the stack, push one "b"
I : a, a ; aaa -> I !! another incoming a, put the a old "a" back and push an additional two a's
I : b, b ; bb -> I !! another incoming b, put the old "b" back and push a "b"
I : a, b ; '' -> AB !! since there must be two "a"'s for every "b", remove the first "b"
AB : '', # ; a# -> I !! the bottom of the stack was reach with an additional "a" to add
AB : '', a ; aa -> I !! this really should never happen, but it's here for safety
AB : '', b ; '' -> I !! there was another "b", cancel this one as well
I : b, a ; '' -> I !! cancel the "b" with the "a"
I : '', # ; # -> F !! the "a"'s and "b"'s cancel correctly, all done and accept!
''')
explore_pda("babbab",chyr2pda)
###Output
_____no_output_____
###Markdown
1 > 0
###Code
onesGTzeros = md2mc('''PDA
I : 1,#;1# | 0,#;0 -> I
I : 1,1;11 | 0,0;00 -> I
I : 1,0;'' | 0,1;'' -> I
I : '',1;'' -> FryMyLuck
FryMyLuck : '',1;'' -> FryMyLuck
FryMyLuck : '',#;# -> FryMyLuck
''')
dotObj_pda(onesGTzeros, FuseEdges=True)
explore_pda("11011011011", onesGTzeros)
explore_pda("1101101101100", onesGTzeros)
explore_pda("110110110110000", onesGTzeros)
explore_pda("1010110110110000", onesGTzeros)
explore_pda("10101101101100100", onesGTzeros)
explore_pda("1", onesGTzeros)
explore_pda("0", onesGTzeros)
explore_pda("", onesGTzeros)
explore_pda("1111", onesGTzeros)
explore_pda("1010101", onesGTzeros)
'''
S -> A | AS
A -> E1E
E -> 0E1E | 1E0E | ''
'''
pda1GT0 = md2mc('''
PDA
I : '' , # ; S# -> L
L : '' , S ; A -> L
L : '' , S ; AS -> L
L : '' , A ; E1E -> L
L : '' , E ; 0E1E -> L
L : '' , E ; 1E0E -> L
L : '' , E ; '' -> L
L : 0 , 0 ; '' -> L
L : 1 , 1 ; '' -> L
L : '' , # ; # -> F
''')
dotObj_pda(pda1GT0, FuseEdges=True)
help(explore_pda)
explore_pda("1", pda1GT0, STKMAX = 8 )
explore_pda("10101", pda1GT0, STKMAX = 8 )
explore_pda("1010101", pda1GT0, STKMAX = 8 )
explore_pda("10101", pda1GT0, STKMAX = 8 )
f27sip = md2mc('''
PDA
!!---------------------------------------
!! This is a PDA From Sipser's book
!! This matches a's and b's ignoring c's
!! or matches a's and c's, ignoring b's
!! in the middle. Thus, the language is
!! a^m b^m c^n or a^m b^n c^m
!!---------------------------------------
!!---------------------------------------------------------------------------
!! State: in , sin ; spush -> tostates !! comment
!!---------------------------------------------------------------------------
iq2 : a , '' ; a -> iq2 !! stack a's
iq2 : '' , '' ; '' -> q3,q5 !! split non-det for a^m b^m c^n (q3)
!! or a^m b^n c^m (q5)
q3 : b , a ; '' -> q3 !! match b's against a's
q3 : '' , # ; '' -> fq4 !! hope for acceptance when # surfaces
fq4 : c , '' ; '' -> fq4 !! be happy so long as c's come
!! will choke and reject if anything
!! other than c's come
q5 : b , '' ; '' -> q5 !! here, we are going to punt over b's
q5 : '' , '' ; '' -> q6 !! and non-det decide to honor c's matching
!! against a's
q6 : c , a ; '' -> q6 !! OK to match so long as c's keep coming
q6 : '' , # ; '' -> fq7 !! when # surfaces, be ready to accept in
!! state fq7. However, anything else coming in
!! now will foil match and cause rejection.
!!---------------------------------------------------------------------------
''')
DOf27sip = dotObj_pda(f27sip, FuseEdges=True)
DOf27sip
explore_pda("aaabbbccc", f27sip)
wpw_tm = md2mc('''
TM
!!---------------------------------------------------------------------------
!! This is a DTM for recognizing strings of the form w#w where w is in {0,1}*
!! The presence of the "#" serves as the midpoint-marker, thus allowing the
!! TM to deterministically match around it.
!!
!!---------------------------------------------------------------------------
!!---------------------------------------------------------------------------
!! State : rd ; wr , mv -> tostates !! comment
!!---------------------------------------------------------------------------
Iq0 : 0 ; X , R -> q1 !! All 0s are converted to X, and matching
!! 0s are then sought to the right of the #
Iq0 : 1 ; Y , R -> q7 !! All 1s are converted to Y, and matching
!! 1s are then sought to the right of the #
Iq0 : # ; # , R -> q5 !! If we see # rightaway, we are in the
!! situation of having to match eps # eps
!!---
q5 : X ; X,R | Y ; Y,R -> q5 !! In q5, we skip over X and Y (an equal number
!! of X and Y lie to the left of the #)
q5 : . ; . , R -> Fq6 !! .. and we accept when we see a blank (.)
!!---
q1 : 0 ; 0,R | 1 ; 1,R -> q1 !! In q1, skip over the remaining 0s and 1s
q1 : # ; # , R -> q2 !! But upon seeing a #, look for a matching
!! 0 (since we are in q2, we know this).
q2 : X ; X,R | Y ; Y,R -> q2 !! All X and Y are "past stuff" to skip over
q2 : 0 ; X , L -> q3 !! When we find a matching 0, turn that to
!! an X, and sweep left to do the next pass
q3 : X ; X,L | Y ; Y,L -> q3 !! In q3, we move over all past X, Y
q3 : # ; # , L -> q4 !! but when we reach the middle marker, we
!! know that the next action is to seek the
!! next unprocessed 0 or 1
q4 : 0 ; 0,L | 1 ; 1,L -> q4 !! In q4, wait till we hit the leftmost 0/1
q4 : X ; X,R | Y ; Y,R -> Iq0 !! When we hit an X or Y, we know that we've
!! found the leftmost 0/1. Another pass begins.
!!---
q7 : 0 ; 0,R | 1 ; 1,R -> q7 !! q7 is similar to q1
q7 : # ; # , R -> q8 !! and q8 is similar to q2
q8 : X ; X,R | Y ; Y,R -> q8
q8 : 1 ; Y , L -> q3
!!---------------------------------------------------------------------------
!! You may use the line below as an empty shell to populate for your purposes
!! Also serves as a syntax reminder for entering DFAs.
!!
!! State : r1 ; w1 , m1 | r2 ; w2 , m2 -> s1 , s2 !! comment
!!
!! .. : .. ; .. , .. | .. ; .. , .. -> .. , .. !! ..
!!---------------------------------------------------------------------------
!!
!! Good commenting and software-engineering methods, good clean indentation,
!! grouping of similar states, columnar alignment, etc etc. are HUGELY
!! important in any programming endeavor -- especially while programming
!! automata. Otherwise, you can easily make a mistake in your automaton
!! code. Besides, you cannot rely upon others to find your mistakes, as
!! they will find your automaton code impossible to read!
!!
!!---------------------------------------------------------------------------
''')
dotObj_tm(wpw_tm, FuseEdges=True)
explore_tm(wpw_tm, "010#010", 33)
ww_ndtm = md2mc('''
TM
!!---------------------------------------------------------------------------
!! This is a TM for ww processing. Guesses midpoint using nondet.
!!
!!---------------------------------------------------------------------------
!!---------------------------------------------------------------------------
!! State : rd ; wr , mv -> tostates !! comment
!!---------------------------------------------------------------------------
Iq0 : 0 ; 0 , S -> q14 !! This simulates the TM taking a guess
Iq0 : 1 ; 1 , S -> q14 !! that it hasn't seen the midpoint. It
!! moves to q14
Iq0 : . ; . , R -> Fq1 !! yay! shortest acceptance is for eps eps
!! i.e. facing a sea of blanks that encodes
!! an epsilon followed by another epsilon.
!!---------------------------------------------------------------------------
q14 : 0 ; 0 , R -> q14 !! The TM skips over 0s or
!! 1s for a while, and then chooses a cell,
q14 : 0 ; X , L -> q2 !! declaring it the midpoint, or more specifically
!! FIRST CHARACTER PAST MIDPOINT, by marking it 'X'
!! and then moves to q2 (to march around the
!! chosen midpoint).
q14 : 1 ; 1 , R -> q14 !! Similar actions as with 0 in state q14,
q14 : 1 ; Y , L -> q2 !! except that it "dings" the "1" with a "Y"
!! to mark it the FIRST CHARACTER PAST MIDPOINT.
!! Then we march around it. While the separate
!! use of "X" and "Y" may not be necessary,
!! it improves understandability when you
!! finally see the result of TM executions.
q2 : 0 ; 0 , L -> q2 !! The TM is now winding back, seeking the
q2 : 1 ; 1 , L -> q2 !! left-end of the tape till hit hits a '.'
!! (blank).
q2 : . ; . , R -> q3 !! When that happens, the TM goes to state q3
!! to begin its work of "matching around."
!! We describe the q3,q5,q11,q9,q3 loop well.
!! The other loop q3,q4,q10,q8,q3 is similar.
!!-----------------------------------------------------------------
q3 : X ; X , R -> q6 !! This state is a stuck state (no progress)
!! WE came to q3 because we dinged a 0->X
!! or a 1->Y while in q14; so its matching
!! "partner" 0 or 1 must be found to the
!! left. Unfortunately, we are finding an
!! X or a Y. Thus, no "match around the middle"
!! is likely to happen.
q3 : Y ; Y , R -> q7 !! This state is ALSO a stuck state for similar
!! reasons as expressed in the comments
!! associated with q3 : X ; X ...
!!-----------------------------------------------------------------
!! Description of the q3,q5,q11,q9,q3 loop :
q3 : 1 ; Q , R -> q5 !! Upon seeing a 1, change to Q. Then MUST see a
!! matching Y, then change to 3, and go right, and to state q5.
!! We do this because 'Y' represents what
!! was '1' and got marked as midpoint (well,
!! one-past midpoint..).
!!-- What will happen in q5,q11,q9,q3 --
!! So we have to get past this assumed
!! midpoint and choose the next
!! "one past midpoint that has not been seen so far".
!! We enter q11 to then ding a matching
!! 0 to X or 1 to Y, moving left.
!! A blank sends us leftwards, as well.
!! We sweep left till we hit a Q. We MUST see a Q
!! because we entered "this lobe" by dinging a 1->Q.
!! The process repeats from state q3.
q5 : 0;0,R | 1;1,R | 2;2,R | 3;3,R -> q5 !! punt the 0/1/2/3; we need a "Y".
q5 : Y ; 3, R -> q11 !! ah-ha , got a Y. Ding to 3, seek 0/1/.
q11 : 1;Y,L | .;.,L | 0;X,L -> q9 !! phew! got to sweep left now!
q9 : 0;0,L | 1;1,L | 2;2,L | 3;3,L -> q9 !! whee! going left!
q9 : Q ; Q , R -> q3 !! Boiinggg - now gonna go right!
!!-----------------------------------------------------------------
!! Description of the q3,q4,q10,q8,q3 loop :
q3 : 0 ; P , R -> q4 !! This is similar to q3 : 1 ; Q , R -> q5 above
q4 : 0;0,R | 1;1,R | 2;2,R | 3;3,R -> q4 !! punt the 0/1/2/3; we need a "X".
q4 : X ; 2, R -> q10 !! ah-ha , got a X. Ding to 2, seek 0/1/.
q10 : 1;Y,L | .;.,L | 0;X,L -> q8 !! phew! got to sweep left now!
q8 : 0;0,L | 1;1,L | 2;2,L | 3;3,L -> q8 !! whee! going left!
q8 : P ; P , R -> q3 !! Boiinggg - now gonna go right!
!!-----------------------------------------------------------------
q3 : 2;2,R | 3;3,R -> q12 !! Seeing every sign of acceptance!!
!! We are seeing piles of 2 and 3
!! ALSO did not get stuck in q6 or q7
!! That means all the matches went fine
q12 : 2 ; 2 , R | 3 ; 3 , R -> q12 !! Skip over piles of past 2s and 3s
q12 : . ; . , R -> Fq13 !! Yay, acceptance when we hit a blank!
!!---------------------------------------------------------------------------
!! You may use the line below as an empty shell to populate for your purposes
!! Also serves as a syntax reminder for entering DFAs.
!!
!! State : r1 ; w1 , m1 | r2 ; w2 , m2 -> s1 , s2 !! comment
!!
!! .. : .. ; .. , .. | .. ; .. , .. -> .. , .. !! ..
!!---------------------------------------------------------------------------
!!
!! Good commenting and software-engineering methods, good clean indentation,
!! grouping of similar states, columnar alignment, etc etc. are HUGELY
!! important in any programming endeavor -- especially while programming
!! automata. Otherwise, you can easily make a mistake in your automaton
!! code. Besides, you cannot rely upon others to find your mistakes, as
!! they will find your automaton code impossible to read!
!!
!!---------------------------------------------------------------------------
''')
dotObj_tm(ww_ndtm, FuseEdges=True)
explore_tm(ww_ndtm, "0101", 30)
###Output
_____no_output_____
###Markdown
The exercises below are described further [HERE](https://www.overleaf.com/read/kzjggwqbfjwd) Tests on PDA
###Code
import sys
sys.path[0:0] = ['../..','../../3rdparty'] # Put these at the head of the search path
from jove.SystemImports import *
from jove.DotBashers import *
from jove.Def_md2mc import *
from jove.Def_PDA import *
from jove.Def_TM import *
pda_a1_b2 = md2mc('''PDA
I : a, # ; cc# -> I
I : a, c ; ccc -> I
I : b, # ; b# -> I
I : b, b ; bb -> I
I : b, c ; '' -> I
I : a, b ; '' -> SeeIfMore
SeeIfMore : '', b ; '' -> I
SeeIfMore : '', # ; c# -> I
SeeIfMore : '', c ; cc -> I
I : '', #; # -> F
''')
dotObj_pda(pda_a1_b2, FuseEdges = True)
explore_pda("aabbbbbabbba", pda_a1_b2)
a1b2_s = md2mc('''
PDA
!!---------------------------------------------------------------------------
!! This is a PDA that accepts all strings with twice as many b's as a's
!! That is, n_b = 2 * n_a must be satisfied
!! Recall this can happen when n_a = n_b = 0 (trivial case)
!!
!! Acceptance is required to be by empty stack
!! (see a1b2_accept_f.pda for a PDA that accepts by final state)
!!
!! PDA made by markdown will have # on top of stack (TOS)
!!
!! The basic algorithm is to convert a's to two c's
!! Only c's and b's are allowed on the stack
!! But depending on the arrival order, we need to juggle what we put on TOS
!! For details, study the comments below
!!
!!---------------------------------------------------------------------------
I : '', # ; '' -> I !! In case the input string is empty, we satisfy trivially
I : a, # ; cc# -> I !! An a coming in with # on TOS : turn a into two c
I : a, b ; '' -> Try !! An a coming in with a b on TOS: we don't know what lies below b
!! So entry Try state, but after consuming that one b
Try : '', b ; '' -> I !! In Try state we find another b; how handy; consume that also
!! Now we are back in state I
Try : '', c ; cc -> I !! In Try state, we face a c; so we have consumed only one b
!! Express deficit of a 'b' by stacking a c
Try : '', # ; c# -> I !! We face a #; we have again consumed only one b
!! Express deficit of a 'b' by stacking a c
I : a, c ; ccc -> I !! In I, we face 'c' on TOS, so express deficit of two b
!! by stacking two c
I : b, # ; b# -> I !! b input when # is TOS turns into b stacked
I : b, b ; bb -> I !! b input when b is TOS turns into b stacked
I : b, c ; '' -> I !! b and c are even match
!!---------------------------------------------------------------------------
''')
dotObj_pda(a1b2_s, FuseEdges=True)
help(explore_pda)
explore_pda("abb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bab", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bba", a1b2_s, acceptance='ACCEPT_S')
explore_pda("bbaabbbabaabbabbbb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("babaababbbaabbbbbb", a1b2_s, acceptance='ACCEPT_S')
explore_pda("abbaababbbabbbbbba", a1b2_s, acceptance='ACCEPT_S')
pdaDyck = md2mc('''PDA
IF : (, #; (# -> A
A : (, (; (( -> A
A : ), (; '' -> A
A : '',#; # -> IF
''')
DOpdaDyck = dotObj_pda(pdaDyck, FuseEdges=True)
DOpdaDyck
explore_pda("", pdaDyck)
explore_pda("()", pdaDyck)
explore_pda("()()(())", pdaDyck)
explore_pda("()()(()", pdaDyck)
###Output
*** Exploring wrt STKMAX= 6 ; increase it if needed ***
*** Exploring wrt STKMAX = 6 ; increase it if needed ***
String ()()(() rejected by your PDA :-(
Visited states are:
{('A', '()(()', '#'), ('A', ')(()', '(#'), ('IF', '(()', '#'), ('A', ')', '((#'), ('IF', '()(()', '#'), ('A', ')()(()', '(#'), ('A', '(()', '#'), ('A', '', '(#'), ('IF', '()()(()', '#'), ('A', '()', '(#')}
###Markdown
na = nb + nc using pda
###Code
# Parsing an arithmetic expression
pdaE = md2mc('''PDA
!!E -> E+T | T
!!T -> T*F | F
!!F -> 2 | 3 | ~F | (E)
I : '', # ; E# -> M
M : '', E ; E+T -> M
M : '', E ; T -> M
M : '', T ; T*F -> M
M : '', T ; F -> M
M : '', F ; 2 -> M
M : '', F ; 3 -> M
M : '', F ; ~F -> M
M : '', F ; (E) -> M
M : ~, ~ ; '' -> M
M : 2, 2 ; '' -> M
M : 3, 3 ; '' -> M
M : (, ( ; '' -> M
M : ), ) ; '' -> M
M : +, + ; '' -> M
M : *, * ; '' -> M
M : '', # ; # -> F
'''
)
DOpdaE = dotObj_pda(pdaE, FuseEdges=True)
DOpdaE
explore_pda("2+2*3", pdaE, STKMAX=7)
# Parsing an arithmetic expression
pdaEamb = md2mc('''PDA
!!E -> E * E | E + E | ~E | ( E ) | 2 | 3
I : '', # ; E# -> M
M : '', E ; ~E -> M
M : '', E ; E+E -> M
M : '', E ; E*E -> M
M : '', E ; (E) -> M
M : '', E ; 2 -> M
M : '', E ; 3 -> M
M : ~, ~ ; '' -> M
M : 2, 2 ; '' -> M
M : 3, 3 ; '' -> M
M : (, ( ; '' -> M
M : ), ) ; '' -> M
M : +, + ; '' -> M
M : *, * ; '' -> M
M : '', # ; # -> F
'''
)
DOpdaEamb = dotObj_pda(pdaEamb, FuseEdges=True)
DOpdaEamb
explore_pda("3+2*3", pdaEamb, STKMAX=5)
3
eqpda=md2mc('''
PDA
I : a,#;a# | b,#;b# | a,a;aa | b,b;bb -> I
I : a,b;'' | b,a;'' -> I
I : '',#; # -> F
'''
)
dotObj_pda(eqpda, FuseEdges=True)
explore_pda("aaaaaabbbbbb",eqpda)
explore_pda("",eqpda)
explore_pda("bbabaaabaabbbbaa",eqpda)
explore_pda("aaaaaabbbbb",eqpda)
chyr1pda = md2mc('''
PDA
I : a, # ; aa# -> I !! bottom of the stack, push two "a"'s
I : b, # ; b# -> I !! bottom of the stack, push one "b"
I : a, a ; aaa -> I !! another incoming a, put the a old "a" back and push an additional two a's
I : b, b ; bb -> I !! another incoming b, put the old "b" back and push a "b"
I : a, b ; '' -> AB !! since there must be two "a"'s for every "b", remove the first "b"
AB : '', # ; a# -> I !! the bottom of the stack was reach with an additional "a" to add
AB : '', a ; aa -> I !! this really should never happen, but it's here for safety
AB : '', b ; '' -> I !! there was another "b", cancel this one as well
I : b, a ; '' -> I !! cancel the "b" with the "a"
I : '', # ; # -> F !! the "a"'s and "b"'s cancel correctly, all done and accept!
''')
dotObj_pda(chyr1pda, FuseEdges=True)
explore_pda("babbaabbabbb",chyr1pda)
explore_pda("babbab",chyr1pda)
chyr2pda = md2mc('''
PDA
I : a, # ; aa# -> I !! bottom of the stack, push two "a"'s
I : b, # ; b# -> I !! bottom of the stack, push one "b"
I : a, a ; aaa -> I !! another incoming a, put the a old "a" back and push an additional two a's
I : b, b ; bb -> I !! another incoming b, put the old "b" back and push a "b"
I : a, b ; '' -> AB !! since there must be two "a"'s for every "b", remove the first "b"
AB : '', # ; a# -> I !! the bottom of the stack was reach with an additional "a" to add
AB : '', a ; aa -> I !! this really should never happen, but it's here for safety
AB : '', b ; '' -> I !! there was another "b", cancel this one as well
I : b, a ; '' -> I !! cancel the "b" with the "a"
I : '', # ; # -> F !! the "a"'s and "b"'s cancel correctly, all done and accept!
''')
explore_pda("babbab",chyr2pda)
###Output
*** Exploring wrt STKMAX= 6 ; increase it if needed ***
*** Exploring wrt STKMAX = 6 ; increase it if needed ***
String babbab accepted by your PDA in 1 ways :-)
Here are the ways:
Final state ('F', '', '#')
Reached as follows:
-> ('I', 'babbab', '#')
-> ('I', 'abbab', 'b#')
-> ('AB', 'bbab', '#')
-> ('I', 'bbab', 'a#')
-> ('I', 'bab', '#')
-> ('I', 'ab', 'b#')
-> ('AB', 'b', '#')
-> ('I', 'b', 'a#')
-> ('I', '', '#')
-> ('F', '', '#') .
###Markdown
1 > 0
###Code
onesGTzeros = md2mc('''PDA
I : 1,#;1# | 0,#;0 -> I
I : 1,1;11 | 0,0;00 -> I
I : 1,0;'' | 0,1;'' -> I
I : '',1;'' -> FryMyLuck
FryMyLuck : '',1;'' -> FryMyLuck
FryMyLuck : '',#;# -> FryMyLuck
''')
dotObj_pda(onesGTzeros, FuseEdges=True)
explore_pda("11011011011", onesGTzeros)
explore_pda("1101101101100", onesGTzeros)
explore_pda("110110110110000", onesGTzeros)
explore_pda("1010110110110000", onesGTzeros)
explore_pda("10101101101100100", onesGTzeros)
explore_pda("1", onesGTzeros)
explore_pda("0", onesGTzeros)
explore_pda("", onesGTzeros)
explore_pda("1111", onesGTzeros)
explore_pda("1010101", onesGTzeros)
'''
S -> A | AS
A -> E1E
E -> 0E1E | 1E0E | ''
'''
pda1GT0 = md2mc('''
PDA
I : '' , # ; S# -> L
L : '' , S ; A -> L
L : '' , S ; AS -> L
L : '' , A ; E1E -> L
L : '' , E ; 0E1E -> L
L : '' , E ; 1E0E -> L
L : '' , E ; '' -> L
L : 0 , 0 ; '' -> L
L : 1 , 1 ; '' -> L
L : '' , # ; # -> F
''')
dotObj_pda(pda1GT0, FuseEdges=True)
help(explore_pda)
explore_pda("1", pda1GT0, STKMAX = 8 )
explore_pda("10101", pda1GT0, STKMAX = 8 )
explore_pda("1010101", pda1GT0, STKMAX = 8 )
explore_pda("10101", pda1GT0, STKMAX = 8 )
f27sip = md2mc('''
PDA
!!---------------------------------------
!! This is a PDA From Sipser's book
!! This matches a's and b's ignoring c's
!! or matches a's and c's, ignoring b's
!! in the middle. Thus, the language is
!! a^m b^m c^n or a^m b^n c^m
!!---------------------------------------
!!---------------------------------------------------------------------------
!! State: in , sin ; spush -> tostates !! comment
!!---------------------------------------------------------------------------
iq2 : a , '' ; a -> iq2 !! stack a's
iq2 : '' , '' ; '' -> q3,q5 !! split non-det for a^m b^m c^n (q3)
!! or a^m b^n c^m (q5)
q3 : b , a ; '' -> q3 !! match b's against a's
q3 : '' , # ; '' -> fq4 !! hope for acceptance when # surfaces
fq4 : c , '' ; '' -> fq4 !! be happy so long as c's come
!! will choke and reject if anything
!! other than c's come
q5 : b , '' ; '' -> q5 !! here, we are going to punt over b's
q5 : '' , '' ; '' -> q6 !! and non-det decide to honor c's matching
!! against a's
q6 : c , a ; '' -> q6 !! OK to match so long as c's keep coming
q6 : '' , # ; '' -> fq7 !! when # surfaces, be ready to accept in
!! state fq7. However, anything else coming in
!! now will foil match and cause rejection.
!!---------------------------------------------------------------------------
''')
DOf27sip = dotObj_pda(f27sip, FuseEdges=True)
DOf27sip
explore_pda("aaabbbccc", f27sip)
wpw_tm = md2mc('''
TM
!!---------------------------------------------------------------------------
!! This is a DTM for recognizing strings of the form w#w where w is in {0,1}*
!! The presence of the "#" serves as the midpoint-marker, thus allowing the
!! TM to deterministically match around it.
!!
!!---------------------------------------------------------------------------
!!---------------------------------------------------------------------------
!! State : rd ; wr , mv -> tostates !! comment
!!---------------------------------------------------------------------------
Iq0 : 0 ; X , R -> q1 !! All 0s are converted to X, and matching
!! 0s are then sought to the right of the #
Iq0 : 1 ; Y , R -> q7 !! All 1s are converted to Y, and matching
!! 1s are then sought to the right of the #
Iq0 : # ; # , R -> q5 !! If we see # rightaway, we are in the
!! situation of having to match eps # eps
!!---
q5 : X ; X,R | Y ; Y,R -> q5 !! In q5, we skip over X and Y (an equal number
!! of X and Y lie to the left of the #)
q5 : . ; . , R -> Fq6 !! .. and we accept when we see a blank (.)
!!---
q1 : 0 ; 0,R | 1 ; 1,R -> q1 !! In q1, skip over the remaining 0s and 1s
q1 : # ; # , R -> q2 !! But upon seeing a #, look for a matching
!! 0 (since we are in q2, we know this).
q2 : X ; X,R | Y ; Y,R -> q2 !! All X and Y are "past stuff" to skip over
q2 : 0 ; X , L -> q3 !! When we find a matching 0, turn that to
!! an X, and sweep left to do the next pass
q3 : X ; X,L | Y ; Y,L -> q3 !! In q3, we move over all past X, Y
q3 : # ; # , L -> q4 !! but when we reach the middle marker, we
!! know that the next action is to seek the
!! next unprocessed 0 or 1
q4 : 0 ; 0,L | 1 ; 1,L -> q4 !! In q4, wait till we hit the leftmost 0/1
q4 : X ; X,R | Y ; Y,R -> Iq0 !! When we hit an X or Y, we know that we've
!! found the leftmost 0/1. Another pass begins.
!!---
q7 : 0 ; 0,R | 1 ; 1,R -> q7 !! q7 is similar to q1
q7 : # ; # , R -> q8 !! and q8 is similar to q2
q8 : X ; X,R | Y ; Y,R -> q8
q8 : 1 ; Y , L -> q3
!!---------------------------------------------------------------------------
!! You may use the line below as an empty shell to populate for your purposes
!! Also serves as a syntax reminder for entering DFAs.
!!
!! State : r1 ; w1 , m1 | r2 ; w2 , m2 -> s1 , s2 !! comment
!!
!! .. : .. ; .. , .. | .. ; .. , .. -> .. , .. !! ..
!!---------------------------------------------------------------------------
!!
!! Good commenting and software-engineering methods, good clean indentation,
!! grouping of similar states, columnar alignment, etc etc. are HUGELY
!! important in any programming endeavor -- especially while programming
!! automata. Otherwise, you can easily make a mistake in your automaton
!! code. Besides, you cannot rely upon others to find your mistakes, as
!! they will find your automaton code impossible to read!
!!
!!---------------------------------------------------------------------------
''')
dotObj_tm(wpw_tm, FuseEdges=True)
explore_tm(wpw_tm, "010#010", 33)
ww_ndtm = md2mc('''
TM
!!---------------------------------------------------------------------------
!! This is a TM for ww processing. Guesses midpoint using nondet.
!!
!!---------------------------------------------------------------------------
!!---------------------------------------------------------------------------
!! State : rd ; wr , mv -> tostates !! comment
!!---------------------------------------------------------------------------
Iq0 : 0 ; 0 , S -> q14 !! This simulates the TM taking a guess
Iq0 : 1 ; 1 , S -> q14 !! that it hasn't seen the midpoint. It
!! moves to q14
Iq0 : . ; . , R -> Fq1 !! yay! shortest acceptance is for eps eps
!! i.e. facing a sea of blanks that encodes
!! an epsilon followed by another epsilon.
!!---------------------------------------------------------------------------
q14 : 0 ; 0 , R -> q14 !! The TM skips over 0s or
!! 1s for a while, and then chooses a cell,
q14 : 0 ; X , L -> q2 !! declaring it the midpoint, or more specifically
!! FIRST CHARACTER PAST MIDPOINT, by marking it 'X'
!! and then moves to q2 (to march around the
!! chosen midpoint).
q14 : 1 ; 1 , R -> q14 !! Similar actions as with 0 in state q14,
q14 : 1 ; Y , L -> q2 !! except that it "dings" the "1" with a "Y"
!! to mark it the FIRST CHARACTER PAST MIDPOINT.
!! Then we march around it. While the separate
!! use of "X" and "Y" may not be necessary,
!! it improves understandability when you
!! finally see the result of TM executions.
q2 : 0 ; 0 , L -> q2 !! The TM is now winding back, seeking the
q2 : 1 ; 1 , L -> q2 !! left-end of the tape till hit hits a '.'
!! (blank).
q2 : . ; . , R -> q3 !! When that happens, the TM goes to state q3
!! to begin its work of "matching around."
!! We describe the q3,q5,q11,q9,q3 loop well.
!! The other loop q3,q4,q10,q8,q3 is similar.
!!-----------------------------------------------------------------
q3 : X ; X , R -> q6 !! This state is a stuck state (no progress)
!! WE came to q3 because we dinged a 0->X
!! or a 1->Y while in q14; so its matching
!! "partner" 0 or 1 must be found to the
!! left. Unfortunately, we are finding an
!! X or a Y. Thus, no "match around the middle"
!! is likely to happen.
q3 : Y ; Y , R -> q7 !! This state is ALSO a stuck state for similar
!! reasons as expressed in the comments
!! associated with q3 : X ; X ...
!!-----------------------------------------------------------------
!! Description of the q3,q5,q11,q9,q3 loop :
q3 : 1 ; Q , R -> q5 !! Upon seeing a 1, change to Q. Then MUST see a
!! matching Y, then change to 3, and go right, and to state q5.
!! We do this because 'Y' represents what
!! was '1' and got marked as midpoint (well,
!! one-past midpoint..).
!!-- What will happen in q5,q11,q9,q3 --
!! So we have to get past this assumed
!! midpoint and choose the next
!! "one past midpoint that has not been seen so far".
!! We enter q11 to then ding a matching
!! 0 to X or 1 to Y, moving left.
!! A blank sends us leftwards, as well.
!! We sweep left till we hit a Q. We MUST see a Q
!! because we entered "this lobe" by dinging a 1->Q.
!! The process repeats from state q3.
q5 : 0;0,R | 1;1,R | 2;2,R | 3;3,R -> q5 !! punt the 0/1/2/3; we need a "Y".
q5 : Y ; 3, R -> q11 !! ah-ha , got a Y. Ding to 3, seek 0/1/.
q11 : 1;Y,L | .;.,L | 0;X,L -> q9 !! phew! got to sweep left now!
q9 : 0;0,L | 1;1,L | 2;2,L | 3;3,L -> q9 !! whee! going left!
q9 : Q ; Q , R -> q3 !! Boiinggg - now gonna go right!
!!-----------------------------------------------------------------
!! Description of the q3,q4,q10,q8,q3 loop :
q3 : 0 ; P , R -> q4 !! This is similar to q3 : 1 ; Q , R -> q5 above
q4 : 0;0,R | 1;1,R | 2;2,R | 3;3,R -> q4 !! punt the 0/1/2/3; we need a "X".
q4 : X ; 2, R -> q10 !! ah-ha , got a X. Ding to 2, seek 0/1/.
q10 : 1;Y,L | .;.,L | 0;X,L -> q8 !! phew! got to sweep left now!
q8 : 0;0,L | 1;1,L | 2;2,L | 3;3,L -> q8 !! whee! going left!
q8 : P ; P , R -> q3 !! Boiinggg - now gonna go right!
!!-----------------------------------------------------------------
q3 : 2;2,R | 3;3,R -> q12 !! Seeing every sign of acceptance!!
!! We are seeing piles of 2 and 3
!! ALSO did not get stuck in q6 or q7
!! That means all the matches went fine
q12 : 2 ; 2 , R | 3 ; 3 , R -> q12 !! Skip over piles of past 2s and 3s
q12 : . ; . , R -> Fq13 !! Yay, acceptance when we hit a blank!
!!---------------------------------------------------------------------------
!! You may use the line below as an empty shell to populate for your purposes
!! Also serves as a syntax reminder for entering DFAs.
!!
!! State : r1 ; w1 , m1 | r2 ; w2 , m2 -> s1 , s2 !! comment
!!
!! .. : .. ; .. , .. | .. ; .. , .. -> .. , .. !! ..
!!---------------------------------------------------------------------------
!!
!! Good commenting and software-engineering methods, good clean indentation,
!! grouping of similar states, columnar alignment, etc etc. are HUGELY
!! important in any programming endeavor -- especially while programming
!! automata. Otherwise, you can easily make a mistake in your automaton
!! code. Besides, you cannot rely upon others to find your mistakes, as
!! they will find your automaton code impossible to read!
!!
!!---------------------------------------------------------------------------
''')
dotObj_tm(ww_ndtm, FuseEdges=True)
explore_tm(ww_ndtm, "0101", 30)
###Output
Allocating 8 tape cells to the LEFT!
Allocating 8 tape cells to the LEFT!
Allocating 8 tape cells to the RIGHT!
Allocating 8 tape cells to the RIGHT!
Allocating 8 tape cells to the LEFT!
Allocating 8 tape cells to the LEFT!
Detailing the halted configs now.
Rejected at ('q6', 9, '........X101', 26)
via ..
->('Iq0', 0, '0101', 30)
->('q14', 0, '0101', 29)
->('q2', 7, '........X101', 28)
->('q3', 8, '........X101', 27)
->('q6', 9, '........X101', 26)
Accepted at ('Fq13', 13, '........PQ23........', 6)
via ..
->('Iq0', 0, '0101', 30)
->('q14', 0, '0101', 29)
->('q14', 1, '0101', 28)
->('q14', 2, '0101', 27)
->('q2', 1, '01X1', 26)
->('q2', 0, '01X1', 25)
->('q2', 7, '........01X1', 24)
->('q3', 8, '........01X1', 23)
->('q4', 9, '........P1X1', 22)
->('q4', 10, '........P1X1', 21)
->('q10', 11, '........P121', 20)
->('q8', 10, '........P12Y', 19)
->('q8', 9, '........P12Y', 18)
->('q8', 8, '........P12Y', 17)
->('q3', 9, '........P12Y', 16)
->('q5', 10, '........PQ2Y', 15)
->('q5', 11, '........PQ2Y', 14)
->('q11', 12, '........PQ23', 13)
->('q9', 11, '........PQ23........', 12)
->('q9', 10, '........PQ23........', 11)
->('q9', 9, '........PQ23........', 10)
->('q3', 10, '........PQ23........', 9)
->('q12', 11, '........PQ23........', 8)
->('q12', 12, '........PQ23........', 7)
->('Fq13', 13, '........PQ23........', 6)
Rejected at ('q14', 4, '0101', 25)
via ..
->('Iq0', 0, '0101', 30)
->('q14', 0, '0101', 29)
->('q14', 1, '0101', 28)
->('q14', 2, '0101', 27)
->('q14', 3, '0101', 26)
->('q14', 4, '0101', 25)
Rejected at ('q4', 11, '........P10Y', 18)
via ..
->('Iq0', 0, '0101', 30)
->('q14', 0, '0101', 29)
->('q14', 1, '0101', 28)
->('q14', 2, '0101', 27)
->('q14', 3, '0101', 26)
->('q2', 2, '010Y', 25)
->('q2', 1, '010Y', 24)
->('q2', 0, '010Y', 23)
->('q2', 7, '........010Y', 22)
->('q3', 8, '........010Y', 21)
->('q4', 9, '........P10Y', 20)
->('q4', 10, '........P10Y', 19)
->('q4', 11, '........P10Y', 18)
Rejected at ('q4', 9, '........PY01', 24)
via ..
->('Iq0', 0, '0101', 30)
->('q14', 0, '0101', 29)
->('q14', 1, '0101', 28)
->('q2', 0, '0Y01', 27)
->('q2', 7, '........0Y01', 26)
->('q3', 8, '........0Y01', 25)
->('q4', 9, '........PY01', 24)
|
DEMO/load_data_tutorial.ipynb | ###Markdown
Dataset Tutorial Let's first load several packages from DeepPurpose
###Code
# if you are using source version, uncomment the next two lines:
#import os
#os.chdir('../')
from DeepPurpose import utils, DTI, dataset
###Output
_____no_output_____
###Markdown
There are mainly three types of input data for DeepPurpose: 1. a target sequence and its name to be repurposed; 2. a drug repurposing library; 3. training drug-target pairs, along with the binding scores. There are two ways to load the data. The first is to use the DeepPurpose.dataset library loader, which is very simple and preprocesses the data for you. The list of supported datasets is listed here: https://github.com/kexinhuang12345/DeepPurpose/blob/master/README.md#data The second way is to read from local files, which should follow our data format, as illustrated below. Here are some examples. First, let's show how to load some target sequences for COVID19.
###Code
target, target_name = dataset.load_SARS_CoV_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
target, target_name = dataset.load_SARS_CoV2_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
###Output
The target is: SGFRKMAFPSGKVEGCMVQVTCGTTTLNGLWLDDVVYCPRHVICTSEDMLNPNYEDLLIRKSNHNFLVQAGNVQLRVIGHSMQNCVLKLKVDTANPKTPKYKFVRIQPGQTFSVLACYNGSPSGVYQCAMRPNFTIKGSFLNGSCGSVGFNIDYDCVSFCYMHHMELPTGVHAGTDLEGNFYGPFVDRQTAQAAGTDTTITVNVLAWLYAAVINGDRWFLNRFTTTLNDFNLVAMKYNYEPLTQDHVDILGPLSAQTGIAVLDMCASLKELLQNGMNGRTILGSALLEDEFTPFDVVRQCSGVTFQ
The target name is: SARS-CoV2 3CL Protease
###Markdown
We also support reading from local txt files. For a target sequence, we assume the file has one line: first the target name, then a space, followed by the target amino acid sequence. RNA_polymerase_SARS_CoV2_target_seq.txt: RNA_polymerase_SARS_CoV2 SADAQS...PHTVLQ
###Code
target, target_name = dataset.read_file_target_sequence('./toy_data/RNA_polymerase_SARS_CoV2_target_seq.txt')
print('The target is: ' + target)
print('The target name is: ' + target_name)
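# added sketch (not run here): a one-line target file in the "name<space>sequence" format
# described above could be written like this; the file name and sequence are hypothetical
# with open('./toy_data/my_target_seq.txt', 'w') as f:
#     f.write('My_Target MSKGEELFTGVVPILVELDGDVNG')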
###Output
The target is: SADAQSFLNRVCGVSAARLTPCGTGTSTDVVYRAFDIYNDKVAGFAKFLKTNCCRFQEKDEDDNLIDSYFVVKRHTFSNYQHEETIYNLLKDCPAVAKHDFFKFRIDGDMVPHISRQRLTKYTMADLVYALRHFDEGNCDTLKEILVTYNCCDDDYFNKKDWYDFVENPDILRVYANLGERVRQALLKTVQFCDAMRNAGIVGVLTLDNQDLNGNWYDFGDFIQTTPGSGVPVVDSYYSLLMPILTLTRALTAESHVDTDLTKPYIKWDLLKYDFTEERLKLFDRYFKYWDQTYHPNCVNCLDDRCILHCANFNVLFSTVFPPTSFGPLVRKIFVDGVPFVVSTGYHFRELGVVHNQDVNLHSSRLSFKELLVYAADPAMHAASGNLLLDKRTTCFSVAALTNNVAFQTVKPGNFNKDFYDFAVSKGFFKEGSSVELKHFFFAQDGNAAISDYDYYRYNLPTMCDIRQLLFVVEVVDKYFDCYDGGCINANQVIVNNLDKSAGFPFNKWGKARLYYDSMSYEDQDALFAYTKRNVIPTITQMNLKYAISAKNRARTVAGVSICSTMTNRQFHQKLLKSIAATRGATVVIGTSKFYGGWHNMLKTVYSDVENPHLMGWDYPKCDRAMPNMLRIMASLVLARKHTTCCSLSHRFYRLANECAQVLSEMVMCGGSLYVKPGGTSSGDATTAYANSVFNICQAVTANVNALLSTDGNKIADKYVRNLQHRLYECLYRNRDVDTDFVNEFYAYLRKHFSMMILSDDAVVCFNSTYASQGLVASIKNFKSVLYYQNNVFMSEAKCWTETDLTKGPHEFCSQHTMLVKQGDDYVYLPYPDPSRILGAGCFVDDIVKTDGTLMIERFVSLAIDAYPLTKHPNQEYADVFHLYLQYIRKLHDELTGHMLDMYSVMLTNDNTSRYWEPEFYEAMYTPHTVLQ
The target name is: RNA_polymerase_SARS_CoV2
###Markdown
Now, let's move on to drug repurposing library. We currently support an antiviral drugs library and the broad repurposing library.
###Code
X_repurpose, Drug_Names, Drug_CIDs = dataset.load_antiviral_drugs()
X_repurpose[:3]
Drug_Names[:3]
Drug_CIDs[:3]
###Output
_____no_output_____
###Markdown
In the above example, the data is downloaded from the cloud and saved into the default folder *'./data'*; you can also specify your own path with *dataset.load_antiviral_drugs(PATH)*. We also provide an option to skip the PubChem CIDs by setting *dataset.load_antiviral_drugs(no_cid = True)*; this needs fewer lines in the one-line mode of DeepPurpose, since in that mode the function expects only X_repurpose and Drug_Names. Similarly for the Broad Repurposing Hub, we can do the same:
###Code
X_drug, Drug_Names, Drug_CIDs = dataset.load_broad_repurposing_hub()
X_drug[:3]
Drug_Names[:3]
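# added sketch of the no_cid option described above; assuming it returns only the SMILES
# list and the drug names, which is what the one-line repurposing mode expects
X_repurpose, Drug_Names = dataset.load_antiviral_drugs(no_cid = True)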
###Output
_____no_output_____
###Markdown
This will first download the file from cloud to local default *'./data'* folder or you can input your data folder. Note that in the one line mode (*oneliner.repurpose()*), if you don't specify any *X_repurpose* library, the method will automatically use the Broad Repurposing Hub data and use the PubChem CIDs as the drug names since some drugs (as you can see from the above examples) are way too long.Now, let's show how you can load your own library using txt file!We assume the txt file consists of the following structure:repurposing_library.txtRufloxacin CN1CCN(CC1)c1c(F)cc2c3c1SCCn3cc(C(O)=O)c2=O\Sparfloxacin C[C@H]1CN(C[C@@H](C)N1)c1c(F)c(N)c2c(c1F)n(cc(C(O)=O)c2=O)C1CC1
###Code
X_drug, Drug_Names = dataset.read_file_repurposing_library('./toy_data/repurposing_data_examples.txt')
X_drug
Drug_Names
###Output
_____no_output_____
###Markdown
Okay, let's now move to the final part: training datasets! There are in general two types of training dataset that we expect: 1. drug-target pairs with a binding score or an interaction 1/0 label; 2. bioassay data where there is only one target and many drugs are screened. For the first type, we provide three data loaders for publicly available drug-target interaction datasets: KIBA, DAVIS, and BindingDB. Let's first talk about DAVIS.
###Code
X_drugs, X_targets, y = dataset.load_process_DAVIS(path = './data', binary = False, convert_to_log = True, threshold = 30)
X_drugs[:2]
X_targets[:1]
y[:2]
###Output
_____no_output_____
###Markdown
The DAVIS data loader has several default parameters. path is the saving path. The binary parameter asks whether you want to convert the binding scores into binary classification labels, since lots of models are aimed at that task. convert_to_log transforms the Kd unit from nM to the p (negative log) scale, which has a more normal distribution and is easier for regression. threshold is the cutoff for binary classification; the default is recommended, but you could also tune your own. Similarly, for KIBA.
###Code
X_drugs, X_targets, y = dataset.load_process_KIBA(path = './data', binary = False, threshold = 9)
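# added sketch (not run here): the switches described above apply to KIBA as well,
# e.g. a binary-label load using the default cutoff of 9
# X_drugs_bin, X_targets_bin, y_bin = dataset.load_process_KIBA(path = './data', binary = True, threshold = 9)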
###Output
Beginning Processing...
Beginning to extract zip file...
Done!
###Markdown
Another large dataset we support is BindingDB. There are three differences from the KIBA and DAVIS data loaders: 1. BindingDB is big (several GBs), so we provide a separate function to download it, *download_BindingDB()*, which returns the downloaded file path for you. You can then set *path = download_BindingDB()* in the *process_BindingDB()* function. 2. BindingDB has four binding values for drug-target pairs: IC50, EC50, Kd, Ki. You should set 'y' to the one you would like for the drug-target pairs. 3. Loading BindingDB from a local file into Pandas is also pretty slow, so instead of passing the path into the function, you can also set df = the BindingDB pandas dataframe object.
###Code
data_path = dataset.download_BindingDB('./data/')
X_drugs, X_targets, y = dataset.process_BindingDB(path = data_path, df = None, y = 'Kd', binary = False, convert_to_log = True, threshold = 30)
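# added sketch (not run here): per the notes above, 'y' could target another affinity
# measure, and an already-loaded dataframe could be passed via df instead of a path
# X_drugs2, X_targets2, y2 = dataset.process_BindingDB(path = data_path, df = None, y = 'IC50', binary = False, convert_to_log = True, threshold = 30)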
print('There are ' + str(len(X_drugs)) + ' drug-target pairs.')
###Output
There are 66444 drug-target pairs.
###Markdown
Now, let's show how to load it from txt file. We assume it has the following format:dti.txtCC1=C...C4)N MKK...LIDL 7.365 \CC1=C...C4)N QQP...EGKH 4.999
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_drug_target_pairs('./toy_data/dti.txt')
X_drugs
###Output
_____no_output_____
###Markdown
We are almost there! Now, in the end, let's look at bioassay data. We only provide the AID1706 bioassay loader for now, but please check the source code since it is easy to produce another one. There are several things to look at: 1. We have a new balanced parameter. Since bioassay data are usually highly skewed (i.e. only a few compounds are hits and most are not), we can make the data slightly more balanced for better training. 2. The ratio of balancing can be tuned with the oversample_num parameter, which states the ratio of unbalanced to balanced data points.
###Code
X_drugs, X_targets, y = dataset.load_AID1706_SARS_CoV_3CL(path = './data', binary = True, threshold = 15, balanced = True, oversample_num = 30, seed = 1)
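# added note: balanced = True oversamples the scarce hit class as described above, and
# oversample_num sets the unbalanced:balanced ratio; seed is presumably there to make
# the sampling reproducible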
###Output
Beginning Processing...
###Markdown
In the end, we show how to load customized bioassay training data. We assume the following format:AID1706.txtSGFKKLVSP...GVRLQ \CCOC1...C=N4 0 \CCCCO...=CS2 0 \COC1=...23)F 0 \C1=CC...)CN 1 \CC(=O...3.Cl 1
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_bioassay('./toy_data/AID1706.txt')
X_drugs[:2]
###Output
_____no_output_____
###Markdown
Dataset Tutorial Let's first load several packages from DeepPurpose
###Code
import os
os.chdir('../')
from DeepPurpose import utils, models, dataset
###Output
_____no_output_____
###Markdown
There are mainly three types of input data for DeepPurpose: 1. a target sequence and its name to be repurposed; 2. a drug repurposing library; 3. training drug-target pairs, along with the binding scores. There are two ways to load the data. The first is to use the DeepPurpose.dataset library loader, which is very simple and preprocesses the data for you. The list of supported datasets is listed here: https://github.com/kexinhuang12345/DeepPurpose/blob/master/README.md#data The second way is to read from local files, which should follow our data format, as illustrated below. Here are some examples. First, let's show how to load some target sequences for COVID19.
###Code
target, target_name = dataset.load_SARS_CoV_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
target, target_name = dataset.load_SARS_CoV2_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
###Output
The target is: SGFRKMAFPSGKVEGCMVQVTCGTTTLNGLWLDDVVYCPRHVICTSEDMLNPNYEDLLIRKSNHNFLVQAGNVQLRVIGHSMQNCVLKLKVDTANPKTPKYKFVRIQPGQTFSVLACYNGSPSGVYQCAMRPNFTIKGSFLNGSCGSVGFNIDYDCVSFCYMHHMELPTGVHAGTDLEGNFYGPFVDRQTAQAAGTDTTITVNVLAWLYAAVINGDRWFLNRFTTTLNDFNLVAMKYNYEPLTQDHVDILGPLSAQTGIAVLDMCASLKELLQNGMNGRTILGSALLEDEFTPFDVVRQCSGVTFQ
The target name is: SARS-CoV2 3CL Protease
###Markdown
We also support reading from local txt files. For a target sequence, we assume the file has a single line: the target name, then a space, followed by the target amino acid sequence.RNA_polymerase_SARS_CoV2_target_seq.txt:RNA_polymerase_SARS_CoV2 SADAQS...PHTVLQ
###Code
target, target_name = dataset.read_file_target_sequence('./toy_data/RNA_polymerase_SARS_CoV2_target_seq.txt')
print('The target is: ' + target)
print('The target name is: ' + target_name)
###Output
The target is: SADAQSFLNRVCGVSAARLTPCGTGTSTDVVYRAFDIYNDKVAGFAKFLKTNCCRFQEKDEDDNLIDSYFVVKRHTFSNYQHEETIYNLLKDCPAVAKHDFFKFRIDGDMVPHISRQRLTKYTMADLVYALRHFDEGNCDTLKEILVTYNCCDDDYFNKKDWYDFVENPDILRVYANLGERVRQALLKTVQFCDAMRNAGIVGVLTLDNQDLNGNWYDFGDFIQTTPGSGVPVVDSYYSLLMPILTLTRALTAESHVDTDLTKPYIKWDLLKYDFTEERLKLFDRYFKYWDQTYHPNCVNCLDDRCILHCANFNVLFSTVFPPTSFGPLVRKIFVDGVPFVVSTGYHFRELGVVHNQDVNLHSSRLSFKELLVYAADPAMHAASGNLLLDKRTTCFSVAALTNNVAFQTVKPGNFNKDFYDFAVSKGFFKEGSSVELKHFFFAQDGNAAISDYDYYRYNLPTMCDIRQLLFVVEVVDKYFDCYDGGCINANQVIVNNLDKSAGFPFNKWGKARLYYDSMSYEDQDALFAYTKRNVIPTITQMNLKYAISAKNRARTVAGVSICSTMTNRQFHQKLLKSIAATRGATVVIGTSKFYGGWHNMLKTVYSDVENPHLMGWDYPKCDRAMPNMLRIMASLVLARKHTTCCSLSHRFYRLANECAQVLSEMVMCGGSLYVKPGGTSSGDATTAYANSVFNICQAVTANVNALLSTDGNKIADKYVRNLQHRLYECLYRNRDVDTDFVNEFYAYLRKHFSMMILSDDAVVCFNSTYASQGLVASIKNFKSVLYYQNNVFMSEAKCWTETDLTKGPHEFCSQHTMLVKQGDDYVYLPYPDPSRILGAGCFVDDIVKTDGTLMIERFVSLAIDAYPLTKHPNQEYADVFHLYLQYIRKLHDELTGHMLDMYSVMLTNDNTSRYWEPEFYEAMYTPHTVLQ
The target name is: RNA_polymerase_SARS_CoV2
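###Markdown
Since the file is a single "name<space>sequence" line, a hand-rolled parse (shown only to make the assumed format explicit) is just a split on the first whitespace:
###Code
with open('./toy_data/RNA_polymerase_SARS_CoV2_target_seq.txt') as f:
    name, sequence = f.readline().split(maxsplit=1)
sequence = sequence.strip()
###Output
_____no_output_____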
###Markdown
Now, let's move on to drug repurposing libraries. We currently support an antiviral drugs library and the Broad Repurposing Hub library.
###Code
X_repurpose, Drug_Names, Drug_CIDs = dataset.load_antiviral_drugs()
X_repurpose[:3]
Drug_Names[:3]
Drug_CIDs[:3]
###Output
_____no_output_____
###Markdown
In the above example, the data is downloaded from the cloud and saved into the default folder *'./data'*; you can also specify your own path with *dataset.load_antiviral_drugs(PATH)*. There is also an option to skip the PubChem CIDs by setting *dataset.load_antiviral_drugs(no_cid = True)*; this is convenient for the one-line mode of DeepPurpose, since in that mode the function expects only X_repurpose and Drug_Names. We can do the same for the Broad Repurposing Hub:
###Code
X_drug, Drug_Names, Drug_CIDs = dataset.load_broad_repurposing_hub()
X_drug[:3]
Drug_Names[:3]
###Output
_____no_output_____
###Markdown
This will first download the file from the cloud to the local default *'./data'* folder, or you can specify your own data folder. Note that in the one-line mode (*oneliner.repurpose()*), if you don't specify any *X_repurpose* library, the method will automatically use the Broad Repurposing Hub data and use the PubChem CIDs as the drug names, since some drug names (as you can see from the above examples) are way too long.Now, let's show how you can load your own library from a txt file!We assume the txt file has the following structure (drug name, a space, then the SMILES string):repurposing_library.txtRufloxacin CN1CCN(CC1)c1c(F)cc2c3c1SCCn3cc(C(O)=O)c2=O\Sparfloxacin C[C@H]1CN(C[C@@H](C)N1)c1c(F)c(N)c2c(c1F)n(cc(C(O)=O)c2=O)C1CC1
###Code
X_drug, Drug_Names = dataset.read_file_repurposing_library('./toy_data/repurposing_data_examples.txt')
X_drug
Drug_Names
###Output
_____no_output_____
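###Markdown
Equivalently, a file in this two-column layout could be read directly with pandas (a sketch of the assumed format, not the DeepPurpose implementation):
###Code
import pandas as pd
library = pd.read_csv('./toy_data/repurposing_data_examples.txt', sep=r'\s+', header=None, names=['drug_name', 'smiles'])
###Output
_____no_output_____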
###Markdown
Okay, let's now move on to training datasets! In general, we expect two types of training dataset.1. Drug-target pairs with a binding score or a 1/0 interaction label.2. Bioassay data, where there is a single target and many drugs are screened against it.For the first type, we provide data loaders for three publicly available drug-target interaction datasets: KIBA, DAVIS, and BindingDB. Let's first talk about DAVIS.
###Code
X_drugs, X_targets, y = dataset.load_process_DAVIS(path = './data', binary = False, convert_to_log = True, threshold = 30)
X_drugs[:2]
X_targets[:1]
y[:2]
###Output
_____no_output_____
###Markdown
The DAVIS data loader has several default parameters. path is the saving path. The binary parameter controls whether the binding scores are converted into binary labels, since many of the models are aimed at classification. convert_to_log transforms the Kd values from nM to the p scale (e.g. pKd), which is closer to normally distributed and therefore easier to regress on. threshold is the cutoff used for binary classification; the default is recommended, but you can also tune your own.Similarly, for KIBA.
###Code
X_drugs, X_targets, y = dataset.load_process_KIBA(path = './data', binary = False, threshold = 9)
###Output
Beginning Processing...
Beginning to extract zip file...
Done!
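###Markdown
As an aside, the nM-to-p conversion mentioned above is, under the usual convention (assumed here), p = -log10(value in molar) = 9 - log10(value in nM); a quick illustration:
###Code
import numpy as np
kd_nM = np.array([10.0, 100.0, 10000.0])
p_scale = 9.0 - np.log10(kd_nM)  # -> [8., 7., 5.]
###Output
_____no_output_____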
###Markdown
Another large dataset we support is BindingDB. There are three differences from the KIBA and DAVIS data loaders:1. BindingDB is big (several GBs), so we provide a separate function to download it, *download_BindingDB()*, which returns the downloaded file path. You can then pass *path = download_BindingDB()* to the *process_BindingDB()* function.2. BindingDB has four binding values for drug-target pairs: IC50, EC50, Kd, and Ki. Set 'y' to the one you would like to use for your drug-target pairs.3. Loading BindingDB from a local file into Pandas is also pretty slow, so instead of passing a path into the function you can set df to the BindingDB pandas dataframe object.
###Code
data_path = dataset.download_BindingDB('./data/')
X_drugs, X_targets, y = dataset.process_BindingDB(path = data_path, df = None, y = 'Kd', binary = False, convert_to_log = True, threshold = 30)
print('There are ' + str(len(X_drugs)) + ' drug-target pairs.')
###Output
There are 66444 drug-target pairs.
###Markdown
Now, let's show how to load a training dataset from a txt file. We assume it has the following format (a SMILES string, the target amino acid sequence, and the binding score, separated by whitespace):dti.txtCC1=C...C4)N MKK...LIDL 7.365 \CC1=C...C4)N QQP...EGKH 4.999
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_drug_target_pairs('./toy_data/dti.txt')
X_drugs
###Output
_____no_output_____
###Markdown
We are almost there! Finally, let's look at bioassay data. We only provide the AID1706 bioassay loader for now, but please check the source code since it is easy to write another one. There are a few things to note.1. There is a new balanced parameter. Since bioassay data are usually highly skewed (i.e. only a few compounds are hits and most are not), we can make the data slightly more balanced for better training. 2. The degree of balancing can be tuned with the oversample_num parameter, which sets the ratio of unbalanced to balanced data points.
###Code
X_drugs, X_targets, y = dataset.load_AID1706_SARS_CoV_3CL(path = './data', binary = True, threshold = 15, balanced = True, oversample_num = 30, seed = 1)
###Output
Beginning Processing...
###Markdown
Finally, we show how to load customized bioassay training data. We assume the following format (the first line is the target amino acid sequence; each subsequent line is a SMILES string followed by a 0/1 label):AID1706.txtSGFKKLVSP...GVRLQ \CCOC1...C=N4 0 \CCCCO...=CS2 0 \COC1=...23)F 0 \C1=CC...)CN 1 \CC(=O...3.Cl 1
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_bioassay('./toy_data/AID1706.txt')
X_drugs[:2]
###Output
_____no_output_____
###Markdown
Dataset TutorialLet's first load several packages from DeepPurpose
###Code
import os
os.chdir('../')
from DeepPurpose import utils, dataset, CompoundPred
from DeepPurpose import DTI as models
###Output
_____no_output_____
###Markdown
There are mainly three types of input data for DeepPurpose.1. A target sequence and its name, to be repurposed against.2. A drug repurposing library.3. Training drug-target pairs, along with their binding scores.There are two ways to load the data. The first is to use the DeepPurpose.dataset library loaders, which are very simple and preprocess the data for you. The list of supported datasets is here: https://github.com/kexinhuang12345/DeepPurpose/blob/master/README.md#data The second way is to read from local files, which should follow our data format, as illustrated below. Here are some examples. First, let's show how to load some target sequences for COVID-19.
###Code
target, target_name = dataset.load_SARS_CoV_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
target, target_name = dataset.load_SARS_CoV2_Protease_3CL()
print('The target is: ' + target)
print('The target name is: ' + target_name)
###Output
The target is: SGFRKMAFPSGKVEGCMVQVTCGTTTLNGLWLDDVVYCPRHVICTSEDMLNPNYEDLLIRKSNHNFLVQAGNVQLRVIGHSMQNCVLKLKVDTANPKTPKYKFVRIQPGQTFSVLACYNGSPSGVYQCAMRPNFTIKGSFLNGSCGSVGFNIDYDCVSFCYMHHMELPTGVHAGTDLEGNFYGPFVDRQTAQAAGTDTTITVNVLAWLYAAVINGDRWFLNRFTTTLNDFNLVAMKYNYEPLTQDHVDILGPLSAQTGIAVLDMCASLKELLQNGMNGRTILGSALLEDEFTPFDVVRQCSGVTFQ
The target name is: SARS-CoV2 3CL Protease
###Markdown
We also support reading from local txt files. For a target sequence, we assume the file has a single line: the target name, then a space, followed by the target amino acid sequence.RNA_polymerase_SARS_CoV2_target_seq.txt:RNA_polymerase_SARS_CoV2 SADAQS...PHTVLQ
###Code
pwd()
#os.chdir('DeepPurpose')
target, target_name = dataset.read_file_target_sequence('./toy_data/RNA_polymerase_SARS_CoV2_target_seq.txt')
print('The target is: ' + target)
print('The target name is: ' + target_name)
###Output
The target is: SADAQSFLNRVCGVSAARLTPCGTGTSTDVVYRAFDIYNDKVAGFAKFLKTNCCRFQEKDEDDNLIDSYFVVKRHTFSNYQHEETIYNLLKDCPAVAKHDFFKFRIDGDMVPHISRQRLTKYTMADLVYALRHFDEGNCDTLKEILVTYNCCDDDYFNKKDWYDFVENPDILRVYANLGERVRQALLKTVQFCDAMRNAGIVGVLTLDNQDLNGNWYDFGDFIQTTPGSGVPVVDSYYSLLMPILTLTRALTAESHVDTDLTKPYIKWDLLKYDFTEERLKLFDRYFKYWDQTYHPNCVNCLDDRCILHCANFNVLFSTVFPPTSFGPLVRKIFVDGVPFVVSTGYHFRELGVVHNQDVNLHSSRLSFKELLVYAADPAMHAASGNLLLDKRTTCFSVAALTNNVAFQTVKPGNFNKDFYDFAVSKGFFKEGSSVELKHFFFAQDGNAAISDYDYYRYNLPTMCDIRQLLFVVEVVDKYFDCYDGGCINANQVIVNNLDKSAGFPFNKWGKARLYYDSMSYEDQDALFAYTKRNVIPTITQMNLKYAISAKNRARTVAGVSICSTMTNRQFHQKLLKSIAATRGATVVIGTSKFYGGWHNMLKTVYSDVENPHLMGWDYPKCDRAMPNMLRIMASLVLARKHTTCCSLSHRFYRLANECAQVLSEMVMCGGSLYVKPGGTSSGDATTAYANSVFNICQAVTANVNALLSTDGNKIADKYVRNLQHRLYECLYRNRDVDTDFVNEFYAYLRKHFSMMILSDDAVVCFNSTYASQGLVASIKNFKSVLYYQNNVFMSEAKCWTETDLTKGPHEFCSQHTMLVKQGDDYVYLPYPDPSRILGAGCFVDDIVKTDGTLMIERFVSLAIDAYPLTKHPNQEYADVFHLYLQYIRKLHDELTGHMLDMYSVMLTNDNTSRYWEPEFYEAMYTPHTVLQ
The target name is: RNA_polymerase_SARS_CoV2
###Markdown
Now, let's move on to drug repurposing libraries. We currently support an antiviral drugs library and the Broad Repurposing Hub library.
###Code
X_repurpose, Drug_Names, Drug_CIDs = dataset.load_antiviral_drugs()
X_repurpose[:3]
Drug_Names[:3]
Drug_CIDs[:3]
###Output
_____no_output_____
###Markdown
In the above example, the data is downloaded from the cloud and saved into the default folder *'./data'*; you can also specify your own path with *dataset.load_antiviral_drugs(PATH)*. There is also an option to skip the PubChem CIDs by setting *dataset.load_antiviral_drugs(no_cid = True)*; this is convenient for the one-line mode of DeepPurpose, since in that mode the function expects only X_repurpose and Drug_Names. We can do the same for the Broad Repurposing Hub:
###Code
X_drug, Drug_Names, Drug_CIDs = dataset.load_broad_repurposing_hub()
X_drug[:3]
Drug_Names[:3]
###Output
_____no_output_____
###Markdown
This will first download the file from the cloud to the local default *'./data'* folder, or you can specify your own data folder. Note that in the one-line mode (*oneliner.repurpose()*), if you don't specify any *X_repurpose* library, the method will automatically use the Broad Repurposing Hub data and use the PubChem CIDs as the drug names, since some drug names (as you can see from the above examples) are way too long.Now, let's show how you can load your own library from a txt file!We assume the txt file has the following structure (drug name, a space, then the SMILES string):repurposing_library.txtRufloxacin CN1CCN(CC1)c1c(F)cc2c3c1SCCn3cc(C(O)=O)c2=O\Sparfloxacin C[C@H]1CN(C[C@@H](C)N1)c1c(F)c(N)c2c(c1F)n(cc(C(O)=O)c2=O)C1CC1
###Code
X_drug, Drug_Names = dataset.read_file_repurposing_library('./toy_data/repurposing_data_examples.txt')
X_drug
Drug_Names
###Output
_____no_output_____
###Markdown
Okay, let's now move on to training datasets! In general, we expect two types of training dataset.1. Drug-target pairs with a binding score or a 1/0 interaction label.2. Bioassay data, where there is a single target and many drugs are screened against it.For the first type, we provide data loaders for three publicly available drug-target interaction datasets: KIBA, DAVIS, and BindingDB. Let's first talk about DAVIS.
###Code
X_drugs, X_targets, y = dataset.load_process_DAVIS(path = './data', binary = False, convert_to_log = True, threshold = 30)
X_drugs[:2]
X_targets[:1]
y[:2]
###Output
_____no_output_____
###Markdown
The DAVIS data loader has several default parameters. path is the saving path. The binary parameter controls whether the binding scores are converted into binary labels, since many of the models are aimed at classification. convert_to_log transforms the Kd values from nM to the p scale (e.g. pKd), which is closer to normally distributed and therefore easier to regress on. threshold is the cutoff used for binary classification; the default is recommended, but you can also tune your own.Similarly, for KIBA.
###Code
X_drugs, X_targets, y = dataset.load_process_KIBA(path = './data', binary = False, threshold = 9)
###Output
Beginning Processing...
Beginning to extract zip file...
Done!
###Markdown
Another large dataset we support is BindingDB. There are three differences from the KIBA and DAVIS data loaders:1. BindingDB is big (several GBs), so we provide a separate function to download it, *download_BindingDB()*, which returns the downloaded file path. You can then pass *path = download_BindingDB()* to the *process_BindingDB()* function.2. BindingDB has four binding values for drug-target pairs: IC50, EC50, Kd, and Ki. Set 'y' to the one you would like to use for your drug-target pairs.3. Loading BindingDB from a local file into Pandas is also pretty slow, so instead of passing a path into the function you can set df to the BindingDB pandas dataframe object.
###Code
data_path = dataset.download_BindingDB('./data/')
print(data_path)
X_drugs, X_targets, y = dataset.process_BindingDB(path = data_path, df = None, y = 'Kd', binary = False, convert_to_log = True, threshold = 30)
print('There are ' + str(len(X_drugs)) + ' drug-target pairs.')
###Output
There are 66444 drug-target pairs.
###Markdown
Now, let's show how to load a training dataset from a txt file. We assume it has the following format (a SMILES string, the target amino acid sequence, and the binding score, separated by whitespace):dti.txtCC1=C...C4)N MKK...LIDL 7.365 \CC1=C...C4)N QQP...EGKH 4.999
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_drug_target_pairs('./toy_data/dti.txt')
X_drugs
###Output
_____no_output_____
###Markdown
We are almost there! Finally, let's look at bioassay data. We only provide the AID1706 bioassay loader for now, but please check the source code since it is easy to write another one. There are a few things to note.1. There is a new balanced parameter. Since bioassay data are usually highly skewed (i.e. only a few compounds are hits and most are not), we can make the data slightly more balanced for better training. 2. The degree of balancing can be tuned with the oversample_num parameter, which sets the ratio of unbalanced to balanced data points.
###Code
X_drugs, X_targets, y = dataset.load_AID1706_SARS_CoV_3CL(path = './data', binary = True, threshold = 15, balanced = True, oversample_num = 30, seed = 1)
###Output
Beginning Processing...
###Markdown
Finally, we show how to load customized bioassay training data. We assume the following format (the first line is the target amino acid sequence; each subsequent line is a SMILES string followed by a 0/1 label):AID1706.txtSGFKKLVSP...GVRLQ \CCOC1...C=N4 0 \CCCCO...=CS2 0 \COC1=...23)F 0 \C1=CC...)CN 1 \CC(=O...3.Cl 1
###Code
X_drugs, X_targets, y = dataset.read_file_training_dataset_bioassay('./toy_data/AID1706.txt')
X_drugs[:2]
###Output
_____no_output_____ |
src/Capital Bikeshare Final Project.ipynb | ###Markdown
Exploratory Data Analysis
###Code
# EDA check: work on a copy of the dataframe so the original df stays untouched
EDA_DF= df.copy()
EDA_DF.head(10)
# Exploratory plot to view the data for each col vs count
EDA_DF.plot.scatter(x= 'year', y='count')
EDA_DF.plot.scatter(x= 'weather', y='count')
EDA_DF.plot.scatter(x= 'temp', y='count')
EDA_DF.plot.scatter(x= 'humidity', y='count')
EDA_DF.plot.scatter(x= 'windspeed', y='count')
EDA_DF['count'].hist()
EDA_DF['season'].hist()
EDA_DF['holiday'].hist()
EDA_DF['workingday'].hist()
EDA_DF['weather'].hist()
EDA_DF['temp'].hist()
EDA_DF['atemp'].hist()
EDA_DF['humidity'].hist()
EDA_DF['windspeed'].hist()
sns.barplot(data=EDA_DF, x='hour', y='count', hue='season')
sns.set(rc = {'figure.figsize':(16,8)})
plt.title('Frequency of Bike hires')
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
ohc = OneHotEncoder(sparse=False, handle_unknown='ignore')
ohc.fit(X_train[['month','day','hour','minute']])
onehot_mix_set = ohc.transform(X_train[['month','day','hour','minute']])
onehot_mix_set = pd.DataFrame (onehot_mix_set)
onehot_mix_set.head()
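# Bin the continuous temperature into 5 quantile (equal-count) bins and one-hot encode the bin membership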
k = KBinsDiscretizer(n_bins=5, encode='onehot', strategy='quantile')
k.fit(X_train[['temp']])
bins = k.transform(X_train[['temp']])
bins = pd.DataFrame(bins.todense())
bins.head()
X_train.reset_index(inplace=True)
unmodified = X_train[['weather', 'temp', 'humidity', 'windspeed','year','month','day','hour','minute']]
unmodified.head() # the feature-engineered columns are dropped below
unmodified2 = unmodified.copy() #unmodified.drop('temp', axis=1, inplace=True)
unmodified2.columns
# drop the columns that are replaced by the engineered (one-hot / binned) features
unmodified2.drop(['temp', 'month', 'day', 'hour', 'minute'], axis=1, inplace=True)
unmodified2
onehot_mix_set
bins
onehot_mix_set.shape, bins.shape, unmodified2.shape
merge_first = pd.merge(left=unmodified2, right=onehot_mix_set, how='outer', left_index=True, right_index=True)
merge_second = pd.merge(left=merge_first, right=bins, how='outer', left_index=True, right_index=True)
merge_second
X_train_fe = merge_second
X_train_fe.shape
scaler = MinMaxScaler()
scaler.fit(X_train_fe)
X_train_scaled = scaler.transform(X_train_fe)
pd.DataFrame(X_train_scaled).head(10)
m = LinearRegression()
m.fit(X_train_scaled, y_train)
train_accuracy = m.score(X_train_scaled, y_train)
train_accuracy
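# note: for a regressor, .score() returns the R^2 coefficient of determination (called "accuracy" in this notebook)
# below, the test split is transformed with the encoder, binner and scaler already fitted on the training split (no re-fitting, to avoid leakage)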
test_ohc = ohc.transform(X_test[['month','day','hour','minute']])
test_ohc = pd.DataFrame(test_ohc)
test_ohc.shape,
test_bins = k.transform(X_test[['temp']])
test_bins = pd.DataFrame(test_bins.todense())
test_bins.shape
X_test.reset_index(inplace=True)
unmodified_test = X_test[['weather', 'temp', 'humidity', 'windspeed','year','month','day','hour','minute']]
unmodified_test.head()
unmodified_test_2 = unmodified_test.copy()
unmodified_test_2.columns
# drop the same columns as for the training split
unmodified_test_2.drop(['temp', 'month', 'day', 'hour', 'minute'], axis=1, inplace=True)
test_ohc.shape, test_bins.shape, unmodified_test_2.shape
merge_first_test = pd.merge(left=unmodified_test_2, right=test_ohc, how='outer', left_index=True, right_index=True)
merge_second_test = pd.merge(left=merge_first_test, right=test_bins, how='outer', left_index=True, right_index=True)
merge_second_test
X_test_fe = merge_second_test
X_test_fe.shape
X_test_scaled = scaler.transform(X_test_fe)
X_test_scaled.shape
X_test_scaled.shape
pd.DataFrame(X_test_scaled).head(10)
train_accuracy
test_accuracy = m.score(X_test_scaled, y_test)
test_accuracy
###Output
_____no_output_____
###Markdown
Random Forest Regressor
###Code
rf = RandomForestRegressor(max_depth=5, random_state=0)
rf.fit(X_train_scaled, y_train)
X_train_scaled_df = pd.DataFrame(X_train_scaled)
pd.DataFrame({'importance': rf.feature_importances_, 'feature': X_train_scaled_df.columns}).\
sort_values('importance', ascending=False)
###Output
_____no_output_____ |
8-Labs/Lab21/src/Lab21-Old.ipynb | ###Markdown
Laboratory 13 Probability Modeling Full name: R: HEX: Title of the notebook Date: Estimate the magnitude of the annual peak flow at Spring Ck near Spring, TX.The file `08068500.pkf` is an actual WATSTORE formatted file for a USGS gage at Spring Creek, Texas. The first few lines of the file look like: Z08068500 USGS H08068500 3006370952610004848339SW12040102409 409 72.6 N08068500 Spring Ck nr Spring, TX Y08068500 308068500 19290530 483007 34.30 1879 308068500 19390603 838 13.75 308068500 19400612 3420 21.42 308068500 19401125 42700 33.60 308068500 19420409 14200 27.78 308068500 19430730 8000 25.09 308068500 19440319 5260 23.15 308068500 19450830 31100 32.79 308068500 19460521 12200 27.97 The first column contains agency codes that identify the station; after the fourth row, the second column is a date in YYYYMMDD format, the third column is the discharge in CFS, and the fourth and fifth columns are not relevant for this laboratory exercise. The file was downloaded from https://nwis.waterdata.usgs.gov/tx/nwis/peak?site_no=08068500&agency_cd=USGS&format=hn2 In the original file there are a couple of codes that were manually removed:- 19290530 483007; the trailing 7 is a code identifying a break in the series (non-sequential) - 20170828 784009; the trailing 9 identifies the historical peak. The laboratory task is to fit the data models to this data, decide the best model from a visual perspective, and report from that data model the magnitudes of peak flow associated with the probabilities below (i.e. populate the table)|Exceedence Probability|Flow Value|Remarks||:---|:---|:---||25% |????| 75% chance of greater value| |50% |????| 50% chance of greater value| |75% |????| 25% chance of greater value| |90% |????| 10% chance of greater value||99% |????| 1% chance of greater value (in flood statistics, this is the 1 in 100-yr chance event)||99.8%|????| 0.2% chance of greater value (in flood statistics, this is the 1 in 500-yr chance event)||99.9%|????| 0.1% chance of greater value (in flood statistics, this is the 1 in 1000-yr chance event)|The first step is to read the file, skipping the first part, then build a dataframe:
###Code
# Read the data file
amatrix = [] # null list to store matrix reads
rowNumA = 0
matrix1=[]
col0=[]
col1=[]
col2=[]
with open('08068500.pkf','r') as afile:
lines_after_4 = afile.readlines()[4:]
afile.close() # Disconnect the file
howmanyrows = len(lines_after_4)
for i in range(howmanyrows):
matrix1.append(lines_after_4[i].strip().split())
for i in range(howmanyrows):
col0.append(matrix1[i][0])
col1.append(matrix1[i][1])
col2.append(matrix1[i][2])
# col2 is date, col3 is peak flow
#now build a datafranem
import pandas
df = pandas.DataFrame(col0)
df['date']= col1
df['flow']= col2
df.head()
###Output
_____no_output_____
###Markdown
Now explore the dataframe by plotting the annual peaks versus date.
###Code
# Plot here
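# A sketch of one way to do it (column names follow the dataframe built above):
df['flow'] = pandas.to_numeric(df['flow'])                    # flows were read in as strings
df['date'] = pandas.to_datetime(df['date'], format='%Y%m%d')  # dates are YYYYMMDD strings
df.plot(x='date', y='flow', style='.', legend=False)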
###Output
_____no_output_____
###Markdown
From here on you can proceed using the lecture notebook as a go-by, although you should use functions as much as practical to keep your work concise
###Code
# Descriptive Statistics
# Weibull Plotting Position Function
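# a sketch, using the standard Weibull plotting position p_i = i/(n+1) for ranked data:
def weibull_pp(sample):
    """Return Weibull plotting positions for a 1-D sample (assumed sorted ascending)."""
    n = len(sample)
    return [(i + 1) / (n + 1) for i in range(n)]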
# Normal Quantile Function
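# a sketch using scipy's inverse normal CDF (one option; the lecture notebook may use an approximation instead):
from scipy.stats import norm
def normal_quantile(p, mu, sigma):
    """Return the quantile of a Normal(mu, sigma) at non-exceedance probability p."""
    return norm.ppf(p, loc=mu, scale=sigma)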
# Fitting Data to Normal Data Model
###Output
_____no_output_____
###Markdown
Normal Distribution Data Model|Exceedence Probability|Flow Value|Remarks||:---|:---|:---||25% |????| 75% chance of greater value| |50% |????| 50% chance of greater value| |75% |????| 25% chance of greater value| |90% |????| 10% chance of greater value||99% |????| 1% chance of greater value (in flood statistics, this is the 1 in 100-yr chance event)||99.8%|????| 0.2% chance of greater value (in flood statistics, this is the 1 in 500-yr chance event)||99.9%|????| 0.1% chance of greater value (in flood statistics, this is the 1 in 1000-yr chance event)|
###Code
# Log-Normal Quantile Function
# Fitting Data to Normal Data Model
###Output
_____no_output_____
###Markdown
Log-Normal Distribution Data Model|Exceedence Probability|Flow Value|Remarks||:---|:---|:---||25% |????| 75% chance of greater value| |50% |????| 50% chance of greater value| |75% |????| 25% chance of greater value| |90% |????| 10% chance of greater value||99% |????| 1% chance of greater value (in flood statistics, this is the 1 in 100-yr chance event)||99.8%|????| 0.2% chance of greater value (in flood statistics, this is the 1 in 500-yr chance event)||99.9%|????| 0.1% chance of greater value (in flood statistics, this is the 1 in 1000-yr chance event)|
###Code
# Gumbell EV1 Quantile Function
# Fitting Data to Gumbell EV1 Data Model
###Output
_____no_output_____
###Markdown
Gumbell Double Exponential (EV1) Distribution Data Model|Exceedence Probability|Flow Value|Remarks||:---|:---|:---||25% |????| 75% chance of greater value| |50% |????| 50% chance of greater value| |75% |????| 25% chance of greater value| |90% |????| 10% chance of greater value||99% |????| 1% chance of greater value (in flood statistics, this is the 1 in 100-yr chance event)||99.8%|????| 0.2% chance of greater value (in flood statistics, this is the 1 in 500-yr chance event)||99.9%|????| 0.1% chance of greater value (in flood statistics, this is the 1 in 1000-yr chance event)|
###Code
# Gamma (Pearson Type III) Quantile Function
# Fitting Data to Pearson (Gamma) III Data Model
# This is new, in lecture the fit was to log-Pearson, same procedure, but not log transformed
###Output
_____no_output_____
###Markdown
Pearson III Distribution Data Model |Exceedence Probability|Flow Value|Remarks||:---|:---|:---||25% |????| 75% chance of greater value| |50% |????| 50% chance of greater value| |75% |????| 25% chance of greater value| |90% |????| 10% chance of greater value||99% |????| 1% chance of greater value (in flood statistics, this is the 1 in 100-yr chance event)||99.8%|????| 0.2% chance of greater value (in flood statistics, this is the 1 in 500-yr chance event)||99.9%|????| 0.1% chance of greater value (in flood statistics, this is the 1 in 1000-yr chance event)|
###Code
# Fitting Data to Log-Pearson (Log-Gamma) III Data Model
###Output
_____no_output_____ |
code/notebooks/Injection_grid.ipynb | ###Markdown
Figure out the magnitude limit of the CPM method.
###Code
import sys, os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import fits
from tess_stars2px import tess_stars2px_function_entry
import eleanor
import tess_rotation as tr
import starspot as ss
import starry
from contextlib import contextmanager
import warnings
warnings.filterwarnings('ignore')
@contextmanager
def suppress_stdout():
with open(os.devnull, "w") as devnull:
old_stdout = sys.stdout
sys.stdout = devnull
try:
yield
finally:
sys.stdout = old_stdout
###Output
_____no_output_____
###Markdown
Define functions for injection and recovery.
###Code
# Calculate counts for a star of a certain magnitude
def mag_to_counts(mag, seconds):
"""
    Convert stellar magnitude to electron counts.
    "15,000 e−/s for a star of m = 10: thus, a star of m = 5 will create 3 × 10^6 electrons in a two-second exposure"
    A flux ratio maps to a magnitude difference via F2/F1 approx 2.51**delta_m,
    i.e. delta_m = log(F2/F1) / log(2.51), so a 10x change in flux is ~2.5 mag:
    - a change of 1 mag   -> ~2.5x in brightness
    - a change of 2.5 mag -> ~10x in brightness
    - a change of 5 mag   -> ~100x in brightness
"""
m = 10
e = 15000
delta_m = m - mag
factor = 2.51**delta_m
counts_per_sec = e * factor
return counts_per_sec * seconds
###Output
_____no_output_____
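###Markdown
As a quick sanity check of the scaling above: a 5-magnitude difference should correspond to roughly a factor of 100 in counts.
###Code
reference = mag_to_counts(10, 2)         # 15,000 e-/s * 2 s = 30,000 electrons
ratio = mag_to_counts(5, 2) / reference  # ~2.51**5, i.e. close to 100
###Output
_____no_output_____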
###Markdown
Instantiate the injection class.
###Code
ticid = 765143490
tesscut_path = "/Users/rangus/projects/TESS-rotation/data/TESScut/"
star = tr.InjectStar(ticid, tesscut_path, upper_sector_limit=14)
magnitude = 15
baseline = mag_to_counts(magnitude, 2)
print(baseline, baseline*.01)
period, amplitude = 50, baseline*.01 # 1% amplitude.
star.generate_signal(period, amplitude, baseline)
plt.plot(star.time_array, star.signal)
print((max(star.signal) - min(star.signal))/baseline*100, "%")
star.generate_signal(period, amplitude, baseline)
plt.plot(star.time_array, star.signal)
print((max(star.signal) - min(star.signal))/baseline*100, "%")
def loop(period, amplitude, baseline, nsamps=10):
true_periods, recovered_periods, true_amp = [np.zeros(nsamps) for i in range(3)]
for i in range(nsamps):
# print(i, "of", nsamps)
# Inject
with suppress_stdout():
star.generate_signal(period, amplitude, baseline)
star.inject_signal()
plt.plot(star.time_array, star.signal)
# Recover
with suppress_stdout():
time_cpm, flux_cpm = star.CPM_recover()
# Stitch
with suppress_stdout():
time, flux, flux_err = tr.stitch_light_curve(ticid, time_cpm, flux_cpm)
# Measure
p = np.polyfit(time, flux, 1)
rotate = ss.RotationModel(time, flux-np.polyval(p, time), flux_err)
ls_period = rotate.ls_rotation(max_period=200.)
true_periods[i] = period
recovered_periods[i] = ls_period
true_amp[i] = (max(star.signal) - min(star.signal))/baseline*100
# fig = plt.figure(figsize=(16, 8), dpi=200)
# ax1 = fig.add_subplot(211)
# xs = np.linspace(min(time), max(time), 1000)
# ax1.errorbar(time, flux-np.polyval(p, time), yerr=flux_err, fmt="k.", alpha=.2, label="$\mathrm{Recovered~signal}$", rasterized=True)
# ax1.legend(fontsize=18)
# ax1.set_xlabel("$\mathrm{Time~[days]}$")
# ax1.set_ylabel("$\mathrm{Flux~[arbitrary~units]}$");
# ax2 = fig.add_subplot(212)
# ax2.plot(1./rotate.freq, rotate.power)
# ax2.set_xlabel("$\mathrm{Period~[days]}$")
# ax2.set_ylabel("$\mathrm{Power}$");
# plt.tight_layout()
# plt.show()
return true_periods, recovered_periods, true_amp
periods = np.array([50, 100, 120, 150, 170])
magnitudes = np.array([14.5, 14.85, 15, 15.25, 15.5])
baselines = mag_to_counts(magnitudes, 2)
amplitudes = baselines*.01
P, B = np.meshgrid(periods, baselines, indexing="ij")
_, A = np.meshgrid(periods, amplitudes, indexing="ij")
N = np.zeros_like(P)
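# N[i, j] will count how many of the nsamps injections at period P[i, j] and baseline B[i, j] are recovered within 10% of the injected period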
for i, p in enumerate(periods):
for j, b in enumerate(baselines):
print(f"{i} of {len(periods)} periods, {j} of {len(baselines)} baselines")
true, recovered, amp = loop(P[i, j], A[i, j], B[i, j], nsamps=10)
relative = true/recovered
correct_mask = (.9 < relative) & (relative < 1.1)
N[i, j] = len(true[correct_mask])
N
print((8 + 7 + 7 + 6) / 40 * 100)
P
B
true50, recovered50, amp50 = loop(period, amplitude, baseline, nsamps=10)
relative = true50/recovered50
correct_mask = (.9 < relative) & (relative < 1.1)
print(len(true50[correct_mask]), "correct out of 10")
true100, recovered100, amp100 = loop(100, amplitude, baseline, nsamps=20)
relative = true100/recovered100
correct_mask = (.9 < relative) & (relative < 1.1)
print(len(true100[correct_mask]), "correct out of 20")
print(amp100)
true150, recovered150, amp150 = loop(150, amplitude, baseline, nsamps=10)
relative = true150/recovered150
correct_mask = (.9 < relative) & (relative < 1.1)
print(len(true150[correct_mask]), "correct out of 10")
print(amp150)
true200, recovered200, amp200 = loop(200, amplitude, baseline, nsamps=10)
relative = true200/recovered200
correct_mask = (.9 < relative) & (relative < 1.1)
print(len(true200[correct_mask]), "correct out of 10")
print(amp200)
print(true, recovered)
plt.plot(true, recovered, ".");
###Output
_____no_output_____ |
hw2/knn/knn.ipynb | ###Markdown
k-Nearest Neighbor (kNN) on CIFAR-10The kNN classifier consists of two stages: training and testing.During training, the classifier takes the training data and simply remembers it.During testing, kNN classifies a test image by comparing it to all training images and selecting the majority label among the k most similar training examples.The value of k is computed by cross-validation.In this exercise you will implement these steps and gain proficiency in writing efficient, vectorized code. You will select the best value of k by cross-validation. You will be using a version of the CIFAR-10 object recognition dataset for this exercise.
###Code
import random
import numpy as np
from data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
# Run the CIFAR-10 dataset load script in the folder datasets, before you run this cell
cifar10_dir = './datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: %s' %(X_train.shape,))
print('Training labels shape: %s' %(y_train.shape,))
print('Test data shape: %s' %(X_test.shape,))
print('Test labels shape: %s' %(y_test.shape))
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
###Output
(5000, 3072) (500, 3072)
###Markdown
Creating a kNN classifierRemember that training a kNN classifier is a no-op. The classifier simply remembers the data and does no further processing
###Code
from k_nearest_neighbor import KNearestNeighbor
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
###Output
_____no_output_____
###Markdown
Classifying test data with a kNN-classifierWe would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:- First we must compute the distances between all test examples and all train examples.- Given these distances, for each test example we find the k nearest examples and have them vote for the labelLets begin with computing the distance matrix between all training and test examples. For example, if there are M training examples and N test examples, this stage should result in a N x M matrix where each element (i,j) is the distance between the i-th test set example and j-th training set example.First, open k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
###Code
# Open k_nearest_neighbor.py and implement compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns?**Answer**:1. A bright row means the test sample in that row has a large distance to most training samples, i.e. it is very different from most of the training data, which could mean it does not closely resemble any class in the training set.2. A bright column means the training sample in that column has a large distance to most test samples, i.e. it is very different from most of the test data, so it is rarely among any test sample's nearest neighbors.
###Code
# Now implement the function predict_labels in k_nearest_neighbor.py and run the code below:
# We use k = 1 (which is 1- Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 137 / 500 correct => accuracy: 0.274000
###Markdown
You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5.
###Code
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 139 / 500 correct => accuracy: 0.278000
###Markdown
You should expect to see a slightly better performance than with k = 1. Speeding up distance computations
###Code
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
###Output
Two loop version took 22.792397 seconds
One loop version took 57.772584 seconds
No loop version took 0.441808 seconds
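###Markdown
For reference, one common way to obtain the fully vectorized distances (a sketch of the general trick, not necessarily what compute_distances_no_loops does internally) is to expand ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y:
###Code
# sketch only; X_test is (num_test, D) and X_train is (num_train, D)
test_sq = np.sum(X_test ** 2, axis=1).reshape(-1, 1)  # (num_test, 1)
train_sq = np.sum(X_train ** 2, axis=1)               # (num_train,)
cross = X_test.dot(X_train.T)                         # (num_test, num_train)
dists_sketch = np.sqrt(np.maximum(test_sq + train_sq - 2 * cross, 0))
# dists_sketch should agree with dists_two up to floating point error, e.g. np.allclose(dists_sketch, dists_two)
###Output
_____no_output_____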
###Markdown
Choosing k by cross-validationWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
###Code
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
x_train_folds = np.array(np.array_split(X_train, num_folds))
y_train_folds = np.array(np.array_split(y_train, num_folds))
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
classifier = KNearestNeighbor()
for k in k_choices:
k_to_accuracies[k] = []
for i in range(num_folds):
new_x_train = np.append(x_train_folds[:i], x_train_folds[i + 1:]).reshape(-1, X_train.shape[1])
new_y_train = np.append(y_train_folds[:i], y_train_folds[i + 1:])
classifier.train(new_x_train, new_y_train)
y_test_pred = classifier.predict(x_train_folds[i], k=k, num_loops=0)
num_correct = np.sum(y_test_pred == y_train_folds[i])
accuracy = float(num_correct) / y_train_folds[i].shape[0]
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 141 / 500 correct => accuracy: 0.282000
###Markdown
k-Nearest Neighbor (kNN) The kNN classifier consists of two stages: training and testing.During training, the classifier takes the training data and simply remembers it.During testing, kNN classifies a test image by comparing it to all training images and selecting the majority label among the k most similar training examples.The value of k is computed by cross-validation.In this exercise you will implement these steps and gain proficiency in writing efficient, vectorized code. You will select the best value of k by cross-validation. You will be using a version of the CIFAR-10 object recognition dataset for this exercise.
###Code
import random
import numpy as np
from data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
# Run the CIFAR-10 dataset load script in the folder datasets, before you run this cell
cifar10_dir = './datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
print 'y_train[1] ', y_train[1]
plt.imshow(X_train[1].astype('uint8'))
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
print y_test
print y_train
###Output
(5000, 3072) (500, 3072)
[3 8 8 0 6 6 1 6 3 1 0 9 5 7 9 8 5 7 8 6 7 0 4 9 5 2 4 0 9 6 6 5 4 5 9 2 4
1 9 5 4 6 5 6 0 9 3 9 7 6 9 8 0 3 8 8 7 7 4 6 7 3 6 3 6 2 1 2 3 7 2 6 8 8
0 2 9 3 3 8 8 1 1 7 2 5 2 7 8 9 0 3 8 6 4 6 6 0 0 7 4 5 6 3 1 1 3 6 8 7 4
0 6 2 1 3 0 4 2 7 8 3 1 2 8 0 8 3 5 2 4 1 8 9 1 2 9 7 2 9 6 5 6 3 8 7 6 2
5 2 8 9 6 0 0 5 2 9 5 4 2 1 6 6 8 4 8 4 5 0 9 9 9 8 9 9 3 7 5 0 0 5 2 2 3
8 6 3 4 0 5 8 0 1 7 2 8 8 7 8 5 1 8 7 1 3 0 5 7 9 7 4 5 9 8 0 7 9 8 2 7 6
9 4 3 9 6 4 7 6 5 1 5 8 8 0 4 0 5 5 1 1 8 9 0 3 1 9 2 2 5 3 9 9 4 0 3 0 0
9 8 1 5 7 0 8 2 4 7 0 2 3 6 3 8 5 0 3 4 3 9 0 6 1 0 9 1 0 7 9 1 2 6 9 3 4
6 0 0 6 6 6 3 2 6 1 8 2 1 6 8 6 8 0 4 0 7 7 5 5 3 5 2 3 4 1 7 5 4 6 1 9 3
6 6 9 3 8 0 7 2 6 2 5 8 5 4 6 8 9 9 1 0 2 2 7 3 2 8 0 9 5 8 1 9 4 1 3 8 1
4 7 9 4 2 7 0 7 0 6 6 9 0 9 2 8 7 2 2 5 1 2 6 2 9 6 2 3 0 3 9 8 7 8 8 4 0
1 8 2 7 9 3 6 1 9 0 7 3 7 4 5 0 0 2 9 3 4 0 6 2 5 3 7 3 7 2 5 3 1 1 4 9 9
5 7 5 0 2 2 2 9 7 3 9 4 3 5 4 6 5 6 1 4 3 4 4 3 7 8 3 7 8 0 5 7 6 0 5 4 8
6 8 5 5 9 9 9 5 0 1 0 8 1 1 8 0 2 2 0]
[6 9 9 ..., 5 4 6]
###Markdown
Creating a knn classifierRemember that training a kNN classifier is a no-op. The classifier simply remembers the data and does no further processing
###Code
from k_nearest_neighbor import KNearestNeighbor
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
###Output
_____no_output_____
###Markdown
Classifying test data with a knn-classifierWe would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:- First we must compute the distances between all test examples and all train examples.- Given these distances, for each test example we find the k nearest examples and have them vote for the labelLets begin with computing the distance matrix between all training and test examples. For example, if there are M training examples and N test examples, this stage should result in a N x M matrix where each element (i,j) is the distance between the i-th test set example and j-th training set example.First, open k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
###Code
# Open k_nearest_neighbor.py and implement compute_distances_two_loops.
# Test your implementation:
# dists = classifier.compute_distances_two_loops(X_test)
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
print X_test.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
Question: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns?Answer: - bright row: the test sample corresponding to a bright row differs significantly from most of the training samples.- bright column: the training sample corresponding to a bright column differs significantly from most of the testing samples.
###Code
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is 1- Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
###Output
Got 137 / 500 correct => accuracy: 0.274000
###Markdown
You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5.
###Code
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
###Output
Got 139 / 500 correct => accuracy: 0.278000
###Markdown
You should expect to see a slightly better performance than with k = 1. Speeding up distance computations
###Code
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
print dists_two.shape
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
###Output
Two loop version took 32.619446 seconds
One loop version took 27.051590 seconds
No loop version took 0.215434 seconds
###Markdown
Choosing k by cross-validationWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
###Code
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for cur_k in k_choices:
k_to_accuracies[cur_k] = []
for i in xrange(num_folds):
# print "i = ", i
# print np.vstack(X_train_folds[:i] + X_train_folds[i+1:]).shape
# print "y:", np.hstack(y_train_folds[i+1:]).shape
cur_x_train = np.vstack(X_train_folds[:i] + X_train_folds[i+1:])
cur_y_train = np.hstack(y_train_folds[:i] + y_train_folds[i+1:])
# print "cur_y_train.shape: ", cur_y_train.shape
classifier.train(cur_x_train, cur_y_train)
predicted_y_test = classifier.predict(X_train_folds[i], k = cur_k)
num_correct = np.sum(predicted_y_test == y_train_folds[i])
accuracy = float(num_correct) / y_train_folds[i].shape[0]
k_to_accuracies[cur_k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
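# Optional (not part of the original assignment): the best k can also be picked
# programmatically as the one with the highest mean cross-validation accuracy,
# instead of reading it off the plot above.
best_k_by_cv = max(k_choices, key=lambda k: np.mean(k_to_accuracies[k]))
print('k with the highest mean cross-validation accuracy: %d' % best_k_by_cv)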
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 141 / 500 correct => accuracy: 0.282000
|
notebook/band-theory/pseudopotential.ipynb | ###Markdown
**Norm-Conserving Pseudopotentials****Authors:** Dou Du, Taylor James Baird and Giovanni Pizzi Go back to index**Source code:** https://github.com/osscar-org/quantum-mechanics/blob/master/notebook/band-theory/pseudopotential.ipynb The pseudopotential method is a technique employed to simplify the description of a system of interacting electrons and nuclei. It is used to construct an effective potential that includes both the effects of a nucleus and of its core electrons, allowing one to consider explicitly only the valence electrons. This notebook illustrates a method of constructing norm-conserving pseudopotentials and displays them interactively, together with the resulting pseudowavefunctions. **Goals*** Understand why pseudopotentials are needed.* Learn how to construct pseudopotentials using Kerker's method.* Examine the results for various values of the principal quantum number n and of the angular quantum number l.* Examine the effect of changing the cutoff radius. **Background theory** [More on the background theory.](./theory/theory_pseudopotential.ipynb) **Tasks and exercises**1. Investigate the role of the cutoff radius by varying the $R_c$ slider. Solution Move the slider for $R_c$ and press the button "Compute pseudopotential" to obtain the results. Check if there are values for which no solutions can be found. Inspect how different the pseudopotential is from the Coulomb potential. 2. Investigate how the pseudopotential changes for different values of the quantum numbers. Solution Try to construct the pseudopotential for various values of n and l. Check what happens when constructing a pseudopotential for a nodeless wavefunction (e.g. $n=1$ and $l=0$, or $n=2$ and $l=1$). 3. Why do we need pseudopotentials? Solution The wavefunction oscillates rapidly in the core region. In a plane-wave approach, this would require a huge basis set (i.e., a huge number of plane waves) to be described accurately. What is most relevant, however, is that while the largest part of the contribution to the total energy of the system comes from the core electrons, these electrons are essentially frozen and do not participate in chemistry and the creation of bonds; the electronic structure is instead determined by the valence electrons. Avoiding the explicit treatment of the core electrons prevents small relative errors on the core electrons from completely spoiling the calculation of the energy of the full system and, in particular, of the (small, but crucial) energy differences between different atomic configurations or crystalline phases. 4. What is the meaning of the norm-conservation condition? Solution The condition ensures that the total charge inside the cutoff radius $R_c$ is correct. However, there are more profound consequences implied by this condition: it turns out that imposing norm conservation also implies that the first energy derivatives of the logarithmic derivatives of the all-electron wavefunction and of the pseudowavefunction agree at $R_c$. This is a very important condition for the transferability of the pseudopotential. It can be shown that this means that the true atom with its electrons and the pseudopotential generate the same phase shift when a plane wave is scattered into a spherical wave. A detailed discussion can be found in Section 11.4 of the book "Electronic Structure: Basic Theory and Practical Methods" by Richard M. Martin, Cambridge University Press (2004). 5. Are the pseudopotentials constructed with this method local? 
Solution As discussed earlier, we obtain a different pseudopotential for different values of the quantum numbers n and l. Therefore, the pseudopotential is not local (its action is not just the product of a single function $V^{PS}(r)$ times the wavefunction). Interactive visualization (be patient, it might take a few seconds to load)
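As a quick orientation before the code (a sketch of the Kerker-type construction that the cell below implements; the $\lambda_i$ labels follow the coefficient fields used in the code): inside the cutoff radius the pseudowavefunction is taken as $$R^{PS}_{nl}(r) = r^l\, e^{p(r)}, \qquad p(r)=\lambda_0+\lambda_2 r^2+\lambda_3 r^3+\lambda_4 r^4, \qquad r<R_c,$$ where $\lambda_0$, $\lambda_3$ and $\lambda_4$ are fixed by matching the all-electron radial wavefunction and its first two derivatives at $R_c$, while $\lambda_2$ is adjusted (via a Newton search in the code) until the norm-conservation condition $$\int_0^{R_c} \left|R^{PS}_{nl}(r)\right|^2 r^2\, dr = \int_0^{R_c} \left|R_{nl}(r)\right|^2 r^2\, dr$$ is satisfied; the pseudopotential is then obtained by inverting the radial Schrödinger equation for the nodeless $R^{PS}_{nl}$ at the all-electron eigenvalue.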
###Code
from sympy.physics.hydrogen import R_nl, E_nl
from sympy.abc import r
from sympy.functions import exp
from sympy import lambdify, diff, log
import numpy as np
import matplotlib.pyplot as plt
from sympy.solvers.solveset import linsolve
from ipywidgets import FloatSlider, Button, IntSlider, Layout, HBox, VBox, Label, Tab, Layout, Text
from scipy.optimize import newton
import matplotlib.gridspec as gridspec
%matplotlib widget
# sn: the slider to control the quantum number n
# sl: the slider to control the quantum number l
sn = IntSlider(value=3, min=1, max=5, description="$n$")
sl = IntSlider(value=0, min=0, max=sn.value-1, description="$l$")
labeln = Label(value="(principal quantum number)")
labell = Label(value="(angular quantum number)")
#init the quantum number n and l
n = sn.value
l = sl.value
# We consider a H atom, with charge +1
Z = 1
#ho: is the Hydrogen radial wavefunction
#Ea: is the eigenvalue
#rf: numerical (lambdified) version of the analytical radial wavefunction
ho = R_nl(n, l, r, Z=Z)
Ea = E_nl(n, Z=Z)
rf = lambdify(r, ho, "numpy")
def update_functions_and_plot():
"""Update the functions and then the plot.
"""
global ho, Ea, rf
ho = R_nl(n, l, r, Z=Z)
# Check that the value of the radial wavefunction at large radius is positive
# (i.e., that the wavefunction goes to zero from positive values as r->infinity)
# If negative: flip it! (This is because due to the formula of the polynomial
# that we want to use for r < R_c, $\Psi(R_c)R_c$ must be positive at R_c, and thus after R_c as well).
# Because of the form of R_nl, the sign needs to be swapped if n-l is even.
if (n - l) % 2 == 0:
ho = -ho
Ea = E_nl(n, Z=Z)
rf = lambdify(r, ho, "numpy")
update_plot()
def nvalue_change(c):
"""Observe the change of the sn and update plot.
"""
global n
n = c["new"]
sl.value = 0
sl.max = n - 1
update_functions_and_plot()
sn.observe(nvalue_change, names="value")
def lvalue_change(c):
"""Observe the change of the sl and upate plot.
"""
global l
l = c["new"]
update_functions_and_plot()
sl.observe(lvalue_change, names="value")
## NOTE: the widgets in this cell are currently not shown
text_l0 = Text(description = r"Lt. $\psi^{(0)}(r_c)$:")
text_l1 = Text(description = r"Lt. $\psi^{(1)}(r_c)$:")
text_l2 = Text(description = r"Lt. $\psi^{(2)}(r_c)$:")
text_r0 = Text(description = r"Rt. $\psi^{(0)}(r_c)$:")
text_r1 = Text(description = r"Rt. $\psi^{(1)}(r_c)$:")
text_r2 = Text(description = r"Rt. $\psi^{(2)}(r_c)$:")
cof_0 = Text(description = r"$\lambda_0$:")
cof_2 = Text(description = r"$\lambda_2$:")
cof_3 = Text(description = r"$\lambda_3$:")
cof_4 = Text(description = r"$\lambda_4$:")
def clear_texts():
text_l0.value = ""
text_l1.value = ""
text_l2.value = ""
text_r0.value = ""
text_r1.value = ""
text_r2.value = ""
cof_0.value = ""
cof_2.value = ""
cof_3.value = ""
cof_4.value = ""
output1 = VBox([HBox([text_l0, text_r0]), HBox([text_l1, text_r1]), HBox([text_l2, text_r2])]);
output2 = VBox([cof_0, cof_2, cof_3, cof_4])
tab = Tab(layout=Layout(width='700px'))
tab.children = [output1, output2]
tab.set_title(0, r"Continuity at Rc")
tab.set_title(1, r"Polynomial coeff.")
## Uncomment this if you want to also see some text boxes with the values of the wavefunction and its derivatives
## at the cutoff radius
#display(tab)
s_rc = FloatSlider(value = 21.0, min = 1.0, max = 50, description = "$R_c$",
layout={'width':'300px'})
compute = Button(description="Compute pseudopotential", style={'description_width': 'initial'}, layout={'width':'200px'})
display(HBox([sn, labeln]), HBox([sl, labell]))
display(HBox([s_rc, compute]))
img = plt.figure(tight_layout=True, figsize=(7,7))
img.canvas.header_visible = False
gs = gridspec.GridSpec(2, 1)
ax1 = img.add_subplot(gs[0, 0])
ax2 = img.add_subplot(gs[1, 0])
x1 = np.arange(0, 50.0, 0.01)
y1 = rf(x1)*x1
line_rho, = ax1.plot(x1, y1, 'r-', label="$R_{"+str(sn.value)+str(sl.value)+"}(r)r$")
ax1.fill_between(x1, y1, 0, where=x1<s_rc.value, facecolor='yellow', alpha=0.5)
ax1.set_xlim([0, 50.0])
ax1.hlines(0, 0, 50, 'k','--')
line_rc1 = ax1.axvline(s_rc.value)
line_pswf, = ax1.plot([],[],'b-', linewidth=1.5, label="$R^{PS}_{"+str(sn.value)+str(sl.value)+"}(r)r$")
ann_rc = ax1.annotate("$R_c$", xy=(s_rc.value + 1.0, rf(s_rc.value)*s_rc.value), fontsize=20)
point, = ax1.plot(s_rc.value, rf(s_rc.value)*s_rc.value, 'ko')
ann_norm1 = ax1.annotate("Yellow (squared):", xy=(250, 30), xycoords='axes points', fontsize=9,
bbox=dict(boxstyle='round', facecolor='yellow', alpha=0.5))
ann_norm2 = ax1.annotate("Green (squared):", xy=(250, 10), xycoords='axes points', fontsize=9,
bbox=dict(boxstyle='round', facecolor='green', alpha=0.5))
ann_logl = ax1.annotate("Logarithmic deriv. $\psi(r_c)^{PS}$:", xy=(50, 30), xycoords='axes points', fontsize=9,
bbox=dict(boxstyle='round', facecolor='blue', alpha=0.5))
ann_logr = ax1.annotate("Logarithmic deriv. $\psi(r_c)$:", xy=(50, 10), xycoords='axes points', fontsize=9,
bbox=dict(boxstyle='round', facecolor='red', alpha=0.5))
ax1.set_xlabel("r", fontsize = 15)
ax1.set_ylabel("$R_{nl}(r)r$", fontsize = 15)
ax1.legend(loc=1, fontsize=15)
#ax1.set_ylim([-0.3, 0.5])
x2 = np.linspace(0.001, 50, 500)
y2 = -Z/x2
ax2.plot(x2, y2, 'r--', linewidth=1.0, label="$-Z/r$")
ax2.set_xlim([0, 50.0])
#ax2.set_ylim([-1.0, 0.10])
line_rc2 = ax2.axvline(s_rc.value)
line_psv, = ax2.plot([],[], 'b-', linewidth=1.5, label="$V^{PS}(r)$")
ax2.hlines(0, 0, 50, 'k','--')
ax2.set_xlabel("r", fontsize = 15)
ax2.set_ylabel("$V$", fontsize = 15)
ax2.legend(loc=1, fontsize=15)
def update_plot():
""" Update the plot when quantum number n and l changing.
"""
x1 = np.arange(0, 50.0, 0.01)
y1 = rf(x1)*x1
line_rho.set_data([x1, y1])
line_rc1.set_data(s_rc.value, [-1, 1])
line_rc2.set_data(s_rc.value, [-1, 1])
ann_rc.set_position((s_rc.value + 1.0, rf(s_rc.value)*s_rc.value))
ax1.collections.clear()
ax1.fill_between(x1, y1, 0, where=x1<s_rc.value, facecolor='yellow', alpha=0.5)
ax1.hlines(0, 0, 50, 'k','--')
line_pswf.set_data([],[])
line_psv.set_data([],[])
point.set_data(s_rc.value, rf(s_rc.value)*s_rc.value)
ax1.set_ylim([y1.min()-0.04, y1.max()+0.04])
line_rho.set_label("$R_{"+str(sn.value)+str(sl.value)+"}(r)r$")
line_pswf.set_label("$R^{PS}_{"+str(sn.value)+str(sl.value)+"}(r)r$")
ax1.legend(loc=1, fontsize=15)
ax2.legend(loc=1, fontsize=15)
#clear_texts()
def on_rc_change(b):
""" Update the plot when the slider of the Rc changing.
"""
x1 = np.arange(0, 50.0, 0.01)
y1 = rf(x1)*x1
line_rc1.set_data(s_rc.value, [-1, 1])
line_rc2.set_data(s_rc.value, [-1, 1])
ann_rc.set_position((s_rc.value + 1.0, rf(s_rc.value)*s_rc.value))
ax1.collections.clear()
ax1.fill_between(x1, y1, 0, where=x1<s_rc.value, facecolor='yellow', alpha=0.5)
ax1.hlines(0, 0, 50, 'k','--')
line_pswf.set_data([],[])
line_psv.set_data([],[])
point.set_data(s_rc.value, rf(s_rc.value)*s_rc.value)
clear_texts()
s_rc.observe(on_rc_change, names='value')
def compute_right_derivative(rc):
k0 = rf(rc)
k1 = diff(ho, r).subs(r, rc).evalf()
k2 = diff(ho, r, 2).subs(r, rc).evalf()
return np.array([float(k0), float(k1), float(k2)])
def compute_left_derivative(ps, rc):
psf = lambdify(r, ps, "numpy")
k0 = psf(rc)
k1 = diff(ps, r).subs(r, rc).evalf()
k2 = diff(ps, r, 2).subs(r, rc).evalf()
return np.array([float(k0), float(k1), float(k2)])
def solver_kernel(devs, rc, l, b):
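    # Continuity conditions at Rc for the exponent p(r) = c0 + c1*r^2 + c2*r^3 + c3*r^4
    # (the columns of A correspond to the coefficients of [1, r^2, r^3, r^4]):
    #   row 0: p(Rc)   = log(psi(Rc) / Rc^l)
    #   row 1: p'(Rc)  = psi'(Rc)/psi(Rc) - l/Rc
    #   row 2: p''(Rc) = psi''(Rc)/psi(Rc) - (psi'(Rc)/psi(Rc))^2 + l/Rc^2
    # The r^2 coefficient is fixed to the input b (it is tuned elsewhere by a Newton
    # search to enforce norm conservation); the remaining three coefficients are
    # obtained by solving the resulting 3x3 linear system.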
A = np.zeros([3,4])
A[0, :] = np.array([1, rc**2, rc**3, rc**4]);
A[1, :] = np.array([0, 2*rc, 3*rc**2, 4*rc**3]);
A[2, :] = np.array([0, 2, 6*rc, 12*rc**2])
B = np.zeros(3)
B[0] = log(devs[0]/rc**l);
B[1] = devs[1]/devs[0] - l/rc;
B[2] = devs[2]/devs[0] - devs[1]**2/devs[0]**2 + l/rc**2;
B-=b*A[:, 1];
A = np.delete(A, (1), axis=1);
coff = np.linalg.solve(A, B);
coff = np.insert(coff, 1, b)
return coff
def diff_norms(b, rc, l):
devs = compute_right_derivative(rc)
coff = solver_kernel(devs, rc, l, b)
ps = r**l*exp(coff[0] + coff[1]*r**2 + coff[2]*r**3 + coff[3]*r**4)
psf = lambdify(r, ps, "numpy")
psr = lambdify(r, ps*ps*r*r, "numpy")
hor = lambdify(r, ho*ho*r*r, "numpy")
x1 = np.linspace(0, rc, 800);
norm1 = np.sum(hor(x1))*(x1[1]-x1[0])
norm2 = np.sum(psr(x1))*(x1[1]-x1[0])
return float(norm1 - norm2)
def compute_norms(b, rc, l):
devs = compute_right_derivative(rc)
coff = solver_kernel(devs, rc, l, b)
ps = r**l*exp(coff[0] + coff[1]*r**2 + coff[2]*r**3 + coff[3]*r**4)
psf = lambdify(r, ps, "numpy")
psr = lambdify(r, ps*ps*r*r, "numpy")
hor = lambdify(r, ho*ho*r*r, "numpy")
x1 = np.linspace(0, rc, 800);
norm1 = np.sum(hor(x1))*(x1[1]-x1[0])
norm2 = np.sum(psr(x1))*(x1[1]-x1[0])
return norm1, norm2
def compute_potential(l):
psf = Ea - l*(l+1)/(2*r*r) + 1/(2*ho*r)*diff(ho*r, r, r)
return lambdify(r, psf, "numpy")
def plot_ps_wavefunction(b, rc, l):
devs = compute_right_derivative(rc)
coff = solver_kernel(devs, rc, l, b)
ps = r**l*exp(coff[0] + coff[1]*r**2 + coff[2]*r**3 + coff[3]*r**4)
psf = lambdify(r, ps*r, "numpy")
devl = compute_left_derivative(ps, rc)
x1 = np.linspace(0, rc, 800);
line_pswf.set_data(x1, psf(x1));
ax1.fill_between(x1, psf(x1), 0, where=x1<s_rc.value, facecolor='green', alpha=0.5)
logl = diff(ps, r).subs(r, rc).evalf()/ps.subs(r, rc).evalf()
logr = diff(ho, r).subs(r, rc).evalf()/ho.subs(r, rc).evalf()
ann_logl.set_text("Logarithmic deriv. $\psi(r_c)^{PS}$:" + str("{:.10f}".format(logl)))
ann_logr.set_text("Logarithmic deriv. $\psi(r_c)$:" + str("{:.10f}".format(logr)))
#text_l0.value = str(ps.subs(r, rc).evalf())
#text_l1.value = str(diff(ps, r).subs(r, rc).evalf())
#text_l2.value = str(diff(ps, r, r).subs(r, rc).evalf())
#text_r0.value = str(ho.subs(r, rc).evalf())
#text_r1.value = str(diff(ho, r).subs(r, rc).evalf())
#text_r2.value = str(diff(ho, r, r).subs(r, rc).evalf())
#cof_0.value = str(coff[0])
#cof_2.value = str(coff[1])
#cof_3.value = str(coff[2])
#cof_4.value = str(coff[3])
def plot_ps_potential(b, rc, l):
devs = compute_right_derivative(rc)
coff = solver_kernel(devs, rc, l, b)
pf = r**(l+1)*exp(coff[0] + coff[1]*r**2 + coff[2]*r**3 + coff[3]*r**4)
psf = Ea - l*(l+1)/(2*r*r) + 1/(2*pf)*diff(pf, r, r)
psfnl = lambdify(r, psf, "numpy")
psfnr = compute_potential(l)
x1 = np.linspace(0.001, rc, 800);
x2 = np.linspace(rc, 50, 800);
line_psv.set_data(np.concatenate((x1,x2)), np.concatenate((psfnl(x1),psfnr(x2))));
ax2.set_ylim([psfnl(x1).min(axis=0)-0.05, max(psfnl(x1).max(axis=0)+0.1, 0.05)])
def compute_pseudopotential(c):
global compute
compute.disabled = True
old_description = compute.description
compute.description = "Computing..."
try:
try:
b = newton(lambda x: diff_norms(x, s_rc.value, l), x0 = 0.00001, tol = 1e-10, maxiter=100)
if abs(diff_norms(b, s_rc.value, l)) > 0.001:
ann_norm1.set_text("No numerical solution found!");
ann_norm2.set_text("Please change $R_c$!");
ann_logl.set_text("");
                ann_logr.set_text("");
return None
except Exception:
ann_norm1.set_text("No numerical solution found!");
ann_norm2.set_text("Please change $R_c$!");
ann_logl.set_text("");
ann_logr.set_text("");
return None
update_plot()
plot_ps_wavefunction(b, s_rc.value, l)
plot_ps_potential(b, s_rc.value, l)
norm1, norm2 = compute_norms(b, s_rc.value, l)
ann_norm1.set_text("Yellow (squared): " + str("{:.10f}".format(norm1)))
ann_norm2.set_text("Green (squared): " + str("{:.10f}".format(norm2)))
finally:
compute.disabled = False
compute.description = old_description
compute.on_click(compute_pseudopotential)
compute_pseudopotential("init")
###Output
_____no_output_____ |
1.Plot Open Data Files using Dash-workshop.ipynb | ###Markdown
Open Data Sabadell http://opendata.sabadell.cat/ca/ From the main page's **Catàleg** we select the **Medi Ambient** (Environment) category. There, we can find a file related to municipal waste that can be downloaded from [here.](http://opendata.sabadell.cat/index.php?option=com_iasopendata&view=download&format=raw&urlOData=aHR0cDovL29kYXRhLnNhYmFkZWxsLmNhdC9vZGF0YTRQcm9kdWN0b3Ivb2RhdGE0UHJvZHVjdG9yLnN2Yy9NYXRlcmlhbHNSZXNpZHVzLz9mb3JtYXQ9Y3N2JmlkZGlzdD0xNDYyJiRzZWxlY3Q9T3JkcmUsQW55byxJZE1hdGVyaWFsLE5vbU1hdGVyaWFsLFF1YW50aXRhdCxVbml0YXRz) But you can also find a copy of this file, named **residus.csv**, in this tutorial's dataset, in the **DadesSabadell** folder. The file obtained is in CSV format (Comma-Separated Values). This means that it is like a table where the rows are records and the columns are fields or values associated with each record. You can take a look at this file in the table-like representation provided on the open data web page by following this link: [OpenData Sabadell](http://opendata.sabadell.cat/ca/inici/odata?iddist=1462). Looking at this table, we can observe that every record represents an amount of waste (in tonnes) originating in Sabadell, associated with a particular year and a waste type classification. In this exercise we are going to follow these steps:* Read the residus.csv file.* Group the data to obtain more readable information. Every row will be a type of waste and every column a year.* Represent the data in an interactive way. **Important:** NaN values indicate that there is no information for that type of waste in that year. To read and manage the data we are going to use the [pandas](https://pandas.pydata.org/) library:
###Code
import pandas as pd
#df = pd.read_csv(root_dir + "DadesSabadell/residus.csv",sep=';')
df = pd.read_csv("DadesSabadell/residus.csv",sep=';')
df.sort_values(by="Anyo",inplace = True)
df.head()
materials = list(df.NomMaterial.unique())
materials.remove("Resta")
print('{} different types of waste:'.format(len(materials)))
materials
pd.pivot_table(df,columns="Anyo", index = "NomMaterial", values="Quantitat")
###Output
_____no_output_____
###Markdown
Everything is ready to plot our data. We are going to use Dash (web-based interfaces in Python) to plot the information and make it interactive. If you are interested in learning more about Dash you can follow the official [tutorial.](https://dash.plot.ly/) [Dash installation:](https://dash.plot.ly/installation)* **pip install dash** The core dash backend* **pip install dash-html-components** HTML components* **pip install dash-core-components** Supercharged components* **pip install dash-table** Interactive DataTable component (new!)
###Code
!pip install dash # The core dash backend
!pip install dash-html-components # HTML components
!pip install dash-core-components # Supercharged components
!pip install dash-table # Interactive DataTable component (new!)
%%writefile my_app1.py
#Dash empty structure
import dash
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash()
app.layout = html.Div('Hello Dash!')
if __name__ == '__main__':
app.run_server(debug=True)
!python my_app1.py
%%writefile my_app2.py
#Dash simple example
import dash
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash()
app.layout = html.Div([html.H1("Hello Dash!"),
html.Div('''
Dash: A web application framework for Python.
'''),
                       dcc.Graph(id='example-graph',
figure = {
'data':[
{'x': [1, 2, 3], 'y': [4.2, 1.8, 2.7], 'type': 'bar', 'name': 'Sabadell'},
{'x': [1, 2, 3], 'y': [2.8, 4.9, 5.1], 'type': 'bar', 'name': 'Barcelona'},
],
'layout':{
'title' : 'Dash Data Visualisation'
}
})
])
if __name__ == '__main__':
app.run_server(debug=True)
!python my_app2.py
%%writefile my_app3.py
#Dash simple example
import pandas as pd
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
from plotly import tools
df = pd.read_csv("DadesSabadell/residus.csv",sep=';')
df.sort_values(by="Anyo",inplace = True)
materials = list(df.NomMaterial.unique())
materials.remove("Resta")
pd.pivot_table(df,columns="Anyo", index = "NomMaterial", values="Quantitat")
minim = 0
maxim = 1000
pas = 100
app = dash.Dash()
app.css.append_css({"external_url": "https://codepen.io/chriddyp/pen/bWLwgP.css"})
app.layout = html.Div([html.Div([dcc.Graph(id='residus_graph')],
style={'height':'80%','padding': '0px 20px 20px 20px'}),
html.Div([html.H5("Materials amb mitjana de tones per any més grans que:"),
dcc.Slider(id='avg-tones',step = pas,min=minim,max=maxim,value=maxim/2,
marks={ str(tones): {'label':str(tones)} for tones in range(minim,maxim+pas,pas)})],
style={'margin':'auto','height':'20%','width': '70%', 'padding': '0px 0px 40px 40px',"display":'inline_block'})])
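# The callback below wires the slider to the graph: whenever the 'avg-tones' slider value
# changes, update_figure() is called with the new value, and the figure it returns is pushed
# into the 'figure' property of the 'residus_graph' Graph component.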
@app.callback(
dash.dependencies.Output('residus_graph', 'figure'),
[dash.dependencies.Input('avg-tones', 'value')])
def update_figure(avg_tones):
fig = tools.make_subplots(rows=2, cols=1,shared_xaxes=True, vertical_spacing=0.001)
traces = []
filtered= df[df['NomMaterial'] =="Resta"]
trace_Resta = go.Scatter(
x=filtered['Anyo'],y=filtered['Quantitat'],text="Resta",
mode='lines+markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name="Resta"
)
filtered = df[df['NomMaterial'].isin(materials)].groupby("Anyo").sum()
trace_Total= go.Scatter(
x=filtered.index,y=filtered['Quantitat'],text="Total Materials",
mode='lines+markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name="Total Materials",
)
fig.append_trace(trace_Resta, 1, 1)
fig.append_trace(trace_Total, 1, 1)
for i in materials :
filtered= df[df['NomMaterial'] == i]
y=filtered['Quantitat']
if (y.mean()>avg_tones) :
x=filtered['Anyo']
traces.append(go.Scatter(x=x,y=y,text=i,mode='markers',opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
}, name=i[0:30]))
for trace in traces:
fig.append_trace(trace, 2, 1)
fig['layout'].update(height=600,title="Residus a Sabadell", margin={'l': 50, 'b': 40, 't': 40, 'r': 50},
yaxis1={"title":"Totals"}, yaxis2={"title":"Materials"},hovermode="closest")
return fig
if __name__ == '__main__':
app.run_server()
!python my_app3.py
###Output
* Running on http://127.0.0.1:8050/ (Press CTRL+C to quit)
127.0.0.1 - - [29/Jan/2019 19:34:27] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [29/Jan/2019 19:34:30] "GET /_dash-layout HTTP/1.1" 200 -
127.0.0.1 - - [29/Jan/2019 19:34:30] "GET /_dash-dependencies HTTP/1.1" 200 -
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x1,y2 ]
127.0.0.1 - - [29/Jan/2019 19:34:37] "POST /_dash-update-component HTTP/1.1" 200 -
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x1,y2 ]
127.0.0.1 - - [29/Jan/2019 19:34:38] "POST /_dash-update-component HTTP/1.1" 200 -
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x1,y2 ]
127.0.0.1 - - [29/Jan/2019 19:34:42] "POST /_dash-update-component HTTP/1.1" 200 -
^C
###Markdown
Open Data Barcelona **EXERCISE** Using the information on 2017 births in Barcelona districts that you can find at [Open Data Barcelona](http://opendata-ajuntament.barcelona.cat/en/) and download [here](http://opendata-ajuntament.barcelona.cat/data/en/dataset/est-demo-naixements-sexe), try to obtain a fancy plot showing the number of girls and boys born in each Barcelona district (a slider is not necessary). You can also find the CSV file with this information in `DadesBarcelona/2017_naixements_sexe.csv`
###Code
import pandas as pd
df = pd.read_csv("DadesBarcelona/2017_naixements_sexe.csv",sep=',')
df.head(20)
pd.pivot_table(df, columns="Sexe", index = "Nom_Districte", values="Nombre", aggfunc='sum')
%%writefile my_exercice.py
#Dash exercice
import pandas as pd
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
from plotly import tools
import numpy as np
df = pd.read_csv("DadesBarcelona/2017_naixements_sexe.csv",sep=',')
pv = pd.pivot_table(df, columns="Sexe", index = "Nom_Districte", values="Nombre", aggfunc='sum')
app = dash.Dash()
app.css.append_css({"external_url": "https://codepen.io/chriddyp/pen/bWLwgP.css"})
app.layout = html.Div([html.Div(
[dcc.Graph(id='birth_bcn',figure = {
'data':[
{'x': pv.index, 'y': pv['Nenes'], 'type': 'bar', 'name': 'Nenes'},
{'x': pv.index, 'y': pv['Nens'], 'type': 'bar', 'name': 'Nens'},
{'x': pv.index, 'y': pv['Nenes'], 'type': 'lines+markers', 'name': 'Nenes'},
{'x': pv.index, 'y': pv['Nens'], 'type': 'lines+markers', 'name': 'Nens'},
],
'layout':{
'title' : 'Niñas y niños nacidos en Barcelona por distrito'
}
} )
],
style={'height':'80%','padding': '0px 20px 20px 20px'},
)])
if __name__ == '__main__':
    app.run_server(debug=True)
!python my_exercice.py
###Output
Running on http://127.0.0.1:8050/
Debugger PIN: 357-138-377
Running on http://127.0.0.1:8050/
Debugger PIN: 635-344-027
^C
Traceback (most recent call last):
File "my_exercice.py", line 41, in <module>
app.run_server()
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/dash/dash.py", line 1288, in run_server
**flask_run_options)
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 841, in run
run_simple(host, port, self, **options)
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/werkzeug/serving.py", line 814, in run_simple
inner()
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/werkzeug/serving.py", line 774, in inner
fd=fd)
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/werkzeug/serving.py", line 666, in make_server
passthrough_errors, ssl_context, fd=fd)
File "/Users/Pablo/anaconda3/lib/python3.6/site-packages/werkzeug/serving.py", line 574, in __init__
socket.SOCK_STREAM)
File "/Users/Pablo/anaconda3/lib/python3.6/socket.py", line 460, in fromfd
nfd = dup(fd)
OSError: [Errno 9] Bad file descriptor
|
dev/william/notebook/email-spam-naive-bayes-model.ipynb | ###Markdown
Introduction This analysis aims to build a spam detection model using the Naive Bayes approach on the email spam [dataset](https://www.kaggle.com/veleon/ham-and-spam-dataset). This notebook is run on Kaggle. Initialize
###Code
# Import libraries
import os
import email
import random
import email.policy
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
# Construct a panda data frame of spam and ham email.
"""
From Spam Filter using Word Embedding & LSTM
https://www.kaggle.com/lonnieqin/spam-filter-using-word-embedding-lstm
"""
base_directory = "/kaggle/input/ham-and-spam-dataset/hamnspam/"
spam_email_names = os.listdir(base_directory + "spam")
normal_email_names = os.listdir(base_directory + "ham")
def load_email(is_spam, filename):
directory = base_directory + ("spam" if is_spam else "ham")
with open(os.path.join(directory, filename), "rb") as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
spam_emails = [load_email(True, filename) for filename in spam_email_names]
normal_emails = [load_email(False, filename) for filename in normal_email_names]
random.shuffle(spam_emails)
random.shuffle(normal_emails)
def process_email(emails, label, data_dictionary, default_topic=None, validation=0):
for mail in emails:
payload = mail.get_payload()
if isinstance(payload, list):
process_email(payload, label, data_dictionary, default_topic=mail["Subject"], validation=validation)
else:
if "Content-Type" in mail.keys():
if "html" in mail["Content-Type"].lower():
try:
soup = BeautifulSoup(mail.get_content())
topic = mail["Subject"]
if topic == None:
topic = default_topic
content = soup.body.text
data_dictionary["topic"].append(topic)
data_dictionary["content"].append(content)
data_dictionary["label"].append(label)
data_dictionary["validation"].append(validation)
except:
pass
elif "plain" in mail["Content-Type"].lower():
try:
topic = mail["Subject"]
if topic == None:
topic = default_topic
content = mail.get_content()
data_dictionary["topic"].append(topic)
data_dictionary["content"].append(content)
data_dictionary["label"].append(label)
data_dictionary["validation"].append(validation)
except:
pass
else:
pass
validation_split = 0.15
data_dictionary = {"topic": [], "content": [], "label": [], "validation": []}
for i in range(5):
validation_count = int(validation_split * len(spam_emails))
process_email(spam_emails[: validation_count], 1, data_dictionary, validation=1)
process_email(spam_emails[validation_count: ], 1, data_dictionary, validation=0)
validation_count = int(validation_split * len(normal_emails))
process_email(normal_emails[: validation_count], 0, data_dictionary, validation=1)
process_email(normal_emails[validation_count: ], 0, data_dictionary, validation=0)
df = pd.DataFrame(data_dictionary)
df.dropna(inplace=True)
df = df.sample(frac=1)
df.head(30)
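# --- Hedged sketch (not part of the original notebook) of the Naive Bayes step ---
# The introduction promises a Naive Bayes spam model, but this notebook only builds the
# DataFrame. Assuming scikit-learn is available (it is on Kaggle images), a minimal
# bag-of-words MultinomialNB baseline could look like this, using the 'validation'
# column for the train/validation split.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_df = df[df["validation"] == 0]
valid_df = df[df["validation"] == 1]

vectorizer = CountVectorizer(stop_words="english")
X_tr = vectorizer.fit_transform(train_df["content"])
X_va = vectorizer.transform(valid_df["content"])

nb = MultinomialNB()
nb.fit(X_tr, train_df["label"])
print("Validation accuracy:", nb.score(X_va, valid_df["label"]))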
###Output
_____no_output_____ |
lookup-purkinje.ipynb | ###Markdown
Introduction Someone discovered an odd thing about our pool split experiment, and we need to make sure everything was uploaded correctly.
###Code
import pandas
import os
import sys
HTSW = os.path.expanduser('~/proj/htsworkflow')
if HTSW not in sys.path:
sys.path.append(HTSW)
from htsworkflow.submission import encoded
server = encoded.ENCODED('www.encodeproject.org')
text = """13625 Illumina index__N701_N501_Paired_ends_LC 545_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13626 Illumina index__N702_N502_Paired_ends_LC 546_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13627 Illumina index__N703_N503_Paired_ends_LC 547_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13628 Illumina index__N704_N504_Paired_ends_LC 548_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13629 Illumina index__N705_N505_Paired_ends_LC 549_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13630 Illumina index__N706_N506_Paired_ends_LC 550_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13631 Illumina index__N707_N507_Paired_ends_LC 551_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13632 Illumina index__N708_N508_Paired_ends_LC 552_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13633 Illumina index__N709_N501_Paired_ends_LC 553_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13634 Illumina index__N710_N502_Paired_ends_LC 554_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13635 Illumina index__N711_N503_Paired_ends_LC 555_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13636 Illumina index__N712_N504_Paired_ends_LC 556_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13637 Illumina index__N701_N505_Paired_ends_LC 557_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13638 Illumina index__N702_N506_Paired_ends_LC 558_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13639 Illumina index__N703_N507_Paired_ends_LC 559_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13640 Illumina index__N704_N508_Paired_ends_LC 560_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13641 Illumina index__N705_N501_Paired_ends_LC 561_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13642 Illumina index__N706_N502_Paired_ends_LC 562_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13643 Illumina index__N707_N503_Paired_ends_LC 563_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13644 Illumina index__N708_N504_Paired_ends_LC 564_Hs_UMB4727_20_M_CN_Cb_Purkinje single cell_
13645 Illumina index__N709_N505_Paired_ends_LC 565_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13646 Illumina index__N710_N506_Paired_ends_LC 566_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13647 Illumina index__N711_N507_Paired_ends_LC 567_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13648 Illumina index__N712_N508_Paired_ends_LC 568_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13649 Illumina index__N701_N501_Paired_ends_LC 569_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13650 Illumina index__N702_N502_Paired_ends_LC 570_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13651 Illumina index__N703_N503_Paired_ends_LC 571_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13652 Illumina index__N704_N504_Paired_ends_LC 572_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13653 Illumina index__N705_N505_Paired_ends_LC 573_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13654 Illumina index__N706_N506_Paired_ends_LC 574_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13655 Illumina index__N707_N507_Paired_ends_LC 575_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13656 Illumina index__N708_N508_Paired_ends_LC 576_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13657 Illumina index__N709_N501_Paired_ends_LC 577_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13658 Illumina index__N710_N502_Paired_ends_LC 578_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13659 Illumina index__N711_N503_Paired_ends_LC 579_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13660 Illumina index__N712_N504_Paired_ends_LC 580_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13661 Illumina index__N701_N505_Paired_ends_LC 581_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13662 Illumina index__N702_N506_Paired_ends_LC 582_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13663 Illumina index__N703_N507_Paired_ends_LC 583_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_
13664 Illumina index__N704_N508_Paired_ends_LC 584_Hs_UMB4727_20_M_CN_Cb_Purkinje_poolsplit_ """.split('\n')
data = [(x[:5], x[6:].strip()) for x in text]
results = {}
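# For each library id in the pasted table: resolve the 'barbara-wold:<id>' alias to its
# ENCODE library record, search the portal for the experiment returned for that library
# accession, and collect the accessions plus both descriptions for cross-referencing.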
for row in data:
jumpgate = row[0]
alias = 'barbara-wold:{}'.format(jumpgate)
library = server.get_json(alias)
library_id = library['accession']
graph = server.search_jsonld(searchTerm=library_id)
experiment = graph['@graph'][0]
experiment_id = experiment['accession']
description = experiment['description']
results.setdefault('jumpgate', []).append(jumpgate)
results.setdefault('experiment_id', []).append(experiment_id)
results.setdefault('library_id', []).append(library_id)
results.setdefault('description', []).append(description)
results.setdefault('jumpgate_description', []).append(row[1])
df = pandas.DataFrame(results)
df.head()
df.to_csv('purkinje-cross-reference.csv')
library['accession']
###Output
_____no_output_____ |
site/en/tutorials/keras/overfit_and_underfit.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
from tensorflow.compat.v2 import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
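# Tiny illustration (not part of the original tutorial) of the example from the text above:
# the sequence [3, 5] becomes a vector that is all zeros except at indices 3 and 5.
print(multi_hot_sequences([[3, 5]], dimension=10)[0])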
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with less hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies to prevent overfitting Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization introduces sparsity to make some of your weight parameters zero. L2 regularization will penalize the weights parameters without making them sparse—one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
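# Optional check (not part of the original tutorial): the penalty terms added by the
# kernel_regularizer arguments are collected in `l2_model.losses`; their sum is what
# gets added to the data loss during training.
print("Number of regularization loss terms:", len(l2_model.losses))
print("Total L2 penalty for the current weights:", tf.add_n(l2_model.losses).numpy())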
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid')
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs Dataset The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data. So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
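Rather than reading those factors off the plot, you can also evaluate the schedule directly. A quick numeric check (it relies only on `lr_schedule` and `STEPS_PER_EPOCH` as defined above; with `decay_rate=1` the schedule is `0.001 / (1 + step / decay_steps)`):
```
# Confirm the 1/2 and 1/3 factors claimed above.
for epoch in [0, 1000, 2000]:
    step = epoch * STEPS_PER_EPOCH
    print(f"epoch {epoch:>4}: lr = {float(lr_schedule(step)):.6f}")
# Expected: 0.001000, 0.000500, 0.000333
```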
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
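If you prefer numbers to the plot, the same comparison can be read straight out of the recorded histories. A minimal sketch (it assumes each entry in `size_histories` is the `History` object returned by `compile_and_fit` above):
```
# Final training vs. validation cross-entropy for each model size.
for name, history in size_histories.items():
    h = history.history
    print(f"{name:>6}: train_bce={h['binary_crossentropy'][-1]:.3f}  "
          f"val_bce={h['val_binary_crossentropy'][-1]:.3f}")
```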
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Open an embedded TensorBoard viewer
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone. It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
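While that trains, it helps to be concrete about what these regularizer objects compute. A minimal illustration (not part of the original tutorial) on a hand-picked weight vector:
```
# Each regularizer is callable and returns its penalty for a given tensor.
w = tf.constant([-1.0, 0.5, 2.0])
print(float(regularizers.l1(0.01)(w)))  # 0.01 * (1.0 + 0.5 + 2.0)  = 0.035
print(float(regularizers.l2(0.01)(w)))  # 0.01 * (1.0 + 0.25 + 4.0) = 0.0525
```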
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
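As a quick sanity check (not part of the original tutorial), the losses collected this way should match the closed-form penalty, `0.001` times the sum of squared weights of each regularized kernel:
```
# Recompute the L2 penalty by hand from the regularized Dense kernels.
manual_penalty = tf.add_n(
    [0.001 * tf.reduce_sum(tf.square(layer.kernel))
     for layer in l2_model.layers
     if getattr(layer, 'kernel_regularizer', None) is not None])
print(float(regularization_loss), float(manual_penalty))  # the two values should agree
```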
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In `tf.keras` you can introduce dropout in a network via the `Dropout` layer, which gets applied to the output of the layer right before it. Let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
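As an aside, you can inspect a single `Dropout` layer directly. A minimal sketch (not part of the original tutorial); note that `tf.keras` uses the equivalent "inverted" formulation, scaling the surviving activations up by `1 / (1 - rate)` during training so that nothing needs to be rescaled at inference time:
```
# Dropout only acts when training=True; at inference it passes inputs through unchanged.
demo_dropout = tf.keras.layers.Dropout(0.5)
x = tf.ones([1, 8])
print(demo_dropout(x, training=True).numpy())   # roughly half zeros, the rest scaled to 2.0
print(demo_dropout(x, training=False).numpy())  # all ones, unchanged
```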
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorDoard.dev](https://tensorboard.dev/).It's also included in an `` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell. Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)  # output raw logits; the loss in compile_and_fit uses from_logits=True
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having a the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for it's regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In `tf.keras` you can introduce dropout in a network via the `Dropout` layer, which gets applied to the output of the layer right before it. Let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)  # output raw logits; the loss in compile_and_fit uses from_logits=True
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)  # output raw logits; the loss in compile_and_fit uses from_logits=True
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorDoard.dev](https://tensorboard.dev/).It's also included in an `` for convienience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data. So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy. At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone. It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for it's regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In `tf.keras` you can introduce dropout in a network via the `Dropout` layer, which gets applied to the output of the layer right before it. Let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorDoard.dev](https://tensorboard.dev/).It's also included in an `` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data. So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data form the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a resuable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply a `.` for each epoch and, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a linear model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1, activation='sigmoid')
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting alltogether, and each of the larger models overfit the data more quickly. This becomes so sever for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `` for convienience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11&x202F;000&x202F;000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
row = list(row)
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `repack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data form the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a resuable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply a `.` for each epoch and, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a linear model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1, activation='sigmoid')
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting alltogether, and each of the larger models overfit the data more quickly. This becomes so sever for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `` for convienience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a beseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights. one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly. Because it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having a the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for it's regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" Is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explaination for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve teh behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorDoard.dev](https://tensorboard.dev/).It's also included in an `` for convienience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs Dataset The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data. So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
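# Optional check (a small addition for illustration): each element of `packed_ds`
# is now a (features, label) pair rather than 29 separate scalars.
print(packed_ds.element_spec)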
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy. At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching, also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
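# A quick numeric check of the decay described above (values are approximate):
# InverseTimeDecay computes lr = 0.001 / (1 + step / (STEPS_PER_EPOCH * 1000)).
print(lr_schedule(0).numpy())                       # ~0.001    at epoch 0
print(lr_schedule(STEPS_PER_EPOCH * 1000).numpy())  # ~0.0005   at epoch 1000
print(lr_schedule(STEPS_PER_EPOCH * 2000).numpy())  # ~0.000333 at epoch 2000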
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks. The training for this tutorial runs for many short epochs. To reduce the logging noise, use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs. Next, include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later. Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
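    # Monitor `val_binary_crossentropy` rather than `val_loss`, so that for the regularized
    # models later in the tutorial, early stopping is not influenced by the regularization penalty term.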
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone. It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization penalizes the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. The short check below illustrates the two penalties on a toy weight vector; after that, let's add L2 weight regularization to the large model.
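As a quick, illustrative check of the two penalties (the weight values below are made up for the example; they are not taken from any model in this tutorial):

```
w = tf.constant([1.0, -2.0, 0.0, 0.5])
print(regularizers.l1(0.001)(w).numpy())  # 0.001 * (1.0 + 2.0 + 0.0 + 0.5)  = 0.0035
print(regularizers.l2(0.001)(w).numpy())  # 0.001 * (1.0 + 4.0 + 0.0 + 0.25) = 0.00525
```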
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters. More info There are two important things to note about this sort of regularization. **First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
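# A minimal sketch of a hand-written training step that includes these losses
# (illustrative only; every model in this tutorial is actually trained with `Model.fit`):
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
with tf.GradientTape() as tape:
  logits = l2_model(features, training=True)
  # Add the collected regularization penalties to the raw loss before differentiating.
  loss = loss_fn(tf.expand_dims(label, -1), logits) + tf.add_n(l2_model.losses)
grads = tape.gradient(loss, l2_model.trainable_variables)
# (an optimizer.apply_gradients call would follow here)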
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before. The short sketch below shows what a single Dropout layer does to its input; after that, let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
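A minimal sketch of a single Dropout layer in isolation (a toy input of ones, purely for illustration). Note that the Keras `Dropout` layer uses the equivalent "inverted" formulation: it scales the surviving activations up at training time, so at inference time the input simply passes through unchanged.

```
dropout_layer = layers.Dropout(0.5)
x = tf.ones([1, 10])
print(dropout_layer(x, training=True))   # roughly half the entries zeroed, survivors scaled to 2.0
print(dropout_layer(x, training=False))  # unchanged at inference time
```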
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting housing prices—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to *testing data* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
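###Markdown
As a tiny illustration of this encoding (a hypothetical toy sequence, not part of the dataset), `multi_hot_sequences` turns `[3, 5]` into a vector of zeros with ones at indices 3 and 5:
###Code
# Toy example: encode the sequence [3, 5] into a 10-dimensional multi-hot vector.
print(multi_hot_sequences([[3, 5]], dimension=10))
###Output
_____no_output_____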
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or what the right size is for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
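###Markdown
As a small aside (a sketch, not part of the original walkthrough): the "capacity" described above is just the number of trainable parameters, which `.summary()` already printed and which you can also read programmatically.
###Code
# Number of learnable parameters in the baseline model (its "capacity").
print(baseline_model.count_params())
###Output
_____no_output_____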
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting (a short demonstration of the dropout layer's behavior on a toy input follows the comparison plot below):
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
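###Markdown
A small, hedged illustration of the mechanism described above (independent of the IMDB models, and assuming eager execution as in TensorFlow 2.x): a `Dropout` layer only zeroes entries when it is called in training mode; in tf.keras the kept entries are rescaled during training so the expected sum is unchanged, while inference-mode calls pass the input through untouched.
###Code
# Dropout behavior on a toy input: roughly half the entries are zeroed (and the
# rest rescaled) in training mode, while inference mode leaves the input as-is.
demo_dropout = keras.layers.Dropout(0.5)
x = tf.ones((1, 10))
print(demo_dropout(x, training=True).numpy())
print(demo_dropout(x, training=False).numpy())
###Output
_____no_output_____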
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
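###Markdown
A quick look at what `pack_row` produces (a minimal sketch): mapping it over a small batch yields a `(batch, 28)` feature tensor and a matching batch of labels.
###Code
# Apply `pack_row` to a tiny batch of 2 records just to inspect the shapes.
for demo_features, demo_label in ds.batch(2).map(pack_row).take(1):
  print(demo_features.shape, demo_label.numpy())
###Output
_____no_output_____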
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
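###Markdown
A quick numeric check of the claim above (a minimal sketch using the `lr_schedule` just defined): after 1000 and 2000 epochs' worth of steps the schedule should return roughly 1/2 and 1/3 of the base rate of 0.001.
###Code
# The schedule is base_rate / (1 + step / (STEPS_PER_EPOCH * 1000)), so:
print(lr_schedule(1000 * STEPS_PER_EPOCH).numpy())  # expected: about 0.0005
print(lr_schedule(2000 * STEPS_PER_EPOCH).numpy())  # expected: about 0.000333
###Output
_____no_output_____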
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the `"Tiny"` model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses; a fuller sketch of such a loop follows the next cell.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
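###Markdown
To make that concrete, here is a minimal, hedged sketch of a hand-written training step that folds those regularization losses into the total loss. The `batch_features`/`batch_labels` names and the plain `Adam` optimizer are illustrative assumptions, not part of the tutorial's training setup.
###Code
# Hedged sketch of a custom training step that includes the model's
# regularization losses. Assumes eager TF 2.x and the `l2_model` defined above.
demo_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
demo_optimizer = tf.keras.optimizers.Adam()
def demo_train_step(batch_features, batch_labels):
  with tf.GradientTape() as tape:
    logits = l2_model(batch_features, training=True)
    # Data loss plus the sum of the per-layer weight penalties.
    loss = demo_loss_fn(batch_labels, logits) + tf.add_n(l2_model.losses)
  grads = tape.gradient(loss, l2_model.trainable_variables)
  demo_optimizer.apply_gradients(zip(grads, l2_model.trainable_variables))
  return loss
###Output
_____no_output_____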
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the `"Tiny"` model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also embedded below for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
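As a brief, hedged aside (penalty values here are illustrative, not tuned): the corresponding constructors in `tf.keras` are `regularizers.l1`, `regularizers.l2`, and `regularizers.l1_l2`, each attached to a layer through its `kernel_regularizer` argument:
```
from tensorflow.keras import layers, regularizers

# Three flavors of kernel penalty: L1 only, L2 only, or both combined.
l1_layer = layers.Dense(512, activation='elu',
                        kernel_regularizer=regularizers.l1(0.001))
l2_layer = layers.Dense(512, activation='elu',
                        kernel_regularizer=regularizers.l2(0.001))
l1_l2_layer = layers.Dense(512, activation='elu',
                           kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001))
```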
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
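As a minimal, standalone sanity check (toy numbers, not the tutorial's weights), you can call the regularizer object directly on a tensor to see exactly what it would add to the loss:
```
import tensorflow as tf
from tensorflow.keras import regularizers

reg = regularizers.l2(0.001)
kernel = tf.constant([[0.5, -0.3]])
# 0.001 * (0.5**2 + (-0.3)**2) = 0.00034
print(reg(kernel).numpy())
```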
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
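As a standalone illustration of the layer's behavior (separate from the tutorial's models), `tf.keras.layers.Dropout` only zeroes values when called with `training=True`; with a rate of 0.5, roughly half of a toy input is dropped and the surviving values are rescaled so the expected sum stays the same:
```
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))
print(drop(x, training=True).numpy())   # about half the entries zeroed, the rest rescaled
print(drop(x, training=False).numpy())  # unchanged outside of training
```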
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also embedded below for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting housing prices—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
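As a toy illustration of that encoding (using a dimension of 10 instead of 10,000), the sequence `[3, 5]` becomes a vector with ones at indices 3 and 5:
```
import numpy as np

vector = np.zeros(10)
vector[[3, 5]] = 1.0
print(vector)  # [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
```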
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
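A small, hedged companion check: because each review is multi-hot encoded, summing a row gives the number of distinct in-vocabulary words that appear in that review:
```
print(int(train_data[0].sum()))  # distinct word indices present in the first review
```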
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
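As a hedged worked example of what "capacity" means here (the models themselves are defined in the next cells), the parameter count of a `Dense` layer is `inputs * units + units` (weights plus biases), so for the baseline model with `NUM_WORDS = 10000` inputs:
```
print(10000 * 16 + 16)  # first Dense(16): 160,016 parameters
print(16 * 16 + 16)     # second Dense(16): 272 parameters
print(16 * 1 + 1)       # output Dense(1): 17 parameters
```
These are the same counts that `baseline_model.summary()` reports below.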
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
row = list(row)
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
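A hedged, toy-sized check of why the rows are batched first: `tf.stack(..., axis=1)` inside `pack_row` expects each field to be a batch of values, so calling it on a synthetic "batch" of one row yields a `(1, 28)` feature tensor (the values below are made up):
```
import tensorflow as tf

toy_row = tuple(tf.constant([float(i)]) for i in range(FEATURES + 1))  # label + 28 features, batch of 1
features, label = pack_row(*toy_row)
print(features.shape, label.numpy())  # (1, 28) [0.]
```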
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
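A quick, hedged numeric check of that claim, using the `lr_schedule` and `STEPS_PER_EPOCH` defined above (with `decay_rate=1` the schedule is `0.001 / (1 + step / decay_steps)`):
```
for epoch in [0, 1000, 2000]:
    step = epoch * STEPS_PER_EPOCH
    print(epoch, float(lr_schedule(step)))
# expected: 0.001 at epoch 0, 0.0005 at epoch 1000, about 0.00033 at epoch 2000
```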
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a linear model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1, activation='sigmoid')
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also embedded below for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add a few Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also embedded below for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
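# With staircase=False and decay_rate=1, the schedule above follows
# lr(step) = 0.001 / (1 + step / (STEPS_PER_EPOCH * 1000)), so it halves after
# 1000 epochs' worth of steps and reaches a third of the base rate after 2000.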
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
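# Side note (not used below): `regularizers.l1(...)` and `regularizers.l1_l2(...)`
# are passed in exactly the same way if you want an L1 or a combined penalty instead.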
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
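# `l2_model.losses` holds one penalty tensor per regularized layer; in a custom
# training loop, add their sum to the task loss before computing gradients.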
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
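# A tiny illustration (a sketch, separate from the benchmark below): Dropout only
# zeroes activations when called with training=True. tf.keras uses "inverted dropout",
# scaling the surviving activations by 1/(1 - rate) during training so the layer is a
# no-op at inference time.
demo_dropout = layers.Dropout(0.5)
print(demo_dropout(tf.ones([1, 8]), training=True))   # roughly half zeros, rest ~2.0
print(demo_dropout(tf.ones([1, 8]), training=False))  # unchanged: all ones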
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
# keras.datasets.imdb is broken in TensorFlow 1.13 and 1.14 when used with numpy 1.16.3,
# so install a nightly build instead.
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
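# For example (a quick check): multi_hot_sequences([[3, 5]], dimension=10)[0] is all
# zeros except for 1.0 at indices 3 and 5, matching the description above.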
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
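  # Plot the training (solid) and validation (dashed, labelled 'Val') curves of `key`
  # for each named history, using one colour per model.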
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
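# Side note (not used below): keras.regularizers.l1(...) and keras.regularizers.l1_l2(...)
# are passed in exactly the same way if you want an L1 or a combined penalty instead.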
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
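  # `pack_row` is mapped over batches of rows (see the next cell), so each element of
  # `row` is a batch-sized vector rather than a single scalar; stacking along axis 1
  # below therefore yields a [batch_size, FEATURES] feature matrix.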
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Open an embedded TensorBoard viewer
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
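# Side note (not used below): `regularizers.l1(...)` and `regularizers.l1_l2(...)`
# are passed in exactly the same way if you want an L1 or a combined penalty instead.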
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
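# `l2_model.losses` holds one penalty tensor per regularized layer; in a custom
# training loop, add their sum to the task loss before computing gradients.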
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing data* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
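###Markdown
As a quick sanity check of the helper above, encode the example sequence `[3, 5]` with a reduced `dimension` of 10 (chosen here just so the whole vector fits on one line); only indices 3 and 5 should be set to 1:
###Code
# Encode the toy sequence [3, 5] into a 10-dimensional multi-hot vector.
print(multi_hot_sequences([[3, 5]], dimension=10))
###Output
_____no_output_____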
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies to prevent overfitting Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization introduces sparsity to make some of your weight parameters zero. L2 regularization will penalize the weights parameters without making them sparse—one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
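###Markdown
To make the two penalty flavors concrete, here is a small sketch evaluating both regularizers on a toy weight vector (the `sample_weights` values are made up purely for illustration):
###Code
# Evaluate the L1 and L2 penalties on an arbitrary toy weight vector.
sample_weights = tf.constant([-0.5, 0.2, 0.1])
print(keras.regularizers.l1(0.001)(sample_weights).numpy())  # 0.001 * (0.5 + 0.2 + 0.1)    = 0.0008
print(keras.regularizers.l2(0.001)(sample_weights).numpy())  # 0.001 * (0.25 + 0.04 + 0.01) = 0.0003
###Output
_____no_output_____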
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation='relu'),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid')
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing data* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
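###Markdown
As a side note, the penalty terms that Keras adds are exposed on the model itself via its `losses` attribute (typically one entry per regularized layer; shown here just for inspection):
###Code
# The per-layer L2 penalty terms attached to this model.
print(l2_model.losses)
###Output
_____no_output_____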
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
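###Markdown
The plotted curve can also be checked numerically (illustrative): the schedule starts at the base rate of 0.001 and reaches half of that after 1000 epochs' worth of steps.
###Code
# Evaluate the schedule at a few specific step counts.
print(lr_schedule(0).numpy())                       # ~0.001, the base rate
print(lr_schedule(STEPS_PER_EPOCH * 1000).numpy())  # ~0.0005, half the base rate
print(lr_schedule(STEPS_PER_EPOCH * 2000).numpy())  # ~0.00033, a third of the base rate
###Output
_____no_output_____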
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a tiny model with a single hidden layer:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1, activation='sigmoid')
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the tiny model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
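###Markdown
Before comparing the loss curves, it can help to compare raw capacity. A quick illustrative check of the parameter counts of the four models defined above:
###Code
# The models span roughly three orders of magnitude in learnable parameters.
for model_name, model in [('Tiny', tiny_model), ('Small', small_model),
                          ('Medium', medium_model), ('Large', large_model)]:
  print(model_name, model.count_params())
###Output
_____no_output_____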
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` callback to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
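###Markdown
To make that `l2(0.001)` term concrete, here is an illustrative check using the first layer of the trained `l2_model`: Keras exposes each layer's penalty through `layer.losses`, and it should match `0.001 * sum(w**2)` computed by hand.
###Code
first_dense = l2_model.layers[0]
manual_penalty = 0.001 * tf.reduce_sum(tf.square(first_dense.kernel))
# The two numbers below should agree (up to floating point error).
print(float(manual_penalty))
print(float(first_dense.losses[0]))
###Output
_____no_output_____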
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
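###Markdown
For instance, a bare-bones custom training step (a minimal sketch, assuming the `l2_model`, the batched `train_ds` and a fixed Adam optimizer; it is not part of the tutorial's actual training) would fold those regularization losses into the total loss before computing gradients:
###Code
loss_fn = tf.keras.losses.BinaryCrossentropy()
custom_optimizer = tf.keras.optimizers.Adam(0.001)

@tf.function
def train_step(batch_features, batch_labels):
  with tf.GradientTape() as tape:
    predictions = l2_model(batch_features, training=True)
    loss = loss_fn(batch_labels, predictions)
    # The kernel regularizers populate `model.losses`; add them in explicitly.
    loss += tf.add_n(l2_model.losses)
  gradients = tape.gradient(loss, l2_model.trainable_variables)
  custom_optimizer.apply_gradients(zip(gradients, l2_model.trainable_variables))
  return loss

# Run a single step on one batch, just to show the pieces fit together.
for batch_features, batch_labels in train_ds.take(1):
  print(float(train_step(batch_features, batch_labels)))
###Output
_____no_output_____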
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before.Let's add Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
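###Markdown
Before repacking anything, it can help to peek at a single raw record (purely illustrative): each element is a tuple of 29 scalar tensors, the class label followed by the 28 features.
###Code
for row in ds.take(1):
  print(len(row))         # 29 scalars per record
  print(row[0].numpy())   # the label
  print(row[1].numpy())   # the first feature
###Output
_____no_output_____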
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
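###Markdown
Note that the models below end in a plain `Dense(1)` layer, so they output raw logits and the loss is built with `from_logits=True`. As an illustrative aside, this is equivalent (up to tiny numerical differences) to applying a sigmoid yourself and using the default loss:
###Code
y_true = tf.constant([[1.0]])
logit = tf.constant([[0.3]])

# These two values agree: `from_logits=True` just applies the sigmoid internally.
print(float(tf.keras.losses.BinaryCrossentropy(from_logits=True)(y_true, logit)))
print(float(tf.keras.losses.BinaryCrossentropy()(y_true, tf.sigmoid(logit))))
###Output
_____no_output_____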
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the tiny model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before.Let's add Dropout layers to our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
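###Markdown
As a small illustrative aside, the `Dropout` layer is only active when called with `training=True`; at inference time it passes its input through unchanged, and during training the kept entries are rescaled so the expected sum stays the same.
###Code
demo_dropout = tf.keras.layers.Dropout(0.5)
x = tf.ones([1, 10])

print(demo_dropout(x, training=True).numpy())   # roughly half the entries zeroed
print(demo_dropout(x, training=False).numpy())  # identical to the input
###Output
_____no_output_____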
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
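###Markdown
The `axis=1` argument to `tf.stack` is what turns a batch of per-column vectors into a `[batch, num_features]` matrix. A tiny standalone illustration with made-up numbers:
###Code
# Three "columns", each holding a batch of two values.
columns = [tf.constant([1.0, 2.0]),
           tf.constant([3.0, 4.0]),
           tf.constant([5.0, 6.0])]

# Stacking along axis 1 gives one row per example: shape (2, 3).
print(tf.stack(columns, axis=1).numpy())
###Output
_____no_output_____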
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
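###Markdown
With these constants, each "epoch" of training is only a handful of optimizer steps. A quick illustrative check of the arithmetic:
###Code
# 10,000 training examples split into batches of 500 gives 20 steps per epoch.
print(N_TRAIN // BATCH_SIZE)   # 20
print(STEPS_PER_EPOCH)         # the same value, as defined above
###Output
_____no_output_____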
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the tiny model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
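###Markdown
The same training-versus-validation comparison can be read directly out of the `History` objects (illustrative, using the metric name configured in `compile_and_fit`):
###Code
# Final recorded training vs. validation crossentropy for the "large" model.
large_history = size_histories['large'].history
print(large_history['binary_crossentropy'][-1])
print(large_history['val_binary_crossentropy'][-1])
###Output
_____no_output_____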
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` callback to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights; this is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
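# Optional sanity check (an added sketch, not part of the original tutorial):
# recompute the L2 penalty by hand and compare it with what Keras tracks.
# Only the kernels were given a regularizer above, so only they contribute.
manual_penalty = tf.add_n([0.001 * tf.reduce_sum(tf.square(w))
                           for w in l2_model.trainable_weights
                           if 'kernel' in w.name])
print(float(manual_penalty), float(regularization_loss))  # the two values should agree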
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the keep probability (one minus the dropout rate), so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
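First, a minimal NumPy sketch (an added illustration, not the tutorial's code) of the classic formulation described above:
```
import numpy as np

rng = np.random.default_rng(seed=0)
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5  # dropout rate = fraction of features zeroed out

# Training time: zero out roughly `rate` of the features at random.
keep_mask = rng.random(activations.shape) >= rate
print(activations * keep_mask)

# Test time (classic formulation): keep every unit, scale by the keep probability.
print(activations * (1 - rate))
```
The Keras `Dropout` layer handles this bookkeeping for you; the next cell simply inserts it after each hidden `Dense` layer.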
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
# keras.datasets.imdb is broken in 1.13 and 1.14, by np 1.16.3
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
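# Quick sanity check (an added example, not in the original notebook): the sequence
# [3, 5] becomes a NUM_WORDS-long vector that is all zeros except at indices 3 and 5.
example = multi_hot_sequences([[3, 5]], dimension=NUM_WORDS)
print(example.shape, example[0, 3], example[0, 5], example[0].sum())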
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the keep probability (one minus the dropout rate), so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
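As a quick check of that claim (an added sketch, assuming the standard inverse-time-decay formula `lr = initial_lr / (1 + decay_rate * step / decay_steps)`), evaluate the schedule at a few epoch boundaries:
```
# With decay_steps = STEPS_PER_EPOCH * 1000 and decay_rate = 1, epoch 1000 gives
# roughly 0.001 / 2, epoch 2000 gives 0.001 / 3, and so on.
for epoch in (1000, 2000, 3000):
    print(epoch, float(lr_schedule(STEPS_PER_EPOCH * epoch)))
```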
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise, use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
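# (Added sketch, not in the original notebook.) The "capacity" discussed above is
# just the number of learnable parameters; compare the four models directly:
for name, model in [('Tiny', tiny_model), ('Small', small_model),
                    ('Medium', medium_model), ('Large', large_model)]:
    print(f'{name:>6}: {model.count_params():,} parameters')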
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the keep probability (one minus the dropout rate), so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
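As an aside (an added note, not part of the tutorial): `tf.keras` actually implements the equivalent "inverted" form of dropout, scaling the surviving activations up by `1 / (1 - rate)` at training time so that nothing needs to be rescaled at inference time. You can see this directly with `tf.nn.dropout`:
```
import tensorflow as tf

x = tf.constant([0.2, 0.5, 1.3, 0.8, 1.1])
# At training time roughly half of the entries are zeroed and the rest are doubled;
# at inference the Dropout layer is simply a no-op.
print(tf.nn.dropout(x, rate=0.5, seed=1))
```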
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise, use the `tfdocs.EpochDots` callback, which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
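For example, a hand-written training step would add those collected regularization losses to the data loss before taking gradients. This is a minimal sketch, not used elsewhere in this tutorial; it assumes the `l2_model`, `get_optimizer` and `train_ds` objects defined above:

```
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = get_optimizer()

@tf.function
def train_step(features, labels):
    with tf.GradientTape() as tape:
        logits = l2_model(features, training=True)
        # Data loss plus the weight penalties collected in `l2_model.losses`.
        loss = loss_fn(labels, logits) + tf.add_n(l2_model.losses)
    gradients = tape.gradient(loss, l2_model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, l2_model.trainable_variables))
    return loss

# Run a single step on one batch just to show it works.
for features, labels in train_ds.take(1):
    print(float(train_step(features, labels)))
```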
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before.Let's add two Dropout layers in our network to see how well they do at reducing overfitting.
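Before wiring them into the model (which the next code cell does), here is a quick standalone look at what a single `Dropout` layer does to a small vector. This is an illustrative sketch; the zeroed positions are random, so the exact output will vary:

```
demo_layer = layers.Dropout(0.5)
demo_input = np.array([[0.2, 0.5, 1.3, 0.8, 1.1]], dtype=np.float32)
# During training roughly half the entries are zeroed; tf.keras uses "inverted"
# dropout, so the surviving entries are scaled up by 1 / (1 - rate).
print(demo_layer(demo_input, training=True).numpy())
# At inference time the layer passes its input through unchanged.
print(demo_layer(demo_input, training=False).numpy())
```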
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing data* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
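As a quick sanity check of that encoding (illustrative only; it reuses the `multi_hot_sequences` helper defined above, with a tiny dimension so the whole vector is readable):

```
# The sequence [3, 5] becomes a vector with ones only at indices 3 and 5.
print(multi_hot_sequences([[3, 5]], dimension=10))
# [[0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]]
```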
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
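Keras collects the per-layer regularization penalties in `model.losses`; summing them gives the total penalty term currently added to the training loss. A quick sketch (the printed value depends on the learned weights):

```
# Total weight penalty currently added to the training loss.
print(float(tf.add_n(l2_model.losses)))
```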
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs DatasetThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually, make a new `Dataset` that takes batches of 10000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1, activation='sigmoid')
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/sizes``` You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss = tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing data* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger model As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
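First, as a quick arithmetic check, here is a minimal sketch (not part of the original notebook; it assumes the `l2_model` trained above) that recomputes the penalty directly from the trained kernels. Only the two regularized `Dense` layers contribute, because the output layer has no `kernel_regularizer`:
```
# A minimal sketch: recompute the L2 penalty by hand from the trained weights.
# `get_weights()[0]` is the kernel of a Dense layer; biases are not penalized
# because only `kernel_regularizer` was set above.
l2_penalty = sum(0.001 * np.sum(layer.get_weights()[0] ** 2)
                 for layer in l2_model.layers[:2])
print('total L2 penalty added to the training loss:', l2_penalty)
```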
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers to our IMDB network to see how well they do at reducing overfitting:
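Before wiring dropout into the model below, here is a toy illustration of the mechanism described above (a sketch only, using the same example numbers and plain NumPy so it behaves the same on any TensorFlow version):
```
# A toy illustration of dropout: randomly zero out roughly half the features
# of a single layer output.
layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
keep_mask = np.random.rand(layer_output.size) > 0.5  # each feature kept with probability 0.5
print(layer_output * keep_mask)                       # e.g. [0.  0.5  1.3  0.  1.1]
```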
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](text_classification_with_hub.ipynb) and [predicting fuel efficiency](regression.ipynb)—the accuracy of models on the validation data would peak after training for a number of epochs and then stagnate or start decreasing.In other words, your model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what you really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the train data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. You need to strike a balance. Understanding how to train for an appropriate number of epochs as you'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, you'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs dataset The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data. So, instead of repacking each row individually, make a new `tf.data.Dataset` that takes batches of 10,000 examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Inspect some of the records from this new `packed_ds`. The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy. At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `Dataset.batch` method to create batches of an appropriate size for training. Before batching, also remember to use `Dataset.shuffle` and `Dataset.repeat` on the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only densely-connected layers (`tf.keras.layers.Dense`) as a baseline, then create larger models, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `tf.keras.optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `tf.keras.optimizers.schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1,000 epochs, 1/3 at 2,000 epochs, and so on.
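To make the schedule concrete, here is a small hedged check (the epochs chosen are just for illustration) that evaluates `lr_schedule` at a few points and compares it with the closed-form expression `initial_lr / (1 + decay_rate * step / decay_steps)`:
```
# A small check of the decay formula:
#   lr(step) = initial_lr / (1 + decay_rate * step / decay_steps)
for epoch in [0, 1000, 2000]:
    step = epoch * STEPS_PER_EPOCH
    manual = 0.001 / (1 + step / (STEPS_PER_EPOCH * 1000))
    print(epoch, float(lr_schedule(step)), manual)
```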
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `tf.keras.callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly, each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To check if you can beat the performance of the small model, progressively train some larger models. Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try three hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large model As an exercise, you can create an even larger model and check how quickly it begins overfitting. Next, add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really figure out what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
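A compact numeric summary can complement the plot; the sketch below (not part of the original notebook) simply prints, for each recorded history, the best validation cross-entropy and the epoch at which it occurred:
```
# A hedged numeric companion to the plot: the best validation cross-entropy
# reached by each model, and the epoch at which it happened.
for name, history in size_histories.items():
    val = history.history['val_binary_crossentropy']
    best = int(np.argmin(val))
    print(f'{name:>8}: best val_binary_crossentropy {val[best]:.3f} at epoch {best}')
```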
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoard These models all wrote TensorBoard logs during training. Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Open an embedded TensorBoard viewer
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone. It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as demonstrated in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights—one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Add L2 weight regularization:
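Before building the full model, a minimal sketch (assuming eager execution and the `regularizers` import from the setup cell; the toy weights are made up for illustration) shows how much penalty each flavor contributes for a small weight vector:
```
# Regularizer objects are callable, so the penalty they would add for a given
# weight tensor can be inspected directly.
toy_weights = tf.constant([-1.0, 0.5, 0.0, 2.0])
print('L1 penalty:', float(regularizers.l1(0.001)(toy_weights)))  # 0.001 * sum(|w|)  = 0.0035
print('L2 penalty:', float(regularizers.l2(0.001)(toy_weights)))  # 0.001 * sum(w**2) = 0.00525
```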
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
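As a rough numerical check (a sketch only, and approximate because the history values are averages over each epoch while `model.losses` reflects the final weights), the gap between the reported `loss` and `binary_crossentropy` should be close to the penalty the model currently tracks:
```
# Approximate check: the last-epoch gap between `loss` and `binary_crossentropy`
# should be roughly the current L2 penalty tracked in `l2_model.losses`.
hist = regularizer_histories['l2'].history
print('loss - binary_crossentropy :', hist['loss'][-1] - hist['binary_crossentropy'][-1])
print('current sum of model.losses:', float(tf.add_n(l2_model.losses)))
```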
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As demonstrated in the diagram above, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More info There are two important things to note about this sort of regularization: 1. If you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
2. This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "decoupled weight decay" is used in optimizers like `tf.keras.optimizers.Ftrl` and `tfa.optimizers.AdamW`. A short sketch of this decoupled approach is included below. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. For example, a given layer would normally have returned a vector `[0.2, 0.5, 1.3, 0.8, 1.1]` for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. `[0, 0.5, 1.3, 0, 1.1]`. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In Keras, you can introduce dropout in a network via the `tf.keras.layers.Dropout` layer, which gets applied to the output of the layer right before it. Add two dropout layers to your network to check how well they do at reducing overfitting:
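Before adding the dropout layers, here is the sketch of the decoupled approach referenced above (an illustration only; it assumes the optional `tensorflow-addons` package, which this notebook does not install, and the hyperparameter values are placeholders):
```
# A hedged sketch of decoupled weight decay (requires `tensorflow-addons`,
# which is not part of this tutorial's setup).
import tensorflow_addons as tfa

adamw = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3)
# It could then be passed to `compile_and_fit` in place of the default optimizer,
# e.g. compile_and_fit(some_model, "regularizers/adamw", optimizer=adamw)
```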
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline. Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoard These models also recorded TensorBoard logs. To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger model As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers to our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting View on TensorFlow.org Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras). In both of the previous examples—classifying movie reviews, and predicting housing prices—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before). The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill. To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well. In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
1.11.0
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it. Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
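# Quick sanity check (an added sketch): the sequence [3, 5] becomes a vector that is
# all zeros except at indices 3 and 5 (shown here with dimension=10).
# Expected output: [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
print(multi_hot_sequences([[3, 5]], dimension=10)[0])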
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfitting The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting. On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity". Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures. To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network. We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
- 4s - loss: 8.3463e-04 - acc: 1.0000 - binary_crossentropy: 8.3463e-04 - val_loss: 0.9406 - val_acc: 0.8528 - val_binary_crossentropy: 0.9406
Epoch 2/20
- 4s - loss: 7.6335e-04 - acc: 1.0000 - binary_crossentropy: 7.6335e-04 - val_loss: 0.9517 - val_acc: 0.8520 - val_binary_crossentropy: 0.9517
Epoch 3/20
- 4s - loss: 6.9607e-04 - acc: 1.0000 - binary_crossentropy: 6.9607e-04 - val_loss: 0.9616 - val_acc: 0.8520 - val_binary_crossentropy: 0.9616
Epoch 4/20
- 4s - loss: 6.3856e-04 - acc: 1.0000 - binary_crossentropy: 6.3856e-04 - val_loss: 0.9687 - val_acc: 0.8524 - val_binary_crossentropy: 0.9687
Epoch 5/20
- 4s - loss: 5.8883e-04 - acc: 1.0000 - binary_crossentropy: 5.8883e-04 - val_loss: 0.9777 - val_acc: 0.8524 - val_binary_crossentropy: 0.9777
Epoch 6/20
- 4s - loss: 5.4410e-04 - acc: 1.0000 - binary_crossentropy: 5.4410e-04 - val_loss: 0.9875 - val_acc: 0.8516 - val_binary_crossentropy: 0.9875
Epoch 7/20
- 4s - loss: 5.0169e-04 - acc: 1.0000 - binary_crossentropy: 5.0169e-04 - val_loss: 0.9938 - val_acc: 0.8521 - val_binary_crossentropy: 0.9938
Epoch 8/20
- 4s - loss: 4.6464e-04 - acc: 1.0000 - binary_crossentropy: 4.6464e-04 - val_loss: 1.0028 - val_acc: 0.8518 - val_binary_crossentropy: 1.0028
Epoch 9/20
- 4s - loss: 4.3221e-04 - acc: 1.0000 - binary_crossentropy: 4.3221e-04 - val_loss: 1.0107 - val_acc: 0.8518 - val_binary_crossentropy: 1.0107
Epoch 10/20
- 4s - loss: 4.0192e-04 - acc: 1.0000 - binary_crossentropy: 4.0192e-04 - val_loss: 1.0184 - val_acc: 0.8516 - val_binary_crossentropy: 1.0184
Epoch 11/20
- 4s - loss: 3.7455e-04 - acc: 1.0000 - binary_crossentropy: 3.7455e-04 - val_loss: 1.0259 - val_acc: 0.8516 - val_binary_crossentropy: 1.0259
Epoch 12/20
- 4s - loss: 3.4951e-04 - acc: 1.0000 - binary_crossentropy: 3.4951e-04 - val_loss: 1.0304 - val_acc: 0.8517 - val_binary_crossentropy: 1.0304
Epoch 13/20
- 4s - loss: 3.2618e-04 - acc: 1.0000 - binary_crossentropy: 3.2618e-04 - val_loss: 1.0377 - val_acc: 0.8517 - val_binary_crossentropy: 1.0377
Epoch 14/20
- 4s - loss: 3.0548e-04 - acc: 1.0000 - binary_crossentropy: 3.0548e-04 - val_loss: 1.0436 - val_acc: 0.8517 - val_binary_crossentropy: 1.0436
Epoch 15/20
- 4s - loss: 2.8601e-04 - acc: 1.0000 - binary_crossentropy: 2.8601e-04 - val_loss: 1.0513 - val_acc: 0.8515 - val_binary_crossentropy: 1.0513
Epoch 16/20
- 4s - loss: 2.6864e-04 - acc: 1.0000 - binary_crossentropy: 2.6864e-04 - val_loss: 1.0557 - val_acc: 0.8514 - val_binary_crossentropy: 1.0557
Epoch 17/20
- 4s - loss: 2.5256e-04 - acc: 1.0000 - binary_crossentropy: 2.5256e-04 - val_loss: 1.0621 - val_acc: 0.8515 - val_binary_crossentropy: 1.0621
Epoch 18/20
- 4s - loss: 2.3668e-04 - acc: 1.0000 - binary_crossentropy: 2.3668e-04 - val_loss: 1.0696 - val_acc: 0.8513 - val_binary_crossentropy: 1.0696
Epoch 19/20
- 4s - loss: 2.2305e-04 - acc: 1.0000 - binary_crossentropy: 2.2305e-04 - val_loss: 1.0736 - val_acc: 0.8514 - val_binary_crossentropy: 1.0736
Epoch 20/20
- 4s - loss: 2.1043e-04 - acc: 1.0000 - binary_crossentropy: 2.1043e-04 - val_loss: 1.0787 - val_acc: 0.8512 - val_binary_crossentropy: 1.0787
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 4) 40004
_________________________________________________________________
dense_4 (Dense) (None, 4) 20
_________________________________________________________________
dense_5 (Dense) (None, 1) 5
=================================================================
Total params: 40,029
Trainable params: 40,029
Non-trainable params: 0
_________________________________________________________________
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
- 4s - loss: 0.6272 - acc: 0.6357 - binary_crossentropy: 0.6272 - val_loss: 0.5717 - val_acc: 0.7494 - val_binary_crossentropy: 0.5717
Epoch 2/20
- 4s - loss: 0.5228 - acc: 0.8084 - binary_crossentropy: 0.5228 - val_loss: 0.5157 - val_acc: 0.8182 - val_binary_crossentropy: 0.5157
Epoch 3/20
- 4s - loss: 0.4692 - acc: 0.8652 - binary_crossentropy: 0.4692 - val_loss: 0.4829 - val_acc: 0.8553 - val_binary_crossentropy: 0.4829
Epoch 4/20
- 4s - loss: 0.4299 - acc: 0.8969 - binary_crossentropy: 0.4299 - val_loss: 0.4605 - val_acc: 0.8714 - val_binary_crossentropy: 0.4605
Epoch 5/20
- 4s - loss: 0.3981 - acc: 0.9192 - binary_crossentropy: 0.3981 - val_loss: 0.4474 - val_acc: 0.8709 - val_binary_crossentropy: 0.4474
Epoch 6/20
- 4s - loss: 0.3706 - acc: 0.9323 - binary_crossentropy: 0.3706 - val_loss: 0.4402 - val_acc: 0.8698 - val_binary_crossentropy: 0.4402
Epoch 7/20
- 4s - loss: 0.3461 - acc: 0.9446 - binary_crossentropy: 0.3461 - val_loss: 0.4290 - val_acc: 0.8784 - val_binary_crossentropy: 0.4290
Epoch 8/20
- 4s - loss: 0.3246 - acc: 0.9559 - binary_crossentropy: 0.3246 - val_loss: 0.4282 - val_acc: 0.8744 - val_binary_crossentropy: 0.4282
Epoch 9/20
- 4s - loss: 0.3052 - acc: 0.9633 - binary_crossentropy: 0.3052 - val_loss: 0.4261 - val_acc: 0.8740 - val_binary_crossentropy: 0.4261
Epoch 10/20
- 4s - loss: 0.2881 - acc: 0.9692 - binary_crossentropy: 0.2881 - val_loss: 0.4254 - val_acc: 0.8748 - val_binary_crossentropy: 0.4254
Epoch 11/20
- 4s - loss: 0.2728 - acc: 0.9736 - binary_crossentropy: 0.2728 - val_loss: 0.4276 - val_acc: 0.8725 - val_binary_crossentropy: 0.4276
Epoch 12/20
- 4s - loss: 0.2590 - acc: 0.9769 - binary_crossentropy: 0.2590 - val_loss: 0.4320 - val_acc: 0.8710 - val_binary_crossentropy: 0.4320
Epoch 13/20
- 4s - loss: 0.2463 - acc: 0.9798 - binary_crossentropy: 0.2463 - val_loss: 0.4361 - val_acc: 0.8699 - val_binary_crossentropy: 0.4361
Epoch 14/20
- 4s - loss: 0.2343 - acc: 0.9827 - binary_crossentropy: 0.2343 - val_loss: 0.4339 - val_acc: 0.8694 - val_binary_crossentropy: 0.4339
Epoch 15/20
- 4s - loss: 0.2237 - acc: 0.9845 - binary_crossentropy: 0.2237 - val_loss: 0.4344 - val_acc: 0.8697 - val_binary_crossentropy: 0.4344
Epoch 16/20
- 4s - loss: 0.2141 - acc: 0.9858 - binary_crossentropy: 0.2141 - val_loss: 0.4408 - val_acc: 0.8681 - val_binary_crossentropy: 0.4408
Epoch 17/20
- 4s - loss: 0.2049 - acc: 0.9869 - binary_crossentropy: 0.2049 - val_loss: 0.4414 - val_acc: 0.8680 - val_binary_crossentropy: 0.4414
Epoch 18/20
- 4s - loss: 0.1966 - acc: 0.9880 - binary_crossentropy: 0.1966 - val_loss: 0.4469 - val_acc: 0.8667 - val_binary_crossentropy: 0.4469
Epoch 19/20
- 4s - loss: 0.1887 - acc: 0.9883 - binary_crossentropy: 0.1887 - val_loss: 0.4572 - val_acc: 0.8655 - val_binary_crossentropy: 0.4572
Epoch 20/20
- 4s - loss: 0.1814 - acc: 0.9891 - binary_crossentropy: 0.1814 - val_loss: 0.4591 - val_acc: 0.8654 - val_binary_crossentropy: 0.4591
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 512) 5120512
_________________________________________________________________
dense_7 (Dense) (None, 512) 262656
_________________________________________________________________
dense_8 (Dense) (None, 1) 513
=================================================================
Total params: 5,383,681
Trainable params: 5,383,681
Non-trainable params: 0
_________________________________________________________________
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
# Note: this large model exhausted memory on the machine used for this run, so the training output below is incomplete.
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history)])#,
# ('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
- 4s - loss: 0.5580 - acc: 0.7862 - binary_crossentropy: 0.5195 - val_loss: 0.4073 - val_acc: 0.8698 - val_binary_crossentropy: 0.3698
Epoch 2/20
- 4s - loss: 0.3220 - acc: 0.9046 - binary_crossentropy: 0.2805 - val_loss: 0.3344 - val_acc: 0.8878 - val_binary_crossentropy: 0.2898
Epoch 3/20
- 4s - loss: 0.2587 - acc: 0.9268 - binary_crossentropy: 0.2114 - val_loss: 0.3304 - val_acc: 0.8878 - val_binary_crossentropy: 0.2812
Epoch 4/20
- 4s - loss: 0.2317 - acc: 0.9391 - binary_crossentropy: 0.1807 - val_loss: 0.3413 - val_acc: 0.8832 - val_binary_crossentropy: 0.2893
Epoch 5/20
- 4s - loss: 0.2145 - acc: 0.9456 - binary_crossentropy: 0.1613 - val_loss: 0.3511 - val_acc: 0.8810 - val_binary_crossentropy: 0.2970
Epoch 6/20
- 4s - loss: 0.2027 - acc: 0.9518 - binary_crossentropy: 0.1477 - val_loss: 0.3648 - val_acc: 0.8782 - val_binary_crossentropy: 0.3092
Epoch 7/20
- 4s - loss: 0.1936 - acc: 0.9538 - binary_crossentropy: 0.1373 - val_loss: 0.3859 - val_acc: 0.8726 - val_binary_crossentropy: 0.3290
Epoch 8/20
- 4s - loss: 0.1855 - acc: 0.9579 - binary_crossentropy: 0.1282 - val_loss: 0.3917 - val_acc: 0.8734 - val_binary_crossentropy: 0.3340
Epoch 9/20
- 4s - loss: 0.1795 - acc: 0.9596 - binary_crossentropy: 0.1211 - val_loss: 0.4063 - val_acc: 0.8707 - val_binary_crossentropy: 0.3474
Epoch 10/20
- 4s - loss: 0.1742 - acc: 0.9616 - binary_crossentropy: 0.1149 - val_loss: 0.4212 - val_acc: 0.8675 - val_binary_crossentropy: 0.3615
Epoch 11/20
- 4s - loss: 0.1694 - acc: 0.9630 - binary_crossentropy: 0.1093 - val_loss: 0.4358 - val_acc: 0.8667 - val_binary_crossentropy: 0.3753
Epoch 12/20
- 4s - loss: 0.1650 - acc: 0.9666 - binary_crossentropy: 0.1038 - val_loss: 0.4471 - val_acc: 0.8654 - val_binary_crossentropy: 0.3856
Epoch 13/20
- 4s - loss: 0.1612 - acc: 0.9680 - binary_crossentropy: 0.0995 - val_loss: 0.4661 - val_acc: 0.8626 - val_binary_crossentropy: 0.4040
Epoch 14/20
- 4s - loss: 0.1609 - acc: 0.9658 - binary_crossentropy: 0.0982 - val_loss: 0.4757 - val_acc: 0.8614 - val_binary_crossentropy: 0.4126
Epoch 15/20
- 4s - loss: 0.1546 - acc: 0.9709 - binary_crossentropy: 0.0913 - val_loss: 0.4891 - val_acc: 0.8587 - val_binary_crossentropy: 0.4257
Epoch 16/20
- 4s - loss: 0.1487 - acc: 0.9732 - binary_crossentropy: 0.0852 - val_loss: 0.5136 - val_acc: 0.8551 - val_binary_crossentropy: 0.4502
Epoch 17/20
- 4s - loss: 0.1483 - acc: 0.9724 - binary_crossentropy: 0.0846 - val_loss: 0.5175 - val_acc: 0.8584 - val_binary_crossentropy: 0.4533
Epoch 18/20
- 4s - loss: 0.1492 - acc: 0.9714 - binary_crossentropy: 0.0845 - val_loss: 0.5249 - val_acc: 0.8578 - val_binary_crossentropy: 0.4597
Epoch 19/20
- 4s - loss: 0.1434 - acc: 0.9743 - binary_crossentropy: 0.0781 - val_loss: 0.5368 - val_acc: 0.8570 - val_binary_crossentropy: 0.4713
Epoch 20/20
- 4s - loss: 0.1381 - acc: 0.9782 - binary_crossentropy: 0.0727 - val_loss: 0.5439 - val_acc: 0.8560 - val_binary_crossentropy: 0.4786
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time. A quick check of the penalty term is sketched below, followed by a plot of the impact of our L2 regularization penalty:
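A minimal sketch (added here, not part of the original notebook) of that arithmetic, assuming `l2_model` has just been trained in the cell above: it recomputes the total L2 penalty directly from the two regularized layer kernels, which should roughly match the gap between the reported `loss` and `binary_crossentropy` values.

```python
import numpy as np

# Sum 0.001 * w**2 over the kernels of the two regularized Dense layers.
penalty = sum(0.001 * np.square(layer.get_weights()[0]).sum()
              for layer in l2_model.layers[:2])
print(penalty)
```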
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs Dataset The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy.At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
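As a quick numeric check, here is a small sketch (not part of the original notebook; it assumes the `lr_schedule` and `STEPS_PER_EPOCH` defined above): `InverseTimeDecay` computes `initial_lr / (1 + decay_rate * step / decay_steps)`, so the rate should read 0.001 at epoch 0, 0.0005 at epoch 1000, and about 0.00033 at epoch 2000.

```python
# Evaluate the schedule at a few epoch boundaries to confirm the 1/2 and 1/3 behaviour.
for epochs in [0, 1000, 2000]:
    step = epochs * STEPS_PER_EPOCH
    print(epochs, float(lr_schedule(step)))
```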
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs.Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later.Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the small model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
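To put rough numbers on that train/validation gap, here is a small sketch (not part of the original notebook; it assumes the `size_histories` dictionary built above) that prints the final recorded training and validation cross-entropy for each model:

```python
# A large train-vs-validation gap in binary cross-entropy is a sign of overfitting.
for name, history in size_histories.items():
    h = history.history
    print(f"{name:>8}  train={h['binary_crossentropy'][-1]:.3f}  "
          f"val={h['val_binary_crossentropy'][-1]:.3f}")
```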
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone. It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weights parameters without making them sparse since the penalty goes to zero for small weights, which is one reason why L2 is more common. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network. That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in. So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on despite having the same number of parameters. More info There are two important things to note about this sort of regularization. **First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that. There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time. In `tf.keras` you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Below is a quick standalone look at the layer, and then we'll add two Dropout layers in our network to see how well they do at reducing overfitting:
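A minimal sketch (added here, not part of the original tutorial; it only assumes the `tf` and `layers` imports from the setup cell). Note that `tf.keras` implements "inverted dropout": the kept values are rescaled by `1/(1 - rate)` during training, so no rescaling is needed at inference.

```python
# Dropout zeroes a random ~50% of the entries while training,
# and is an identity transform at inference time.
drop = layers.Dropout(0.5)
x = tf.constant([[0.2, 0.5, 1.3, 0.8, 1.1]])
print(drop(x, training=True).numpy())   # random zeros, survivors scaled by 2
print(drop(x, training=False).numpy())  # unchanged: [[0.2 0.5 1.3 0.8 1.1]]
```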
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoard These models also recorded TensorBoard logs. To open an embedded tensorboard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/). It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Overfit and underfit View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).In both of the previous examples—[classifying text](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) and [predicting fuel efficiency](https://www.tensorflow.org/tutorials/keras/regression) — we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then stagnate or start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more complete training data. The dataset should cover the full range of inputs that the model is expected to handle. Additional data may only be useful if it covers new and interesting cases.A model trained on more complete data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore several common regularization techniques, and use them to improve on a classification model. Setup Before getting started, import the necessary packages:
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
###Output
_____no_output_____
###Markdown
The Higgs Dataset The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features, and a binary class label.
###Code
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz')
FEATURES = 28
###Output
_____no_output_____
###Markdown
The `tf.data.experimental.CsvDataset` class can be used to read csv records directly from a gzip file with no intermediate decompression step.
###Code
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
###Output
_____no_output_____
###Markdown
That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
###Code
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
###Output
_____no_output_____
###Markdown
TensorFlow is most efficient when operating on large batches of data.So instead of repacking each row individually make a new `Dataset` that takes batches of 10000-examples, applies the `pack_row` function to each batch, and then splits the batches back up into individual records:
###Code
packed_ds = ds.batch(10000).map(pack_row).unbatch()
###Output
_____no_output_____
###Markdown
Have a look at some of the records from this new `packed_ds`.The features are not perfectly normalized, but this is sufficient for this tutorial.
###Code
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
###Output
_____no_output_____
###Markdown
To keep this tutorial relatively short use just the first 1000 samples for validation, and the next 10 000 for training:
###Code
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
###Output
_____no_output_____
###Markdown
The `Dataset.skip` and `Dataset.take` methods make this easy. At the same time, use the `Dataset.cache` method to ensure that the loader doesn't need to re-read the data from the file on each epoch:
###Code
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
###Output
_____no_output_____
###Markdown
These datasets return individual examples. Use the `.batch` method to create batches of an appropriate size for training. Before batching also remember to `.shuffle` and `.repeat` the training set.
###Code
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to start with a small model: A model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.Start with a simple model using only `layers.Dense` as a baseline, then create larger versions, and compare them. Training procedure Many models train better if you gradually reduce the learning rate during training. Use `optimizers.schedules` to reduce the learning rate over time:
###Code
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
###Output
_____no_output_____
###Markdown
The code above sets a `schedules.InverseTimeDecay` to hyperbolically decrease the learning rate to 1/2 of the base rate at 1000 epochs, 1/3 at 2000 epochs and so on.
###Code
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
###Output
_____no_output_____
###Markdown
Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks. The training for this tutorial runs for many short epochs. To reduce the logging noise use the `tfdocs.EpochDots` which simply prints a `.` for each epoch, and a full set of metrics every 100 epochs. Next include `callbacks.EarlyStopping` to avoid long and unnecessary training times. Note that this callback is set to monitor the `val_binary_crossentropy`, not the `val_loss`. This difference will be important later. Use `callbacks.TensorBoard` to generate TensorBoard logs for the training.
###Code
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
###Output
_____no_output_____
###Markdown
Similarly each model will use the same `Model.compile` and `Model.fit` settings:
###Code
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
###Output
_____no_output_____
###Markdown
Tiny model Start by training a model:
###Code
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
###Output
_____no_output_____
###Markdown
Now check how the model did:
###Code
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
Small model To see if you can beat the performance of the `"Tiny"` model, progressively train some larger models.Try two hidden layers with 16 units each:
###Code
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
###Output
_____no_output_____
###Markdown
Medium model Now try 3 hidden layers with 64 units each:
###Code
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
###Output
_____no_output_____
###Markdown
Large modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
###Output
_____no_output_____
###Markdown
Plot the training and validation losses The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.In this example, typically, only the `"Tiny"` model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly. This becomes so severe for the `"large"` model that you need to switch the plot to a log-scale to really see what's happening.This is apparent if you plot and compare the validation metrics to the training metrics.* It's normal for there to be a small difference.* If both metrics are moving in the same direction, everything is fine.* If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.* If the validation metric is going in the wrong direction, the model is clearly overfitting.
###Code
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
###Output
_____no_output_____
###Markdown
Note: All the above training runs used the `callbacks.EarlyStopping` to end the training once it was clear the model was not making progress. View in TensorBoardThese models all wrote TensorBoard logs during training.Open an embedded TensorBoard viewer inside a notebook:
###Code
#docs_infra: no_execute
%tensorboard --logdir {logdir}/sizes
###Output
_____no_output_____
###Markdown
You can view the [results of a previous run](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).TensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
###Output
_____no_output_____
###Markdown
If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.Note: This step requires a Google account.```!tensorboard dev upload --logdir {logdir}/sizes```Caution: This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool. Strategies to prevent overfitting Before getting into the content of this section copy the training logs from the `"Tiny"` model above, to use as a baseline for comparison.
###Code
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
###Output
_____no_output_____
###Markdown
Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically exactly the same as L2 regularization.L1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization will penalize the weight parameters without making them sparse, since the penalty goes to zero for small weights, which is one reason why L2 is more common.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
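Written out (a summary added here for reference, with $\lambda$ the regularization factor and $w_i$ the weight coefficients), the two penalties are:
$$\text{loss}_{L1} = \text{loss} + \lambda \sum_i |w_i| \qquad\qquad \text{loss}_{L2} = \text{loss} + \lambda \sum_i w_i^2$$
The cell below adds the L2 penalty, with $\lambda = 0.001$, to every `Dense` layer.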
###Code
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
###Output
_____no_output_____
###Markdown
`l2(0.001)` means that every coefficient in the weight matrix of the layer will add `0.001 * weight_coefficient_value**2` to the total **loss** of the network.That is why we're monitoring the `binary_crossentropy` directly: it doesn't have this regularization component mixed in.So, that same `"Large"` model with an `L2` regularization penalty performs much better:
###Code
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
As you can see, the `"L2"` regularized model is now much more competitive with the `"Tiny"` model. This `"L2"` model is also much more resistant to overfitting than the `"Large"` model it was based on, despite having the same number of parameters. More infoThere are two important things to note about this sort of regularization.**First:** if you are writing your own training loop, then you need to be sure to ask the model for its regularization losses.
###Code
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
###Output
_____no_output_____
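As a hedged sketch of that first point (assumed setup, not code from the tutorial), a hand-written training step would fold `l2_model.losses` into the objective before computing gradients:
```
import tensorflow as tf

loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = l2_model(x, training=True)
        loss = loss_fn(y, logits)
        loss += tf.add_n(l2_model.losses)  # add the kernel_regularizer penalties by hand
    grads = tape.gradient(loss, l2_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, l2_model.trainable_variables))
    return loss
```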
###Markdown
**Second:** This implementation works by adding the weight penalties to the model's loss, and then applying a standard optimization procedure after that.There is a second approach that instead only runs the optimizer on the raw loss, and then while applying the calculated step the optimizer also applies some weight decay. This "Decoupled Weight Decay" is seen in optimizers like `optimizers.FTRL` and `optimizers.AdamW`. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own.Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In `tf.keras` you can introduce dropout in a network via the `Dropout` layer, which gets applied to the output of the layer right before it.Let's add two Dropout layers in our network to see how well they do at reducing overfitting:
###Code
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
It's clear from this plot that both of these regularization approaches improve the behavior of the `"Large"` model. But this still doesn't beat even the `"Tiny"` baseline.Next try them both, together, and see if that does better. Combined L2 + dropout
###Code
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
###Output
_____no_output_____
###Markdown
This model with the `"Combined"` regularization is obviously the best one so far. View in TensorBoardThese models also recorded TensorBoard logs.To open an embedded TensorBoard viewer inside a notebook, copy the following into a code-cell:```%tensorboard --logdir {logdir}/regularizers``` You can view the [results of a previous run](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/scalars&_smoothingWeight=0.97) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).It's also included in an `<iframe>` for convenience:
###Code
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
###Output
_____no_output_____ |
Keras_second_version.ipynb | ###Markdown
Plot
###Code
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.plot(fpr_rf, tpr_rf, label='RF (area = {:.3f})'.format(auc_rf))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
plt.xlim(0, 0.4)
plt.ylim(0.6, 1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.plot(fpr_rf, tpr_rf, label='RF (area = {:.3f})'.format(auc_rf))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
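The `fpr_*`, `tpr_*` and `auc_*` values used above come from cells that are not shown here; as a hedged sketch (assuming this notebook's `model`, `X_test` and `y_test`), they are typically computed with scikit-learn like this:
```
from sklearn.metrics import roc_curve, auc

y_score = model.predict(X_test).ravel()
fpr_keras, tpr_keras, _ = roc_curve(y_test, y_score)
auc_keras = auc(fpr_keras, tpr_keras)
```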
###Markdown
Validation curve
###Code
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
SUMMARY
###Code
model.summary()
for layer in model.layers:
weights = layer.get_weights()
from keras.utils import plot_model
plot_model(model, to_file='/tmp/model.png', show_shapes=True,)
y_pred = model.predict_classes(X_test)  # note: predict_classes was removed in newer Keras; the equivalent is (model.predict(X_test) > 0.5).astype('int32')
score = model.evaluate(X_test, y_test,verbose=1)
print(score)
###Output
6/6 [==============================] - 0s 2ms/step - loss: 0.5502 - accuracy: 0.7204
[0.5501977205276489, 0.7204301357269287]
###Markdown
Bayesian
###Code
# bounds for hyper-parameters in mnist model
# the bounds dict should be in order of continuous type and then discrete type
bounds = [{'name': 'validation_split', 'type': 'continuous', 'domain': (0.0, 0.3)},
{'name': 'l1_drop', 'type': 'continuous', 'domain': (0.0, 0.3)},
{'name': 'l2_drop', 'type': 'continuous', 'domain': (0.0, 0.3)},
{'name': 'l1_out', 'type': 'discrete', 'domain': (64, 128, 256, 512, 1024)},
{'name': 'l2_out', 'type': 'discrete', 'domain': (64, 128, 256, 512, 1024)},
{'name': 'batch_size', 'type': 'discrete', 'domain': (10, 100, 500)},
{'name': 'epochs', 'type': 'discrete', 'domain': (5, 10, 20)}]
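# --- Added sketch (hypothetical, not from the original notebook) --------------
# Bounds in this 'name'/'type'/'domain' format are what GPyOpt expects, so a
# typical next step would look like the commented lines below. `fit_model` is a
# placeholder objective that would build/train the Keras model for one row of
# hyper-parameters and return the validation loss.
# import GPyOpt
# opt = GPyOpt.methods.BayesianOptimization(f=fit_model, domain=bounds)
# opt.run_optimization(max_iter=10)
# print(opt.x_opt, opt.fx_opt)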
###Output
_____no_output_____ |
lectures/data_manipulation/scientific_computing/sympy.ipynb | ###Markdown
Sympy Introduction There are two notable computer algebra systems (CAS) for Python:* [SymPy](http://sympy.org/en/index.html): a Python module that can be used in any Python program, or in an IPython session, and that provides powerful CAS functionality.* [Sage](http://www.sagemath.org/) - Sage is a very powerful, full-featured CAS environment that aims to provide an open-source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.`Sage` is in some respects more powerful than `SymPy`, but both provide very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook.To start using SymPy in a Python program or notebook, import the `sympy` module:
###Code
from sympy import *
###Output
_____no_output_____
###Markdown
To get nicely formatted $\LaTeX$ output, run:
###Code
init_printing()
# or with older versions of sympy/ipython, load the IPython extension
#%load_ext sympy.interactive.ipythonprinting
# or
#%load_ext sympyprinting
###Output
_____no_output_____
###Markdown
Symbolic variables In `SymPy` we need to create symbols for the variables we want to work with. We can create a new symbol using the `Symbol` class:
###Code
x = Symbol('x')
(pi + x)**2
# alternative way of defining symbols
a, b, c = symbols("a, b, c")
type(a)
###Output
_____no_output_____
###Markdown
We can add assumptions to symbols when we create them:
###Code
x = Symbol('x', real=True)
x.is_imaginary
x = Symbol('x', positive=True)
x > 0
###Output
_____no_output_____
###Markdown
Complex numbers The imaginary unit is denoted `I` in `Sympy`.
###Code
1+1*I
I**2
(x * I + 1)**2
###Output
_____no_output_____
###Markdown
Rational numbers There are three different numeric types in SymPy: `Real`, `Rational`, `Integer`:
###Code
r1 = Rational(4,5)
r2 = Rational(5,4)
r1
r1+r2
r1/r2
###Output
_____no_output_____
###Markdown
Numerical evaluation `SymPy` uses a library for arbitrary precision arithmetic as its numerical backend, and has predefined `SymPy` expressions for a number of mathematical constants, such as `pi`, `e`, and `oo` for infinity.To evaluate an expression numerically we can use the `evalf` function (or `N`). It takes an argument `n` which specifies the number of significant digits.
###Code
pi.evalf(n=50)
y = (x + pi)**2
N(y, 5) # same as evalf
###Output
_____no_output_____
###Markdown
When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In `SymPy` we do that using the `subs` function:
###Code
y.subs(x, 1.5)
N(y.subs(x, 1.5))
###Output
_____no_output_____
###Markdown
Of course, the `subs` function can also be used to substitute symbols and expressions:
###Code
y.subs(x, a+pi)
###Output
_____no_output_____
###Markdown
We can also combine the numerical evaluation of expressions with `NumPy` arrays:
###Code
import numpy
import matplotlib.pyplot as plt
x_vec = numpy.arange(0, 10, 0.1)
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
###Output
_____no_output_____
###Markdown
However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: use the `lambdify` function to "compile" a SymPy expression into a function that is much more efficient to evaluate numerically:
###Code
f = lambdify([x], (x + pi)**2, 'numpy') # the first argument is a list of variables that
# f will be a function of: in this case only x -> f(x)
y_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated
###Output
_____no_output_____
###Markdown
The speedup when using `lambdify` functions instead of direct numerical evaluation can be significant, often by several orders of magnitude. Even in this simple example we get a significant speedup:
###Code
%%timeit
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
%%timeit
y_vec = f(x_vec)
###Output
2.89 µs ± 48.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
###Markdown
Algebraic manipulations One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for performing these basic operations in SymPy are shown in this section. Expand and factor The first steps in an algebraic manipulation
###Code
(x+1)*(x+2)*(x+3)
expand((x+1)*(x+2)*(x+3))
###Output
_____no_output_____
###Markdown
The `expand` function takes a number of keyword arguments with which we can tell the function what kind of expansions we want it to perform. For example, to expand trigonometric expressions, use the keyword argument `trig=True`:
###Code
sin(a+b)
expand(sin(a+b), trig=True)
###Output
_____no_output_____
###Markdown
See `help(expand)` for a detailed explanation of the various types of expansion the `expand` function can perform.The opposite of a product expansion is, of course, factoring. To factor an expression in SymPy, use the `factor` function:
###Code
factor(x**3 + 6 * x**2 + 11*x + 6)
###Output
_____no_output_____
###Markdown
Simplify The `simplify` function tries to simplify an expression into a nicer form, using various techniques. More specific alternatives to `simplify` also exist: `trigsimp`, `powsimp`, `logcombine`, etc.The basic uses of these functions are as follows:
###Code
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))
# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)
simplify(cos(x)/sin(x))
###Output
_____no_output_____
###Markdown
Apart and together To manipulate symbolic expressions of fractions, we can use the `apart` and `together` functions: **apart**
###Code
f1 = 1/((a+1)*(a+2))
f1
apart(f1)
###Output
_____no_output_____
###Markdown
**together**
###Code
f2 = 1/(a+2) + 1/(a+3)
f2
together(f2)
###Output
_____no_output_____
###Markdown
`simplify` usually combines fractions but does not factor:
###Code
simplify(f2)
###Output
_____no_output_____
###Markdown
Calculus In addition to algebraic manipulations, the other main use of a CAS is to do calculus, like derivatives and integrals of algebraic expressions. Differentiation Differentiation is usually simple. Use the `diff` function. The first argument is the expression to take the derivative of, and the second argument is the symbol with respect to which to take the derivative:
###Code
y
diff(y**2, x)
###Output
_____no_output_____
###Markdown
For higher-order derivatives we can do:
###Code
diff(y**2, x, x)
diff(y**2, x, 2) # same as above
###Output
_____no_output_____
###Markdown
To calculate the derivative of a multivariate expression, we can do:
###Code
x, y, z = symbols("x,y,z")
f = sin(x*y) + cos(y*z)
###Output
_____no_output_____
###Markdown
$\frac{d^3f}{dxdy^2}$
###Code
diff(f, x, 1, y, 2)
###Output
_____no_output_____
###Markdown
Integration Integration is done in a similar fashion:
###Code
f
integrate(f, x)
###Output
_____no_output_____
###Markdown
By providing limits for the integration variable, we can evaluate definite integrals:
###Code
integrate(f, (x, -1, 1))
###Output
_____no_output_____
###Markdown
and also improper integrals:
###Code
integrate(exp(-x**2), (x, -oo, oo))
###Output
_____no_output_____
###Markdown
Remember, `oo` is the SymPy notation for infinity. Sums and products We can evaluate sums and products using the `Sum` function:
###Code
n = Symbol("n")
Sum(1/n**2, (n, 1, 10))
Sum(1/n**2, (n,1, 10)).evalf()
Sum(1/n**2, (n, 1, oo)).evalf()
###Output
_____no_output_____
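As a quick cross-check (an addition, not part of the original tutorial), the infinite sum above is the Basel series, which converges to $\pi^2/6$:
```
(pi**2 / 6).evalf()  # matches Sum(1/n**2, (n, 1, oo)).evalf()
```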
###Markdown
Products work in much the same way:
###Code
Product(n, (n, 1, 10)) # 10!
###Output
_____no_output_____
###Markdown
Limits Limits can be evaluated using the `limit` function. For example,
###Code
limit(sin(x)/x, x, 0)
###Output
_____no_output_____
###Markdown
We can use `limit` to check the result of the differentiation done with the `diff` function:
###Code
f
diff(f, x)
###Output
_____no_output_____
###Markdown
$\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$
###Code
h = Symbol("h")
limit((f.subs(x, x+h) - f)/h, h, 0)
###Output
_____no_output_____
###Markdown
We can change the direction from which we approach the limiting point using the `dir` argument:
###Code
limit(1/x, x, 0, dir="+")
limit(1/x, x, 0, dir="-")
###Output
_____no_output_____
###Markdown
Series Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the `series` function:
###Code
series(exp(x), x)
###Output
_____no_output_____
###Markdown
By default it expands the expression around $x = 0$, but we can expand around any value of $x$ by explicitly including a value in the function call:
###Code
series(exp(x), x, 1)
###Output
_____no_output_____
###Markdown
And we can explicitly define to which order the series expansion should be carried out:
###Code
series(exp(x), x, 1, 10)
###Output
_____no_output_____
###Markdown
The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different orders:
###Code
s1 = cos(x).series(x, 0, 5)
s1
s2 = sin(x).series(x, 0, 2)
s2
expand(s1 * s2)
###Output
_____no_output_____
###Markdown
If we want to get rid of the error information, we can use the `removeO` method:
###Code
expand(s1.removeO() * s2.removeO())
###Output
_____no_output_____
###Markdown
But note that this is not the correct expansion of $\cos(x) \sin(x)$ to 5th order:
###Code
(cos(x)*sin(x)).series(x, 0, 6)
###Output
_____no_output_____
###Markdown
Linear algebra Matrices Matrices are defined using the `Matrix` class:
###Code
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12],[m21, m22]])
A
b = Matrix([[b1], [b2]])
b
###Output
_____no_output_____
###Markdown
With instances of the `Matrix` class we can do the usual matrix algebra operations:
###Code
A**2
A * b
###Output
_____no_output_____
###Markdown
And calculate determinants and inverses, and the like:
###Code
A.det()
A.inv()
###Output
_____no_output_____
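As a small extra example (not in the original notebook), a symbolic linear system $A x = b$ can be solved either with the explicit inverse or, usually preferably, with an LU decomposition:
```
x_sol = A.inv() * b   # explicit inverse
x_sol = A.LUsolve(b)  # equivalent, avoids forming the inverse
x_sol
```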
###Markdown
Solving equations To solve equations and systems of equations we can use the `solve` function:
###Code
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
###Output
_____no_output_____
###Markdown
System of equations:
###Code
solve([x + y - 1, x - y - 1], [x,y])
###Output
_____no_output_____
###Markdown
In terms of other symbolic expressions:
###Code
solve([x + y - a, x - y - c], [x,y])
###Output
_____no_output_____ |
Prelim_Exam.ipynb | ###Markdown
Problem 2. (50 points)1. Write a Python to display your full name, student number, age, and course2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course3. Create an object name Myself and assign an instance for each attribute.4. Create a method Self () using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001"
###Code
class Student: #class name Student allowing to display your full name, student number, age, and course.
def __init__(self,name,studentnumber,age,school,course):
self.name = name
self.studentnumber = studentnumber
self.age = age
self.school = school
self.course = course
def Fullname(self): #Fullname() method for the name of the student.
return self.name
def Studentnumber(self): #Studentnumber() method for the student number of the student.
return self.studentnumber
def Age(self): #Age() method for the age of the student.
return self.age
def School(self): #School() method for the school of the student.
return self.school
def Course(self): #Course() method for the course of the student.
return self.course
def Self(self): #To display the information of the student.
print("Name:",self.Fullname())
print("Student No.:",self.Studentnumber())
print("Age:",self.Age())
print("School:",self.School())
print("Course:",self.Course())
OOP_58001 = Student("Barbado, Ralph Mikhail B.","202115503","19","Adamson University","BS in Computer Engineering") #Variables used
OOP_58001.Self()
###Output
Name: Barbado, Ralph Mikhail B.
Student No.: 202115503
Age: 19
School: Adamson University
Course: BS in Computer Engineering
###Markdown
Prelim Exam **Question 1**. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2**. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
print()
C = np.eye(4)
print(2*C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3**. (10 points) Find the cross-product of matrices, A = [2,7,4] andB = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
#compute the cross product of A and B
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
**Prelim Exam - Belarmino ⚡** * Write a Python to display your full name, student number, age, and course* Create a class named Student with attributes: Name, Student_No, Age, School, and Course* Create an object name Myself and assign an instance for each attribute.* Create a method Self () using an instantiation of a class.* Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001"
###Code
class Student ():
def __init__ (self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Myself (self):
print ("My name is", self.Name)
print ("My student number is", self.Student_No)
print ("I am", self.Age, "years old")
print ("I am studying", self.Course, "at", self.School)
sinigang = Student ("Erica Belarmino", 202110020, 18, "Adamson University", "BS Computer Engineering")
sinigang.Myself()
###Output
My name is Erica Belarmino
My student number is 202110020
I am 18 years old
I am studying BS Computer Engineering at Adamson University
###Markdown
Problem 2.1. Write a Python to display your full name, student number, age, and course2. Create a class named Student with attributes: Name, Student_No, Age, School,and Course3. Create an object name Myself and assign an instance for each attribute.4. Create a method Info() using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2"
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name,self.Student_No,self.Age,self.School,self.Course)
Myself = Student("Wella Mae L. Boriol",202105205,19,"Cavite State University - Main Campus","Bachelor of Science in Computer Engineering")
print(f"My name is {Myself.Name}, and my student student number is {Myself.Student_No}. I am {Myself.Age} years old and studying at {Myself.School} with a {Myself.Course}.")
###Output
My name is Wella Mae L. Boriol, and my student student number is 202105205. I am 19 years old and studying at Cavite State University - Main Campus with a Bachelor of Science in Computer Engineering.
###Markdown
PROBLEM 2
###Code
#Write a Python to display your full name, student number, age, and course
#Create a class named Student with attributes: Name, Student_No, Age, School, and Course
#Create an object named Myself and assign an instance for each attribute.
#Create a method Info() using an instantiation of a class.
class Student:
def __init__(self,Name,student_no,Age,School,Course):
self.Name = Name
self.Student_no = student_no
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name:" f"{self.Name}, Student_no:" f"{self.Student_no}, Age:" f" {self.Age}," " School:" f" {self.School}," " Course:" f" {self.Course}.")
Myself = Student("Baby Angel E. Rupido", "202103135", 19,"CvSU","BSCpE")
Myself.Info()
###Output
Name:Baby Angel E. Rupido, Student_no:202103135, Age: 19, School: CvSU, Course: BSCpE.
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
class MyInfo(Student): #instantiation of a class
def Info(self):
print(self.Name)
print(self.Student_No)
print(self.Age)
print(self.School)
print(self.Course)
Myself = MyInfo("Hanns Jaspher A. Elalto", "202101663", "19", "Cavite State University - Main Campus",
"Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Hanns Jaspher A. Elalto
202101663
19
Cavite State University - Main Campus
Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n","Greetings I am",self.Name,"and my student number is", self.Student_No,"\n","I just turned",self.Age,"in the 15th of March",
"\n", "I am currently enrolled in", self.School,"taking", self.Course)
Myself = Student("Ernest Danniel R. Tiston","202106651","18", "Cavite State University-Indang Campus", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Greetings I am Ernest Danniel R. Tiston and my student number is 202106651
I just turned 18 in the 15th of March
I am currently enrolled in Cavite State University-Indang Campus taking Bachelor of Science in Computer Engineering
###Markdown
QUESTION 1
###Code
#Create a 4 x 4 matrix whose diagonal elements are all one (1's).
#Name it as matrix "C". Show your solutions using Python codes and do not
#forget to label them on the Text Cell.
import numpy as np
C = np.full((4,4),1)
print(C)
q1 = np.diagonal(C)
print(q1)
###Output
[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]
[1 1 1 1]
###Markdown
QUESTION 2
###Code
#In relation to Question 1, show a solution that doubles all the values of each
#element. Show your solutions using Python codes and do not forget to label
#them on the Text Cell.
q2_a = C*2
q2_b = q1*2
print("Double the value (Multiply by 2) of C from Question 1")
print(q2_a)
print(q2_b)
###Output
Double the value (Multiply by 2) of C from Question 1
[[2 2 2 2]
[2 2 2 2]
[2 2 2 2]
[2 2 2 2]]
[2 2 2 2]
###Markdown
QUESTION 3
###Code
#Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8].
#Show your solutions using Python codes and do not forget to label them on
#the Text Cell.
A = np.array([2,7,4])
B = np.array([3,9,8])
print(A)
print(B)
q3 = np.cross(A,B)
print(q3)
###Output
[2 7 4]
[3 9 8]
[20 -4 -3]
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n", "Good day! My name is", self.Name,
"\n", "My student number is", self.Student_No,
"\n", "I am", self.Age, "years old",
"\n", "I am currently studying at", self.School, "and I am taking", self.Course)
Myself = Student("Kinlie Venice L. de Guzman", "202101551", "18", "Cavite State University (Main Campus)", "Computer Engineering")
Myself.Info()
###Output
Good day! My name is Kinlie Venice L. de Guzman
My student number is 202101551
I am 18 years old
I am currently studying at Cavite State University (Main Campus) and I am taking Computer Engineering
###Markdown
QUESTION 1 Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
c = np.eye(4)
np.fill_diagonal(c,1)
print(c)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
QUESTION 2 In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
c = np.eye(4)
print(c+c)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
QUESTION 3 Find the cross-product of matrices, A = [2,7,4] andB = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
#compute the cross product of A and B
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Person:
def __init__(self, name, number, age, school, course):
self.name = name
self.number = number
self.age = age
self.school = school
self.course = course
def myFunction(self):
print("I am",self.name, "and My Number is",self.number ,"I'm currently",self.age, "and studying in",self.school)
print("My course is",self.course)
p1 = Person("Operaña, Larenz Sandrei", 20211302419, 19, "Adamson University", "BS Computer Engeering")
p1.myFunction()
###Output
I am Operaña, Larenz Sandrei and My Number is 20211302419 I'm currently 19 and studying in Adamson University
My course is BS Computer Engeering
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, Course, School):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.Course = Course
self.School = School
def Info(self):
print(f"full name: {self.Name} \nstudent number: {self.Student_No} \nage: {self.Age} \ncourse: {self.Course} \nSchool: {self.School}")
Myself = Student("Kurt Ashley S. Emprese", "202101034", 18, "BSCpE","Cavite State University")
Myself.Info()
###Output
full name: Kurt Ashley S. Emprese
student number: 202101034
age: 18
course: BSCpE
School: Cavite State University
###Markdown
Problem 1. Examine the program below and create an appropriate flowchart (50 points)
###Code
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n
n -= 1
print("sum =", sum)
average = sum / total_numbers
print("Average =", average)
###Output
sum = 210
Average = 10.5
###Markdown
Problem 2.1. Write a Python to display your full name, student number, age, and course2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course3. Create an object named Myself and assign an instance for each attribute.4. Create a method Info() using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def info(self):
print("My Name is", self.Name)
print("My Student Number is", self.Student_No)
print("My Age is", self.Age)
print("My School is", self.Course)
Myself = Student("Franz Louise B. Gloriani", 202101633, 19, "Cavite State Univeristy", "Bachelor of Science in Computer Engineering")
Myself.info()
###Output
My Name is Franz Louise B. Gloriani
My Student Number is 202101633
My Age is 19
My School is Bachelor of Science in Computer Engineering
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Matrix C
import numpy as np
f = np.eye(4)
print(f)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
f = np.eye(4) * 2
print(f)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
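For reference (an added check, not part of the submitted answer), the result follows from the component formula for the cross product:
$$A \times B = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1) = (7\cdot 8 - 4\cdot 9,\; 4\cdot 3 - 2\cdot 8,\; 2\cdot 9 - 7\cdot 3) = (20,\, -4,\, -3)$$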
###Markdown
###Code
class Student(): #create a class named Student
def __init__(self,name,student_no,age,school,course):
self.name=name
self.student_no=student_no
self.age=age
self.school=school
self.course=course #represents the instance of class named Student
def description(self):
return self.name,self.student_no,self.age,self.school,self.course
def display(self):
print("My name, student no., age, school, and course is",self.description())
myself = Student("Medina, Joanna Micka E.","202110306","19 years old","Adamson University","Bachelor of Science in Computer Engineering") #to create an object with its attribute values
myself.display()
###Output
My name, student no., age, school, and course is ('Medina, Joanna Micka E.', '202110306', '19 years old', 'Adamson University', 'Bachelor of Science in Computer Engineering')
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No,Age,School,Course):
self.Name=Name
self.Student_No=Student_No
self.Age=Age
self.School=School
self.Course=Course
def Info(self):
print('Name:',self.Name)
print('Student No:',self.Student_No)
print('Age',self.Age)
print('School:',self.School)
print('Course:',self.Course)
Myself=Student('Christian Angelo A. Mones',20210510,18,'Cavite State University','Bachelor of Science in Computer Engineering')
Myself.Info()
###Output
Name: Christian Angelo A. Mones
Student No: 20210510
Age 18
School: Cavite State University
Course: Bachelor of Science in Computer Engineering
###Markdown
Problem 2:1. Write a Python to display your full name, student number, age, and course2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course3. Create an object name Myself and assign an instance for each attribute.4. Create a method Info() using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2"
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name)
print(self.Student_No)
print(self.Age)
print(self.School)
print(self.Course)
Myself = Student("Jan Vincent C. Vallente", 202102026, 20, "Cavite State University", "BS Computer Engineering")
print("Name:",Myself.Name)
print("Student No.:",Myself.Student_No)
print("Age:",Myself.Age)
print("School:",Myself.School)
print("Course:",Myself.Course)
###Output
Name: Jan Vincent C. Vallente
Student No.: 202102026
Age: 20
School: Cavite State University
Course: BS Computer Engineering
###Markdown
**PRELIM EXAM** $$A = \begin{bmatrix} 1 & 2 & 3\\ 2 & 3 & 3\\ 3 & 4 & -2\end{bmatrix}$$
###Code
import numpy as np
A = np.array([[1, 2, 3],
[2, 3, 3],
[3, 4, -2]]
)
ADet = round(np.linalg.det(A))
print(ADet)
###Output
5
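As an added check (not part of the submitted answer), expanding the determinant along the first row gives the same value:
$$\det A = 1\,(3\cdot(-2) - 3\cdot 4) - 2\,(2\cdot(-2) - 3\cdot 3) + 3\,(2\cdot 4 - 3\cdot 3) = -18 + 26 - 3 = 5$$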
###Markdown
Problem 2. (50 points)1. Write a Python to display your full name, student number, age, school and course2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course3. Create an object name Myself and assign an instance for each attribute.4. Create a method Info() using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(Myself,fullname,student_no,age,school,course):
Myself.fullname = fullname
Myself.student_no = student_no
Myself.age = age
Myself.school = school
Myself.course = course
def Info(Myself):
#print(Myself.fullname,self.student_no,self.age,self.course,self.school)
print("My Name is", Myself.fullname)
print("My Student number is", Myself.student_no)
print("My Age is", Myself.age)
print("My School is", Myself.school)
print("My Course is", Myself.course)
student = Student("Jessa Mae Mendoza",202102187,"19 years old","CvSU-Indang Campus","BS CPE")
student.Info()
###Output
My Name is Jessa Mae Mendoza
My Student number is 202102187
My Age is 19 years old
My School is CvSU-Indang Campus
My Course is BS CPE
###Markdown
Problem 2
###Code
class Student:
def __init__(self, deej, number, age, school, course):
self.deej = deej
self.number = number
self.age = age
self.school = school
self.course = course
def deejong(self):
print("My name is "+ self.deej, "\nAge: "+self.age, "\nCurrent school: "+self.school, "\nStudent number: "+self.number, "\nCourse taken: "+self.course)
deej1 = Student("Tolentino, Daniel Jethro L.", "202117984", "19", "Adamson University", "Computer Engineering")
deej1.deejong()
###Output
My name is Tolentino, Daniel Jethro L.
Age: 19
Current school: Adamson University
Student number: 202117984
Course taken: Computer Engineering
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Full Name:",self.Name)
print("Student Number:",self.Student_No)
print("Age:",self.Age)
print("School:",self.School)
print("Course:",self.Course)
Myself = Student("Mark Jeremin C. Poblete", "202101927","18 Years Old", "Cavite State University - Main","Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Full Name: Mark Jeremin C. Poblete
Student Number: 202101927
Age: 18 Years Old
School: Cavite State University - Main
Course: Bachelor of Science in Computer Engineering
###Markdown
* Write a Python program to display your full name, student number, age, and course.
* Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
* Create an object named Myself and assign an instance for each attribute.
* Create a method Self() using an instantiation of the class.
###Code
class Student:
def __init__(self,Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def self(self):
print("Name: ", self.Name)
print("Student Numebr: ", self.Student_No)
print("Age: ", self.Age)
print("School: ", self.School)
print("Course: ", self.Course)
Myself = Student("Andrea Castillo", 202119092, 18, "Adamson University", "BS in Computer Engineering")
Myself.self()
###Output
Name: Andrea Castillo
Student Numebr: 202119092
Age: 18
School: Adamson University
Course: BS in Computer Engineering
###Markdown
**Prelim Exam**
* Write a Python program to display your full name, student number, age, and course.
* Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
* Create an object named Myself and assign an instance for each attribute.
* Create a method Self() using an instantiation of the class.
* Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class Student:
def __init__(self, name, studnumber, studage, studschool, studcourse):
self.name = name
self.snumber = studnumber
self.age = studage
self.school = studschool
self.course = studcourse
def Myself (self):
print ("My name is", self.name)
print ("My student number is", self.snumber)
print ("I am", self.age, "years old")
print ("I am studying", self.course)
print ("I am currently studying in", self.school)
Clap = Student ("Renz Julius Guico", 202110527, 19, "Adamson University", "BS Computer Engineering")
Clap.Myself()
###Output
My name is Renz Julius Guico
My student number is 202110527
I am 19 years old
I am studying BS Computer Engineering
I am currently studying in Adamson University
###Markdown
Prelim Exam 1. (20 Points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
C
###Output
_____no_output_____
###Markdown
2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
double = np.multiply(C,2)
double
###Output
_____no_output_____
###Markdown
3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
Answer = np.cross(A,B)
Answer
###Output
_____no_output_____
###Markdown
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Self() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class Student():
def __init__(self,Name,Student_No,Age,School,Course):
self.Student_No = Student_No
self.Name = Name
self.Age = Age
self.School = School
self.Course = Course
def Self(self):
print("Welcome", self.Name + ", to", self.School, "Your student number is", self.Student_No + ",", "a freshman of", self.Course, "at the age of", self.Age, "Years Old")
Myself = Student("Sangco, Jerrold", "202110017", "18", "Adamson University", "BS CPE")
Myself.Self()
###Output
Welcome Sangco, Jerrold, to Adamson University Your student number is 202110017, a freshman of BS CPE at the age of 18 Years Old
###Markdown
Alternative Program
###Code
class Student():
def __init__(self,Name,Student_No,Age,School,Course):
self.Student_No = Student_No
self.Name = Name
self.Age = Age
self.School = School
self.Course = Course
def Self(self):
print("Welcome", self.Name + ", to", self.School, "Your student number is", self.Student_No + ",", "a freshman of", self.Course, "at the age of", self.Age, "Years Old")
Myself = Student(str(input("Input your Name: ")), str(input("Input your student number: ")), str(input("Input your age: ")), str(input("Input your School: ")), str(input("Input your Course: ")))
Myself.Self()
###Output
Input your Name: Sangco, Jerrold
Input your student number: 202110017
Input your age: 18
Input your School: Adamson University
Input your Course: BS CPE
Welcome Sangco, Jerrold, to Adamson University Your student number is 202110017, a freshman of BS CPE at the age of 18 Years Old
###Markdown
Prelim Exam Question 1. Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
print("Question 1") #printing of label
import numpy as np #importing of library used for array matrices
C= np.eye((4)) #creation of matrix C with default value of 1 in diagonal
print(C) #printing values of matrix C
###Output
Question 1
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np #importing of library used for array matrices
C= np.eye((4)) #creation of matrix C with default value of 1 in diagonal
print("Question 2") #printing of label
doubleC=C*2 #scaling value of C by 2 and assigning to another variable
print(doubleC) #print of variable assigned for scaled up values
###Output
Question 2
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
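###Markdown
Scaling works entry-wise, $(2C)_{ij} = 2\,C_{ij}$, so multiplying the 4 x 4 identity matrix by 2 simply turns every diagonal 1 into a 2, as shown above.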
###Markdown
Question 3Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np #importing of library used for array matrices
A = np.array([2,7,4]) #creation of Array A and assigning its values
B = np.array([3,9,8]) #creation of Array B and assigning its values
cpAP= np.cross(A,B) #crossing A and B, then assigning its values to a new variable
print("Question 3") #printing of label
print("Cross Product of A and B: \n") #printing of label
print(cpAP) #prinnting the variable containing cross-product of A and B
###Output
Question 3
Cross Product of A and B:
[20 -4 -3]
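###Markdown
For reference, the same cross product expanded by hand agrees with the np.cross output: $$A \times B = \begin{bmatrix} 7\cdot 8 - 4\cdot 9 \\ 4\cdot 3 - 2\cdot 8 \\ 2\cdot 9 - 7\cdot 3 \end{bmatrix} = \begin{bmatrix} 20 \\ -4 \\ -3 \end{bmatrix}$$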
###Markdown
Matrix C Question 1
###Code
import numpy as np
x = np.diag([1, 1, 1, 1])
print(x)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
import numpy as np
x = np.diag([1, 1, 1, 1,])
print(x)
print()
print(x*2)
print()
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
C = np.array(
[[ 1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0],
[ 0, 0, 0, 1]]
)
print(C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
C = np.array(
[[ 1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0],
[ 0, 0, 0, 1]]
)
print(C*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
A = [2,7,4]
B = [3,9,8]
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
PRELIM EXAM
###Code
class Person:
def __init__(self,student,number,age,school,course):
self.student = student
self.number = number
self.age = age
self.school = school
self.course = course
def myFunction(self):
print("I am ",self.student,"my age is",self.age, "my student number is",self.number, "studying in", self.school, "and I am taking", self.course)
p1= Person("Campaña, Brendon Gio C.", 202113816, 18, "Adamson University", "Computer Engineering")
p1.myFunction()
###Output
I am Campaña, Brendon Gio C. my age is 18 my student number is 202113816 studying in Adamson University and I am taking Computer Engineering
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name: ", self.Name)
print("Student Number: ", self.Student_No)
print("Age: ", self.Age)
print("School: ", self.School)
print("Course: ", self.Course)
Myself = Student("Alexis Jelyn P. Anciado", 202101677, "18 years old", "Cavite State University", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Name: Alexis Jelyn P. Anciado
Student Number: 202101677
Age: 18 years old
School: Cavite State University
Course: Bachelor of Science in Computer Engineering
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C".
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element.
###Code
import numpy as np
C = np.eye(4)
print(C*2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8].
###Code
import numpy as np
A = np.array([[2,7,4]])
B = np.array([[3,9,8]])
print(np.cross(A,B))
###Output
[[20 -4 -3]]
###Markdown
Prelim Exam OOP
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Self() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class Student:
def __init__(self,name,studentno,age,school,course):
self.name = name
self.studentno = studentno
self.age = age
self.school = school
self.course = course
def self(self):
print("Name: ",self.name)
print("Student number: ",self.studentno)
print("age: ",self.age)
print("School: ",self.school)
print("Course: ",self.course)
Myself = Student("Red, Jeralph O.", 202116038, 20, "Adamson University", "BS in Computer Engineering")
Myself.self()
###Output
Name: Red, Jeralph O.
Student number: 202116038
age: 20
School: Adamson University
Course: BS in Computer Engineering
###Markdown
Question 1 Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C".
###Code
#Matrix C
import numpy as np
C = np.eye(4)
print (C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2 In relation to Question 1, show a solution that doubles all the values of each element.
###Code
import numpy as np
C = np.eye(4) * 2
print (C)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3 Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8].
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,fullname,student_number,age,school,course):
self.fullname=fullname
self.student_number=student_number
self.age=age
self.school=school
self.course=course
def info(self):
print("Fullname:",self.fullname)
print("Student_number:",self.student_number)
print("Age:",self.age)
print("School:",self.school)
print("Course:",self.course)
myself = Student("Jean Exequiel Sosa", "202102079", "19","Cavite State University","BS COMPUTER ENGINEERING")
myself.info()
###Output
Fullname: Jean Exequiel Sosa
Student_number: 202102079
Age: 19
School: Cavite State University
Course: BS COMPUTER ENGINEERING
###Markdown
Prelim Exam Question 1 : 4 x 4 matrix whose diagonal elements are all one (1's).
###Code
import numpy as np
# Create 4 x 4 matrix whose diagonal elements are all one (1's).
# 1st solution
C = np.diagonal([[1,2,3,4],[2,1,3,4],[3,2,1,4],[4,3,2,1]])
print(C) # this will only print all the diagonal elements which has a value of 1
# 2nd solution
C = np.zeros((4,4))
np.fill_diagonal(C,1)
print(C) # this will print the entire 4x4 matrix
###Output
[1 1 1 1]
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
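###Markdown
A small clarifying sketch (not part of the original answer): np.diagonal only extracts a diagonal from an existing matrix, while np.diag applied to a 1-D array (or np.eye) builds the full 4 x 4 matrix directly.
###Code
import numpy as np

extracted = np.diagonal(np.eye(4))   # 1-D array [1. 1. 1. 1.] pulled out of an existing matrix
built = np.diag([1, 1, 1, 1])        # full 4 x 4 matrix with ones on the diagonal
print(extracted)
print(built)
###Output
_____no_output_____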
###Markdown
Question 2 : show a solution that doubles all the values of each element
###Code
# Double all the values of the elements in Question 1
print(C * 2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3 : cross-product of matrices, A = [2,7,4] and B = [3,9,8]
###Code
A = [2,7,4]
B = [3,9,8]
# cross product
answer = np.cross(A,B)
print(answer)
###Output
[20 -4 -3]
###Markdown
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Question 1
###Code
# Creating a 4 x 4 matrix
# a parameter dtype in defined to clean decimal points in the output
C = np.zeros([4,4], dtype=int)
# Filling in matrix's diagonals with the value 1
np.fill_diagonal(C, 1)
# printing the output
print(C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
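###Markdown
As a side note, the same integer identity matrix can be built in a single call; a minimal equivalent sketch (assuming the same 4 x 4 size and int dtype):
###Code
import numpy as np

# np.eye places ones on the diagonal directly; dtype=int again avoids the trailing decimal points
C = np.eye(4, dtype=int)
print(C)
###Output
_____no_output_____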
###Markdown
Question 2
###Code
# SOLUTION 01
## Multiplying matrix C by 2 with * operator
## This is the same as C = C * 2
C *= 2
print("using C *= 2: \n", C)
# SOLUTION 02
## Rebuilding the original matrix, then multiplying it by 2 with the multiply() function
C = np.multiply(2, np.eye(4, dtype=int))
print("\nusing np.multiply(2, C):\n", C)
###Output
using C *= 2:
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
using np.multiply(2, C):
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
# Defining matrices 'A' and 'B'
A = np.array([2,7,4])
B = np.array([3,9,8])
# Assigning the dotproduct of A and B to variable 'output'
output = np.cross(A, B)
# Printing output
print(output)
###Output
[20 -4 -3]
###Markdown
**Prelim Exam** John Benedict Aquino 58019 Question 1
###Code
import numpy as np
C = np.eye(4,4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
C = np.eye(4,4)
print(2*C)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
CP = np.cross(A,B)
print(CP)
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Question 1 matrix "C"
###Code
C = np.diag([1,1,1,1])
print (C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
C = np.diag([1,1,1,1])
double = np.multiply(2,C)
print (double)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
equals = np.cross(A,B)
print(equals)
###Output
[20 -4 -3]
###Markdown
Question 1
###Code
import numpy as np
c = np.eye(4)
print(c)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
c = np.eye(4)
#solution that doubles the values
print(c*2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
import numpy as np
a = np.array([2,7,4])
b = np.array([3,9,8])
#cross product of a and b
cross = np.cross(a,b)
print(cross)
###Output
[20 -4 -3]
###Markdown
Prelim Exam Question 1.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2.
###Code
import numpy as np
double = 2*C
print(double)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3.
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method info() using an instantiation of the class.
###Code
def personal_details():
name = "Miro Angeles"
student_number = "202101487"
age = 18
course = "Bachelor of Science in Computer Engineering"
print("Name: {}\nStudent_number: {}\nAge: {}\nCourse: {}".format(name, student_number, age, course))
personal_details()
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Myself(self):
print("My name is",self.Name)
print("My Student number is",self.Student_No)
print("I am",self.Age,"years old")
print("I study at",self.School)
print("I am enrolled as a",self.Course,"student")
student_attributes = Student("Miro G. Angeles","202101487","18","Cavite State University","Bachelor of Science in Computer Engineering")
student_attributes.Myself()
###Output
My name is Miro G. Angeles
My Student number is 202101487
I am 18 years old
I study at Cavite State University
I am enrolled as a Bachelor of Science in Computer Engineering student
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = 2*C
print(A)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A=np.array([2,7,4])
B=np.array([3,9,8])
ans=np.cross(A,B)
print(ans)
###Output
[20 -4 -3]
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
c = np.zeros((4, 4)) #this is where the 4 x 4 matrix is created
np.fill_diagonal(c, 1) #the fill_diagonal method is used to fill the diagonal with a specific number; in this case the number "1" was used.
print(c)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
c = np.zeros((4, 4)) #this is where the 4 x 4 matrix is created
np.fill_diagonal(c, 1) #the fill_diagonal method is used to fill the diagonal with a specific number; in this case the number "1" was used.
print("c= ")
print(c)
print("Doubled: ")
print(2*c)
###Output
c=
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
Doubled:
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = [2, 7, 4]
B = [3, 9, 8]
C = np.cross(A, B) #where the cross product happens.
print("A= ")
print(A)
print("B= ")
print(B)
print("C= ")
print(C)
###Output
A=
[2, 7, 4]
B=
[3, 9, 8]
C=
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Myself(self):
print("Full Name:",self.Name)
print("Student Number:",self.Student_No)
print("Age:",self.Age)
print("School:",self.School)
print("Course:",self.Course)
Self = Student("Edrian Borinaga Rabena", "202114223", "18", "Adamson University", "BS CpE")
Self.Myself()
###Output
Full Name: Edrian Borinaga Rabena
Student Number: 202114223
Age: 18
School: Adamson University
Course: BS CpE
###Markdown
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Info() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2".
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name=Name
self.Student_No=Student_No
self.Age=Age
self.School=School
self.Course=Course
def info(self):
print(self.Name,self.Student_No,self.Age,self.School,self.Course)
Myself = Student("Roy Millamis",202101651,20,"Cavite State University Main Campus","Bachelor of Science in Computer Engineering")
print(Myself.Name)
print(Myself.Student_No)
print(Myself.Age)
print(Myself.School)
print(Myself.Course)
print(Myself.info)
Myself.info()
###Output
Roy Millamis
202101651
20
Cavite State University Main Campus
Bachelor of Science in Computer Engineering
<bound method Student.info of <__main__.Student object at 0x7f19013de210>>
Roy Millamis 202101651 20 Cavite State University Main Campus Bachelor of Science in Computer Engineering
###Markdown
PROBLEM 2 1.) Write a Python program to display your full name, student number, age, and course
###Code
class OOP_1_1:
def __init__(self,fullname, student_no, age, course):
self.fullname = fullname
self.student_no = student_no
self.age = age
self.course = course
def Info(self):
#print(self.fullname, self.student_no,self.age,self.course)
print("My Name is", self.fullname)
print("My Student Number is", self.student_no)
print("My Age is", self.age)
print("My Course is", self.course)
student = OOP_1_1("Khalin Vidamo" ,202101935, 18, "BSCpE")
student.Info()
###Output
My Name is Khalin Vidamo
My Student Number is 202101935
My Age is 18
My Course is BSCpE
###Markdown
2.) Create a class named Student with attributes: Name, Student_No, Age, School, and Course
###Code
class MyClass:
def __init__(self,name,student_no,age,school,course):
self.name = name #create a class with attributes
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def display(self):
print(self.name, self.student_no, self.age, self.school, self.course )
person = MyClass("Khalin Vidamo",202101935, 18, "Cavite State University", "Bscpe" )
person.display()
###Output
Khalin Vidamo 202101935 18 Cavite State University Bscpe
###Markdown
3.)Create an object name Myself and assign an instance for each attribute.
###Code
class OOP_1_1:
def __init__(self,fullname, age, course, school):
self.fullname = fullname
self.age = age
self.course = course
self.school = school
def Info(self):
#print(self.fullname, self.age,self.course,self.school)
print("My Name is", self.fullname)
print("My Age is", self.age)
print("My Course is", self.course)
print("My School is", self.school)
student = OOP_1_1("Khalin Vidamp",18, "BSCpE", "CVSU")
student.Info()
###Output
My Name is Khalin Vidamp
My Age is 18
My Course is BSCpE
My School is CVSU
###Markdown
4.) Create a method Info() using an instantiation of a class.
###Code
class OOP_1_1:
def __init__(self,fullname, age, course, school):
self.fullname = fullname
self.age = age
self.course = course
self.school = school
def Info(self):
#print(self.fullname,self.age,self.course,self.school)
print("My Name is", self.fullname)
print("My Age is", self.age)
print("My Course is", self.course)
print("My School is", self.school)
student = OOP_1_1("Khalin Vidamo",18, "BSCpE", "CVSU")
student.Info()
###Output
My Name is Khalin Vidamo
My Age is 18
My Course is BSCpE
My School is CVSU
###Markdown
**Riego de Dios, Celyssa Chryse** **Question 1**
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2**
###Code
import numpy as np
C = np.eye(4)
print(C*2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3**
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name:", self.Name)
print("Student No:", self.Student_No)
print("Age:", self.Age, "years old")
print("School:", self.School)
print("Course:", self.Course)
myself = Student("Trisha Faye Cabug Cueno", 202101759, 18, "Cavite State University", "BS Computer Engineering")
myself.Info()
###Output
Name: Trisha Faye Cabug Cueno
Student No: 202101759
Age: 18 years old
School: Cavite State University
Course: BS Computer Engineering
###Markdown
Prelim Exam
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Creating Matrix C
C = np.diag([1,1,1,1])
print ("Matrix C\n",C)
###Output
Matrix C
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Creating Matrix C
C = np.diag([1,1,1,1])
#Doubling the value by 2
print (C * 2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Given Matrices A and B
A = np.array([2,7,4])
B = np.array([3,9,8])
#Computing for Cross Product
answer = np.cross(A,B)
print(answer)
###Output
[20 -4 -3]
###Markdown
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
x = np.array([1,1,1,1])
C = np.diag(x)
print(C*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
crossproduct = np.cross(A,B)
print(crossproduct)
###Output
[20 -4 -3]
###Markdown
###Code
class Student: #class name
def __init__ (self, name, student_no, age, school, course): #attributes
self.name = name
self.age = age
self.student_no = student_no
self.school = school
self.course = course
def Myself(self): #object name
print("My name is ",self.name, self.age, "studying at ",self.school) #instances for each of attributes
print("I'm currently taking ", self.course, "and my student number is ", self.student_no)
Student1 = Student("Magleo, Mary Chelsea Reigne P.",202110390,19, "Adamson University", "B.S. Computer Engineering")
Student1.Myself()
###Output
My name is Magleo, Mary Chelsea Reigne P. 19 studying at Adamson University
I'm currently taking B.S. Computer Engineering and my student number is 202110390
###Markdown
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
**Question 1**
###Code
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2**
###Code
doub = np.multiply(2,C)
print(doub)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3**
###Code
a = np.array([2,7,4])
b = np.array([3,9,8])
q3 =np.cross(a,b)
print(q3)
###Output
[20 -4 -3]
###Markdown
Prelim Exam
###Code
print("Prince Iannbrentte M Buenaobra")
print("20212482")
print("19")
print("Computer Engineering")
###Output
Prince Iannbrentte M Buenaobra
20212482
19
Computer Engineering
###Markdown
###Code
n = input("Name: ")
c = int(input("Student_No: "))
a = int(input(" Age : "))
x = input("School: ")
d = input("Course: ")
print("Name:", n)
print("Student:", c)
print("Age:", a)
print("School:", x)
print("Course:", d)
###Output
Name: Prince Iannbrentte Buenaobra
Student_No: 20212482
Age : 19
School: Adamson University
Course: Computer Engineering
Name: Prince Iannbrentte Buenaobra
Student: 20212482
Age: 19
School: Adamson University
Course: Computer Engineering
###Markdown
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Info() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1".
###Code
class Student:
def __init__(self,fullname, student_no,age,school,course):
self.fullname=fullname
self.student_no=student_no
self.age=age
self.course=course
self.school=school
def info(self):
print(self.fullname,self.student_no,self.age,self.school,self.course)
myself = Student("Allen Patrick Argente",202101513,18,"CVSU","BSCpE")
myself.info()
###Output
Allen Patrick Argente 202101513 18 CVSU BSCpE
###Markdown
Problem 2.1. Write a Python to display your full name, student number, age, and course2. Create a class named Student with attributes: Name, Student_No, Age, School,and Course3. Create an object name Myself and assign an instance for each attribute.4. Create a method Info() using an instantiation of a class.5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2"
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n", "Hi my Name is", self.Name,"and My Student Number is", self.Student_No,"\n", "I am", self.Age,"years old","\n","And I Studying at", self.School , self.Course)
Myself = Student("Jan Rovick M. Causaren", "202101632" ,"19" , "Cavite State University - Main Campus,", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Hi my Name is Jan Rovick M. Causaren and My Student Number is 202101632
I am 19 years old
And I Studying at Cavite State University - Main Campus, Bachelor of Science in Computer Engineering
###Markdown
Prelim Exam - Lalas
* Write a Python program to display your full name, student number, age, and course.
* Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
* Create an object named Myself and assign an instance for each attribute.
* Create a method Self() using an instantiation of the class.
* Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class Student ():
def __init__ (self, Nm,Nn, Stud_No, Age, Schl, Crs, ):
self.Nm = Nm
self.Nn = Nn
self.Stud_No = Stud_No
self.Age = Age
self.Schl = Schl
self.Crs = Crs
def Myself (self):
print ("Name:", self.Nm)
print ("Nickname:", self.Nn)
print ("Student Number:", self.Stud_No)
print ("Age:", self.Age, "years old")
print ("Course:", self.Crs)
print ("School:", self.Schl)
sexybanana = Student ("AMIEL SIMON RAY LALAS", "Luciel" ,202114210, 18, "Adamson University", "Bachelor of Science in Computer Engineering")
sexybanana.Myself()
###Output
Name: AMIEL SIMON RAY LALAS
Nickname: Luciel
Student Number: 202114210
Age: 18 years old
Course: Bachelor of Science in Computer Engineering
School: Adamson University
###Markdown
**Question 1. (20 points)** Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Matrix C
import numpy as np
f = np.eye(4)
print(f)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2. (20 points)** In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
f = np.eye(4)
print(f*2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3. (10 points)** Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def self(self):
return f'Name: {self.Name}\nStudent Number: {self.Student_No}\nAge: {self.Age}\nSchool: {self.School}\nCourse: {self.Course}'
Myself = Student("Josh Gabriel E. Sese", 202117298, 18, "Adamson University", "BS In Computer Engineering")
print (Myself.self())
###Output
_____no_output_____
###Markdown
###Code
class Student ():
def __init__ (self, Student, Student_Num, Age, School,Course):
self.Student = Student
self.Student_Num = Student_Num
self.Age = Age
self.School = School
self.Course = Course
def Myself (self):
print ("The Student name is", self.Student)
print ("The Student_num is", self.Student_Num)
print ("The Student is", self.Age, "years old")
print ("The Student is studying", self.Course, "at", self.School)
NPC = Student ("Kathleen Zamora", 202113527, 19,"Adamson University", "BS Computer Engineering")
NPC.Myself()
###Output
The Student name is Kathleen Zamora
The Student_num is 202113527
The Student is 19 years old
The Student is studying BS Computer Engineering at Adamson University
###Markdown
Prelim Exam Question 1. A 4 x 4 matrix whose diagonal elements are all one (1's)
###Code
import numpy as np
A = np.array([1,1,1,1,])
C = np.diag(A)
print (C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2. doubles all the values of each element.
###Code
import numpy as np
A = np.array([1, 1, 1, 1])
C = np.diag(A)
print (C*2) #To double the value of matrix C
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. The cross-product of matrices, A = [2,7,4] and B = [3,9,8]
###Code
import numpy as np
A = np.array([2, 7, 4])
B = np.array([3, 9, 8])
#To compute the cross of arrays A and B
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
Write a Python program that displays your full name, student number, age, and course
###Code
class Student():
def __init__(self, fullname, studentnumber, age, course):
self.fullname = fullname
self.studentnumber = studentnumber
self.age = age
self.course = course
def name(self):
return self.fullname
def number(self):
return self.studentnumber
def ageko(self):
return self.age
def courseko(self):
return self.course
def display(self):
print("My Full Name is", self.name())
print("My Student Number is", self.number())
print("My Age is", self.ageko())
print("My Course is", self.courseko())
myself = Student("Jeremiah Manalang", "202010993", "20", "Computer Engineering")
myself.display()
###Output
My Full Name is Jeremiah Manalang
My Student Number is 202010993
My Age is 20
My Course is Computer Engineering
###Markdown
Prelim Exam Question 1
###Code
#numpy
import numpy as np
C = np.full((4,4),1)
Diagonal = np.diagonal((C))
print("C =",C)
print("Diagonal = ",Diagonal)
###Output
C = [[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]
Diagonal = [1 1 1 1]
###Markdown
Question 2
###Code
import numpy as np
C = np.full((4,4),1) #to display the 4x4 matrix with the value of 1
print(C*2) #to print the 4x4 matrix doubled
###Output
[[2 2 2 2]
[2 2 2 2]
[2 2 2 2]
[2 2 2 2]]
###Markdown
Question 3
###Code
import numpy as np
A = np.array([2,7,4]) #matrix A
B = np.array([3,9,8]) #matix B
output = np.cross(A,B) #cross multiply the martix A and matrix B
print(output) #to print the cross product
###Output
[20 -4 -3]
###Markdown
Example
###Code
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Write a Python program to display your full name, student number, age, and course.
Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
Create an object named Myself and assign an instance for each attribute.
Create a method Self() using an instantiation of the class.
Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class StudentInfo ():
def __init__ (self, FName, StudentID, Age, Course, School):
self.FName = FName
self.StudentID = StudentID
self.Age = Age
self.Course = Course
self.School = School
def disp (self):
print ("Name:", self.FName)
print ("Student ID:", self.StudentID)
print ("Age:", self.Age, "yrs old")
print ("Course:", self.Course)
print ("School:", self.School)
studN = StudentInfo ("Gwyneth Cepeda", 202110005, 18, "BS in Computer Engineering", "Adamson University")
studN.disp()
###Output
Name: Gwyneth Cepeda
Student ID: 202110005
Age: 18 yrs old
Course: BS in Computer Engineering
School: Adamson University
###Markdown
Preliminary Exam in Python
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Self(self):
return "Hello my name is", self.Name, "My student no is:", self.Student_No, "my age is", self.Age, "I' am currently studying at", self.School, "and I'am studying", self.Course
def display(self):
print("Hello my name is", self.Name)
print("My student no is:", self.Student_No)
print("I'am", self.Age)
print("I'm currently studying at", self.School)
print("and I'am taking", self.Course )
myself = Student("Gilbert Juluis Padriquez", "202119209", "18", "Adamson Universirty", "B.S in Computer Engineering" )
myself.display()
###Output
Hello my name is Gilbert Juluis Padriquez
My student no is: 202119209
I'am 18
I'm currently studying at Adamson Universirty
and I'am taking B.S in Computer Engineering
###Markdown
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Info() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2".
###Code
class Student:
def __init__(self, name, student_no, age, course, school):
self.name = name
self.student_no = student_no
self.age = age
self.course = course
self.school = school
def info(self):
print(self.name,self.student_no,self.age,self.course,self.school)
Myself = Student("Gimarose A. Luzande", "202107605", 19, "BS Computer Engineering", "Cavite State University - Main Campus")
print(f"My name is {Myself.name}, and my student number is {Myself.student_no}. I am {Myself.age} years old. Taking up {Myself.course} at {Myself.school}")
###Output
My name is Gimarose A. Luzande, and my student number is 202107605. I am 19 years old. Taking up BS Computer Engineering at Cavite State University - Main Campus
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
C = np.eye(4) #creates a matrix with a diagonal 1 and fills the rest with zeros
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
C = np.eye(4) #from Q1
c=2*C #for doubling the values
print(c)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
A = ([2,7,4])
B = ([3,9,8])
cross = np.cross(A,B) #cross multiplies vector A and B
print("Matrix A: ",A)
print("Matrix B: ",B)
print("Cross-product: ",cross)
###Output
Matrix A: [2, 7, 4]
Matrix B: [3, 9, 8]
Cross-product: [20 -4 -3]
###Markdown
PROBLEM 1
###Code
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n
n -= 1
print("sum =", sum)
average = sum / total_numbers
print("Average = ", average)
###Output
sum = 210
Average = 10.5
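###Markdown
For reference, the arithmetic behind this output: the loop adds every integer from 20 down to 0 and then divides by the starting value of n, $$\sum_{k=0}^{20} k = \frac{20\cdot 21}{2} = 210, \qquad \frac{210}{20} = 10.5$$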
###Markdown
PROBLEM 2
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Info() using an instantiation of the class.
###Code
class Student:
def __init__ (self, Name, Student_No, Age, School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Hi! my name is", self.Name, self.Age, "years old,","with student number", self.Student_No)
print("Currently taking",self.Course, "at",self.School)
Myself = Student("Joshua D. Atencia", 202101624,18,"Cavite State University Main Campus","Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Hi! my name is Joshua D. Atencia 18 years old, with student number 202101624
Currently taking Bachelor of Science in Computer Engineering at Cavite State University Main Campus
###Markdown
Question 1
###Code
#create a 4 x 4 matrix
import numpy as np
C= np.array([1,1,1,1])
print(np.diag(C))
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
#doubles all the values of each element
C = np.array ([1,1,1,1])
D = np.diag(C)
print(np.multiply(D,2))
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
#initialize arrays
A = np.array([2,7,4])
B = np.array([3,9,8])
#compute for the cross product
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name=Name
self.Student_No=Student_No
self.Age=Age
self.School=School
self.Course=Course
def Self(self):
print("My name is "+self.Name)
print("My student number is: ",self.Student_No)
print("My school is "+self.School)
print("My Course is "+self.Course)
me=Student("Legaspi, Mart Joven C", 201012641, 29, "Adamson University", "BS Computer Engineering")
me.Self()
###Output
My name is Legaspi, Mart Joven C
My student number is: 201012641
My school is Adamson University
My Course is BS Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,name,student_no, age, school,course):
self.name = name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info (self):
print("Name: Marc Jay Serenio")
print("Student_No : 202102057")
print("Age: 19")
print("School: Cavite State University")
print("Course: BS Computer Engineering")
Myself = Student("Marc Jay Serenio", "202102057" , "19", "Cavite State University", "BS Computer Engineering")
Myself.Info()
###Output
Name: Marc Jay Serenio
Student_No : 202102057
Age: 19
School: Cavite State University
Course: BS Computer Engineering
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
C = np.eye(4) #this is variable C that has the eye function where all elements are 0, except for the diagonal values
print("Answer: \n")
print(C) #the 4x4 matrix output
###Output
Answer:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
C = np.eye(4) #this is variable C that has the eye function where all elements are 0, except for the diagonal values
D = C * 2 #this doubles the values of each elements
print("Answer: \n")
print("The original values: \n",C) #printing the original results
print("\n The doubled values: \n",D) #the doubled printing results
###Output
Answer:
The original values:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
The doubled values:
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
import numpy as np
A = ([2,7,4]) #this is variable A
B = ([3,9,8]) #this is variable B
output = np.cross(A,B) #the cross product of a vector
print("Answer: \n ")
print(output) #result/output
###Output
Answer:
[20 -4 -3]
###Markdown
Problem 2. (50 points)
Write a Python program to display your full name, student number, age, and course.
Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
Create an object named Myself and assign an instance for each attribute.
Create a method Self() using an instantiation of the class.
Insert your GitHub link "Prelim Exam" from your repository named "OOP 58002".
###Code
class Student ():
def __init__ (self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Myself (self):
print ("ako nga pala si", self.Name)
print ("at ang aking student number ay", self.Student_No)
print ("ako ay", self.Age, "taong gulang")
print ("ang aking kurso ay", self.Course, "at nag aaral ako sa", self.School)
jai = Student ("J'aira Ronquillo", 202111835, 18, "Adamson University", "BS Computer Engineering")
jai.Myself()
###Output
By the way, I am J'aira Ronquillo
and my student number is 202111835
I am 18 years old
my course is BS Computer Engineering and I study at Adamson University
###Markdown
Prelim Exam
###Code
#Question 1
import numpy as np
c = np.ones((4, 4))
c[::2, 1::2] = 2
c[1::2, ::2] = 3
print(c)
#Question 2
import numpy as np
c = np.ones((4, 4))
c[::2, 1::2] = 2
c[1::2, ::2] = 3
print(c*2)
#Question 3
import numpy as np
A = (([2,7,4]))
B = (([3,9,8]))
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
**Prelim Exam** Matrix C
###Code
#Question1
#Numpy
import numpy as np
C = np.array([[1,4,4,4],[4,1,4,4],[4,4,1,4],[4,4,4,1]]) ## A 4x4 matrix which displays the diagonal elements are all one (1s)
print(C) ## Displays the result
#Question2
#Numpy
import numpy as np
C = np.array([[1,4,4,4],[4,1,4,4],[4,4,1,4],[4,4,4,1]]) ## A 4x4 matrix which is related to question 1 and that the values are doubled
print(C) ## Display the result (diagonal elements are one (1s))
print()
print(C*2) ## Displays the result (Double the values)
#Question3
#Numpy
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
C = np.cross(A,B) ## Cross product of matrices A and B
print(C) ## Display the result
###Output
[20 -4 -3]
###Markdown
###Code
class Person():
def __init__(self, name, age, school, ID, course):
self.name = name
self.age = age
self.school = school
self.ID = ID
self.course = course
def Display():
name, age = "Betchayda, Ezekiel L.", 18
ID = 202113843
school = "Adamson University"
course = "Computer Engineering"
print("My name is {} and I am {} years old".format(name, age))
print("My student ID is {}".format(ID))
print("I am currently studying in {}".format(school))
print("My course is {}".format(course))
Display()
###Output
My name is Betchayda, Ezekiel L. and I am 18 years old
My student ID is 202113843
I am currently studying in Adamson University
My course is Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name)
print(self.Student_No)
print(self.Age)
print(self.School)
print(self.Course)
Myself = Student("Paul Francis B. Masangcay", 202101724, 19, "CvSU", "BS Computer Engineering")
Myself.Info()
###Output
Paul Francis B. Masangcay
202101724
19
CvSU
BS Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Student,Number,Age,School,Course):
self.Student = Student
self.Number = Number
self.Age = Age
self.School = School
self.Course = Course
def myFunction(self):
print("Name: ",self.Student,"\nAge: ",self.Age, "\nStudent No: ",self.Number, "\nSchool: ", self.School, "\nCourse: ", self.Course)
Myself= Student("Meneses, Michael Paul A.", 202113407, 19, "Adamson University", "BS Computer Engineering")
Myself.myFunction()
###Output
Name: Meneses, Michael Paul A.
Age: 19
Student No: 202113407
School: Adamson University
Course: BS Computer Engineering
###Markdown
Prelim Exam
###Code
import numpy as LA
###Output
_____no_output_____
###Markdown
Question 1
###Code
C = LA.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
B = C*2
print(B)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
A = LA.array([2,7,4])
D = LA.array([3,9,8])
Output = LA.cross(A,D)
print("Cross-product: ", Output)
###Output
Cross-product: [20 -4 -3]
###Markdown
###Code
class OOP_1_1:
def __init__ (self,fullname,student_no,age,school,course):
self.fullname = fullname
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print("I Am",self.fullname)
print("My Student Number is",self.student_no)
print("I am already",self.age,"Years Old")
print("My school is",self.school)
print("And the course i Picked is",self.course)
student = OOP_1_1("Marc christian Blasco",202101462,18,"CvSU","BSCPE")
student.Info()
###Output
I Am Marc christian Blasco
My Student Number is 202101462
I am already 18 Years Old
My school is CvSU
And the course i Picked is BSCPE
###Markdown
###Code
#Problem 2
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Fullname: ",self.Name)
print("Student Number: ",self.Student_No)
print("Age: ",self.Age)
print("School: ",self.School)
print("Course: ",self.Course)
Myself = Student("Justine Kate Palen Albero",202101555,19,"Cavite State University - Indang","BS Computer Engineering")
Myself.Info()
###Output
Fullname: Justine Kate Palen Albero
Student Number: 202101555
Age: 19
School: Cavite State University - Indang
Course: BS Computer Engineering
###Markdown
* Write a Python program to display your full name, student number, age, and course.
* Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
* Create an object named Myself and assign an instance for each attribute.
* Create a method Self() using an instantiation of the class.
* Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001".
###Code
class Person():
def __init__(self, name, age, school, ID, course):
self.name = name
self.age = age
self.school = school
self.ID = ID
self.course = course
def my_info():
name, age = "Hular, Sarah Nicole S.,", 18
ID =202119132
school = "Adamson University"
course = "Bachelor of Science in Computer Engineering"
print("My name is: {} {} years old".format(name, age))
print("My student ID: {}".format(ID))
print("I am currently studying in: {}".format(school))
print("My course: {}".format(course))
my_info()
###Output
My name is: Hular, Sarah Nicole S., 18 years old
My student ID: 202119132
I am currently studying in: Adamson University
My course: Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student():
def __init__(self,name,studno,yearold,adu,cpe):
self.name = name
self.studno = studno
self.yearold = yearold
self.adu = adu
self.cpe = cpe
def student(self):
return self.name + self.studno + self.yearold + self.adu + self.cpe
def detail(self):
print(self.student())
myself = Student("John Andrey Delos Reyes ","202119019 ","19y/o ", "Adamson University ", "BS Computer Engineering")
myself.detail()
###Output
_____no_output_____
###Markdown
###Code
class Student:
def __init__(self,Name,Student_no,Age,Course):
self.Name=Name
self.Student_no=Student_no
self.Age=Age
self.Course=Course
def Myself(self):
print(self.Name)
print(self.Student_no)
print(self.Age)
print(self.Course)
student=Student("Ulysses Alcantara","202102226","19","BS in Computer Engineering")
student.Myself()
###Output
Ulysses Alcantara
202102226
19
BS in Computer Engineering
###Markdown
**Problem 2**
1. Write a Python program to display your full name, student number, age, and course.
2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
3. Create an object named Myself and assign an instance for each attribute.
4. Create a method Info() using an instantiation of the class.
5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1".
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(f"Full Name: {self.Name}")
print(f"Student Number: {self.Student_No}")
print(f"Age: {self.Age}")
print(f"Course: {self.Course}")
Myself = Student("Lanz Andrei A. Catamisan", 202101886, 18, "Cavite State University", "BSCpE")
Myself.Info()
###Output
Full Name: Lanz Andrei A. Catamisan
Student Number: 202101886
Age: 18
Course: BSCpE
###Markdown
Question 1
###Code
import numpy as np
A = np.array([1, 1, 1, 1,])
C = np.diag(A)
print(C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
import numpy as np
A = np.array([1, 1, 1, 1])
C = np.diag(A)
#To double the value of array C
print(C*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
import numpy as np
A = np.array([2, 7, 4])
B = np.array([3, 9, 8])
#To compute the cross of arrays A and B
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
PROBLEM 2
Write a Python program to display your full name, student number, age, and course.
Create a class named Student with attributes: Name, Student_No, Age, School, and Course.
Create an object named Myself and assign an instance for each attribute.
Create a method Info() using an instantiation of the class.
Insert your GitHub link "Prelim Exam" for your repository named "OOP 1-2".
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name:", Myself.Name)
print("Student no. :", Myself.Student_No)
print("Age:", Myself.Age)
print("School:", Myself.School)
print("Course:", Myself.Course)
Myself = Student("Alliyah Francine J. Roxas", "202102290", "19 years of age", "Cavite State University(Main Campus)", "Bachelor of Science in Computer Engineering (BSCPE)")
Myself.Info()
###Output
Name: Alliyah Francine J. Roxas
Student no. : 202102290
Age: 19 years of age
School: Cavite State University(Main Campus)
Course: Bachelor of Science in Computer Engineering (BSCPE)
###Markdown
###Code
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n #sum = sum+n 20+19+18+17+16+15+14+13+12+11+10+9+8+7+6+5+4+3+2+1+0=210
n -= 1 #n = n-1 -1
print("sum =", sum)
average = sum / total_numbers #210/20 = 10.50
print("Average = ", average)
#Problem 2
class Student:
def __init__(self,Name, Student_No,Age, Course, School):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.Course = Course
self.School = School #assigning all the attributes
def Self(self):
print("My Name is",self.Name)
print("My Student Number is",self.Student_No)
print("My Age is",self.Age)
print("My Course is",self.Course)
print("I am studying in",self.School) #assigning Methods to display the instance of the class
Myself = Student("Maria",201040165,39,"BSCpE","Adamson University")
Myself.Self()
###Output
My Name is Maria
My Student Number is 201040165
My Age is 39
My Course is BSCpE
I am studying in Adamson University
###Markdown
Prelim Exam Question 1. Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(2*C)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
C = np.cross(A,B)
print(C)
###Output
[20 -4 -3]
###Markdown
###Code
import numpy as np
c = np.ones((4, 4))
c[::2, 1::2] = 2
c[1::2, ::2] = 3
print(c)
import numpy as np
c = np.ones((4, 4))
c[::2, 1::2] = 2
c[1::2, ::2] = 3
print(c*2)
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
###Code
class StudentWithAttributes():
def __init__(self,name,studentnum,age,school,course):
self.name = name
self.studentnum = studentnum
self.age= age
self.school = school
self.course = course
def section(self):
print("Name:", self.name)
print("Student No.:", self.studentnum)
print("Age:", self.age)
print("Schol:", self.school)
print("Course:", self.course)
myself= StudentWithAttributes("Dumapias, Alfred Resti R. ","202110121 ","18 ","Adamson University ","BS CpE ")
myself.section()
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n
n -= 1
print("sum =", sum)
average = sum / total_numbers
print("Average = ", average)
###Output
sum = 210
Average = 10.5
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
C = np.full((4,4),1)
print(C)
r = np.diag(C)
print(r)
###Output
[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]
[1 1 1 1]
###Markdown
Question 2
###Code
r1 = C*2 #double the value of every element in C
print(r1)
###Output
[[2 2 2 2]
[2 2 2 2]
[2 2 2 2]
[2 2 2 2]]
###Markdown
Question 3
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
result = np.cross(A,B)
print(result)
###Output
[20 -4 -3]
###Markdown
Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(self, Name,Student_No,Age,School,Course):
self.Name=Name
self.Student_No=Student_No
self.Age=Age
self.School=School
self.Course=Course
def Info(self):
print("My name is",self.Name,"I am",self.Age, "years old.", "I am currently taking",self.Course, "at",self.School, "and my student number is",self.Student_No)
Myself=Student("Lance Wesley B. Alcala",202101468,19,"Cavite State University","Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
My name is Lance Wesley B. Alcala I am 19 years old. I am currently taking Bachelor of Science in Computer Engineering at Cavite State University and my student number is 202101468
###Markdown
Problem 2 Write a Python to display your full name, student number, age, and course
###Code
class Student:
def __init__(self, name, student_no, age, school, course):
self.name = name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print("My name is", self.name + ".")
print("My student number is", self.student_no)
print("I am", self.age, "years old")
print("I am studying at", self.school + ".", "Currently, I am taking", self.course)
person = Student(str(input("Full Name: ")), str(input("Student Number: ")), str(input("Age: ")), str(input("School: ")), str(input("Course: ")))
person.Info()
###Output
Full Name: Ryan Nico Anogante
Student Number: 202101875
Age: 21
School: Cavite State University
Course: Bachelor of Science in Computer Engineering
My name is Ryan Nico Anogante.
My student number is 202101875
I am 21 years old
I am studying at Cavite State University. Currently, I am taking Bachelor of Science in Computer Engineering
###Markdown
QUESTION 1
###Code
import numpy as np
C = np.zeros((4,4)) #this is where the 4 x 4 matrix is created
np.fill_diagonal(C, 1) #fill the main diagonal of C with 1's
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
QUESTION 2
###Code
import numpy as np
c = np.zeros((4,4)) #this is where the 4 x 4 matrix is created
np.fill_diagonal(c,1)
print("c= ")
print(c)
print("Doubled:")
print(2*c)
###Output
c=
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
Doubled:
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
QUESTION 3
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
Programming Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(f"Name : {self.Name}")
print(f"Student_No : {self.Student_No}")
print(f"Age : {self.Age}")
print(f"School : {self.School}")
print(f"Course : {self.Course}")
Myself = Student("Charlie Milaya", 202101869, 18, "Cavite State University",
"BS Computer Engineering")
Myself.Info()
###Output
Name : Charlie Milaya
Student_No : 202101869
Age : 18
School : Cavite State University
Course : BS Computer Engineering
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Prelim Exam
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Question 1
###Code
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
Ans = 2*C
print(Ans)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
X = np.cross(A,B)
print(X)
###Output
[20 -4 -3]
###Markdown
###Code
class student():
def __init__(self, name, student_no, age, school, course):
self.name = name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
Myself = student("Monte, Jerome P.", 202117435, 19, "Adamson University", "BS Computer Engineering")
print("Name : ", Myself.name)
print("Student No. : ", Myself.student_no)
print("Age : ", Myself.age)
print("School : ", Myself.school)
print("Course : ", Myself.course)
###Output
Name : Monte, Jerome P.
Student No. : 202117435
Age : 19
School : Adamson University
Course : BS Computer Engineering
###Markdown
Prelim Exam Answer for Question 1
###Code
import numpy as np
C= np.zeros((4,4)) # 4x4 matrix
np.fill_diagonal(C,1)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Answer for Question 2
###Code
import numpy as np
c = np.zeros((4,4)) # 4x4 matrix
np.fill_diagonal(c,1) # To fill a diagonal line with a given number, use the fill diagonal method. In this case number "1" is used.
print("c = ")
print(c)
print("Doubled: ")
print(2*c)
###Output
c =
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
Doubled:
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Answer for Question 3
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
#Problem 2
class Student:
def __init__ (self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name:", self.Name)
print("Student No:", self.Student_No)
print("Age:", self.Age)
print("School:", self.School)
print("Course:", self.Course)
Myself = Student ("Julius Caezar R. Eugenio", "202101486", "19", "Cavite State University", "BS Computer Engineering")
Myself.Info ()
###Output
Name: Julius Caezar R. Eugenio
Student No: 202101486
Age: 19
School: Cavite State University
Course: BS Computer Engineering
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n", "My name is", self.Name, "and I am", self.Age, "\n",
"I am currently studying in", self.School, "taking", self.Course, "with a Student Number of", self.Student_No)
Myself = Student("Landon Sarmiento Lorica", "202106458", "19 years old", "Cavite State University Main Campus", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
My name is Landon Sarmiento Lorica and I am 19 years old
I am currently studying in Cavite State University Main Campus taking Bachelor of Science in Computer Engineering with a Student Number of 202106458
###Markdown
Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(self,Name,Student_No,Age,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.Course = Course
def Info(self):
print("My Name is", self.Name)
print("My Student number is", self.Student_No)
print("My Age is", self.Age)
print("My Course is", self.Course)
student = Student("Ben Piolo G. Nicart",202101441,19,"BSCpE")
student.Info()
###Output
My Name is Ben Piolo G. Nicart
My Student number is 202101441
My Age is 19
My Course is BSCpE
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("Name: ", self.Name)
print("Student No: ", self.Student_No)
print("Age: ", self.Age)
print("School: ", self.School)
print("Course: ", self.Course)
Myself = Student("Dirk M. Tayab", 202102048, 19, "Cavite State University", "BS Computer Engineering")
Myself.Info()
###Output
Name: Dirk M. Tayab
Student No: 202102048
Age: 19
School: Cavite State University
Course: BS Computer Engineering
###Markdown
PRELIM EXAM Question 1
###Code
import numpy as np
C = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
print(C)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2
###Code
Double = 2*C #Formula that doubles the value of each element
print(Double)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
CP = np.cross(A,B) #Formula for computing cross product of A and B
CP
###Output
_____no_output_____
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def self(self):
return f'Name: {self.Name}\nStudent Number: {self.Student_No}\nAge: {self.Age}\nSchool: {self.School}\nCourse: {self.Course}'
Myself = Student("Rafael Espiña", 202116061, 19, "Adamson University", "BS in Computer Engineering")
print(Myself.self())
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n
n -= 1
print("sum =", sum)
average = sum / total_numbers
print("Average = ", average)
###Output
sum = 210
Average = 10.5
###Markdown
Prelim Exam Question 1 Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2 In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
Doubles = 2*C
print(Doubles)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3 Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([2,7,4])
B= np.array([3,9,8])
product = np.cross(A,B)
print(product)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(f"Name: {self.Name} \nStudent Number: {self.Student_No} \nAge: {self.Age} \nSchool: {self.School} \nCourse: {self.Course}")
Myself = Student("Jhoriz Rodel F. Aquino", 202106201, 19, "Cavite State University", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Name: Jhoriz Rodel F. Aquino
Student Number: 202106201
Age: 19
School: Cavite State University
Course: Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("My name is",self.Name,"with student number of",self.Student_No,"and I am",self.Age + ".","I'm currently taking",self.Course,"at",self.School + ".")
Myself = Student("Chelsey L. Guasis",202102319,"19 years old","BS in Computer Engineering","Cavite State University-Main Campus")
Myself.Info()
###Output
My name is Chelsey L. Guasis with student number of 202102319 and I am 19 years old. I'm currently taking BS in Computer Engineering at Cavite State University-Main Campus.
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Self(self):
print(f"The student's full name: \n-{self.Name.title()}\n")
print(f"The student's number: \n-{self.Student_No}\n")
print(f"The student's age: \n-{self.Age}\n")
print(f"The student's school: \n-{self.School.title()}\n")
print(f"The student's Course: \n-{self.Course.title()}\n")
print(f"--->({self.Student_No}){self.Name.title()} {self.Age} years old, currently studying in {self.School.title()} as a {self.Course.title()} student.\n")
Myself = Student('nemuel rico palomo', 201813656, '22', 'adamson university', 'b.s. computer engineering' )
Myself.Self()
###Output
The student's full name:
-Nemuel Rico Palomo
The student's number:
-201813656
The student's age:
-22
The student's school:
-Adamson University
The student's Course:
-B.S. Computer Engineering
--->(201813656)Nemuel Rico Palomo 22 years old, currently studying in Adamson University as a B.S. Computer Engineering student.
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
x = np.array([1,1,1,1])
C = np.diag(x)
print(C*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
**Prelim Exam** **Question 1**
###Code
import numpy as np
c = np.eye(4)
print(c)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2**
###Code
import numpy as np
c = np.eye(4)
print(c)
print()
print(c*2)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3**
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
import numpy as np #Importing of NumPy. This code was ran first to make numpy usable in all other code cells.
###Output
_____no_output_____
###Markdown
**Question 1.** (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
C = np.identity(4) #Use of the identity matrix, which fills the main diagonal with 1.
print(C) #Printing of matrix "C"
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2.** (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
double = 2*C #Doubles the values of matrix "C"
print(double) #Prints doubled matrix "C"
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3.** (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = [2,7,4] #Matrix A
B = [3,9,8] #Matrix B
cross = np.cross(A,B) #Calculates the cross product of matrix A and B
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,Name,Student_Number,Age,School,Course):
self.name=Name
self.sn=Student_Number
self.age=Age
self.school=School
self.course=Course
def Myself(self):
print("My name is",self.name)
print("I am of", self.age,"years of age")
print("I attend",self.school)
print("My student number is",self.sn)
print("I am currently taking up the course of",self.course)
Info=Student("Billy Gilson R. Pulido",202102004,19,"Cavite State University - Main Campus","Bachelor of Science in Computer Engineering")
Info.Myself()
###Output
My name is Billy Gilson R. Pulido
I am of 19 years of age
I attend Cavite State University - Main Campus
My student number is 202102004
I am currently taking up the course of Bachelor of Science in Computer Engineering
###Markdown
Problem 2
###Code
class student:
def __init__(self, name, studno, age, school, course):
self.name = name
self.studno = studno
self.age = age
self.school = school
self.course = course
def Myself(self):
print(self.name)
print(self.studno)
print(self.age)
print(self.school)
print(self.course)
A1 =student("Ashley Denise T. Goce", 202112720, 18, "Adamson University", "Bachelor of Science in Computer Engineering (BSCpE)")
A1.Myself()
###Output
Ashley Denise T. Goce
202112720
18
Adamson University
Bachelor of Science in Computer Engineering (BSCpE)
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([1,1,1,1])
C = np.diag(A)
print(C*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self, name, student_no, age, course, school):
self.name = name #attributes
self.student_no = student_no
self.age = age
self.course = course
self.school = school
def info(self):
print(self.name,self.student_no,self.age,self.course,self.school)
Myself = Student("Hazel Anne P. Quilao", "202102041", 19, "BS Computer Engineering", "Cavite State University - Main Campus")
print(f"My name is {Myself.name}, and my student number is {Myself.student_no}. I am {Myself.age} years old. Taking up {Myself.course} at {Myself.school}")
###Output
My name is Hazel Anne P. Quilao, and my student number is 202102041. I am 19 years old. Taking up BS Computer Engineering at Cavite State University - Main Campus
###Markdown
Prelim Exam 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2"
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name,self.Student_No,self.Age,self.School,self.Course)
Myself = Student("April Joy S. Lopez", "202105215", "18 years of age", "Cavite State University - Main Campus(Indang)", "Bachelor of Science in Computer Engineering")
print("Fullname:", Myself.Name)
print("Student No:", Myself.Student_No)
print("Age:", Myself.Age)
print("School:", Myself.School)
print("Course Program:", Myself.Course)
###Output
Fullname: April Joy S. Lopez
Student No: 202105215
Age: 18 years of age
School: Cavite State University - Main Campus(Indang)
Course Program: Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("My name is",self.Name,".","My student number is",self.Student_No,".","My age is",self.Age,"and I am studying in",self.School,"taking",self.Course,".")
Myself = Student("Alessandro Xavier T. Ocasion",202101502,"19 years old","Cavite State University","BS in Computer Engineering")
Myself.Info()
###Output
My name is Alessandro Xavier T. Ocasion . My student number is 202101502 . My age is 19 years old and I am studying in Cavite State University taking BS in Computer Engineering .
###Markdown
**Problem 2:** 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class.
###Code
class Student:
def __init__(self, full_name, student_no, age, school, course):
self.full_name = full_name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print(f"My name is {self.full_name}.")
print(f"\nMy Student Number is {self.student_no}.")
print(f"\nI am {self.age} years old.")
print(f"\nI'm enrolled in {self.school}.")
print(f"\nMy course is {self.course}.")
Myself = Student("Dominic Z. Marasigan", 202101628, 19, "CvSU - Indang Campus", "BS Computer Engineering (BS CpE)",)
Myself.Info()
###Output
My name is Dominic Z. Marasigan.
My Student Number is 202101628.
I am 19 years old.
I'm enrolled in CvSU - Indang Campus.
My course is BS Computer Engineering (BS CpE).
###Markdown
Problem Set 2
###Code
class student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Student_No, self.Age, self.School, self.Course)
Myself = student("Jhon Arvin T. Viado",202102083,18,"Cavite State University","Bachelor of Science in Computer Engineering")
print("Name: ", Myself.Name)
print("Student No. : ", Myself.Student_No)
print("Age: ", Myself.Age)
print("School: ", Myself.School)
print("Course: ", Myself.Course)
print("Thankyou for the information!")
###Output
Name: Jhon Arvin T. Viado
Student No. : 202102083
Age: 18
School: Cavite State University
Course: Bachelor of Science in Computer Engineering
Thankyou for the information!
###Markdown
Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class.
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name=Name
self.Student_No=Student_No
self.Age=Age
self.School=School
self.Course=Course
def Info(self):
return (f'Name: {self.Name} | Student No: {self.Student_No} | Age: {self.Age} | School: {self.School} | Course: {self.Course}')
Myself=Student('Gabriel S. Catanaoan', 202101498, 18,'Cavite State University','Bachelor of Science in Computer Engineering')
print(Myself.Info())
###Output
Name: Gabriel S. Catanaoan | Student No: 202101498 | Age: 18 | School: Cavite State University | Course: Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Myself(self):
print(self.Name,self.Student_No,self.Age,self.School,self.Course)
student = Student("Wearl Ian G. Baguio",202101723,18,"CvSU-Indang","BSCpE")
student.Myself()
###Output
Wearl Ian G. Baguio 202101723 18 CvSU-Indang BSCpE
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
C = np.eye(4)
print(C+C)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,fullname,student_number,age,school,course):
self.fullname = fullname
self.student_number = student_number
self.age = age
self.school = school
self.course = course
def info(self):
print("Fullname:",self.fullname)
print("Student_number:",self.student_number)
print("My Age:",self.age)
print("My School:",self.school)
print("My Course:",self.course)
myself = Student("Michael Colcol", "202101789", "19","Cavite State University","BS Computer Engineering(BSCPE)")
myself.info()
###Output
Fullname: Michael Colcol
Student_number: 202101789
My Age: 19
My School: Cavite State University
My Course: BS Computer Engineering(BSCPE)
###Markdown
Problem 2
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def info(self):
print("Name: ", self.Name)
print("Student number: ", self.Student_No)
print("Age: ", self.Age,"years old")
print("School: ", self.School)
print("Course: ", self.Course)
Myself = Student("Aguado, Danielle Ysabelle M.", 202102333, 19, "Cavite State University", "BS Computer Engineering")
Myself.info()
###Output
Name: Aguado, Danielle Ysabelle M.
Student number: 202102333
Age: 19 years old
School: Cavite State University
Course: BS Computer Engineering
###Markdown
Question 1
###Code
import numpy as np
C = np.eye(4)
print("C",C)
###Output
C [[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
C = np.eye(4)
print(C*2)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
import numpy as np
A = ([2,7,4])
B = ([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
###Markdown
Prelim Exam
###Code
import numpy as np
##Question 1
C = np.array([1,1,1,1])
print(np.diag(C))
##Question 2
C = np.array([1,1,1,1])
A = np.diag(C)
print(np.multiply(A,2)) #double every element of the diagonal matrix
##Question 3
A = np.array([2,7,4])
B = np.array([3,9,8])
print(np.cross(A,B))
###Output
[20 -4 -3]
###Markdown
###Code
import numpy as np
A = np.array([
[1, 2, 3],
[2, 3, 3],
[3, 4, -2],
])
A_Det = round(np.linalg.det(A))
A_Det
###Output
_____no_output_____
###Markdown
Problem 2. (50 points) 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Self() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001"
###Code
class Student(): #create a class named Student
def __init__(self,name,number,age,school,course):
self.name = name #represents the student's full name
self.number = number #represents her student number
self.age = age #represents her age
self.school = school #represents her currently enrolled university
self.course = course #represents her chosen course
def FullName(self):
return self.name
def StudentNumber(self):
return self.number
def Age(self):
return self.age
def University(self):
return self.school
def Course(self):
return self.course
def display(self):
print("My full name is", self.FullName())
print("My student number is", self.StudentNumber())
print("I am", self.Age())
print("I am currently enrolled at", self.University())
print("My chosen course is", self.Course())
#to display an info with its attributes
info = Student("Vargas, Zep Monica Aitziber.", "202117648.", "19 years old.", "Adamson University.", "BS Computer Engineering.")
info.display()
###Output
My full name is Vargas, Zep Monica Aitziber.
My student number is 202117648.
I am 19 years old.
I am currently enrolled at Adamson University.
My chosen course is BS Computer Engineering.
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n", "Hi, I am", self.Name, "and my student number is", self.Student_No,
"\n", "I am currently", self.Age, "\n",
"Studying at", self.School, "taking", self.Course)
Myself = Student("Colleen M. Quijano", "202102070", "19 years old", "Cavite State Univeristy - Main Campus", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Hi, I am Colleen M. Quijano and my student number is 202102070
I am currently 19 years old
Studying at Cavite State University - Main Campus taking Bachelor of Science in Computer Engineering
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Matrix C
import numpy as np
f = np.eye(4)
print(f)
###Output
_____no_output_____
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
f = np.eye(4) * 2
print(f)
###Output
_____no_output_____
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
_____no_output_____
###Markdown
Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student:
def __init__(Myself, Name, Student_No, Age, School, Course):
Myself.Name = Name
Myself.Student_No = Student_No
Myself.Age = Age
Myself.School = School
Myself.Course = Course
def Info(Myself):
#print(Myself.Name, Myself.Student_No, Myself.Age, Myself.School Myself.Course)
print("My full name is", f"{Myself.Name}.")
print("My Student Number is", f"{Myself.Student_No}.")
print("My Age is", f"{Myself.Age}.")
print("My School is", f"{Myself.School}.")
print("My Course is", f"{Myself.Course}.")
student= Student("Ericka Jane A. Alegre", 202101777,18,"Cavite State University-Don Severino Delas Alas Campus", "BS in Computer Engineering")
student.Info()
###Output
My full name is Ericka Jane A. Alegre.
My Student Number is 202101777.
My Age is 18.
My School is Cavite State University-Don Severino Delas Alas Campus.
My Course is BS in Computer Engineering.
###Markdown
1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class Student():
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("My name is",self.Name)
print(self.Student_No,"is my student number.")
print("I am",self.Age,"years old.")
print("I am currently studying at",self.School)
print("The course that I'm taking is",self.Course)
Myself = Student("Gabriel Q. Camandono.",202101726,18,"Cavite State University - Don Severino delas Alas Campus.","BS Computer Engineering.")
Myself.Info()
###Output
My name is Gabriel Q. Camandono.
202101726 is my student number.
I am 18 years old.
I am currently studying at Cavite State University - Don Severino delas Alas Campus.
The course that I'm taking is BS Computer Engineering.
###Markdown
###Code
class Student():
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def student(self):
return self.Name + self.Student_No + self.Age + self.School + self.Course
def section(self):
print(self.student())
myself = Student("Waywaya B. Taclibon ","202111908 ","18y/o ", "Adamson University ", "BS Computer Engineering")
myself.section()
###Output
Waywaya B. Taclibon 202111908 18y/o Adamson University BS Computer Engineering
###Markdown
###Code
class Student:
def __init__(self, Fullname, Student_Number, Age, School, Course):
self.Fullname = Fullname
self.Student_Number = Student_Number
self.Age = Age
self.School = School
self.Course = Course
class Info(Student):
def info(self):
print("Fullname:", self.Fullname)
print("Student Number:", self.Student_Number)
print("Age:", self.Age)
print("School:", self.School)
print("Course:", self.Course)
myself = Info("Jeroh Lee Malabanan Mojica", "202101648", "18 years old", "Cavite State University-Main Campus", "Bachelor of Science in Computer Engineering")
myself.info()
###Output
Fullname: Jeroh Lee Malabanan Mojica
Student Number: 202101648
Age: 18 years old
School: Cavite State University-Main Campus
Course: Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student():
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def student(self):
return self.Name + self.Student_No + self.Age + self.School + self.Course
def section(self):
print(self.student())
myself = Student("Yumang, James Beranrd G. ","202112471 ","18", "Adamson University ", "BS Computer Engineering")
myself.section()
###Output
Yumang, James Beranrd G. 202112471 18 Adamson University BS Computer Engineering
###Markdown
**Prelim Exam**
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
**Question 1**
###Code
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2**
###Code
R = C*2
print(R)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3**
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
product = np.cross(A,B)
print(product)
###Output
[20 -4 -3]
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
answer = 2*C
print(answer)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3 Matrix A
###Code
A = np.array([2,7,4])
###Output
_____no_output_____
###Markdown
Matrix B
###Code
B = np.array([3,9,8])
###Output
_____no_output_____
###Markdown
Answer
###Code
answer2 = np.cross(A,B)
print(answer2)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,name,student_no,age,school,course):
self.name = name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print("name:" f"{self.name}, student_no:" f"{self.student_no}, age:" f" {self.age}," "school:" f" {self.school}," "course:" f"{self.course}")
myself = Student("Elise Brixe S. Cubol","202101864","18","Cavite State University","Bachelor of Science in Computer Engineering")
myself.Info()
###Output
name:Elise Brixe S. Cubol, student_no:202101864, age: 18,school: Cavite State University,course:Bachelor of Science in Computer Engineering
###Markdown
Prelim Exam Problem 2: 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-2".
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name, self.Student_No, self.Age, self.School, self.Course)
Myself = Student("John Fred B. Delos Santos", 202101897, 19, "Cavite State University - Don Severino Delas Alas Campus", "Bachelor of Science in Computer Engineering")
print(f"Name: {Myself.Name} \nStudent Number: {Myself.Student_No} \nAge: {Myself.Age} \nSchool: {Myself.School} \nCourse: {Myself.Course}")
print(f"\nHello! My name is {Myself.Name}, {Myself.Age} years old, and currently Studying in {Myself.School}. \nI am pursuing {Myself.Course} and my student number is {Myself.Student_No}.")
###Output
Name: John Fred B. Delos Santos
Student Number: 202101897
Age: 19
School: Cavite State University - Don Severino Delas Alas Campus
Course: Bachelor of Science in Computer Engineering
Hello! My name is John Fred B. Delos Santos, 19 years old, and currently Studying in Cavite State University - Don Severino Delas Alas Campus.
I am pursuing Bachelor of Science in Computer Engineering and my student number is 202101897.
###Markdown
**Problem 2:** 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class.
###Code
class Student:
def __init__(self, full_name, student_no, age, school, course):
self.full_name = full_name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print(f"My name is {self.full_name}.")
print(f"\nMy Student Number is {self.student_no}.")
print(f"\nI am {self.age} years old.")
print(f"\nI'm enrolled in {self.school}.")
print(f"\nMy course is {self.course}.")
Myself = Student("Dominic Z. Marasigan", 202101628, 19, "CvSU - Indang Campus", "BS Computer Engineering (BS CpE)",)
Myself.Info()
###Output
My name is Dominic Z. Marasigan.
My Student Number is 202101628.
I am 19 years old.
I'm enrolled in CvSU - Indang Campus.
My course is BS Computer Engineering (BS CpE).
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name= Name
self.Student_No= Student_No
self.Age= Age
self.School= School
self.Course= Course
def Student_info(self):
print(f" Name:{self.Name} \n Student Number:{self.Student_No} \n Age:{self.Age} \n School:{self.School} \n Course:{self.Course}")
Myself= Student("Sarah O. Rebulado", "202101811", "19", "Cavite State University-Main, Indang", "Bachelor of Science in Computer Engineering")
Myself.Student_info()
###Output
Name:Sarah O. Rebulado
Student Number:202101811
Age:19
School:Cavite State University-Main, Indang
Course:Bachelor of Science in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(f"Student name: {self.Name} \nStudent No.: {self.Student_No} \nStudent Age: {self.Age} \nStudent School: {self.School} \nStudent Course: {self.Course}")
print(f"I am {self.Name} \nMy Student No. is {self.Student_No} \nI am in the {self.Age}th year of my life \nMy school where I took my course is {self.School} \nLastly my course is {self.Course}")
Myself = Student("Gerald Christian Rey R. Balindan", 202101796, 19, "Cavite State University-Indang Campus", "Bachelor of Science Major in Computer Engineering")
Myself.Info()
###Output
Student name: Gerald Christian Rey R. Balindan
Student No.: 202101796
Student Age: 19
Student School: Cavite State University-Indang Campus
Student Course: Bachelor of Science Major in Computer Engineering
I am Gerald Christian Rey R. Balindan
My Student No. is 202101796
I am in the 19th year of my life
My school where I took my course is Cavite State University-Indang Campus
Lastly my course is Bachelor of Science Major in Computer Engineering
###Markdown
###Code
class Student:
def __init__(self, name, student_no, age, course, school):
self.name = name #attributes
self.student_no = student_no
self.age = age
self.course = course
self.school = school
def info(self):
print(self.name,self.student_no,self.age,self.course,self.school)
Myself = Student("Florentino R. Manaysay III", "202101473", 19, "BS Computer Engineering", "Cavite State University - Main Campus")
print(f"My name is {Myself.name}, and my student number is {Myself.student_no}. I am {Myself.age} years old. Taking up {Myself.course} at {Myself.school}")
###Output
My name is Florentino R. Manaysay III, and my student number is 202101473. I am 19 years old. Taking up BS Computer Engineering at Cavite State University - Main Campus
###Markdown
Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
#Matrix C
a = np.matrix([1,1,1,1])
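# a.A1 flattens the 1 x 4 matrix into a 1-D array so np.diag can place the ones on the diagonal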
c = np.diag(a.A1)
print(c)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
a = np.array([1,1,1,1])
c = np.diag(a)
print(c*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Given Matrices A and B
A = np.array([2,7,4])
B = np.array ([3,9,8])
#Compute for the cross product
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self, Name, Student_No, Age, School, Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Self(self):
return self.Name, self.Student_No, self.Age, self.School, self.Course
def display (self):
print("College Student")
print("\nName:", self.Name)
print("Student Number:", self.Student_No)
print("Age:", self.Age)
print("School:", self.School)
print("Course:", self.Course)
print("\n--End of Program--")
Myself = Student("Marina Ortega", 202119669, 20, "Adamson University", "B.S Computer Engineering")
Myself.display()
###Output
College Student
Name: Marina Ortega
Student Number: 202119669
Age: 20
School: Adamson University
Course: B.S Computer Engineering
--End of Program--
###Markdown
Question 1 Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np #importing of library numpy
C = np.array(([1.,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1],)) #4x4 matrix
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2 In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
print(2*C)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3 Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array(([2,7,4]))
B = np.array(([3,9,8]))
print(np.cross(A,B))
###Output
[20 -4 -3]
###Markdown
Question 1 (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
#Matrix c
a = np.array([1,1,1,1])
b = np.diag(a)
print(b)
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
a = np.array([1,1,1,1])
b = np.diag(a)
print(b*2)
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
a = np.array ([2,7,4])
b = np.array ([3,9,8])
output = np.cross(a,b)
print(output)
###Output
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n","Greetings I am",self.Name,"and my student number is", self.Student_No,"\n","I just turned",self.Age,"in the 15th of March",
"\n", "I am currently enrolled in", self.School,"taking", self.Course)
Myself = Student("Ernest Danniel R. Tiston","202106651","18", "Cavite State University-Indang Campus", "Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Greetings I am Ernest Danniel R. Tiston and my student number is 202106651
I just turned 18 in the 15th of March
I am currently enrolled in Cavite State University-Indang Campus taking Bachelor of Science in Computer Engineering
###Markdown
###Code
n = 20
total_numbers = n
sum = 0
while n >= 0:
sum += n
n -= 1
print("sum =", sum)
average = sum / total_numbers
print("Average = ", average)
###Output
sum = 210
Average = 10.5
###Markdown
Problem 2. (50 points) 1. Write a Python program to display your full name, student number, age, and course. 2. Create a class named Student with attributes: Name, Student_No, Age, School, and Course. 3. Create an object named Myself and assign an instance for each attribute. 4. Create a method Info() using an instantiation of a class. 5. Insert your GitHub link "Prelim Exam" from your repository named "OOP 1-1"
###Code
class OOP_1_1:
def __init__(self,fullname,student_no,age,course,school):
self.fullname = fullname
self.student_no = student_no
self.age = age
self.course = course
self.school = school
def info(self):
#print(self.fullname,self.student_no,self.age,self.course,self.school)
print("Name: ", self.fullname)
print("Student No. ", self.student_no)
print("Age: ", self.age,"years old")
print("School: ", self.school)
print("Course: ", self.course)
Myself = OOP_1_1("King John Adamz R. Paglinawan",202102061,19,"BSCPE/ BACHELOR OF SCIENCE IN COMPUTER ENGINEERING","Cavite State University (Main Campus)")
Myself.info()
###Output
Name: King John Adamz R. Paglinawan
Student No. 202102061
Age: 19 years old
School: Cavite State University (Main Campus)
Course: BSCPE/ BACHELOR OF SCIENCE IN COMPUTER ENGINEERING
###Markdown
###Code
class JohnsClass:
def __init__(self,fullname, student_no, age, course):
self.fullname = fullname
self.student_no = student_no
self.age = age
self.course = course
def display(self):
print("My name is"+"",self.fullname)
print("My Student Number is",self.student_no)
print("I am",self.age, "years old")
print("My course in college is",self.course)
student = JohnsClass("John Hendrick S. Daguio",202101672,19,"BSCpE")
student.display()
class Student:
def __init__(self,name, student_no, age, school, course):
self.name = name
self.student_no = student_no
self.age = age
self.school = school
self.course = course
def Info(self):
print("I am"+"",self.name,"My student number is",self.student_no,"I am"+"",self.age,"years old","I am currently studying in",self.school,"My college course is",self.course)
Myself = Student("John Hendrick S. Daguio",202101672,19,"CVSU","BSCpE")
Myself.Info()
###Output
I am John Hendrick S. Daguio My student number is 202101672 I am 19 years old I am currently studying in CVSU My college course is BSCpE
###Markdown
###Code
##import numpy and call it np
import numpy as np
###Output
_____no_output_____
###Markdown
**Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". **
###Code
## identity matrix using identity method, diagonal elements are all 1
## can also use np.eye(4), or np.diag on a vector of ones, to get a matrix with 1s on the diagonal
## np.identity creates a matrix that takes in a number n as the number of rows and columns and sets the diagonal to 1s.
C = np.identity(4)
print("Answer: \n")
print(C) ## print matrix C
###Output
Answer:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
**Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. **
###Code
## double all the values in matrix C by multiplying it to 2
D = C * 2
print("Answer: \n")
print(D)
###Output
Answer:
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
**Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8]. **
###Code
A = np.array([2, 7, 4])
B = np.array([3, 9, 8])
## use np.cross to get the cross product for matrix A and B
cross = np.cross(A,B)
print("Answer: \n")
print(cross)
###Output
Answer:
[20 -4 -3]
###Markdown
Prelim Exam Question 1
###Code
import numpy as np
c = np.eye(4)
print(c)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
import numpy as np
c = np.eye(4)
print(c)
print()
print(c*2)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
C = np.cross(A,B)
print(C)
###Output
[20 -4 -3]
###Markdown
PRELIM EXAM Question 1. Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C".
###Code
import numpy as np
C = np.eye(4)
print("Matrix C\n\n", C)
###Output
Matrix C
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. In relation to Question 1, show a solution that doubles all the values of each element.
###Code
import numpy as np
DoubleC = 2*C
print("Double value of matrix C\n\n", DoubleC)
###Output
Double value of matrix C
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8].
###Code
import numpy as np
A = np.array([2,7,4])
B = np.array([3,9,8])
CrossProd = np.cross(A,B)
print("Cross Product of Matrix A and Matrix B\n\n", CrossProd)
###Output
Cross Product of Matrix A and Matrix B
[20 -4 -3]
###Markdown
###Code
class Student:
def __init__ (self, name,student_number,age, school,course):
self.name = name
self.student_number= student_number
self.age= age
self.school= school
self.course=course
def myself(self):
print("My Name is", self.name, self.age, "years old.", "My Student Number is", self.student_number,".")
print("I'm taking", self.course, "at", self.school)
S = Student("Nicole Shaira A. Tabligan", 202150371,19, "Adamson University", "Bachelor of Science in Computer Engineering")
S.myself()
###Output
My Name is Nicole Shaira A. Tabligan 19 years old. My Student Number is 202150371 .
I'm taking Bachelor of Science in Computer Engineering at Adamson University
###Markdown
Prelim Exam Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
#declare the values needed for the diagonal of the matrix, which are all 1
Z= np.array([1,1,1,1])
C=np.diag(Z) #place the contents of Z on the main diagonal of a 4 x 4 matrix
print(C) #to display or print the matrix
###Output
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
###Markdown
Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
Z= np.array([1,1,1,1])
C=np.diag(Z)
#there are a lot of ways to double the value of each element.
#simply multiply the matrix C by 2 so every element is doubled.
print(C*2) #to display or print the output
###Output
[[2 0 0 0]
[0 2 0 0]
[0 0 2 0]
[0 0 0 2]]
###Markdown
Question 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and B = [3,9,8].
###Code
import numpy as np
A= np.array([2,7,4])
B= np.array([3,9,8])
#To compute the cross of A and B this will be used
CrossAB= np.cross(A,B)
print(CrossAB) #to display or print the output
###Output
[20 -4 -3]
###Markdown
- Write a Python program to display your full name, student number, age, and course. - Create a class named Student with attributes: Name, Student_No, Age, School, and Course. - Create an object named Myself and assign an instance for each attribute. - Create a method Self() using an instantiation of a class. - Insert your GitHub link "Prelim Exam" from your repository named "OOP 58001"
###Code
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def self(self):
return f'Name: {self.Name} \nStudent Number: {self.Student_No} \nAge: {self.Age} \nSchool: {self.School} \nCourse: {self.Course}'
Myself = Student ("Xienina F. Roldan", 202119564, 19, "Adamson University", "Bachelor of Science in Computer Engineering")
print (Myself.self())
###Output
Name: Xienina F. Roldan
Student Number: 202119564
Age: 19
School: Adamson University
Course: Bachelor of Science in Computer Engineering
###Markdown
Question 1
###Code
import numpy as np
C = np.eye(4)
print(C)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2
###Code
answer = 2*C
print(answer)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3
###Code
A =([2,7,4])
B = ([3,9,8])
output = np.cross(A,B)
print(output)
###Output
[20 -4 -3]
###Markdown
Prelim Exam
###Code
class Student:
def __init__(self,Name, Student_No, Age, School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print(self.Name)
print(self.Student_No)
print(self.Age)
print(self.School)
print(self.Course)
Myself = Student("Mark Adrian Balbaira Beranque",202101472,18,"Cavite State University Delas Alas Campus","Bachelor of Science in Computer Engineering")
Myself.Info()
###Output
Mark Adrian Balbaira Beranque
202101472
18
Cavite State University Delas Alas Campus
Bachelor of Science in Computer Engineering
###Markdown
Question 1. Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix "C". Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
#Matrix C
import numpy as np
f = np.eye(4)
print(f)
###Output
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
###Markdown
Question 2. In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
import numpy as np
f = np.eye(4) * 2
print(f)
###Output
[[2. 0. 0. 0.]
[0. 2. 0. 0.]
[0. 0. 2. 0.]
[0. 0. 0. 2.]]
###Markdown
Question 3. Find the cross-product of matrices, A = [2,7,4] andB = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.
###Code
A = np.array([2,7,4])
B = np.array([3,9,8])
cross = np.cross(A,B)
print(cross)
###Output
[20 -4 -3]
|
convokit/forecaster/CRAFT/demos/craft_demo_new.ipynb | ###Markdown
CRAFT demo (inference only) using ConvoKitThis example notebook shows how an already-trained CRAFT model can be applied to conversational data to predict future derailment. This example uses the fully trained Wikiconv-based model as reported in the "Trouble on the Horizon" paper, and applies it to ConvoKit's version of the labeled Wikiconv corpus.
###Code
import convokit
from convokit import Forecaster, Corpus, download
MAX_LENGTH = 80
from convokit.forecaster.CRAFTModel import CRAFTModel
craft_model = CRAFTModel(device_type="cpu", model_path="finetuned_model.tar")
forecaster = Forecaster(forecaster_model = craft_model,
forecast_mode = "future",
convo_structure="linear",
text_func = lambda utt: utt.meta["tokens"][:(MAX_LENGTH-1)],
label_func = lambda utt: int(utt.meta['comment_has_personal_attack']),
forecast_attribute_name="prediction", forecast_prob_attribute_name="pred_score",
use_last_only = True,
skip_broken_convos=False
)
corpus = Corpus(filename=download("conversations-gone-awry-corpus"))
###Output
Dataset already exists at /kitchen/convokit-corpora-jpc/conversations-gone-awry-corpus
###Markdown
Part 2: load the dataNow we load the labeled Wikiconv corpus from ConvoKit, and run some transformations to prepare it for use with PyTorch
###Code
from convokit.forecaster.CRAFT import craft_tokenize
for utt in corpus.iter_utterances():
utt.add_meta("tokens", craft_tokenize(craft_model.voc, utt.text))
forecaster.transform(corpus, selector=lambda convo: convo.meta["split"] == "train",
ignore_utterances=lambda utt: utt.meta["is_section_header"])
forecasts_df = forecaster.summarize(corpus)
forecasts_df.head(20)
###Output
_____no_output_____
###Markdown
CRAFT demo (inference only) using ConvoKitThis example notebook shows how an already-trained CRAFT model can be applied to conversational data to predict future derailment. This example uses the fully trained Wikiconv-based model as reported in the "Trouble on the Horizon" paper, and applies it to ConvoKit's version of the labeled Wikiconv corpus.
###Code
import convokit
from convokit import Forecaster, Corpus, download
MAX_LENGTH = 80
from convokit.forecaster.CRAFTModel import CRAFTModel
craft_model = CRAFTModel(device_type="cpu", model_path="finetuned_model.tar")
forecaster = Forecaster(forecaster_model = craft_model,
forecast_mode = "future",
convo_structure="linear",
text_func = lambda utt: utt.meta["tokens"][:(MAX_LENGTH-1)],
label_func = lambda utt: int(utt.meta['comment_has_personal_attack']),
forecast_feat_name="prediction", forecast_prob_feat_name="pred_score",
use_last_only = True,
skip_broken_convos=False
)
corpus = Corpus(filename=download("conversations-gone-awry-corpus"))
###Output
Dataset already exists at /Users/calebchiam/.convokit/downloads/conversations-gone-awry-corpus
###Markdown
Part 2: load the dataNow we load the labeled Wikiconv corpus from ConvoKit, and run some transformations to prepare it for use with PyTorch
###Code
from convokit.forecaster.CRAFT import craft_tokenize
for utt in corpus.iter_utterances():
utt.add_meta("tokens", craft_tokenize(craft_model.voc, utt.text))
forecaster.transform(corpus, selector=lambda convo: convo.meta["split"] == "train",
ignore_utterances=lambda utt: utt.meta["is_section_header"])
forecasts_df = forecaster.summarize(corpus)
forecasts_df.head(20)
###Output
_____no_output_____
###Markdown
CRAFT demo (inference only) using ConvoKitThis example notebook shows how an already-trained CRAFT model can be applied to conversational data to predict future derailment. This example uses the fully trained Wikiconv-based model as reported in the "Trouble on the Horizon" paper, and applies it to ConvoKit's version of the labeled Wikiconv corpus.
###Code
import os
os.chdir('../../../..')
import convokit
from convokit import Forecaster, Corpus, download
MAX_LENGTH = 80
craft_model = convokit.CRAFTModel(device_type="cpu", model_path="finetuned_model.tar")
forecaster = Forecaster(forecaster_model = craft_model,
forecast_mode = "future",
convo_structure="linear",
text_func = lambda utt: utt.meta["tokens"][:(MAX_LENGTH-1)],
utt_selector_func = lambda utt: not utt.meta["is_section_header"],
label_func = lambda utt: int(utt.meta['comment_has_personal_attack']),
convo_selector_func = (lambda convo: convo.meta["split"] == "train"),
forecast_feat_name="prediction", forecast_prob_feat_name="pred_score",
use_last_only = True,
skip_broken_convos=False
)
corpus = Corpus(filename=download("conversations-gone-awry-corpus"))
###Output
Dataset already exists at /Users/calebchiam/.convokit/downloads/conversations-gone-awry-corpus
###Markdown
Part 2: load the dataNow we load the labeled Wikiconv corpus from ConvoKit, and run some transformations to prepare it for use with PyTorch
###Code
from convokit import craft_tokenize
for utt in corpus.iter_utterances():
utt.add_meta("tokens", craft_tokenize(craft_model.voc, utt.text))
forecaster.transform(corpus)
forecasts_df = forecaster.summarize(corpus)
forecasts_df.head(20)
###Output
_____no_output_____ |
fenland_analysis.ipynb | ###Markdown
Fenland Analysis Script 1. Importing dependencies
###Code
import os
import tempfile
from glob import glob
import pandas as pd
import numpy as np
from collections import defaultdict
from hypnospy import Wearable, Diary
from hypnospy.data import RawProcessing
from hypnospy.analysis import NonWearingDetector, SleepBoudaryDetector, Validator, Viewer, PhysicalActivity,SleepMetrics
from hypnospy import Experiment
###Output
_____no_output_____
###Markdown
2. Setting up the experimentUsing Fenland-specific data pre-processing
###Code
def load_experiment(data_path, start_hour):
# Configure the Experiment
exp = Experiment()
# Iterates over a set of files in a directory.
for file in glob(data_path):
pp = RawProcessing(file,
# HR information
col_for_hr="mean_hr",
# Activity information
cols_for_activity=["stdMET_highIC_Branch"],
is_act_count=False,
device_location="dw",
# Datetime information
col_for_datetime="real_time",
strftime="%d-%m-%Y %H:%M:%S",#'2012-09-03 10:55:00'
# Participant information
col_for_pid="id")
#pp.data["hyp_act_x"] = (pp.data["hyp_act_x"]/0.0060321) + 0.057 # adjust for Fenland
w = Wearable(pp) # Creates a wearable from a pp object
exp.add_wearable(w)
# Set frequency for every wearable in the collection
exp.set_freq_in_secs(60)
# Changing the hour the experiment starts from midnight (0) to 3pm (15)
exp.change_start_hour_for_experiment_day(start_hour)
return exp
###Output
_____no_output_____
###Markdown
3. Defining the data path, hyperparameters and cutoffs
###Code
# Path to find the wearables data
data_path = "./data/small_collection_fenland_full/*.csv"
# Parameters for the HypnosPy HR-based sleep algorithm
hr_quantile = 0.40
hr_min_window_length = 35
hr_merge_blocks = 180
hr_volarity = 6
#Time to consider as start and end of each experiment day - if equal the sleep labelling occurs
#over the entire 24 hours
start_hour = 20
end_hour = 20
#Giving the experiment a number
exp_id = 0
#Set the PA cutoffs - in METs, with names being the binary columns created to label each epoch
cutoffs=[1.5,3,6]
names=['Sed','LPA','MPA','VPA']
###Output
_____no_output_____
###Markdown
4. Running the experiment1. Loading2. Validating3. Sleep Labelling4. Physical Activity LabellingTo Do:- get sleep metrics (SE, awakenings, SRI from SleepMetrics)
###Code
exp = load_experiment(data_path, start_hour)
exp.fill_no_activity(-0.0001)
va = Validator(exp)
# Flag times with less activity than set threshold, or non-wearing periods
va.flag_epoch_physical_activity_less_than(min_activity_threshold=0)
va.flag_epoch_null_cols(col_list=["hyp_act_x"])
va.flag_day_max_nonwearing(max_non_wear_minutes_per_day=60)
va.flag_day_if_invalid_epochs_larger_than(max_invalid_minutes_per_day=60)
# Accounting for removed days and subjects (referred to as wearables)
n_removed_days = va.remove_flagged_days()
print("Removed %d days (non wearing)." % n_removed_days)
n_users = va.remove_wearables_without_valid_days()
print("Removed %d wearables." % n_users)
sbd = SleepBoudaryDetector(exp)
sbd.detect_sleep_boundaries(strategy="hr", output_col="hyp_sleep_period_hr", hr_quantile=hr_quantile,
hr_volarity_threshold=hr_volarity, hr_rolling_win_in_minutes=5,
hr_sleep_search_window=(start_hour, end_hour),
hr_min_window_length_in_minutes=hr_min_window_length,
hr_volatility_window_in_minutes=10, hr_merge_blocks_gap_time_in_min=hr_merge_blocks,
hr_sleep_only_in_sleep_search_window=True, hr_only_largest_sleep_period=True)
cutoffs=[1.5,3,6]
names=['Sed','LPA','MPA','VPA']
pa = PhysicalActivity(exp)
pa.set_cutoffs(cutoffs=cutoffs,names=names)
pa.generate_pa_columns(based_on='hyp_act_x')
###Output
Removed 0 days (non wearing).
Removed 0 wearables.
###Markdown
5. Population Analysis1. Creates dict with all data2. Extracts statistics from pop dict into pop_df dataframe To Do:- put sleep metrics into the population analysis- bin subjects by TST according to analysis plan (below)- creates tables and graphs from pop_df Analysis Plan: 1. Subjects who are more physically active have higher TST, higher SE, higher SRI and lower WASO i. Physical activity binned into: 1) 0-300, 300-600, 600-900, 900+ METmins per week (multiply daily average by 7) OR 2) 0-100, 100-200, 200-300, 300+ MVPA per week (multiply daily average by 7) ii. Then average all the sleep metrics over these bins and test for statistically significant differences iii. Would produce 2 tables: METmins vs sleep metrics & MVPA vs sleep metrics 2. Subjects with higher sleep quality are healthier i. Sleep metrics: 1) TST binned into hourly intervals (eg. those sleeping <5, 5-6,6-7,7-8,8+ hours/night on average) 2) SRI binned into quartiles ii. Then average the METmins per week for these bins, BMI and also OR for having a cardiovascular disease iii. Would produce 2 tables: TST vs PA, BMI, disease status & SRI vs PA, BMI, disease status
###Code
pop = defaultdict()
for w in exp.wearables:
pop[w] = {}
pop[w]['tst'] = exp.wearables[w].get_total_sleep_time_per_day(sleep_col="hyp_sleep_period_hr")
pop[w]['onset'] = exp.wearables[w].get_onset_sleep_time_per_day(sleep_col="hyp_sleep_period_hr")
pop[w]['offset'] = exp.wearables[w].get_offset_sleep_time_per_day(sleep_col="hyp_sleep_period_hr")
pop[w]['height'] = exp.wearables[w].data['height'][0]
pop[w]['weight'] = exp.wearables[w].data['weight'][0]
pop[w]['BMI'] = pop[w]['weight'] / (pop[w]['height']**2)
pop[w]['sex'] = exp.wearables[w].data['sex'][0]
pop[w]['age'] = exp.wearables[w].data['age'][0]
pop[w]['Sed'] = exp.wearables[w].data.groupby(exp.wearables[w].get_experiment_day_col())['Sed'].sum()
pop[w]['LPA'] = exp.wearables[w].data.groupby(exp.wearables[w].get_experiment_day_col())['LPA'].sum()
pop[w]['MPA'] = exp.wearables[w].data.groupby(exp.wearables[w].get_experiment_day_col())['MPA'].sum()
pop[w]['VPA'] = exp.wearables[w].data.groupby(exp.wearables[w].get_experiment_day_col())['VPA'].sum()
pop[w]['METmins_MPA'] = exp.wearables[w].data[exp.wearables[w].data['MPA']]['hyp_act_x'].sum()
pop[w]['METmins_VPA'] = exp.wearables[w].data[exp.wearables[w].data['VPA']]['hyp_act_x'].sum()
pop[w]['METmins_total'] = pop[w]['METmins_MPA'] + pop[w]['METmins_VPA']
#Exclude exp_days with <150 mins of sleep
pop[w]['tst_mean'] = pop[w]['tst'][pop[w]['tst']['hyp_sleep_period_hr']>150].mean()[0]
pop[w]['tst_std'] = pop[w]['tst'][pop[w]['tst']['hyp_sleep_period_hr']>150].std()[0]
pop[w]['LPA_daily'] = pop[w]['LPA'].mean()
pop[w]['MPA_weekly'] = pop[w]['MPA'].mean()*7
pop[w]['VPA_weekly'] = pop[w]['VPA'].mean()*7
pop[w]['MVPA_weekly'] = (pop[w]['MPA'].mean() + pop[w]['VPA'].mean())*7
pop[w]['METmins_weekly'] = pop[w]['METmins_total'].mean()*7
#print(pop['dummy5'].items())
df_cols = ['sex','BMI','age','tst_mean','tst_std',
'LPA_daily','MPA_weekly','VPA_weekly','MVPA_weekly','METmins_weekly']
pop_df = pd.DataFrame(columns=df_cols)
for w in exp.wearables:
for col in df_cols:
pop_df.loc[w,col] = pop[w][col]
print(pop_df)
###Output
sex BMI age tst_mean tst_std LPA_daily MPA_weekly \
dummy1 0 18.116276 48 395.5 181.116261 58.428571 34.0
dummy2 1 26.023427 29 435.25 164.817829 74.4 30.8
dummy3 0 25.059307 31 544.166667 112.762435 87.0 94.5
dummy4 1 26.023427 29 435.25 164.817829 74.4 30.8
dummy5 1 26.97404 52 508.333333 125.463408 117.285714 191.0
VPA_weekly MVPA_weekly METmins_weekly
dummy1 0.0 34.0 845.068672
dummy2 0.0 30.8 517.243873
dummy3 2.625 97.125 3510.096365
dummy4 0.0 30.8 517.243873
dummy5 4.0 195.0 5363.642738
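###Markdown
A possible next step for the To-Do list above (a sketch, not part of the original analysis): bin subjects by weekly MET-minutes and by average TST with `pd.cut`, using the bin edges named in the analysis plan. It assumes the `pop_df` built in the previous cell; the column names used are the ones already created there, while the new bin column names (`METmins_bin`, `TST_bin`) are illustrative.
###Code
#sketch of the binning step described in the analysis plan (assumes pop_df from the previous cell)
num_cols = ["tst_mean", "METmins_weekly"]
pop_df[num_cols] = pop_df[num_cols].astype(float)
#METmins per week binned into 0-300, 300-600, 600-900, 900+
pop_df["METmins_bin"] = pd.cut(pop_df["METmins_weekly"],
                               bins=[0, 300, 600, 900, float("inf")],
                               labels=["0-300", "300-600", "600-900", "900+"])
#average TST (in minutes) binned into <5, 5-6, 6-7, 7-8, 8+ hours per night
pop_df["TST_bin"] = pd.cut(pop_df["tst_mean"] / 60,
                           bins=[0, 5, 6, 7, 8, float("inf")],
                           labels=["<5h", "5-6h", "6-7h", "7-8h", "8h+"])
#mean sleep duration per physical-activity bin (first row of the planned tables)
print(pop_df.groupby("METmins_bin")["tst_mean"].mean())
###Output
_____no_output_____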
|
labs/lab_03.ipynb | ###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
df.shape[1]
###Output
_____no_output_____
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
df.columns
###Output
_____no_output_____
###Markdown
4.- Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df.occupation
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
len(df.occupation.unique())
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df["occupation"].value_counts().idxmax()
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
df["age"].mean()
###Output
_____no_output_____
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
df["age"].value_counts().idxmin()
df["age"].min()
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
a = df.groupby(["occupation"]).age.mean()
print(a.values)  # solo los valores como arreglo, sin necesidad de importar numpy
a
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df=df.dropna()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
df.groupby(["company"]).price.mean(["price"]).idxmax()
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
df.where(df["company"] =="toyota").dropna()
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
df.groupby(["company"]).company.count()
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
df.groupby(["company"]).price.idxmax()
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
df.groupby(["company"])["average-mileage"].mean()
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df.price.sort_values()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1=pd.DataFrame(GermanCars)
carsDf2=pd.DataFrame(japaneseCars)
carsDf1
carsDf2
pd.concat([carsDf1,carsDf2], keys=["Germany","Japan"])
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1=pd.DataFrame(Car_Price)
carsDf2=pd.DataFrame(car_Horsepower)
carsDf1.set_index("Company").join(carsDf2.set_index("Company"))
#o alternativamente:
#carsDf1.join(carsDf2.set_index("Company"), on="Company")
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Objetivos de la clase* Reforzar los conceptos básicos de pandas. Contenidos* [Problema 01](p1) Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información tal como: edad ,sexo, profesión, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|").set_index('user_id')
df.head()
# se eliminan los valores nulos del dataframe
df = df[lambda df: df.notnull().all(axis=1)]
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas: 1. ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
df.shape
###Output
_____no_output_____
###Markdown
Por lo tanto el número de observaciones es 943 2. ¿Cuál es el número de columnas en el conjunto de datos?
###Code
number = len(df.columns)
print("el número de columnas en el conjunto de datos es:",number)
###Output
el número de columnas en el conjunto de datos es: 4
###Markdown
3. Imprime el nombre de todas las columnas
###Code
for i in range(len(df.columns)):
print(df.columns[i])
###Output
age
gender
occupation
zip_code
###Markdown
4. Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5. ¿Cuál es el tipo de datos de cada columna?
###Code
print("Tipo de dato por columna:")
df.dtypes
###Output
Tipo de dato por columna:
###Markdown
6. Resumir el conjunto de datos
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7. Resume conjunto de datos con todas las columnas
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
8. Imprimir solo la columna de **occupation**.
###Code
df['occupation']
###Output
_____no_output_____
###Markdown
9. ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
ocupaciones_diferentes = df['occupation'].unique()
ocupaciones_totales = list(ocupaciones_diferentes)
n = len(ocupaciones_totales)
print("el número de ocupaciones diferentes es:" ,n)
###Output
el número de ocupaciones diferentes es: 21
###Markdown
10. ¿Cuál es la ocupación más frecuente?
###Code
diccionario={}
for ocupacion in ocupaciones_diferentes:
diccionario[ocupacion]=0
for ocupacion1 in diccionario.keys():
for ocupacion2 in df['occupation']:
if ocupacion1==ocupacion2:
diccionario[ocupacion1]+=1
dataframe = pd.DataFrame({
"ocupacion":diccionario.keys(),
"ocurrencia":diccionario.values()
})
dataframe
maxima_frecuencia=dataframe['ocurrencia'].max()
for i in diccionario.keys():
if diccionario[i]==maxima_frecuencia:
print("La ocupación con la mayor ocurrencia es:",i)
###Output
La ocupación con la mayor ocurrencia es: student
###Markdown
11. ¿Cuál es la edad media de los usuarios?
###Code
edades = df['age']
print("La edad media de los usuarios es:",edades.mean())
###Output
La edad media de los usuarios es: 34.05196182396607
###Markdown
12. ¿Cuál es la edad con menos ocurrencia?
###Code
diccionario2 = {}
edades_diferentes = df['age'].unique()
for edad in edades_diferentes:
diccionario2[edad]=0
for edad1 in diccionario2.keys():
for edad2 in df['age']:
if edad1==edad2:
diccionario2[edad1]+=1
dataframe2 = pd.DataFrame({
"edad":diccionario2.keys(),
"ocurrencia":diccionario2.values()
})
print(dataframe2)
minima_ocurrencia = dataframe2['ocurrencia'].min()
lista_de_edades=[]
for i in diccionario2.keys():
if diccionario2[i]==minima_ocurrencia:
lista_de_edades.append(i)
print("Las edades con la menor ocurrencia son:")
for i in lista_de_edades:
print(i)
###Output
edad ocurrencia
0 24 33
1 53 12
2 23 28
3 33 26
4 42 21
.. ... ...
56 10 1
57 73 1
58 58 3
59 69 2
60 70 3
[61 rows x 2 columns]
Las edades con la menor ocurrencia son:
7
66
11
10
73
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
print("Numero de observaciones:", df.shape[0])
###Output
Numero de observaciones: 943
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
print("Numero de columnas:", df.shape[1])
###Output
Numero de columnas: 5
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
print("las columnas son:")
for col in df.columns:
print(col)
###Output
las columnas son:
user_id
age
gender
occupation
zip_code
###Markdown
4.- Imprima el índice del dataframe
###Code
print(df.index)
###Output
RangeIndex(start=0, stop=943, step=1)
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
print(df["occupation"])
###Output
0 technician
1 other
2 writer
3 technician
4 other
...
938 student
939 administrator
940 student
941 librarian
942 student
Name: occupation, Length: 943, dtype: object
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
print("hay un total de", len(df["occupation"].unique()), "ocupaciones")
###Output
hay un total de 21 ocupaciones
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
print("La ocupación más frecuente es:", df["occupation"].value_counts().idxmax())
print("con una frecuencia de:", df["occupation"].value_counts().max())
###Output
con una frecuencia de: 196
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
print("La edad promedio es", df["age"].mean())
print("redondeada:", round(df["age"].mean()))
###Output
redondeada: 34
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
print("la edad con menos ocurrencia es:", df["age"].value_counts().idxmin())
df.groupby(["occupation"])["age"].mean()
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation** Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df=df.dropna()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
df.groupby(["company"]).mean()["price"].idxmax()
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
df.where(df["company"]=="toyota").dropna()
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
df.groupby(["company"])["company"].count()
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
df.groupby(["company"])["price"].idxmax()
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
df.groupby(["company"])["average-mileage"].mean()
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df["price"].sort_values()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1=pd.DataFrame(GermanCars)
carsDf2=pd.DataFrame(japaneseCars)
carsDf=pd.concat([carsDf1,carsDf2],keys=["Germany","Japan"])
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1=pd.DataFrame(Car_Price)
carsDf2=pd.DataFrame(car_Horsepower)
carsDf=carsDf1.join(carsDf2.set_index("Company"),on="Company")
carsDf
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
df.shape[1]
###Output
_____no_output_____
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
df.columns
###Output
_____no_output_____
###Markdown
4.- Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df.occupation
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
len(df.occupation.unique())
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df["occupation"].value_counts().idxmax()
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
df["age"].mean()
###Output
_____no_output_____
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
df["age"].value_counts().idxmin()
df["age"].min()
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
df.groupby(["occupation"]).age.mean()
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df=df.dropna()
df
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
df.groupby(["company"]).price.mean(["price"]).idxmax()
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
df.where(df["company"]== "toyota").dropna()
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
df.groupby(["company"]).company.count()
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
df.groupby(["company"]).price.idxmax()
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
df.groupby(["company"])["average-mileage"].mean()
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df.price.sort_values()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1=pd.DataFrame(GermanCars)
carsDf2=pd.DataFrame(japaneseCars)
pd.concat([carsDf1,carsDf2],keys=["Germany","Japan"])
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1=pd.DataFrame(Car_Price)
carsDf2=pd.DataFrame(car_Horsepower)
carsDf1.join(carsDf2.set_index('Company'), on='Company')
#carsDf1.set_index('Company').join(carsDf2.set_index('Company'))
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import numpy as np
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
print ("El número de observaciones es:")
df.shape
###Output
El número de observaciones es:
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
print("El número de columnas es:")
df.shape[1]
###Output
El número de columnas es:
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
print("Columnas:")
df.columns
###Output
Columnas:
###Markdown
4.- Imprima el índice del dataframe
###Code
print("Índice:")
df.index
###Output
Índice:
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
print("Tipos de datos de cada columna:")
df.dtypes
###Output
Tipos de datos de cada columna:
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df['occupation']
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
len(df['occupation'].unique())
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df['occupation'].describe()
#Corresponde a top: student
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
df['age'].mean()
###Output
_____no_output_____
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
print(f"Conteo:\n{df['age'].value_counts()}")
#Corresponde a las ultimas edades con frecuencia 1
###Output
Conteo:
30 39
25 38
22 37
28 36
27 35
..
7 1
66 1
11 1
10 1
73 1
Name: age, Length: 61, dtype: int64
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
grupo = df.groupby(['occupation'])
df_leng = grupo.agg({'age':[np.mean]}).reset_index()
df_leng
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
mask = lambda df: df.notnull().all(axis=1)
df = df[mask]
df
# 2. Encuentra el nombre de la compañía de automóviles más cara
grupo = df.groupby(['company'])
df_leng = grupo.agg({'price':[np.mean]}).reset_index()
df_leng
#porsche tiene el precio medio mas alto
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
condition = df[(df['company'] == "toyota")]
condition
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
grupo = df.groupby('company')
grupo['length'].count().reset_index()
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
grupo = df.groupby(['company'])
df_leng = grupo.agg({'price':[np.max]}).reset_index()
df_leng
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
grupo = df.groupby(['company'])
df_leng = grupo.agg({'average-mileage':[np.mean]}).reset_index()
df_leng
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df.sort_values(by=['price'])
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1 = pd.DataFrame(
{'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]} )
carsDf2 = pd.DataFrame(
{'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]} )
carsDf = pd.concat([carsDf1,carsDf2])
carsDf['Made in']=['Germany','Germany','Germany','Germany','Japan','Japan','Japan','Japan']
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1 = pd.DataFrame(
{'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400] } )
carsDf2 = pd.DataFrame(
{'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160] } )
carsDf = pd.merge(carsDf1,carsDf2, on='Company')
carsDf
#pd.merge(left, right, on='key')
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
f,c=df.shape
print(f'Se tienen {f} observaciones.')
###Output
Se tienen 943 observaciones.
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
f,c=df.shape
print(f' Se tienen {c} columnas.')
###Output
Se tienen 5 columnas.
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
nombres=df.columns
print(f'Los nombres de las columnas son {nombres}.')
###Output
Los nombres de las columnas son Index(['user_id', 'age', 'gender', 'occupation', 'zip_code'], dtype='object').
###Markdown
4.- Imprima el índice del dataframe
###Code
indice=df.index
print(f'el índice del data frame es {indice}.')
###Output
el índice del data frame es RangeIndex(start=0, stop=943, step=1).
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
tp=df.dtypes
print(f'Los tipos de datos de cada columna son:')
print(f'{tp}')
###Output
Los tipos de datos de cada columna son:
user_id int64
age int64
gender object
occupation object
zip_code object
dtype: object
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
print("La descripcion del conjunto de datos es:")
df.describe(include='all')
###Output
La descripcion del conjunto de datos es:
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
print(f'La columna de occupation es:')
df.iloc[:,3]
###Output
La columna de occupation es:
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
diferentes=df['occupation'].nunique()
print(f'Hay un total de {diferentes} ocupaciones diferentes')
###Output
Hay un total de 21 ocupaciones diferentes
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
frecuente=df['occupation'].value_counts().idxmax()
print(f'La ocupación mas frecuente es {frecuente}')
###Output
La ocupación mas frecuente es student
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
edad=df['age'].mean()
print(f'La edad media de los usuarios es {edad}, pero como la edad es un numero entero se tiene que la media es {int(edad)}.')
###Output
La edad media de los usuarios es 34.05196182396607, pero como la edad es un numero entero se tiene que la media es 34.
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
menos=df['age'].value_counts().idxmin()
print(f'La edad con menos ocurrencia es {menos}')
###Output
La edad con menos ocurrencia es 7
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
df_2=df.groupby("occupation")
print("Se puede observar la edad promedio de cada ocupación en la columna mean.")
df_2.describe()["age"]
###Output
Se puede observar la edad promedio de cada ocupación en la columna mean.
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- Elimine los valores nulos (Nan)
###Code
mask = lambda df: df.notnull().all(axis=1)
df = df[mask]
print("La tabla sin los valores nulos es:")
df.head()
###Output
La tabla sin los valores nulos es:
###Markdown
2.- Encuentra el nombre de la compañía de automóviles más cara
###Code
print(f'El nombre de la compañia de automóviles mas cara es:')
df[['company']][df.price == df['price'].max()]
###Output
El nombre de la compañia de automóviles mas cara es:
###Markdown
3.- Imprimir todos los detalles de Toyota Cars
###Code
print("Los detalles de Toyota Cars son:")
df.loc[df['company'] == "toyota"].describe()
###Output
Los detalles de Toyota Cars son:
###Markdown
4.- Cuente el total de automóviles por compañía
###Code
print("Se eligio la columna de price por conveniencia, cualquier columna sirve pues en todas son los mismos números")
df.groupby('company').count()['price']
###Output
Se eligio la columna de price por conveniencia, cualquier columna sirve pues en todas son los mismos números
###Markdown
5.- Encuentra el coche con el precio más alto por compañía
###Code
print("Los costos de mas altos por compañia son:")
df.groupby("company")["price"].max()
###Output
Los costos de mas altos por compañia son:
###Markdown
6.- Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
df_2=df.groupby("company")
print("Se puede observar el kilometraje promedio de cada compañia en la columna mean.")
df_2.describe()["average-mileage"]
###Output
Se puede observar el kilometraje promedio de cada compañia en la columna mean.
###Markdown
7.- Ordenar todos los autos por columna de precio (**price**)
###Code
print("Se puede observar la tabla ordenada por precio de menor a mayor:")
df.sort_values(by="price")
###Output
Se puede observar la tabla ordenada por precio de menor a mayor:
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1 = pd.DataFrame(
{
'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]
}
)
carsDf2 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]
}
)
carsDf=pd.concat([carsDf1, carsDf2],keys=["Germany", "Japan"])
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'BMV', 'Audi'],
'Price': [23845, 17995, 135925 , 71400]
}
)
carsDf2 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'BMV', 'Audi'],
'horsepower': [141, 80, 182 , 160]
}
)
carsDf=pd.merge(carsDf1, carsDf2, on='Company')
carsDf
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import numpy as np
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
df.shape[1]
###Output
_____no_output_____
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
print(list(df.columns))
###Output
_____no_output_____
###Markdown
4.- Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df['occupation'].head()
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
len(df['occupation'].unique())
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
frec_oc = df['occupation'].value_counts()
freq_max = frec_oc.max()
frec_oc[frec_oc == freq_max]
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
df['age'].mean()
###Output
_____no_output_____
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
frec_edad = df['age'].value_counts()
freq_min = frec_edad.min()
frec_edad[frec_edad == freq_min]
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
group = df.groupby('occupation')
df_leng = group.agg({'age':[np.mean]}).reset_index()
df_leng
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df = df.fillna(0)  # asignar el resultado: fillna no modifica df en el lugar
df.head()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
df.columns
grupo = df.groupby('company')
#no se tomarán en cuenta automóviles no disponibles
df_aux = df[df['price']!=0]
grupo2 = df_aux.groupby('company')
df_precios2 = grupo2.agg({'price':[np.mean]}).reset_index()
print('\nCompañía con autos de mayor precio promedio: ')
df_precios2.columns
mean_max = df_precios2['price', 'mean'].max()
df_precios2[df_precios2['price','mean'] == mean_max]
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
toyota = df.groupby('company').filter(lambda x: (x['company'] == 'toyota').any())
toyota.reset_index()
toyota.describe()
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
def funcion(x):
"""
Cuenta autos en el dataframe y retorna marca/numero de autos
en una serie pandas
"""
names = {
'total_vehiculos':x['company'].count()}
return pd.Series(names, index=['total_vehiculos'])
group = df.groupby('company')
group.apply(funcion).reset_index()
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
def funcion2(x):
"""
Obtiene el auto mas caro por compañia y retorna marca/mayor precio
"""
names = {
'mayor_precio':x['price'].max()
}
return pd.Series(names, index=['mayor_precio'])
group = df.groupby('company')
group.apply(funcion2).reset_index()
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
def funcion3(x):
"""
Obtiene el average-mileage promedio por compañia y retorna
compañia/kilometraje promedio
"""
names = {
'average_mileage':x['average-mileage'].mean()}
return pd.Series(names, index=['average_mileage'])
group = df.groupby('company')
group.apply(funcion3).reset_index()
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df.sort_values(by=['price'],ascending = False).reset_index()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1 = pd.DataFrame(GermanCars)
carsDf1
carsDf2 = pd.DataFrame(japaneseCars)
carsDf2
carsDf = pd.concat([carsDf1,carsDf2], keys=['Germany','Japan'], axis= 0, sort=False)
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1 = pd.DataFrame(Car_Price)
carsDf1
carsDf2 = pd.DataFrame(car_Horsepower)
carsDf2
carsDf = pd.merge(carsDf1,carsDf2, on = 'Company', sort = False)
carsDf
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|").set_index('user_id')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
print("el numero de observaciones es:\n", df.shape[0])
###Output
el numero de observaciones es:
943
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
print("el numero de columnas es:\n", df.shape[1])
###Output
el numero de columnas es:
4
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
for i in range(df.shape[1]):
print(df.columns[i])
###Output
age
gender
occupation
zip_code
###Markdown
4.- Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df["occupation"]
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
ocupaciones=df["occupation"].unique()
print(ocupaciones.shape[0])
###Output
21
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df.describe(include="all")["occupation"]["top"]
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
print(df["age"].mean())
###Output
34.05196182396607
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
age_l=df["age"].unique()
f=pd.Series()
for i in age_l:
c=0
for j in df["age"]:
if i==j:
c+=1
f.loc[i]=c
fmin=f.min()
mask=f==fmin
#print(f[mask])
print ("las edades con menos frecuencia son:")
for i in f[mask].index:
print (i)
###Output
las edades con menos frecuencia son:
7
66
11
10
73
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
o=df["occupation"].unique()
for i in o:
cant=0
suma=0
k=1
for j in df["occupation"]:
if i==j:
suma+=df["age"][k]
cant+=1
k+=1
prom=suma/cant
print("La edad promedio de la ocupacion", i, "es",prom )
###Output
La edad promedio de la ocupacion technician es 33.148148148148145
La edad promedio de la ocupacion other es 34.523809523809526
La edad promedio de la ocupacion writer es 36.31111111111111
La edad promedio de la ocupacion executive es 38.71875
La edad promedio de la ocupacion administrator es 38.74683544303797
La edad promedio de la ocupacion student es 22.081632653061224
La edad promedio de la ocupacion lawyer es 36.75
La edad promedio de la ocupacion educator es 42.01052631578948
La edad promedio de la ocupacion scientist es 35.54838709677419
La edad promedio de la ocupacion entertainment es 29.22222222222222
La edad promedio de la ocupacion programmer es 33.121212121212125
La edad promedio de la ocupacion librarian es 40.0
La edad promedio de la ocupacion homemaker es 32.57142857142857
La edad promedio de la ocupacion artist es 31.392857142857142
La edad promedio de la ocupacion engineer es 36.38805970149254
La edad promedio de la ocupacion marketing es 37.61538461538461
La edad promedio de la ocupacion none es 26.555555555555557
La edad promedio de la ocupacion healthcare es 41.5625
La edad promedio de la ocupacion retired es 63.07142857142857
La edad promedio de la ocupacion salesman es 35.666666666666664
La edad promedio de la ocupacion doctor es 43.57142857142857
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df=df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
#si consideramos la compañia mas cara como la que vende el auto mas caro
Max=df.describe(include="all")["price"]["max"]
for idx, i in df["company"].items():
    if df["price"][idx]==Max:
        print("La compañia mas cara es:",i)
#si consideraremos la compañia mas cara aquella que en promedio vende el auto mas caro
o=df["company"].unique()
Max=0
M=0
for i in o:
cant=0
suma=0
    for k, j in df["company"].items():
        if i==j:
            suma+=df["price"][k]
            cant+=1
prom=suma/cant
if prom>=Max:
Max=prom
M=i
print("La compañia mas cara es", M)
###Output
_____no_output_____
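###Markdown
Both interpretations above also have short vectorised equivalents; a sketch assuming the cleaned `df` and the column names of this dataset:
###Code
# company selling the single most expensive car
print(df.loc[df["price"].idxmax(), "company"])
# company with the highest average price
print(df.groupby("company")["price"].mean().idxmax())
###Output
_____no_output_____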
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
print(df[df["company"]=="toyota"])
###Output
company body-style wheel-base length engine-type num-of-cylinders \
index
66 toyota hatchback 95.7 158.7 ohc four
67 toyota hatchback 95.7 158.7 ohc four
68 toyota hatchback 95.7 158.7 ohc four
69 toyota wagon 95.7 169.7 ohc four
70 toyota wagon 95.7 169.7 ohc four
71 toyota wagon 95.7 169.7 ohc four
79 toyota wagon 104.5 187.8 dohc six
horsepower average-mileage price
index
66 62 35 5348.0
67 62 31 6338.0
68 62 31 6488.0
69 62 31 6918.0
70 62 27 7898.0
71 62 27 8778.0
79 156 19 15750.0
###Markdown
4. Cuente el total de automóviles por compañía
###Code
c=df["company"].unique()
for i in c:
cont=0
for j in df["company"]:
if i==j:
cont+=1
print("el numero de automoviles en la compañia", i, "es", cont)
###Output
el numero de automoviles en la compañia alfa-romero es 3
el numero de automoviles en la compañia audi es 4
el numero de automoviles en la compañia bmw es 6
el numero de automoviles en la compañia chevrolet es 3
el numero de automoviles en la compañia dodge es 2
el numero de automoviles en la compañia honda es 3
el numero de automoviles en la compañia isuzu es 3
el numero de automoviles en la compañia jaguar es 3
el numero de automoviles en la compañia mazda es 5
el numero de automoviles en la compañia mercedes-benz es 4
el numero de automoviles en la compañia mitsubishi es 4
el numero de automoviles en la compañia nissan es 5
el numero de automoviles en la compañia porsche es 3
el numero de automoviles en la compañia toyota es 7
el numero de automoviles en la compañia volkswagen es 4
el numero de automoviles en la compañia volvo es 2
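###Markdown
An equivalent one-liner for the counting loop above, assuming the same `df`:
###Code
df["company"].value_counts()
###Output
_____no_output_____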
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
c=df["company"].unique()
for i in c:
    p=0
    for k, j in df["company"].items():
        if i==j:
            if df["price"][k]>=p:
                p=df["price"][k]
print("el precio mas alto de la compañia", i, "es", p)
###Output
el precio mas alto de la compañia alfa-romero es 13950.0
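###Markdown
The per-company maximum can also be read off directly with a groupby; a sketch assuming the same `df`:
###Code
df.groupby("company")["price"].max()
###Output
_____no_output_____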
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
c=df["company"].unique()
for i in c:
cont=0
suma=0
    for k, j in df["company"].items():
        if i==j:
            cont+=1
            suma+=df["average-mileage"][k]
prom=suma/cont
print("el kilometraje promedio de la compañia", i, "es", prom)
###Output
el kilometraje promedio de la compañia alfa-romero es 21.333333333333332
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df=df.sort_values(by=["price"])
df.head()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1 = pd.DataFrame(GermanCars)
carsDf2 = pd.DataFrame(japaneseCars)
carsDf = pd.concat([carsDf1, carsDf2], keys=["Germany", "Japan"])
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1 = pd.DataFrame(Car_Price)
carsDf2 = pd.DataFrame(car_Horsepower)
carsDf = pd.merge(carsDf1, carsDf2, on="Company")
carsDf
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
len(df.index)
###Output
_____no_output_____
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
len(df.columns)
###Output
_____no_output_____
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
df.columns
###Output
_____no_output_____
###Markdown
4.- Imprima el índice del dataframe
###Code
print(df.index)
###Output
RangeIndex(start=0, stop=943, step=1)
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df["occupation"]
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
len(set(df['occupation']))
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df_agrupado=df.groupby('occupation')
argumento=df_agrupado.count()['user_id']
argumento[argumento==argumento.max()]
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
df['age'].mean()
###Output
_____no_output_____
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
df['age'].value_counts().idxmin()
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
df.groupby('occupation').mean()['age']
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
df=df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
df_mas_caro=pd.DataFrame(df[df['price']==df['price'].max()][['company','price']])
df_mas_caro
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
df.loc[df.loc[:,'company']=='toyota']
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
df.groupby('company').size().reset_index(name='cantidad_autos')
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
df.groupby('company').max()
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
df_promedios=pd.DataFrame(df.groupby('company')['average-mileage'].mean())
df_promedios
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
df.sort_values(by=['price'],ascending=False)
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1=pd.DataFrame(GermanCars)
carsDf1
carsDf2=pd.DataFrame(japaneseCars)
carsDf2
pd.concat([carsDf1,carsDf2],keys=['Germany','Japan'])
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1=pd.DataFrame(Car_Price)
carsDf1
carsDf2=pd.DataFrame(car_Horsepower)
carsDf2
result = pd.merge(carsDf1, carsDf2, on='Company')
result
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Objetivos de la clase* Reforzar los conceptos básicos de pandas. Contenidos* [Problema 01](p1) Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información tal como: edad ,sexo, profesión, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|").set_index('user_id')
df.head()
df.dropna(inplace = True)
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas: 1. ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
print("El numero de observaciones es:")
df.shape[0]
###Output
El numero de observaciones es:
###Markdown
2. ¿Cuál es el número de columnas en el conjunto de datos?
###Code
print("El numero de columnas es:")
df.shape[1]
###Output
El numero de columnas es:
###Markdown
3. Imprime el nombre de todas las columnas
###Code
print("Las columnas son:")
df.columns
###Output
Las columnas son:
###Markdown
4. Imprima el índice del dataframe
###Code
print("Indice:")
df.index
###Output
Indice:
###Markdown
5. ¿Cuál es el tipo de datos de cada columna?
###Code
print("Tipo de dato por columna:")
df.dtypes
###Output
Tipo de dato por columna:
###Markdown
6. Resumir el conjunto de datos
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7. Resume conjunto de datos con todas las columnas
###Code
df.describe(include="all")
###Output
_____no_output_____
###Markdown
8. Imprimir solo la columna de **occupation**.
###Code
df["occupation"].head()
###Output
_____no_output_____
###Markdown
9. ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
occupations_uniq = df["occupation"].unique()
print("Hay",len(occupations_uniq) ,"ocupaciones diferentes en el conjunto de datos")
###Output
Hay 21 ocupaciones diferentes en el conjunto de datos
###Markdown
10. ¿Cuál es la ocupación más frecuente?
###Code
oc_freq = df["occupation"].value_counts()
print("La ocupacion mas frecuente es:")
oc_freq.index[0]
###Output
La ocupacion mas frecuente es:
###Markdown
11. ¿Cuál es la edad media de los usuarios?
###Code
print("La edad media es:")
df["age"].mean()
###Output
La edad media es:
###Markdown
12. ¿Cuál es la edad con menos ocurrencia?
###Code
age_freq = df["age"].value_counts()
print("La edad con menos ocurrencia es:")
age_freq.index[-1]
###Output
La edad con menos ocurrencia es:
###Markdown
Nota: Al observar age_freq se puede ver que hay mas de una edad con menos ocurrencia las cuales dejamos a continuacion
###Code
age_freq.tail(5)
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|").set_index('user_id')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
print("el número de observacione es: \n", df.shape[0])
###Output
el número de observacione es:
943
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
print("el número de columnas es :\n", df.shape[1])
###Output
el número de columnas es :
4
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
for i in range(df.shape[1]):
print("el nombre de la columna", i, "es:", df.columns[i])
###Output
el nombre de la columna 0 es: age
el nombre de la columna 1 es: gender
el nombre de la columna 2 es: occupation
el nombre de la columna 3 es: zip_code
###Markdown
4.- Imprima el índice del dataframe
###Code
df.index
###Output
_____no_output_____
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe()
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df["occupation"]
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
ocup=df["occupation"].unique()
print("hay", ocup.shape[0], "ocupaciones diferentes")
###Output
hay 21 ocupaciones diferentes
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df.describe(include="all")["occupation"]["top"]
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
print("la edad media de los usuarios es:", df["age"].mean(), "años")
###Output
la edad media de los usuarios es: 34.05196182396607 años
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
emo=df["age"].unique() #revisa todas las edades que hay
f=pd.Series()
for i in emo:
c=0
for j in df["age"]:
if i==j:
c+=1
f.loc[i]=c
f_min=f.min() #definir valor minimo
mask=f==f_min #mak busca valor objetivo
f[mask] #aplica mask sobre la serie
print("las edades con menos ocurrencia son:\n", f[mask])
###Output
las edades con menos ocurrencia son:
7 1
66 1
11 1
10 1
73 1
dtype: int64
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
occ=df["occupation"].unique() #lista de todas las ocupaciones
for i in occ:
suma=0
c=0
n=1
for j in df["occupation"]:
if i==j:
suma+=df["age"][n] #edad de esa ocupacion
c+=1
n+=1
d=suma/c
print("la edad promedio de", i, "es", d, "años")
###Output
la edad promedio de technician es 33.148148148148145 años
la edad promedio de other es 34.523809523809526 años
la edad promedio de writer es 36.31111111111111 años
la edad promedio de executive es 38.71875 años
la edad promedio de administrator es 38.74683544303797 años
la edad promedio de student es 22.081632653061224 años
la edad promedio de lawyer es 36.75 años
la edad promedio de educator es 42.01052631578948 años
la edad promedio de scientist es 35.54838709677419 años
la edad promedio de entertainment es 29.22222222222222 años
la edad promedio de programmer es 33.121212121212125 años
la edad promedio de librarian es 40.0 años
la edad promedio de homemaker es 32.57142857142857 años
la edad promedio de artist es 31.392857142857142 años
la edad promedio de engineer es 36.38805970149254 años
la edad promedio de marketing es 37.61538461538461 años
la edad promedio de none es 26.555555555555557 años
la edad promedio de healthcare es 41.5625 años
la edad promedio de retired es 63.07142857142857 años
la edad promedio de salesman es 35.666666666666664 años
la edad promedio de doctor es 43.57142857142857 años
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
A=df.dropna()
A.head()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
#consideraremos la compañia mas cara aquella que tenga el auto mas caro
pMax=A.describe(include="all")["price"]["max"]
for f, i in A["company"].items():
    if A["price"][f]== pMax:
        print("la compañia mas cara es", i)
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
print(A[A["company"]=="toyota"])
###Output
company body-style wheel-base length engine-type num-of-cylinders \
index
66 toyota hatchback 95.7 158.7 ohc four
67 toyota hatchback 95.7 158.7 ohc four
68 toyota hatchback 95.7 158.7 ohc four
69 toyota wagon 95.7 169.7 ohc four
70 toyota wagon 95.7 169.7 ohc four
71 toyota wagon 95.7 169.7 ohc four
79 toyota wagon 104.5 187.8 dohc six
horsepower average-mileage price
index
66 62 35 5348.0
67 62 31 6338.0
68 62 31 6488.0
69 62 31 6918.0
70 62 27 7898.0
71 62 27 8778.0
79 156 19 15750.0
###Markdown
4. Cuente el total de automóviles por compañía
###Code
occ=A["company"].unique() #lista de todas las compañias
for i in occ:
c=0
for j in A["company"]:
if i==j:
c+=1
print("la cantidad de automoviles en la compañia", i, "es de", c, "autos")
###Output
la cantidad de automoviles en la compañia alfa-romero es de 3 autos
la cantidad de automoviles en la compañia audi es de 4 autos
la cantidad de automoviles en la compañia bmw es de 6 autos
la cantidad de automoviles en la compañia chevrolet es de 3 autos
la cantidad de automoviles en la compañia dodge es de 2 autos
la cantidad de automoviles en la compañia honda es de 3 autos
la cantidad de automoviles en la compañia isuzu es de 1 autos
la cantidad de automoviles en la compañia jaguar es de 3 autos
la cantidad de automoviles en la compañia mazda es de 5 autos
la cantidad de automoviles en la compañia mercedes-benz es de 4 autos
la cantidad de automoviles en la compañia mitsubishi es de 4 autos
la cantidad de automoviles en la compañia nissan es de 5 autos
la cantidad de automoviles en la compañia porsche es de 2 autos
la cantidad de automoviles en la compañia toyota es de 7 autos
la cantidad de automoviles en la compañia volkswagen es de 4 autos
la cantidad de automoviles en la compañia volvo es de 2 autos
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
occ=A["company"].unique() #lista de todas las compañias
for i in occ:
pmax=0
    for n, j in A["company"].items():
        if i==j:
            if A["price"][n]>=pmax:
                pmax=A["price"][n]
print("el precio mas alto de la compañia", i, "es de", pmax, "pesos")
###Output
el precio mas alto de la compañia alfa-romero es de 16500.0 pesos
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
occ=A["company"].unique() #lista de todas las compañias
for i in occ:
suma=0
c=0
    for n, j in A["company"].items():
        if i==j:
            suma+=A["average-mileage"][n]
            c+=1
d=suma/c
print("el kilometraje promedio de", i, "es", d, "millas")
###Output
el kilometraje promedio de alfa-romero es 21.333333333333332 millas
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
A=A.sort_values(by=["price"])
A.head()
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
carsDf1=pd.DataFrame(GermanCars)
carsDf2=pd.DataFrame(japaneseCars)
carsDf = pd.concat([carsDf1, carsDf2], keys=["Germany", "Japan"])
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1=pd.DataFrame(Car_Price)
carsDf2=pd.DataFrame(car_Horsepower)
carsDf = pd.merge(carsDf1, carsDf2, on="Company")
carsDf
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°03 Problema 01EL conjunto de datos se denomina `ocupation.csv`, el cual contiene información de distintos usuarios (edad ,sexo, profesión, etc.).Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
import pandas as pd
import os
# cargar datos
df = pd.read_csv(os.path.join("data","ocupation.csv"), sep="|")
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1.- ¿Cuál es el número de observaciones en el conjunto de datos?
###Code
g = df.shape[0]
print(f'Se tienen {g} observaciones.')
###Output
Se tienen 943 observaciones.
###Markdown
2.- ¿Cuál es el número de columnas en el conjunto de datos?
###Code
g = df.shape[1]
print(f'Se tienen {g} columnas.')
###Output
Se tienen 5 columnas.
###Markdown
3.- Imprime el nombre de todas las columnas
###Code
print("\ncols:")
df.columns
###Output
cols:
###Markdown
4.- Imprima el índice del dataframe
###Code
print("\nindex:")
df.index
###Output
index:
###Markdown
5.- ¿Cuál es el tipo de datos de cada columna?
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
6.- Describir el conjunto de datos (**hint**: .describe())
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
7.- Imprimir solo la columna de **occupation**.
###Code
df['occupation']
###Output
_____no_output_____
###Markdown
8.- ¿Cuántas ocupaciones diferentes hay en este conjunto de datos?
###Code
df['occupation'].nunique()
###Output
_____no_output_____
###Markdown
9.- ¿Cuál es la ocupación más frecuente?
###Code
df['occupation'].value_counts().idxmax()
###Output
_____no_output_____
###Markdown
10.- ¿Cuál es la edad media de los usuarios?
###Code
mean_df = df['age'].mean()
print(mean_df)
###Output
34.05196182396607
###Markdown
11.- ¿Cuál es la edad con menos ocurrencia?
###Code
df['age'].value_counts().idxmin()
###Output
_____no_output_____
###Markdown
12.- Encontrar la edad promedio según la variable **occupation**
###Code
a = df.groupby('occupation').describe()
a['age']
###Output
_____no_output_____
###Markdown
Problema 02EL conjunto de datos se denomina `Automobile_data.csv`, el cual contiene información tal como: compañia, precio, kilometraje, etc.Lo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:
###Code
# cargar datos
df = pd.read_csv(os.path.join("data","Automobile_data.csv")).set_index('index')
df.head()
###Output
_____no_output_____
###Markdown
El objetivo es tratar de obtener la mayor información posible de este conjunto de datos. Para cumplir este objetivo debe resolver las siguientes problemáticas:1. Elimine los valores nulos (Nan)
###Code
mask = lambda df: df.notnull().all(axis=1)
df = df[mask]
df.head()
###Output
_____no_output_____
###Markdown
2. Encuentra el nombre de la compañía de automóviles más cara
###Code
a=df['price'].idxmax()
df['company'][a]
###Output
_____no_output_____
###Markdown
3. Imprimir todos los detalles de Toyota Cars
###Code
df[df['company'] == 'toyota']
###Output
_____no_output_____
###Markdown
4. Cuente el total de automóviles por compañía
###Code
grouped_data = df.groupby('company')
grouped_data.count()['body-style']
###Output
_____no_output_____
###Markdown
5. Encuentra el coche con el precio más alto por compañía
###Code
grouped_data = df.groupby('company')
grouped_data.max()['price']
###Output
_____no_output_____
###Markdown
6. Encuentre el kilometraje promedio (**average-mileage**) de cada compañía automotriz
###Code
grouped_data = df.groupby('company')
grouped_data.mean()['average-mileage']
###Output
_____no_output_____
###Markdown
7. Ordenar todos los autos por columna de precio (**price**)
###Code
by_year = df.sort_values('price',ascending=False)# ordenado de mayor a menor
by_year
###Output
_____no_output_____
###Markdown
Problema 03Siguiendo la temática de los automóviles, resuelva los siguientes problemas: a) Subproblema 01A partir de los siguientes diccionarios:
###Code
GermanCars = {'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400]}
japaneseCars = {'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Concatene ambos dataframes (**carsDf**) y añada una llave ["Germany", "Japan"], según corresponda.
###Code
# dataframe with GermanCars
carsDf1 = pd.DataFrame(
{
'Company': ['Ford', 'Mercedes', 'BMV', 'Audi'],
'Price': [23845, 171995, 135925 , 71400],
}
)
carsDf1
# dataframe with japaneseCars
carsDf2 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'Nissan', 'Mitsubishi '],
'Price': [29995, 23600, 61500 , 58900],
}
)
carsDf2
carsDf = pd.concat([carsDf1, carsDf2], keys=["Germany", "Japan"])
carsDf
###Output
_____no_output_____
###Markdown
b) Subproblema 02A partir de los siguientes diccionarios:
###Code
Car_Price = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'Price': [23845, 17995, 135925 , 71400]}
car_Horsepower = {'Company': ['Toyota', 'Honda', 'BMV', 'Audi'], 'horsepower': [141, 80, 182 , 160]}
###Output
_____no_output_____
###Markdown
* Cree dos dataframes (**carsDf1** y **carsDf2**) según corresponda.* Junte ambos dataframes (**carsDf**) por la llave **Company**.
###Code
carsDf1 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'BMV', 'Audi'],
'Price': [23845, 17995, 135925 , 71400],
}
)
carsDf1
carsDf2 = pd.DataFrame(
{
'Company': ['Toyota', 'Honda', 'BMV', 'Audi'],
'horsepower': [141, 80, 182 , 160],
}
)
carsDf2
carsDf = pd.merge(carsDf1, carsDf2, on='Company')
carsDf
###Output
_____no_output_____ |
Assignment_2/src/Nowcast/Nowcast.ipynb | ###Markdown
Download the pretrained models
###Code
import pandas as pd
import urllib.request
import os
os.environ["HDF5_USE_FILE_LOCKING"]='FALSE'
import sys
import h5py
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def main():
model_info = pd.read_csv('/content/drive/MyDrive/neurips-2020-sevir-master/models/model_urls.csv')
for i,r in model_info.iterrows():
print(f'Downloading {r.model}...')
download_file(r.url,f'{r.application}/{r.model}')
def download_file(url,filename):
    # note: wget saves the file under the URL's basename (e.g. 'mse_model.h5?dl=0');
    # the filename argument is currently unused, and later cells load the '?dl=0' names
    print(f'wget {url}')
    os.system(f'wget {url}')
if __name__=='__main__':
main()
###Output
Downloading gan_mae_weights.h5...
wget https://www.dropbox.com/s/d1e2p36nu4sqq7m/gan_mae_weights.h5?dl=0
Downloading mse_vgg_weights.h5...
wget https://www.dropbox.com/s/a39ig25nxkrmbkx/mse_vgg_weights.h5?dl=0
Downloading mse_weights.h5...
wget https://www.dropbox.com/s/6cqtrv2yliwcyh5/mse_weights.h5?dl=0
Downloading gan_generator.h5...
wget https://www.dropbox.com/s/9y3m4axfc3ox9i7/gan_generator.h5?dl=0
Downloading mse_and_style.h5...
wget https://www.dropbox.com/s/lqpro9dks5rykxk/mse_and_style.h5?dl=0
Downloading style_model.h5...
wget https://www.dropbox.com/s/yrfx3t3nckaofqu/style_model.h5?dl=0
Downloading mse_model.h5...
wget https://www.dropbox.com/s/95vmmlci5x3acar/mse_model.h5?dl=0
###Markdown
Download specific files to generate Testing Data.
###Code
!pip install boto3
import boto3
import h5py
import pandas as pd
from botocore.handlers import disable_signing
resource = boto3.resource('s3')
resource.meta.client.meta.events.register('choose-signer.s3.*', disable_signing)
bucket=resource.Bucket('sevir')
objs=bucket.objects.filter(Prefix='')
for o in objs:
    if o.key == 'data/vil/2019/SEVIR_VIL_STORMEVENTS_2019_0101_0630.h5':  # S3 keys have no leading slash
print(o.key)
satellite = pd.read_csv("/content/drive/MyDrive/CATALOG.csv")
files = list(satellite[satellite.event_id == 781628].file_name)
event_subset = satellite.loc[satellite['event_id'].isin([781628])]
#event_subset = event_subset.loc[~event_subset['img_type'].isin(['vis'])]
event_subset
event_subset.to_csv('/content/event_subset.csv')
!ls
satellite = pd.read_csv("/content/drive/MyDrive/CATALOG.csv")
#files = list(satellite.loc[satellite['file_name'].isin([2018_0801_0831])])
print(files)
for file in files:
key = 'data/' + file
print(key)
filename = file.split('/')
bucket.download_file(key,filename[2])
!mv SEVIR_VIL_STORMEVENTS_2018_0701_1231.h5 /content/sample_data/vil/2018
!mv SEVIR_IR069_STORMEVENTS_2018_0701_1231.h5 /content/sample_data/ir069/2018
!mv SEVIR_IR107_STORMEVENTS_2018_0701_1231.h5 /content/sample_data/ir107/2018
!mv SEVIR_LGHT_ALLEVENTS_2018_0801_0901.h5 /content/sample_data/lght/2018
!mv SEVIR_VIS_STORMEVENTS_2018_0801_0831.h5 /content/sample_data/vis/2018
###Output
_____no_output_____
###Markdown
Generate the Data and store it in h5 File.
###Code
!python /content/make_nowcast_dataset.py --sevir_data /content/sample_data/ --sevir_catalog /content/drive/MyDrive/Cat.csv --output_location /content/drive/MyDrive/Output_Nowcast
import os
os.environ["HDF5_USE_FILE_LOCKING"]='FALSE'
import sys
#sys.path.append('../src/')
import h5py
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import ListedColormap
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import matplotlib.patches as patches
import pandas as pd
from display import get_cmap
###Output
/usr/local/lib/python3.7/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
###Markdown
Load pretrained models
###Code
# Load pretrained nowcasting models
mse_file = '/content/mse_model.h5?dl=0' #Give the path where the file is located in your systems.
mse_model = tf.keras.models.load_model(mse_file,compile=False,custom_objects={"tf": tf})
style_file = '/content/style_model.h5?dl=0'
style_model = tf.keras.models.load_model(style_file,compile=False,custom_objects={"tf": tf})
mse_style_file = '/content/mse_and_style.h5?dl=0'
mse_style_model = tf.keras.models.load_model(mse_style_file,compile=False,custom_objects={"tf": tf})
gan_file = '/content/gan_generator.h5?dl=0'
gan_model = tf.keras.models.load_model(gan_file,compile=False,custom_objects={"tf": tf})
###Output
_____no_output_____
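###Markdown
To verify that a model loaded correctly, Keras models expose `summary()`; an optional check (output omitted here):
###Code
mse_model.summary()
###Output
_____no_output_____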
###Markdown
Load sample test data
###Code
# Load a part of the test dataset
from nowcast_reader import read_data
x_test,y_test = read_data('/content/drive/MyDrive/Output_Nowcast/nowcast_testing.h5',end=50) #Give the path where the file is located in your systems.
###Output
_____no_output_____
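###Markdown
A quick sanity check on the loaded arrays. The exact shapes depend on the generated `nowcast_testing.h5`; the plotting code below assumes 13 input frames and 12 target frames per sample, which is an assumption rather than something verified here:
###Code
print(x_test.shape, y_test.shape)  # expected roughly (N, H, W, 13) and (N, H, W, 12)
###Output
_____no_output_____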
###Markdown
Plot samples for test set
###Code
##
# Functions for plotting results
##
norm = {'scale':47.54,'shift':33.44}
hmf_colors = np.array( [
[82,82,82],
[252,141,89],
[255,255,191],
[145,191,219]
])/255
# Model that implements persistence forecast that just repeats the last frame of the input
class persistence:
def predict(self,x_test):
return np.tile(x_test[:,:,:,-1:],[1,1,1,12])
def plot_hit_miss_fa(ax,y_true,y_pred,thres):
mask = np.zeros_like(y_true)
mask[np.logical_and(y_true>=thres,y_pred>=thres)]=4
mask[np.logical_and(y_true>=thres,y_pred<thres)]=3
mask[np.logical_and(y_true<thres,y_pred>=thres)]=2
mask[np.logical_and(y_true<thres,y_pred<thres)]=1
cmap=ListedColormap(hmf_colors)
ax.imshow(mask,cmap=cmap)
def visualize_result(models,x_test,y_test,idx,ax,labels):
fs=10
cmap_dict = lambda s: {'cmap':get_cmap(s,encoded=True)[0],
'norm':get_cmap(s,encoded=True)[1],
'vmin':get_cmap(s,encoded=True)[2],
'vmax':get_cmap(s,encoded=True)[3]}
for i in range(1,13,3):
xt = x_test[idx,:,:,i]*norm['scale']+norm['shift']
ax[(i-1)//3][0].imshow(xt,**cmap_dict('vil'))
ax[0][0].set_title('Inputs',fontsize=fs)
pers = persistence().predict(x_test[idx:idx+1])
pers = pers*norm['scale']+norm['shift']
x_test = x_test[idx:idx+1]
y_test = y_test[idx:idx+1]*norm['scale']+norm['shift']
y_preds=[]
for i,m in enumerate(models):
yp = m.predict(x_test)
if isinstance(yp,(list,)):
yp=yp[0]
y_preds.append(yp*norm['scale']+norm['shift'])
for i in range(0,12,3):
ax[i//3][2].imshow(y_test[0,:,:,i],**cmap_dict('vil'))
ax[0][2].set_title('Target',fontsize=fs)
# Plot Persistence
for i in range(0,12,3):
plot_hit_miss_fa(ax[i//3][4],y_test[0,:,:,i],pers[0,:,:,i],74)
ax[0][4].set_title('Persistence\nScores',fontsize=fs)
for k,m in enumerate(models):
for i in range(0,12,3):
ax[i//3][5+2*k].imshow(y_preds[k][0,:,:,i],**cmap_dict('vil'))
plot_hit_miss_fa(ax[i//3][5+2*k+1],y_test[0,:,:,i],y_preds[k][0,:,:,i],74)
ax[0][5+2*k].set_title(labels[k],fontsize=fs)
ax[0][5+2*k+1].set_title(labels[k]+'\nScores',fontsize=fs)
for j in range(len(ax)):
for i in range(len(ax[j])):
ax[j][i].xaxis.set_ticks([])
ax[j][i].yaxis.set_ticks([])
for i in range(4):
ax[i][1].set_visible(False)
for i in range(4):
ax[i][3].set_visible(False)
ax[0][0].set_ylabel('-45 Minutes')
ax[1][0].set_ylabel('-30 Minutes')
ax[2][0].set_ylabel('-15 Minutes')
ax[3][0].set_ylabel(' 0 Minutes')
ax[0][2].set_ylabel('+15 Minutes')
ax[1][2].set_ylabel('+30 Minutes')
ax[2][2].set_ylabel('+45 Minutes')
ax[3][2].set_ylabel('+60 Minutes')
legend_elements = [Patch(facecolor=hmf_colors[1], edgecolor='k', label='False Alarm'),
Patch(facecolor=hmf_colors[2], edgecolor='k', label='Miss'),
Patch(facecolor=hmf_colors[3], edgecolor='k', label='Hit')]
ax[-1][-1].legend(handles=legend_elements, loc='lower right', bbox_to_anchor= (-5.4, -.35),
ncol=5, borderaxespad=0, frameon=False, fontsize='16')
plt.subplots_adjust(hspace=0.05, wspace=0.05)
###Output
_____no_output_____
###Markdown
Plot a few test cases
###Code
idx=25 # adjust this to pick a case
fig,ax = plt.subplots(4,13,figsize=(24,8), gridspec_kw={'width_ratios': [1,.2,1,.2,1,1,1,1,1,1,1,1,1]})
visualize_result([mse_model,style_model,mse_style_model,gan_model],x_test,y_test,idx,ax,labels=['MSE','SC','MSE+SC','cGAN+MAE'])
idx=45 # adjust this to pick a case
fig,ax = plt.subplots(4,13,figsize=(24,8), gridspec_kw={'width_ratios': [1,.2,1,.2,1,1,1,1,1,1,1,1,1]})
visualize_result([mse_model,style_model,mse_style_model,gan_model],x_test,y_test,idx,ax,labels=['MSE','SC','MSE+SC','cGAN+MAE'])
idx=32 # adjust this to pick a case
fig,ax = plt.subplots(4,13,figsize=(24,8), gridspec_kw={'width_ratios': [1,.2,1,.2,1,1,1,1,1,1,1,1,1]})
visualize_result([mse_model,style_model,mse_style_model,gan_model],x_test,y_test,idx,ax,labels=['MSE','SC','MSE+SC','cGAN+MAE'])
###Output
_____no_output_____ |
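###Markdown
The hit/miss/false-alarm masks plotted above can also be summarised numerically. The sketch below computes POD, FAR and CSI at the same VIP threshold of 74 used in `plot_hit_miss_fa`; the metric definitions are standard, but picking `mse_model` and this threshold as the evaluation of interest is an assumption, not something prescribed by the notebook.
###Code
def categorical_scores(y_true, y_pred, thres=74):
    # counts over all pixels and lead times
    hits = np.sum((y_true >= thres) & (y_pred >= thres))
    misses = np.sum((y_true >= thres) & (y_pred < thres))
    false_alarms = np.sum((y_true < thres) & (y_pred >= thres))
    pod = hits / (hits + misses + 1e-6)                  # probability of detection
    far = false_alarms / (hits + false_alarms + 1e-6)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms + 1e-6)   # critical success index
    return pod, far, csi

y_pred = mse_model.predict(x_test)
if isinstance(y_pred, (list,)):
    y_pred = y_pred[0]
# undo the normalisation used throughout this notebook
y_pred = y_pred * norm['scale'] + norm['shift']
y_true = y_test * norm['scale'] + norm['shift']
print(categorical_scores(y_true, y_pred))
###Output
_____no_output_____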
week3/krishnac/Q2 - 1/Attempt1_filesubmission_cubic_quintic_spirals.ipynb | ###Markdown
Smooth local pathsWe will use cubic spirals to generate smooth local paths. Without loss of generality, as $\theta$ smoothly changes from 0 to 1, we impose a condition on the curvature as follows$\kappa = f'(x) = K(x(1-x))^n $This ensures curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as$\theta = \int_0^x f'(x')dx'$With $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals1. Declare $x$ a Symbol2. You want to find Integral of $f'(x)$3. You can choose $K$ so that all coefficients are integersVerify if $\theta(0) = 0$ and $\theta(1) = 1$
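###Markdown
As a worked check of the integral above (an added note): with $n=1$ and $K=6$, $\theta(x)=\int_0^x 6x'(1-x')dx' = 3x^2-2x^3$; with $n=2$ and $K=30$, $\theta(x) = 10x^3 - 15x^4 + 6x^5$. In both cases $\theta(0)=0$ and $\theta(1)=1$, as required.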
###Code
from sympy import Symbol, integrate
import numpy as np
import matplotlib.pyplot as plt

K = 30 #choose for cubic/quintic
n = 2 #choose for cubic/ quintic
x = Symbol('x') #declare as Symbol
print(integrate(K*(x*(1-x))**n, x)) # complete the expression
x = np.linspace(0, 1, num=100)
thetas = -2*x**3 + 3*x**2
plt.figure()
plt.plot(x, thetas,'.')
thetas = 6*x**5 - 15*x**4 + 10*x**3
plt.plot(x, thetas,'.')
#write function to compute a cubic spiral
#input can be any theta_i and theta_f (not just 0 and 1)
def cubic_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
#-2*x**3 + 3*x**2
return (theta_f-theta_i)*(-2*x**3 + 3*x**2) + theta_i
def quintic_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
#6*x**5 - 15*x**4 + 10*x**3
return (theta_f-theta_i)*(6*x**5 - 15*x**4 + 10*x**3) + theta_i
###Output
_____no_output_____
###Markdown
PlottingPlot cubic, quintic spirals along with how $\theta$ will change from $\pi/2$ to $0$ when moving in a circular arc. Remember circular arc is when $\omega $ is constant
###Code
num_pts = 100
plt.figure()
plt.plot(np.pi/2*(1-np.linspace(0,1,num_pts)), label='Circular')
plt.plot(cubic_spiral(np.pi/2, 0, num_pts), label='Cubic')
plt.plot(quintic_spiral(np.pi/2, 0, num_pts),label='Quintic')
plt.grid()
plt.legend()
###Output
_____no_output_____
###Markdown
TrajectoryUsing the spirals, convert them to trajectories $\{(x_i,y_i,\theta_i)\}$. Remember the unicycle model $dx = v\cos \theta dt$$dy = v\sin \theta dt$$\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate {(x_i, y_i)}What happens when you change $v$?
###Code
num_pts = 50
v = 1
dt = 0.1
#cubic
theta = cubic_spiral(np.pi/2, np.pi, num_pts)
x = np.cumsum(v*np.cos(theta)*dt)
y = np.cumsum(v*np.sin(theta)*dt)
#Quintic
theta = quintic_spiral(np.pi/2, np.pi, num_pts+2)
xq = np.cumsum(v*np.cos(theta)*dt)
yq = np.cumsum(v*np.sin(theta)*dt)
#Circular
theta = np.pi/2*(1+np.linspace(0,1,num_pts-2))
xc = np.cumsum(v*np.cos(theta)*dt)
yc = np.cumsum(v*np.sin(theta)*dt)
# plot trajectories for circular/ cubic/ quintic
plt.figure()
plt.plot(xc, yc, label='Circular')
plt.plot(x, y, label='Cubic')
plt.plot(xq, yq, label='Quintic')
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Symmetric posesWe have been doing only examples with $|\theta_i - \theta_f| = \pi/2$. What about other orientation changes? Given below is an array of terminal angles (they are in degrees!). Start from 0 deg and plot the family of trajectories
###Code
dt = 0.1
thetas = np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180]) #convert to radians
plt.figure()
for tf in thetas:
t = cubic_spiral(0, tf,50)
x = np.cumsum(np.cos(t)*dt)
y = np.cumsum(np.sin(t)*dt)
plt.plot(x, y)
# On the same plot, move from 180 to 180 - theta
thetas = np.pi - np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180])
for tf in thetas:
t = cubic_spiral(np.pi, tf, 50)
x = np.cumsum(np.cos(t)*dt)
y = np.cumsum(np.sin(t)*dt)
plt.plot(x, y)
plt.grid()
###Output
_____no_output_____
###Markdown
Modify your code to print the following for the positive terminal angles $\{\theta_f\}$1. Final x, y position in corresponding trajectory: $x_f, y_f$ 2. $\frac{y_f}{x_f}$ and $\tan \frac{\theta_f}{2}$What do you notice? What happens when $v$ is doubled?
###Code
dt = 0.05
v = 2.0
thetas = np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180]) #convert to radians
plt.figure()
for tf in thetas:
t = cubic_spiral(0, tf,100)
x = np.cumsum(v*np.cos(t)*dt)
y = np.cumsum(v*np.sin(t)*dt)
print(f"tf:{np.rad2deg(tf):0.1f} xf:{x[-1]:0.3f} yf:{y[-1]:0.3f} yf/xf:{y[-1]/x[-1]:0.3f} tan(theta/2):{np.tan(tf/2):0.3f}")
###Output
tf:15.0 xf:9.873 yf:1.300 yf/xf:0.132 tan(theta/2):0.132
tf:30.0 xf:9.497 yf:2.545 yf/xf:0.268 tan(theta/2):0.268
tf:45.0 xf:8.892 yf:3.683 yf/xf:0.414 tan(theta/2):0.414
tf:60.0 xf:8.087 yf:4.669 yf/xf:0.577 tan(theta/2):0.577
tf:90.0 xf:6.041 yf:6.041 yf/xf:1.000 tan(theta/2):1.000
tf:120.0 xf:3.743 yf:6.484 yf/xf:1.732 tan(theta/2):1.732
tf:150.0 xf:1.610 yf:6.010 yf/xf:3.732 tan(theta/2):3.732
tf:180.0 xf:-0.000 yf:4.812 yf/xf:-7880729543884461.000 tan(theta/2):16331239353195370.000
|
docs/scipy-optimize.ipynb | ###Markdown
Notebook magic
###Code
from IPython.core.magic import Magics, magics_class, line_cell_magic
from IPython.core.magic import cell_magic, register_cell_magic, register_line_magic
from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
import subprocess
import os
@magics_class
class PyboardMagic(Magics):
@cell_magic
@magic_arguments()
@argument('-skip')
@argument('-unix')
@argument('-pyboard')
@argument('-file')
@argument('-data')
@argument('-time')
@argument('-memory')
def micropython(self, line='', cell=None):
args = parse_argstring(self.micropython, line)
if args.skip: # doesn't care about the cell's content
print('skipped execution')
return None # do not parse the rest
if args.unix: # tests the code on the unix port. Note that this works on unix only
with open('/dev/shm/micropython.py', 'w') as fout:
fout.write(cell)
proc = subprocess.Popen(["../../micropython/ports/unix/micropython", "/dev/shm/micropython.py"],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(proc.stdout.read().decode("utf-8"))
print(proc.stderr.read().decode("utf-8"))
return None
if args.file: # can be used to copy the cell content onto the pyboard's flash
spaces = " "
try:
with open(args.file, 'w') as fout:
fout.write(cell.replace('\t', spaces))
                printf('written cell to {}'.format(args.file)) if False else print('written cell to {}'.format(args.file))
except:
print('Failed to write to disc!')
return None # do not parse the rest
if args.data: # can be used to load data from the pyboard directly into kernel space
message = pyb.exec(cell)
if len(message) == 0:
print('pyboard >>>')
else:
print(message.decode('utf-8'))
# register new variable in user namespace
self.shell.user_ns[args.data] = string_to_matrix(message.decode("utf-8"))
if args.time: # measures the time of executions
pyb.exec('import utime')
message = pyb.exec('t = utime.ticks_us()\n' + cell + '\ndelta = utime.ticks_diff(utime.ticks_us(), t)' +
"\nprint('execution time: {:d} us'.format(delta))")
print(message.decode('utf-8'))
if args.memory: # prints out memory information
message = pyb.exec('from micropython import mem_info\nprint(mem_info())\n')
print("memory before execution:\n========================\n", message.decode('utf-8'))
message = pyb.exec(cell)
print(">>> ", message.decode('utf-8'))
message = pyb.exec('print(mem_info())')
print("memory after execution:\n========================\n", message.decode('utf-8'))
if args.pyboard:
message = pyb.exec(cell)
print(message.decode('utf-8'))
ip = get_ipython()
ip.register_magics(PyboardMagic)
###Output
_____no_output_____
###Markdown
pyboard
###Code
import pyboard
pyb = pyboard.Pyboard('/dev/ttyACM0')
pyb.enter_raw_repl()
pyb.exit_raw_repl()
pyb.close()
%%micropython -pyboard 1
import utime
import ulab as np
def timeit(n=1000):
def wrapper(f, *args, **kwargs):
func_name = str(f).split(' ')[1]
def new_func(*args, **kwargs):
run_times = np.zeros(n, dtype=np.uint16)
for i in range(n):
t = utime.ticks_us()
result = f(*args, **kwargs)
run_times[i] = utime.ticks_diff(utime.ticks_us(), t)
            print('{}() execution times based on {} cycles'.format(func_name, n))
print('\tbest: %d us'%np.min(run_times))
print('\tworst: %d us'%np.max(run_times))
print('\taverage: %d us'%np.mean(run_times))
print('\tdeviation: +/-%.3f us'%np.std(run_times))
return result
return new_func
return wrapper
def timeit(f, *args, **kwargs):
func_name = str(f).split(' ')[1]
def new_func(*args, **kwargs):
t = utime.ticks_us()
result = f(*args, **kwargs)
print('execution time: ', utime.ticks_diff(utime.ticks_us(), t), ' us')
return result
return new_func
###Output
###Markdown
__END_OF_DEFS__ OptimizeFunctions in the `optimize` module can be called by prepending them by `scipy.optimize.`. The module defines the following three functions:1. [scipy.optimize.bisect](bisect)1. [scipy.optimize.fmin](fmin)1. [scipy.optimize.newton](newton)Note that routines that work with user-defined functions still have to call the underlying `python` code, and therefore, gains in speed are not as significant as with other vectorised operations. As a rule of thumb, a factor of two can be expected, when compared to an optimised `python` implementation. bisect `scipy`: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.bisect.html`bisect` finds the root of a function of one variable using a simple bisection routine. It takes three positional arguments, the function itself, and two starting points. The function must have opposite signsat the starting points. Returned is the position of the root.Two keyword arguments, `xtol`, and `maxiter` can be supplied to control the accuracy, and the number of bisections, respectively.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return x*x - 1
print(spy.optimize.bisect(f, 0, 4))
print('only 8 bisections: ', spy.optimize.bisect(f, 0, 4, maxiter=8))
print('with 0.1 accuracy: ', spy.optimize.bisect(f, 0, 4, xtol=0.1))
###Output
0.9999997615814209
only 8 bisections: 0.984375
with 0.1 accuracy: 0.9375
###Markdown
PerformanceSince the `bisect` routine calls user-defined `python` functions, the speed gain is only about a factor of two, if compared to a purely `python` implementation.
###Code
%%micropython -pyboard 1
from ulab import scipy as spy
def f(x):
return (x-1)*(x-1) - 2.0
def bisect(f, a, b, xtol=2.4e-7, maxiter=100):
if f(a) * f(b) > 0:
raise ValueError
rtb = a if f(a) < 0.0 else b
dx = b - a if f(a) < 0.0 else a - b
for i in range(maxiter):
dx *= 0.5
x_mid = rtb + dx
mid_value = f(x_mid)
if mid_value < 0:
rtb = x_mid
if abs(dx) < xtol:
break
return rtb
@timeit
def bisect_scipy(f, a, b):
return spy.optimize.bisect(f, a, b)
@timeit
def bisect_timed(f, a, b):
return bisect(f, a, b)
print('bisect running in python')
bisect_timed(f, 3, 2)
print('bisect running in C')
bisect_scipy(f, 3, 2)
###Output
bisect running in python
execution time: 1270 us
bisect running in C
execution time: 642 us
###Markdown
fmin`scipy`: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.htmlThe `fmin` function finds the position of the minimum of a user-defined function by using the downhill simplex method. Requires two positional arguments, the function, and the initial value. Three keyword arguments, `xatol`, `fatol`, and `maxiter` stipulate conditions for stopping.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return (x-1)**2 - 1
print(spy.optimize.fmin(f, 3.0))
print(spy.optimize.fmin(f, 3.0, xatol=0.1))
###Output
0.9996093749999952
1.199999999999996
###Markdown
newton`scipy`:https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html`newton` finds a zero of a real, user-defined function using the Newton-Raphson (or secant or Halley’s) method. The routine requires two positional arguments, the function, and the initial value. Three keyword arguments can be supplied to control the iteration. These are the absolute and relative tolerances `tol`, and `rtol`, respectively, and the number of iterations before stopping, `maxiter`. The function returns a single scalar, the position of the root.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return x*x*x - 2.0
print(spy.optimize.newton(f, 3., tol=0.001, rtol=0.01))
###Output
1.260135727246117
###Markdown
Notebook magic
###Code
from IPython.core.magic import Magics, magics_class, line_cell_magic
from IPython.core.magic import cell_magic, register_cell_magic, register_line_magic
from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
import subprocess
import os
@magics_class
class PyboardMagic(Magics):
@cell_magic
@magic_arguments()
@argument('-skip')
@argument('-unix')
@argument('-pyboard')
@argument('-file')
@argument('-data')
@argument('-time')
@argument('-memory')
def micropython(self, line='', cell=None):
args = parse_argstring(self.micropython, line)
if args.skip: # doesn't care about the cell's content
print('skipped execution')
return None # do not parse the rest
if args.unix: # tests the code on the unix port. Note that this works on unix only
with open('/dev/shm/micropython.py', 'w') as fout:
fout.write(cell)
proc = subprocess.Popen(["../../micropython/ports/unix/micropython", "/dev/shm/micropython.py"],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(proc.stdout.read().decode("utf-8"))
print(proc.stderr.read().decode("utf-8"))
return None
if args.file: # can be used to copy the cell content onto the pyboard's flash
spaces = " "
try:
with open(args.file, 'w') as fout:
fout.write(cell.replace('\t', spaces))
                print('written cell to {}'.format(args.file))
except:
print('Failed to write to disc!')
return None # do not parse the rest
if args.data: # can be used to load data from the pyboard directly into kernel space
message = pyb.exec(cell)
if len(message) == 0:
print('pyboard >>>')
else:
print(message.decode('utf-8'))
# register new variable in user namespace
self.shell.user_ns[args.data] = string_to_matrix(message.decode("utf-8"))
if args.time: # measures the time of executions
pyb.exec('import utime')
message = pyb.exec('t = utime.ticks_us()\n' + cell + '\ndelta = utime.ticks_diff(utime.ticks_us(), t)' +
"\nprint('execution time: {:d} us'.format(delta))")
print(message.decode('utf-8'))
if args.memory: # prints out memory information
message = pyb.exec('from micropython import mem_info\nprint(mem_info())\n')
print("memory before execution:\n========================\n", message.decode('utf-8'))
message = pyb.exec(cell)
print(">>> ", message.decode('utf-8'))
message = pyb.exec('print(mem_info())')
print("memory after execution:\n========================\n", message.decode('utf-8'))
if args.pyboard:
message = pyb.exec(cell)
print(message.decode('utf-8'))
ip = get_ipython()
ip.register_magics(PyboardMagic)
###Output
_____no_output_____
###Markdown
pyboard
###Code
import pyboard
pyb = pyboard.Pyboard('/dev/ttyACM0')
pyb.enter_raw_repl()
pyb.exit_raw_repl()
pyb.close()
%%micropython -pyboard 1
import utime
import ulab as np
def timeit(n=1000):
def wrapper(f, *args, **kwargs):
func_name = str(f).split(' ')[1]
def new_func(*args, **kwargs):
run_times = np.zeros(n, dtype=np.uint16)
for i in range(n):
t = utime.ticks_us()
result = f(*args, **kwargs)
run_times[i] = utime.ticks_diff(utime.ticks_us(), t)
            print('{}() execution times based on {} cycles'.format(func_name, n))
print('\tbest: %d us'%np.min(run_times))
print('\tworst: %d us'%np.max(run_times))
print('\taverage: %d us'%np.mean(run_times))
print('\tdeviation: +/-%.3f us'%np.std(run_times))
return result
return new_func
return wrapper
def timeit(f, *args, **kwargs):
func_name = str(f).split(' ')[1]
def new_func(*args, **kwargs):
t = utime.ticks_us()
result = f(*args, **kwargs)
print('execution time: ', utime.ticks_diff(utime.ticks_us(), t), ' us')
return result
return new_func
###Output
###Markdown
__END_OF_DEFS__ scipy.optimizeFunctions in the `optimize` module can be called by prepending them by `scipy.optimize.`. The module defines the following three functions:1. [scipy.optimize.bisect](bisect)1. [scipy.optimize.fmin](fmin)1. [scipy.optimize.newton](newton)Note that routines that work with user-defined functions still have to call the underlying `python` code, and therefore, gains in speed are not as significant as with other vectorised operations. As a rule of thumb, a factor of two can be expected, when compared to an optimised `python` implementation. bisect `scipy`: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.bisect.html`bisect` finds the root of a function of one variable using a simple bisection routine. It takes three positional arguments, the function itself, and two starting points. The function must have opposite signsat the starting points. Returned is the position of the root.Two keyword arguments, `xtol`, and `maxiter` can be supplied to control the accuracy, and the number of bisections, respectively.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return x*x - 1
print(spy.optimize.bisect(f, 0, 4))
print('only 8 bisections: ', spy.optimize.bisect(f, 0, 4, maxiter=8))
print('with 0.1 accuracy: ', spy.optimize.bisect(f, 0, 4, xtol=0.1))
###Output
0.9999997615814209
only 8 bisections: 0.984375
with 0.1 accuracy: 0.9375
###Markdown
PerformanceSince the `bisect` routine calls user-defined `python` functions, the speed gain is only about a factor of two, if compared to a purely `python` implementation.
###Code
%%micropython -pyboard 1
from ulab import scipy as spy
def f(x):
return (x-1)*(x-1) - 2.0
def bisect(f, a, b, xtol=2.4e-7, maxiter=100):
if f(a) * f(b) > 0:
raise ValueError
rtb = a if f(a) < 0.0 else b
dx = b - a if f(a) < 0.0 else a - b
for i in range(maxiter):
dx *= 0.5
x_mid = rtb + dx
mid_value = f(x_mid)
if mid_value < 0:
rtb = x_mid
if abs(dx) < xtol:
break
return rtb
@timeit
def bisect_scipy(f, a, b):
return spy.optimize.bisect(f, a, b)
@timeit
def bisect_timed(f, a, b):
return bisect(f, a, b)
print('bisect running in python')
bisect_timed(f, 3, 2)
print('bisect running in C')
bisect_scipy(f, 3, 2)
###Output
bisect running in python
execution time: 1270 us
bisect running in C
execution time: 642 us
###Markdown
fmin`scipy`: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.htmlThe `fmin` function finds the position of the minimum of a user-defined function by using the downhill simplex method. Requires two positional arguments, the function, and the initial value. Three keyword arguments, `xatol`, `fatol`, and `maxiter` stipulate conditions for stopping.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return (x-1)**2 - 1
print(spy.optimize.fmin(f, 3.0))
print(spy.optimize.fmin(f, 3.0, xatol=0.1))
###Output
0.9996093749999952
1.199999999999996
###Markdown
newton`scipy`:https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html`newton` finds a zero of a real, user-defined function using the Newton-Raphson (or secant or Halley’s) method. The routine requires two positional arguments, the function, and the initial value. Three keyword arguments can be supplied to control the iteration. These are the absolute and relative tolerances `tol`, and `rtol`, respectively, and the number of iterations before stopping, `maxiter`. The function returns a single scalar, the position of the root.
###Code
%%micropython -unix 1
from ulab import scipy as spy
def f(x):
return x*x*x - 2.0
print(spy.optimize.newton(f, 3., tol=0.001, rtol=0.01))
###Output
1.260135727246117
|
Assignments/S11/Assignment-A/S11_Assignment_A.ipynb | ###Markdown
**Import Libraries**
###Code
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
###Output
_____no_output_____
###Markdown
**Mount Google Drive**
###Code
from google.colab import drive
drive.mount("/content/gdrive", force_remount=True)
###Output
Mounted at /content/gdrive
###Markdown
**Load Yolo Weights & Config**
###Code
# Load YOLO
net = cv2.dnn.readNet("/content/gdrive/My Drive/yolo/yolov3.weights", "/content/gdrive/My Drive/yolo/yolov3.cfg")
classes = []
with open("/content/gdrive/My Drive/yolo/coco.names.txt", "r") as f:
classes = [line.strip() for line in f.readlines()]
del classes[80:83] # drop the extra blank entries accidentally appended when reading coco.names
classes
len(classes)
#Get Yolo Layers
net.getUnconnectedOutLayers()
layer_names = net.getLayerNames()
# Yolo Layers
output_layers = [layer_names[i[0]-1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))
###Output
_____no_output_____
###Markdown
**Load Input Image/s**
###Code
img1 = cv2.imread("/content/gdrive/My Drive/yolo/ocvYolo-Almg2.jpeg") # read the image using OpenCV
img2 = cv2.imread("/content/gdrive/My Drive/yolo/bookImage.jpg")
img3 = cv2.imread("/content/gdrive/My Drive/data/IMG_20200705_144735.jpg")
img4 = cv2.imread("/content/gdrive/My Drive/data/goldie2.jpg")
img5 = cv2.imread("/content/gdrive/My Drive/data/IMG_20200705_142803.jpg")
img6 = cv2.imread("/content/gdrive/My Drive/data/IMG_20200705_142805.jpg")
img = cv2.resize(img3, None, fx=0.4, fy=0.4) # Resize the image with scale factors fx=0.4, fy=0.4
height, width, channels = img.shape # (512, 384, 3)
img.shape
###Output
_____no_output_____
###Markdown
**Output of YoloLayers**
###Code
# Detecting Objects
blob = cv2.dnn.blobFromImage(img, 0.00392, size=(416,416), mean=(0,0,0), swapRB=True, crop=False)
net.setInput(blob)
output = net.forward(output_layers)
output #type List
###Output
_____no_output_____
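###Markdown
Before wrapping everything into a function, it helps to look at what the forward pass returned. This cell is an added illustration (not from the original run): each of the three output arrays has rows of length 85, where the first 4 values are the normalized box (center x, center y, width, height), index 4 is the objectness score, and the remaining 80 entries are the per-class scores used below.
###Code
# Added inspection of the forward-pass results from the cell above
print([layer_out.shape for layer_out in output])  # three output scales, each (num_candidates, 85)
first = output[0][0]
print(first[:4])           # normalized box: center x, center y, width, height
print(first[4])            # objectness score
print(first[5:].argmax())  # index of the highest-scoring class among the 80 COCO classes
###Output
_____no_output_____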
###Markdown
**Object Detection**
###Code
# Compiled all these into a Function
def detectObj(img): #:--> Pass in the image path as img argument for this function
# read the image
img = cv2.imread(img)
img = cv2.resize(img, None, fx=0.4, fy=0.4)
height, width, channels = img.shape
print("Image Shape: ", img.shape)
# Detecting objects
blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
# Setting the input
net.setInput(blob)
outs = net.forward(output_layers)
class_ids, confidences, boxes = [], [], []
for out in outs: # outs len=3
for detection in out:
scores = detection[5:]
class_id = np.argmax(scores) # holding on to the max score
confidence = scores[class_id]
if confidence > 0.5:
# Object detected
center_x = int(detection[0] * width)
center_y = int(detection[1] * height)
w = int(detection[2] * width)
h = int(detection[3] * height)
# Rectangle coordinates
x = int(center_x - w / 2)
y = int(center_y - h / 2)
# appending bounding box dims
boxes.append([x, y, w, h])
confidences.append(float(confidence))
class_ids.append(class_id)
# Non-Max Suppression
indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
font = cv2.FONT_HERSHEY_SIMPLEX
for i in range(len(boxes)):
if i in indexes:
x, y, w, h = boxes[i]
label = str(classes[class_ids[i]])
color = colors[i]
cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
cv2.putText(img, label, (x, y + 30), font, 2, color, 2)
cv2_imshow(img)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
***Test Results***
###Code
detectObj("/content/gdrive/MyDrive/yolo/mobile_phone.jpeg") # Person, Cellphone
detectObj("/content/gdrive/MyDrive/yolo/clock.jpeg") #Person, Clock
###Output
_____no_output_____ |
wandb/run-20210825_150359-1jzdois3/tmp/code/00.ipynb | ###Markdown
WorkFlow Imports Load the data Cleaning FE Data.corr() Analytics Preprocessing Decomposition Feature Selection Modelling Random Search Grid Search Imports
###Code
import random
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import torch,torchvision
from torch.nn import *
from torch.optim import *
# Preproccessing
from sklearn.preprocessing import (
StandardScaler,
RobustScaler,
MinMaxScaler,
MaxAbsScaler,
OneHotEncoder,
Normalizer,
Binarizer
)
# Decomposition
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
# Feature Selection
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2  # used by SelectKBest in feature_selection_prep_data below
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
# Model Eval
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score,train_test_split
from sklearn.metrics import mean_absolute_error,mean_squared_error,accuracy_score,precision_score,f1_score,recall_score
# Models
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LogisticRegression,LogisticRegressionCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor,AdaBoostRegressor,VotingRegressor,BaggingRegressor,RandomForestRegressor
from sklearn.svm import SVR
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from catboost import CatBoost,CatBoostRegressor
from xgboost import XGBRegressor,XGBRFRegressor
from flaml import AutoML
# Other
import pickle
import wandb
PROJECT_NAME = 'House-Prices-Advanced-Regression-Techniques-V9'
device = 'cuda'
np.random.seed(21)
random.seed(21)
torch.manual_seed(21)
###Output
_____no_output_____
###Markdown
Functions
###Code
def make_submission(model,name):
data = pd.read_csv('./data/test.csv')
ids = data['Id']
str_cols = []
int_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(data.columns),data.isna().sum(),data.dtypes):
if dtype == object:
str_cols.append(col_name)
else:
int_cols.append(col_name)
for str_col in str_cols:
data,idx,labels_and_int_index,new_data = object_to_int(data,str_col)
nan_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(data.columns),data.isna().sum(),data.dtypes):
if num_of_missing_rows > 0:
nan_cols.append(col_name)
for nan_col in nan_cols:
data[nan_col].fillna(data[nan_col].median(),inplace=True)
preds = model.predict(data)
df = pd.DataFrame({'Id':ids,'SalePrice':preds})
df.to_csv(f'./submission/{name}.csv',index=False)
def valid(model,X,y,valid=False):
preds = model.predict(X)
if valid:
results = {
'val mean_absolute_error':mean_absolute_error(y_true=y,y_pred=preds),
'val mean_squared_error':mean_squared_error(y_true=y,y_pred=preds),
}
else:
results = {
'mean_absolute_error':mean_absolute_error(y_true=y,y_pred=preds),
'mean_squared_error':mean_squared_error(y_true=y,y_pred=preds),
}
return results
def train(model,X_train,X_test,y_train,y_test,name):
wandb.init(project=PROJECT_NAME,name=name)
model.fit(X_train,y_train)
wandb.log(valid(model,X_train,y_train))
wandb.log(valid(model,X_test,y_test,True))
make_submission(model,name)
return model
def object_to_int(data,col):
data_col = data[col].to_dict()
idx = -1
labels_and_int_index = {}
for data_col_vals in data_col.values():
if data_col_vals not in labels_and_int_index.keys():
idx += 1
labels_and_int_index[data_col_vals] = idx
new_data = []
for data_col_vals in data_col.values():
new_data.append(labels_and_int_index[data_col_vals])
data[col] = new_data
return data,idx,labels_and_int_index,new_data
def fe(data,col,quantile_max_num=0.99,quantile_min_num=0.05):
max_num = data[col].quantile(quantile_max_num)
min_num = data[col].quantile(quantile_min_num)
print(max_num)
print(min_num)
data = data[data[col] < max_num]
data = data[data[col] > min_num]
return data
def decomposition(X,pca=False,kernal_pca=False):
if pca:
pca = PCA()
X = pca.fit_transform(X)
if kernal_pca:
kernal_pca = KernelPCA()
X = kernal_pca.fit_transform(X)
return X
def feature_selection_prep_data(model,X,y,select_from_model=False,variance_threshold=False,select_k_best=False,rfecv=False):
if select_from_model:
transform = SelectFromModel(estimator=model.fit(X, y))
X = transform.transform(X)
if variance_threshold:
transform = VarianceThreshold()
X = transform.fit_transform(X)
if select_k_best:
X = SelectKBest(chi2, k='all').fit_transform(X, y)
if rfecv:
selector = RFECV(model, step=1, cv=5).fit(X, y)  # fit the selector first, then transform the features
X = selector.transform(X)
return X
def prep_data(X,transformer):
mct = make_column_transformer(
(transformer,list(X.columns)),
remainder='passthrough'
)
X = mct.fit_transform(X)
return X
###Output
_____no_output_____
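###Markdown
A quick added illustration (not part of the original runs) of how `object_to_int` behaves: each distinct string in a column is mapped to an integer in order of first appearance. The tiny DataFrame here is made up purely for the demonstration.
###Code
demo = pd.DataFrame({'colour': ['red', 'blue', 'red', 'green']})
demo, demo_idx, demo_mapping, demo_encoded = object_to_int(demo, 'colour')
print(demo_mapping)   # {'red': 0, 'blue': 1, 'green': 2}
print(demo_encoded)   # [0, 1, 0, 2]
demo
###Output
_____no_output_____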
###Markdown
Load the data
###Code
data = pd.read_csv('./data/train.csv')
preproccessings = [StandardScaler,RobustScaler,MinMaxScaler,MaxAbsScaler,OneHotEncoder,Normalizer,Binarizer]
models = [
['KNeighborsRegressor',KNeighborsRegressor],
['LogisticRegression',LogisticRegression],
['LogisticRegressionCV',LogisticRegressionCV],
['DecisionTreeRegressor',DecisionTreeRegressor],
['GradientBoostingRegressor',GradientBoostingRegressor],
['AdaBoostRegressor',AdaBoostRegressor],
['RandomForestRegressor',RandomForestRegressor],
['BaggingRegressor',BaggingRegressor],
['GaussianNB',GaussianNB],
['ExtraTreesRegressor',ExtraTreesRegressor],
['CatBoost',CatBoost],
['CatBoostRegressor',CatBoostRegressor],
['XGBRegressor',XGBRegressor],
['XGBRFRegressor',XGBRFRegressor],
['ExtraTreesRegressor',ExtraTreesRegressor],
]
###Output
_____no_output_____
###Markdown
Cleaning the data
###Code
X = data.drop('SalePrice',axis=1)
y = data['SalePrice']
str_cols = []
int_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if dtype == object:
str_cols.append(col_name)
else:
int_cols.append(col_name)
for str_col in str_cols:
X,idx,labels_and_int_index,new_data = object_to_int(X,str_col)
X.head()
nan_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if num_of_missing_rows > 0:
nan_cols.append(col_name)
for nan_col in nan_cols:
X[nan_col].fillna(X[nan_col].median(),inplace=True)
nan_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if num_of_missing_rows > 0:
nan_cols.append(col_name)
# train(GradientBoostingRegressor(),X,X,y,y,name='baseline-without-fe')
X_old = X.copy()
###Output
_____no_output_____
###Markdown
FE
###Code
# for col_name in list(X.columns):
# try:
# X = X_old.copy()
# X = fe(X,col_name)
# train(GradientBoostingRegressor(),X,X,y,y,name=f'baseline-with-fe-{col_name}')
# except:
# print('*'*50)
# print('*'*50)
# X = X_old.copy()
X_corr = X_old.corr()
keep_cols = []
###Output
_____no_output_____
###Markdown
Data.corr()
###Code
# for key,val in zip(X_corr.to_dict().keys(),X_corr.to_dict().values()):
# for val_key,val_vals in zip(val.keys(),val.values()):
# if val_key == key:
# pass
# else:
# if val_vals > 0.0:
# if val_key not in keep_cols:
# print(val_vals)
# keep_cols.append(val_key)
# fig,ax = plt.subplots(figsize=(25,12))
# ax = sns.heatmap(X_corr,annot=True,linewidths=0.5,fmt='.2f',cmap='YlGnBu')
# keep_cols
# len(keep_cols)
###Output
_____no_output_____
###Markdown
Analytics
###Code
X.head()
###Output
_____no_output_____
###Markdown
Preproccessing
###Code
X_old = X.copy()
# for preproccessing in preproccessings:
# X = X_old.copy()
# preproccessing = preproccessing()
# X = preproccessing.fit_transform(X)
# train(GradientBoostingRegressor(),X,X,y,y,name=f'{preproccessing}-preproccessing')
X = X_old.copy()
X = X_old.copy()
# X = feature_selection_prep_data(GradientBoostingRegressor(),X,y,select_from_model=False,variance_threshold=True,select_k_best=False,rfecv=False)
# train(GradientBoostingRegressor(),X,X,y,y,name=f'select_from_model=False-variance_threshold=True-select_k_best=False-rfecv=False-decomposition')
X = X_old.copy()
X = X_old.copy()
# X = feature_selection_prep_data(GradientBoostingRegressor(),X,y,select_from_model=False,variance_threshold=False,select_k_best=False,rfecv=True)
# train(GradientBoostingRegressor(),X,X,y,y,name=f'select_from_model=False-variance_threshold=False-select_k_best=False-rfecv=True-decomposition')
X = X_old.copy()
###Output
_____no_output_____
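###Markdown
The preprocessing and feature-selection sweeps above are commented out. As a minimal added sketch (not one of the original runs), a single scaler can be pushed through the `prep_data` helper like this; `X` is restored afterwards so the modelling cell below is unaffected.
###Code
X = X_old.copy()
X_scaled = prep_data(X, StandardScaler())  # column transformer applies the scaler to every column
print(X_scaled.shape)
X = X_old.copy()
###Output
_____no_output_____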
###Markdown
Modelling
###Code
for model in models:
try:
train(model[1](),X,X,y,y,name=f'{model[0]}')
except Exception as e:
print(f'{model[0]} failed: {e}')  # surface failures instead of silently skipping them
###Output
wandb: Currently logged in as: ranuga-d (use `wandb login --relogin` to force relogin)
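###Markdown
The workflow at the top lists Random Search and Grid Search, but the notebook stops after the model loop. A minimal sketch of that last step is added below for completeness; the choice of GradientBoostingRegressor and the parameter grid are illustrative assumptions, not settings from the original runs.
###Code
param_grid = {'n_estimators': [100, 300], 'learning_rate': [0.05, 0.1], 'max_depth': [2, 3]}
grid = GridSearchCV(GradientBoostingRegressor(), param_grid, cv=5, scoring='neg_mean_absolute_error')
grid.fit(X, y)
print(grid.best_params_)
print(-grid.best_score_)  # best cross-validated mean absolute error
###Output
_____no_output_____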
|
doc/docstrings/kdeplot.ipynb | ###Markdown
Plot a univariate distribution along the x axis:
###Code
tips = sns.load_dataset("tips")
sns.kdeplot(data=tips, x="total_bill")
###Output
_____no_output_____
###Markdown
Flip the plot by assigning the data variable to the y axis:
###Code
sns.kdeplot(data=tips, y="total_bill")
###Output
_____no_output_____
###Markdown
Plot distributions for each column of a wide-form dataset:
###Code
iris = sns.load_dataset("iris")
sns.kdeplot(data=iris)
###Output
_____no_output_____
###Markdown
Use less smoothing:
###Code
sns.kdeplot(data=tips, x="total_bill", bw_adjust=.2)
###Output
_____no_output_____
###Markdown
Use more smoothing, but don't smooth past the extreme data points:
###Code
ax= sns.kdeplot(data=tips, x="total_bill", bw_adjust=5, cut=0)
###Output
_____no_output_____
###Markdown
Plot conditional distributions with hue mapping of a second variable:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time")
###Output
_____no_output_____
###Markdown
"Stack" the conditional distributions:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="stack")
###Output
_____no_output_____
###Markdown
Normalize the stacked distribution at each value in the grid:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="fill")
###Output
_____no_output_____
###Markdown
Estimate the cumulative distribution function(s), normalizing each subset:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="time",
cumulative=True, common_norm=False, common_grid=True,
)
###Output
_____no_output_____
###Markdown
Estimate distribution from aggregated data, using weights:
###Code
tips_agg = (tips
.groupby("size")
.agg(total_bill=("total_bill", "mean"), n=("total_bill", "count"))
)
sns.kdeplot(data=tips_agg, x="total_bill", weights="n")
###Output
_____no_output_____
###Markdown
Map the data variable with log scaling:
###Code
diamonds = sns.load_dataset("diamonds")
sns.kdeplot(data=diamonds, x="price", log_scale=True)
###Output
_____no_output_____
###Markdown
Use numeric hue mapping:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="size")
###Output
_____no_output_____
###Markdown
Modify the appearance of the plot:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="size",
fill=True, common_norm=False, palette="viridis",
alpha=.5, linewidth=0,
)
###Output
_____no_output_____
###Markdown
Plot a bivariate distribution:
###Code
geyser = sns.load_dataset("geyser")
sns.kdeplot(data=geyser, x="waiting", y="duration")
###Output
_____no_output_____
###Markdown
Map a third variable with a hue semantic to show conditional distributions:
###Code
sns.kdeplot(data=geyser, x="waiting", y="duration", hue="kind")
###Output
_____no_output_____
###Markdown
Show filled contours:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind", fill=True,
)
###Output
_____no_output_____
###Markdown
Show fewer contour levels, covering less of the distribution:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind",
levels=5, thresh=.2,
)
###Output
_____no_output_____
###Markdown
Fill the axes extent with a smooth distribution, using a different colormap:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration",
fill=True, thresh=0, levels=100, cmap="mako",
)
###Output
_____no_output_____
###Markdown
Plot a univariate distribution along the x axis:
###Code
import seaborn as sns; sns.set()
tips = sns.load_dataset("tips")
sns.kdeplot(data=tips, x="total_bill")
###Output
_____no_output_____
###Markdown
Flip the plot by assigning the data variable to the y axis:
###Code
sns.kdeplot(data=tips, y="total_bill")
###Output
_____no_output_____
###Markdown
Plot distributions for each column of a wide-form dataset:
###Code
iris = sns.load_dataset("iris")
sns.kdeplot(data=iris)
###Output
_____no_output_____
###Markdown
Use less smoothing:
###Code
sns.kdeplot(data=tips, x="total_bill", bw_adjust=.2)
###Output
_____no_output_____
###Markdown
Use more smoothing, but don't smooth past the extreme data points:
###Code
ax= sns.kdeplot(data=tips, x="total_bill", bw_adjust=5, cut=0)
###Output
_____no_output_____
###Markdown
Plot conditional distributions with hue mapping of a second variable:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time")
###Output
_____no_output_____
###Markdown
"Stack" the conditional distributions:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="stack")
###Output
_____no_output_____
###Markdown
Normalize the stacked distribution at each value in the grid:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="fill")
###Output
_____no_output_____
###Markdown
Estimate the cumulative distribution function(s), normalizing each subset:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="time",
cumulative=True, common_norm=False, common_grid=True,
)
###Output
_____no_output_____
###Markdown
Estimate distribution from aggregated data, using weights:
###Code
tips_agg = (tips
.groupby("size")
.agg(total_bill=("total_bill", "mean"), n=("total_bill", "count"))
)
sns.kdeplot(data=tips_agg, x="total_bill", weights="n")
###Output
_____no_output_____
###Markdown
Map the data variable with log scaling:
###Code
diamonds = sns.load_dataset("diamonds")
sns.kdeplot(data=diamonds, x="price", log_scale=True)
###Output
_____no_output_____
###Markdown
Use numeric hue mapping:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="size")
###Output
_____no_output_____
###Markdown
Modify the appearance of the plot:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="size",
fill=True, common_norm=False, palette="viridis",
alpha=.5, linewidth=0,
)
###Output
_____no_output_____
###Markdown
Plot a bivariate distribution:
###Code
geyser = sns.load_dataset("geyser")
sns.kdeplot(data=geyser, x="waiting", y="duration")
###Output
_____no_output_____
###Markdown
Map a third variable with a hue semantic to show conditional distributions:
###Code
sns.kdeplot(data=geyser, x="waiting", y="duration", hue="kind")
###Output
_____no_output_____
###Markdown
Show filled contours:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind", fill=True,
)
###Output
_____no_output_____
###Markdown
Show fewer contour levels, covering less of the distribution:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind",
levels=5, thresh=.2,
)
###Output
_____no_output_____
###Markdown
Fill the axes extent with a smooth distribution, using a different colormap:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration",
fill=True, thresh=0, levels=100, cmap="mako",
)
###Output
_____no_output_____
###Markdown
Plot a univariate distribution along the x axis:
###Code
tips = sns.load_dataset("tips")
sns.kdeplot(data=tips, x="total_bill")
###Output
_____no_output_____
###Markdown
Flip the plot by assigning the data variable to the y axis:
###Code
sns.kdeplot(data=tips, y="total_bill")
###Output
_____no_output_____
###Markdown
Plot distributions for each column of a wide-form dataset:
###Code
iris = sns.load_dataset("iris")
sns.kdeplot(data=iris)
###Output
_____no_output_____
###Markdown
Use less smoothing:
###Code
sns.kdeplot(data=tips, x="total_bill", bw_adjust=.2)
###Output
_____no_output_____
###Markdown
Use more smoothing, but don't smooth past the extreme data points:
###Code
ax= sns.kdeplot(data=tips, x="total_bill", bw_adjust=5, cut=0)
###Output
_____no_output_____
###Markdown
Plot conditional distributions with hue mapping of a second variable:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time")
###Output
_____no_output_____
###Markdown
"Stack" the conditional distributions:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="stack")
###Output
_____no_output_____
###Markdown
Normalize the stacked distribution at each value in the grid:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="fill")
###Output
_____no_output_____
###Markdown
Estimate the cumulative distribution function(s), normalizing each subset:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="time",
cumulative=True, common_norm=False, common_grid=True,
)
###Output
_____no_output_____
###Markdown
Estimate distribution from aggregated data, using weights:
###Code
tips_agg = (tips
.groupby("size")
.agg(total_bill=("total_bill", "mean"), n=("total_bill", "count"))
)
sns.kdeplot(data=tips_agg, x="total_bill", weights="n")
###Output
_____no_output_____
###Markdown
Map the data variable with log scaling:
###Code
diamonds = sns.load_dataset("diamonds")
sns.kdeplot(data=diamonds, x="price", log_scale=True)
###Output
_____no_output_____
###Markdown
Use numeric hue mapping:
###Code
sns.kdeplot(data=tips, x="total_bill", hue="size")
###Output
_____no_output_____
###Markdown
Modify the appearance of the plot:
###Code
sns.kdeplot(
data=tips, x="total_bill", hue="size",
fill=True, common_norm=False, palette="crest",
alpha=.5, linewidth=0,
)
###Output
_____no_output_____
###Markdown
Plot a bivariate distribution:
###Code
geyser = sns.load_dataset("geyser")
sns.kdeplot(data=geyser, x="waiting", y="duration")
###Output
_____no_output_____
###Markdown
Map a third variable with a hue semantic to show conditional distributions:
###Code
sns.kdeplot(data=geyser, x="waiting", y="duration", hue="kind")
###Output
_____no_output_____
###Markdown
Show filled contours:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind", fill=True,
)
###Output
_____no_output_____
###Markdown
Show fewer contour levels, covering less of the distribution:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind",
levels=5, thresh=.2,
)
###Output
_____no_output_____
###Markdown
Fill the axes extent with a smooth distribution, using a different colormap:
###Code
sns.kdeplot(
data=geyser, x="waiting", y="duration",
fill=True, thresh=0, levels=100, cmap="mako",
)
###Output
_____no_output_____ |
N-grams/n_grams.ipynb | ###Markdown
###Code
import re
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
from collections import defaultdict, Counter
from google.colab import drive
drive.mount("/content/gdrive")
data_path = '/content/gdrive/My Drive/merged-file.txt'
with open(data_path, 'r') as f:
lines = f.read().split('\n')
datas = " ".join(lines)
class MarkovChain:
def __init__(self):
self.lookup_dict = defaultdict(list)
def add_document(self, string):
preprocessed_list = self._preprocess(string)
pairs = self.__generate_tuple_keys(preprocessed_list)
for pair in pairs:
self.lookup_dict[pair[0]].append(pair[1])
pairs2 = self.__generate_2tuple_keys(preprocessed_list)
for pair in pairs2:
self.lookup_dict[tuple([pair[0], pair[1]])].append(pair[2])
pairs3 = self.__generate_3tuple_keys(preprocessed_list)
for pair in pairs3:
self.lookup_dict[tuple([pair[0], pair[1], pair[2]])].append(pair[3])
def _preprocess(self, string):
cleaned = string.lower()  # use the argument passed to the method
tokenized = word_tokenize(cleaned)
return tokenized
def __generate_tuple_keys(self, data):
if len(data) < 1:
return
for i in range(len(data) - 1):
yield [ data[i], data[i + 1] ]
def __generate_2tuple_keys(self, data):
if len(data) < 2:
return
for i in range(len(data) - 2):
yield [ data[i], data[i + 1], data[i+2] ]
def __generate_3tuple_keys(self, data):
if len(data) < 3:
return
for i in range(len(data) - 3):
yield [ data[i], data[i + 1], data[i+2], data[i+3] ]
def oneword(self, string):
return Counter(self.lookup_dict[string]).most_common()[:3]
def twowords(self, string):
suggest = Counter(self.lookup_dict[tuple(string)]).most_common()[:3]
if len(suggest)==0:
return self.oneword(string[-1])
return suggest
def threewords(self, string):
suggest = Counter(self.lookup_dict[tuple(string)]).most_common()[:3]
if len(suggest)==0:
return self.twowords(string[-2:])
return suggest
def morewords(self, string):
return self.threewords(string[-3:])
def generate_text(self, string):
if len(self.lookup_dict) > 0:
tokens = string.split(" ")
if len(tokens)==1:
print("Next word suggestions:", self.oneword(string))
elif len(tokens)==2:
print("Next word suggestions:", self.twowords(string.split(" ")))
elif len(tokens)==3:
print("Next word suggestions:", self.threewords(string.split(" ")))
elif len(tokens)>3:
print("Next word suggestions:", self.morewords(string.split(" ")))
return
my_markov = MarkovChain()
my_markov.add_document(datas)
my_markov.generate_text(input().lower())
###Output
kaise
Next word suggestions: [('kare', 242), ('kamaye', 134), ('banaye', 124)]
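###Markdown
As an added illustration (not part of the original run), the same class can be fed a toy sentence to make the n-gram lookup table concrete; this relies on the corrected `_preprocess` above and on the `punkt` tokenizer already downloaded.
###Code
toy = MarkovChain()
toy.add_document("the cat sat on the mat and the cat slept")
print(toy.lookup_dict['the'])           # next-word candidates observed after 'the'
print(toy.lookup_dict[('the', 'cat')])  # candidates observed after the bigram 'the cat'
toy.generate_text("the cat")
###Output
_____no_output_____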
|
stats/HTcondor_stats.ipynb | ###Markdown
Generating plots for BDP report
###Code
import pandas as pd
import os
import csv
import subprocess # Can activate conda env from within script --> ask federica
import seaborn as sn
import matplotlib.pyplot as plt
import sklearn
from sklearn.linear_model import LinearRegression
import numpy as np
# !pip install sklearn
# #centos@main:~/BDP-projcect-aws-main/condor_out$ grep "Duration" *
# train0.out:Duration 100
# train1.out:Duration 154
# train2.out:Duration 204
# train3.out:Duration 268
# train4.out:Duration 336
# train5.out:Duration 383
!cat ~/condor_job_duration
df = pd.read_csv('/Users/ila/condor_job_duration', delimiter="\n", header=None)
# Adding custom columns:
duration_df = pd.DataFrame(df.values, columns=['seconds'])
duration_df
average_time = duration_df.seconds.mean()
average_time
###Output
_____no_output_____
###Markdown
DF for Memory usage per run
###Code
memory_df = pd.read_csv('/Users/ila/memory_run1', delimiter='\n', header=None)
memory_df = pd.DataFrame(memory_df.values, columns=['MB'])
memory_df['RAM MiB'] = memory_df.MB*0.9537
memory_df
###Output
_____no_output_____
###Markdown
Evaluating Time of Execution
###Code
df_size = pd.read_csv('/Users/ila/input_size_condor_all', sep='\n', header=None)
df_bytes = pd.DataFrame(df_size.values, columns=['bytes'])
df_bytes
# omitting line one: only the file for prediction - as it will remain constant
train_bytes_df = df_bytes.loc[1:6]
train_bytes_df = pd.DataFrame(train_bytes_df.values, index=[0,1,2,3,4,5], columns=['bytes'])
train_bytes_df
bytes_time_df = train_bytes_df.join(duration_df)
bytes_time_df
bytes_time_df['input MiB'] = bytes_time_df.bytes/(1024*1024)
bytes_time_df['minutes'] = bytes_time_df.seconds/60
bytes_time_df['days'] = bytes_time_df.seconds/86400  # 86400 seconds per day
bytes_time_df = bytes_time_df.join(memory_df)
###Output
_____no_output_____
###Markdown
Input size vs Memory usage
###Code
bytes_time_df
ax = sn.lineplot(data=bytes_time_df, x='input MiB', y='RAM MiB', markersize=11,
marker="o", color="#965786")
ax.set(title='RAM Utilization by Input Size')
fig1 = ax.get_figure()
# fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/ram_utilization_jobs.png')
# fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/ram_utilization_jobs.pdf')
ax = sn.lineplot(data=bytes_time_df, x='input MiB', y='seconds', markersize=11,
marker="o", color="#965786")
ax.set(title='Execution Time by Input Size')
fig2 = ax.get_figure()
# fig2.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/execution_by_input_size.png')
# fig2.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/execution_by_input_size.pdf')
###Output
_____no_output_____
###Markdown
Linear regression Prediction of time based on input size
###Code
data = bytes_time_df
X=data['input MiB'].values.reshape(-1,1)
Y=data['seconds'].values.reshape(-1,1)
linear_reg = sklearn.linear_model.LinearRegression().fit(X,Y)
Y_pred = linear_reg.predict(X) # make predictions
X
Y_pred
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='red')
# Add title and axis names
plt.title('Linear Regression of 6 Test Jobs')
plt.xlabel('Input Size in MiB')
plt.ylabel('Time in Seconds')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/regression_time.pdf')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/regression_time.png')
plt.show()
# Appending value of a real input size that would be used to train a model in a real use case
X_real = X
X_real = np.append(X_real,200).reshape(-1,1)
X_real
Y_pred_real = linear_reg.predict(X_real)
Y_pred_real
pwd
minutes_for_200_mib_input = 2326.95991389/60
plt.scatter(X, Y)
plt.scatter(200., 2326.95991389, color='red')
# Add label to dot
plt.annotate("38 minutes", (200., 2326.95991389))
plt.plot(X_real, Y_pred_real, color='green') # Marking real use case in red
# Add title and axis names
plt.title('Extrapolating to Real Use Case')
plt.xlabel('Input Size in Mib')
plt.ylabel('Time in Seconds')
# To keep text withing limits of the axes
plt.xlim([-5,250])
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/extrapolating_time_label.pdf')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/extrapolating_time_label.png')
plt.show()
###Output
_____no_output_____
###Markdown
Source confirms that SVMs scale linearly: https://www.researchgate.net/post/What-is-the-running-time-complexity-of-SVM-and-ANN Bottou, Léon, and Chih-Jen Lin. "Support vector machine solvers." Large scale kernel machines (2007): 301-320. Linear regression Prediction of RAM based on input size
###Code
data = bytes_time_df
X=data['input MiB'].values.reshape(-1,1)
Y=data['RAM MiB'].values.reshape(-1,1)
linear_reg = sklearn.linear_model.LinearRegression().fit(X,Y)
Y_pred = linear_reg.predict(X) # make predictions
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='red')
# Add title and axis names
plt.title('Linear Regression of 6 Test Jobs')
plt.xlabel('Input Size in MiB')
plt.ylabel('RAM Utilization in MiB')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/regression_ram.pdf')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/regression_ram.png')
plt.show()
# Appending value of a real input size that would be used to train a model in a real use case
X_real = X
X_real = np.append(X_real,200).reshape(-1,1)
X_real
Y_pred_real = linear_reg.predict(X_real)
Y_pred_real
X_real
plt.scatter(X, Y)
plt.scatter(200., 629.39314757, color='red')
plt.annotate("630 MiB", (200., 629.39314757))
plt.plot(X_real, Y_pred_real, color='green') # Marking real use case in red
# Add title and axis names
plt.title('Extrapolating to Real Use Case')
plt.xlabel('Input Size in Mib')
plt.ylabel('Time in Seconds')
plt.xlim([-5,250])
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/extrapolating_ram_label.pdf')
# plt.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/extrapolating_ram_label.png')
plt.show()
###Output
_____no_output_____
###Markdown
Docker Duration
###Code
df2 = pd.read_csv('/Users/ila/condor_job_duration_docker', delimiter=" ", header=None)
docker_duration_df = pd.DataFrame(df2.values, columns=["seconds"])
docker_duration_df
###Output
_____no_output_____
###Markdown
Scaling up to a grid search (10 combinations of C and gamma), cross-validating to produce 50 models. With 25 worker nodes of 2 vCPUs each plus one main node of the same make, that is 26 nodes in the cluster → 2 jobs per node → all 50 jobs run in one go within 38 minutes, provided there are no errors (a quick sanity check of these numbers is sketched after the cost cell below).
###Code
# 2 vCPU in USD
# Hourly price for the cluster
t4g_medium = 0.0336*26
print("26 t4g_medium Machienes ammounting to 50 CPU's cost", t4g_medium, 'USD per hour')
per_min_cost_cluster = t4g_medium/60
per_min_cost_cluster
sec_cost_cluster = per_min_cost_cluster/60
sec_cost_cluster
###Output
_____no_output_____
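###Markdown
A quick added sanity check of the sizing argument above; the 5-fold cross-validation split is an assumption inferred from 10 combinations producing 50 models.
###Code
c_gamma_combinations = 10
cv_folds = 5                                # assumption: 10 combinations * 5 folds = 50 models
models_to_train = c_gamma_combinations * cv_folds
worker_nodes, vcpus_per_node = 25, 2
worker_slots = worker_nodes * vcpus_per_node
print(models_to_train, "jobs on", worker_slots, "worker vCPUs -> fits in one wave:", models_to_train <= worker_slots)
###Output
_____no_output_____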
###Markdown
Given an up time 38 minutes
###Code
# run time cost for entire cluster:
run_time_non_trivial = per_min_cost_cluster*38
run_time_non_trivial
###Output
_____no_output_____
###Markdown
This amounts to a total spend of about 0.5532 USD for a 38-minute run on 25 worker nodes plus 1 main node.
###Code
price_per_months_full_cluster = t4g_medium*24*30
price_per_months_full_cluster
st_sp = 50*153
print("The completion of the challeng requres only ", st_sp, "MB storage space.")
# 7650
bytes_time_df['cost'] = bytes_time_df.seconds*sec_cost_cluster
bytes_time_df
ax = sn.lineplot(data=bytes_time_df, x='cost', y='seconds', markersize=11,
marker="o", color="#965786")
ax.set(title='Cost per Execution Time')
fig2 = ax.get_figure()
# fig2.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/time_cost.png')
# fig2.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/time_cost.pdf')
df2 = bytes_time_df.seconds
df2
###Output
_____no_output_____
###Markdown
Time vs cost vs number of machines **t4g_medium** used
###Code
t_c_nm = pd.read_csv('./time_cost_machine.csv', sep=',')  # must run, otherwise t_c_nm is undefined below
# fixing typo
# t_c_nm['machines'] = t_c_nm['machienes']
# drop uneeded cols
# t_c_nm.drop(['cost per hour','hour per job', 'jobs pairs', 'jobs per machine', 'jobs', 'Unnamed: 7'], axis=1, inplace=True)
t_c_nm['tot machines'] = t_c_nm['main node'] + t_c_nm['worker nodes']
t_c_nm
ax = sn.lineplot(data=t_c_nm, x='tot machines', y='total time Hrs', markersize=11, marker="o", color="red")
# ax.twinx()
ax.set(title='Number of Machines vs Time')
fig1 = ax.get_figure()
fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_time.png')
# fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_time.pdf')
ax = sn.lineplot(data=t_c_nm, x='tot machines', y='total cost $', markersize=11, marker="o", color="green")
ax.set(title='Machines vs Cost')
fig1 = ax.get_figure()  # grab the current figure before saving, otherwise the previous plot is written out
fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_cost.png')
# fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_cost.pdf')
ax = sn.lineplot(data=t_c_nm, x='tot machines', y='total time Hrs', markersize=11, marker="o", color="red")
ax.twinx()
ax = sn.lineplot(data=t_c_nm, x='tot machines', y='total cost $', markersize=11, color="green")
ax.set(title='Number of Machines vs Time vs Cost')
fig1 = ax.get_figure()
fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_time_cost.png')
# fig1.savefig('/Users/ila/01_unibo/BDP-projcect-aws-main/stats/n_machines_time_cost.pdf')
!pwd
!grep "Duration" ../condor_out/*
!grep "Duration" ../condor_out/* | awk '{print $NF}'
!grep "Duration" ../condor_out_docker/* | awk '{print $NF}'
# Runtime native in seconds
native= [100,
154,
204,
268,
336,
383]
# Runtime docker in seconds
docker = [105,
157,
214,
277,
351,
397]
x = []
for n in range(len(native)):
x.append(docker[n]/native[n])
for i in x:
print((i-1)*100)
###Output
5.000000000000004
1.9480519480519431
4.90196078431373
3.3582089552238736
4.464285714285721
3.6553524804177506
###Markdown
Docker was between 1.9 % and 5.0 % slower. Container specs - see below:
###Code
!cat ../docker/DOCKERFILE
# Re ran with a libsvm ubuntu container:
# Terminal transcript kept as comments so the cell remains valid Python:
# centos@main:~/BDP-projcect-aws-main$ grep "Duration" condor_out_docker2/*
# condor_out_docker2/train0.out:Duration 97
# condor_out_docker2/train1.out:Duration 149
# condor_out_docker2/train2.out:Duration 214
# condor_out_docker2/train3.out:Duration 272
# condor_out_docker2/train4.out:Duration 338
# condor_out_docker2/train5.out:Duration 385
!grep "Duration" ../condor_out_docker2/*
!grep "Duration" ../condor_out_docker2/* | awk '{print $NF}'
d_ubuntu = [97,
149,
214,
272,
338,
385]
x = []
for n in range(len(native)):
x.append(d_ubuntu[n]/native[n])
for i in x:
print((i-1)*100)
###Output
-3.0000000000000027
-3.2467532467532423
4.90196078431373
1.4925373134328401
0.5952380952380931
0.5221932114882533
|
demo/data_playground.ipynb | ###Markdown
Overview This is a demo Google Colab notebook for downloading data from ONC to your own Google drive. It also shows you how to...- show datafields of downloaded datasets- examine a dataset with a simple TSNE visualizationTo view the file contents, stay on this page.**To play (run and edit) this Python script...**1. Click on the ```Open in playground``` button 2. Choose Login to your Google account > ```Runtime``` (top menu, fifth item) > ```Run all``` (first item)3. Click on ```RUN ANYWAY``` when prompted by the pop-up window4. Go through each code block below. Most critically, follow the 3 steps under code block ```A``` carefully to authenticate in order to run this entire script successfully A) Mount to your own Google drive 1. Click on the link (in blue)2. Copy the authentication code (example authentication code:```4/xwHlEYe2HDKq6XPPPdIbhVZIBf-hDyooaZJSST-k2Lv3IgarPvfeKvw```)3. Return to the browser tab where the notebook is being viewed.4. Paste the copied code into the text box. 5. Press ENTER
###Code
from google.colab import drive
drive.mount('/content/drive')
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
B) Use Unix commands to make and move to directory called ```opensource_datasets```
###Code
try:
! mkdir '/content/drive/My Drive/Colab Notebooks/opensource_datasets/'
except Exception:
pass
import os
os.chdir('/content/drive/My Drive/Colab Notebooks/opensource_datasets/')
###Output
mkdir: cannot create directory ‘/content/drive/My Drive/Colab Notebooks/opensource_datasets/’: File exists
###Markdown
C) Create a subfolder ```ONC2020``` where ONC datasets will be downloaded to
###Code
try:
! mkdir '/content/drive/My Drive/Colab Notebooks/opensource_datasets/ONC2020/'
except Exception:
pass
import os
os.chdir('/content/drive/My Drive/Colab Notebooks/opensource_datasets/ONC2020')
###Output
mkdir: cannot create directory ‘/content/drive/My Drive/Colab Notebooks/opensource_datasets/ONC2020/’: File exists
###Markdown
D) Using ```wget``` to copy ONC data using the API Note: by default, the API returns responses as JSON; appending ```&format=csv``` to the end of the query may work for CSV output if the length of the query is known (a sketch of composing such a query is added after the downloads below). Below will still download the data as JSON files.
###Code
try:
os.chdir('/content/drive/My Drive/Colab Notebooks/opensource_datasets/ONC2020/')
! wget -O market-readiness2015.csv https://dashboard.healthit.gov/api/open-api.php?source=2015-edition-market-readiness-hospitals-clinicians-data.csv
except:
pass
try:
! wget -O AHA_2008-2005.csv https://dashboard.healthit.gov/api/open-api.php?source=AHA_2008-2015.csv
except:
pass
try:
! wget -O budget_performance_measures.csv https://dashboard.healthit.gov/api/open-api.php?source=performance-measures.csv
except:
pass
try:
! wget -O prescription_adoption_by_country2014.csv https://dashboard.healthit.gov/api/open-api.php?source=Surescripts_County_04-2014.csv
# ! wget -O prescription_adoption_by_state2014.csv https://dashboard.healthit.gov/api/open-api.php?source=Surescripts_State_04-2014.csv
! wget -O prescription_adoption_by_state2014.csv https://dashboard.healthit.gov/datadashboard/data/Surescripts_04-2014_State.csv
except:
pass
###Output
--2020-03-22 23:45:52-- https://dashboard.healthit.gov/api/open-api.php?source=2015-edition-market-readiness-hospitals-clinicians-data.csv
Resolving dashboard.healthit.gov (dashboard.healthit.gov)... 54.86.156.245
Connecting to dashboard.healthit.gov (dashboard.healthit.gov)|54.86.156.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [application/json]
Saving to: ‘market-readiness2015.csv’
market-readiness201 [<=> ] 0 --.-KB/s
market-readiness201 [ <=> ] 0 --.-KB/s in 0s
2020-03-22 23:45:53 (0.00 B/s) - ‘market-readiness2015.csv’ saved [0/0]
--2020-03-22 23:45:53-- https://dashboard.healthit.gov/api/open-api.php?source=AHA_2008-2015.csv
Resolving dashboard.healthit.gov (dashboard.healthit.gov)... 54.86.156.245
Connecting to dashboard.healthit.gov (dashboard.healthit.gov)|54.86.156.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: ‘AHA_2008-2005.csv’
AHA_2008-2005.csv [ <=> ] 505.84K 1.60MB/s in 0.3s
2020-03-22 23:45:54 (1.60 MB/s) - ‘AHA_2008-2005.csv’ saved [517977]
--2020-03-22 23:45:55-- https://dashboard.healthit.gov/api/open-api.php?source=performance-measures.csv
Resolving dashboard.healthit.gov (dashboard.healthit.gov)... 54.86.156.245
Connecting to dashboard.healthit.gov (dashboard.healthit.gov)|54.86.156.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: ‘budget_performance_measures.csv’
budget_performance_ [ <=> ] 71.57K --.-KB/s in 0.1s
2020-03-22 23:45:56 (578 KB/s) - ‘budget_performance_measures.csv’ saved [73291]
--2020-03-22 23:45:56-- https://dashboard.healthit.gov/api/open-api.php?source=Surescripts_County_04-2014.csv
Resolving dashboard.healthit.gov (dashboard.healthit.gov)... 54.86.156.245
Connecting to dashboard.healthit.gov (dashboard.healthit.gov)|54.86.156.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: ‘prescription_adoption_by_country2014.csv’
prescription_adopti [ <=> ] 666.32K 1.85MB/s in 0.4s
2020-03-22 23:45:57 (1.85 MB/s) - ‘prescription_adoption_by_country2014.csv’ saved [682310]
--2020-03-22 23:45:58-- https://dashboard.healthit.gov/datadashboard/data/Surescripts_04-2014_State.csv
Resolving dashboard.healthit.gov (dashboard.healthit.gov)... 54.86.156.245
Connecting to dashboard.healthit.gov (dashboard.healthit.gov)|54.86.156.245|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 252065 (246K) [text/csv]
Saving to: ‘prescription_adoption_by_state2014.csv’
prescription_adopti 100%[===================>] 246.16K 954KB/s in 0.3s
2020-03-22 23:45:58 (954 KB/s) - ‘prescription_adoption_by_state2014.csv’ saved [252065/252065]
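###Markdown
As noted in section D, appending ```&format=csv``` may switch the response format. The added snippet below is untested (mirroring the 'may work' caveat above) and only shows how such a query string would be composed.
###Code
base = "https://dashboard.healthit.gov/api/open-api.php"
csv_query = base + "?source=AHA_2008-2015.csv" + "&format=csv"  # hypothetical CSV-format request
print(csv_query)
###Output
_____no_output_____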
###Markdown
E) List the files downloaded to your Google drive
###Code
os.chdir('/content/drive/My Drive/Colab Notebooks/opensource_datasets/ONC2020/')
import glob
files = glob.glob('*csv')
for f in files:
print(f)
###Output
AHA_2008-2005.csv
performance-measures.csv
budget_performance_measures.csv
prescription_adoption_by_country2014.csv
prescription_adoption_by_state2014.csv
market-readiness2015.csv
###Markdown
F) Import modules for displaying tabular data in HTML format
###Code
from IPython.core.display import display, HTML
display(HTML('<h1>Hello, world!</h1>'))
###Output
_____no_output_____
###Markdown
G) Loop over this list of datasets (files) and show the header and 1 example data entry within each file.
###Code
import pandas as pd
import numpy as np
dfs=dict()
for f in files:
try:
k=f.strip('.csv')
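# note: str.strip('.csv') removes any of the characters '.', 'c', 's', 'v' from both ends (not the suffix),
# which is why 'performance-measures' appears below as 'performance-measure'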
data = pd.read_json(f)
#data = pd.read_json(f,dtype=float ).replace('',np.NaN).replace( '.', np.NaN)
dfs[k] = data
print( '\nContents of %s.csv:' % k )
display(HTML( dfs[k].head(2).transpose().to_html() ))
except Exception as e:
print(f, e)
###Output
Contents of AHA_2008-2005.csv:
###Markdown
Repeat above step but this time, mark missing data with NaN
###Code
dfs=dict()
for f in files:
try:
k=f.strip('.csv')
data = pd.read_json(f,dtype=float ).replace('',np.NaN).replace( '.', np.NaN)
dfs[k] = data
print( '\nContents of %s.csv:' % k )
display(HTML( dfs[k].head(2).transpose().to_html() ))
except Exception as e:
print(f, e)
###Output
Contents of AHA_2008-2005.csv:
###Markdown
New Section New Section
###Code
! wget -O prescription_adoption_by_state2014.csv https://dashboard.healthit.gov/api/open-api.php?source=Surescripts_04-2014_State.csv
dfs['prescription_adoption_by_state2014']=pd.read_json( 'prescription_adoption_by_state2014.csv' )
dfs['prescription_adoption_by_state2014'].head(2).transpose()
#! wget -O market-readiness2015.csv https://dashboard.healthit.gov/datadashboard/data/2015-edition-market-readiness-hospitals-clinicians-data.csv
! wget -O market-readiness2015.csv https://dashboard.healthit.gov/api/open-api.php?source=2015-edition-market-readiness-hospitals-clinicians-data.csv
# file is empty (0 bytes transferred); do not read, nothing to read
#dfs['market-readiness2015']=pd.read_json( 'market-readiness2015.csv' )
for k in dfs.keys():
print('%s.csv contains cells of size'%k, dfs[k].shape)
###Output
AHA_2008-2005.csv contains cells of size (417, 29)
performance-measure.csv contains cells of size (337, 5)
budget_performance_measure.csv contains cells of size (337, 5)
prescription_adoption_by_country2014.csv contains cells of size (2938, 8)
prescription_adoption_by_state2014.csv contains cells of size (3825, 15)
###Markdown
H) Examine ```AHA_2008-2005.csv``` more closely
###Code
import pandas as pd
import numpy as np
k='AHA_2008-2005'
print('This file contains cells of size', dfs[k].shape) # 417 samples each with 29 attributes
display(HTML(dfs[k].head(2).transpose().to_html() ) )
###Output
This file contains cells of size (417, 29)
###Markdown
I) Extract the numeric variables out of this dataset for subsequent data analyses
###Code
# ID's are first 2 columns (region and region_code)
data = dfs[k].iloc[:,2::].values
labels = dfs[k].iloc[:,1].values
import numpy as np
cdata = np.zeros( data.shape )
for c in range( 2,data.shape[1] ):
cc=c-2
cdata[:,cc]=pd.to_numeric( dfs[k].iloc[:,c] )
print(cc, np.nanmin( cdata[:,cc] ), np.nanmax( cdata[:,cc]), dfs[k].columns[c] , )
###Output
0 2008.0 2015.0 period
1 0.07 1.0 pct_hospitals_basic_ehr_notes
2 0.0 1.0 pct_rural_hospitals_basic_ehr_notes
3 0.0 1.0 pct_small_hospitals_basic_ehr_notes
4 0.0 1.0 pct_critical_access_hospitals_basic_ehr_notes
5 0.01 1.0 pct_hospitals_basic_ehr_no_notes
6 0.0 1.0 pct_rural_hospitals_basic_ehr_no_notes
7 0.0 1.0 pct_small_hospitals_basic_ehr_no_notes
8 0.0 1.0 pct_critical_access_hospitals_basic_ehr_no_notes
9 0.72 1.0 pct_hospitals_cehrt
10 0.758553914 1.0 pct_small_rural_hospitals_cehrt
11 0.8396548290000001 1.0 pct_cah_hospitals_cehrt
12 0.16 1.0 pct_hospitals_share_labs_any_outside_provs
13 0.0 1.0 pct_hospitals_share_labs_any_outside_hospitals
14 0.16 1.0 pct_hospitals_share_labs_any_outside_ambu_provs
15 0.23 1.0 pct_hospitals_patients_ecopy_ehr
16 0.26 1.0 pct_hospitals_patients_ecopy_discharge_instr
17 0.11 1.0 pct_hospitals_share_care_summaries_any_outside_provs
18 0.0 0.884738116 pct_hospitals_share_care_summaries_any_outside_hospitals
19 0.07 1.0 pct_hospitals_share_care_summaries_any_outside_ambu_provs
20 0.0 1.0 pct_hospitals_patients_vdt
21 0.11 1.0 pct_hospitals_patients_secure_message
22 0.13 0.89 pct_hospitals_find_clinical_info
23 0.44 1.0 pct_hospitals_send_clinical_info
24 0.2 0.89 pct_hospitals_receive_clinical_info
###Markdown
J) Set missing values to ```-1```
###Code
cdata[np.isnan(cdata)]=-1
###Output
_____no_output_____
###Markdown
K) Perform dimensionality reduction using TSNE; the output data should have 2 dimensions
###Code
from sklearn.manifold import TSNE
cdata_embedded = TSNE(n_components=2).fit_transform(cdata)
cdata_embedded.shape
###Output
_____no_output_____
###Markdown
L) Plot a histogram of data distribution by region-codes
###Code
labels = dfs[k].iloc[:,1].values
import matplotlib.pyplot as plt
%matplotlib inline
fig,axes=plt.subplots(1,1, figsize=( 20, 6 ));
dfs[k].region_code.hist();
###Output
_____no_output_____
###Markdown
M) Query the rows with labels ```ND```
###Code
np.where( labels=='ND' )[0]
###Output
_____no_output_____
###Markdown
N) Reduce the data of 417-2 variables into 2 dimensions and visualize the groupings by ```region code```
###Code
grps=['AK', 'AL', 'AR', 'AZ']
clrs=['r','g','b','m', 'c', 'k' ]
fig,axes=plt.subplots(1,1, figsize=( 10, 6 ))
data_pts=[]
for i, c in enumerate( grps ): # extend grps above to plot more of the region codes
q = np.where( labels == c )[0]
for qq in q:
print( c, '(%.2f,%.2f)' %(cdata_embedded[qq,0], cdata_embedded[qq,1]), end=', ' )
data_pts.append(qq)
print('')
plt.scatter( cdata_embedded[q,0], cdata_embedded[q,1], color=clrs[i], label=c )
plt.legend()
print( len(data_pts), 'data pts' )
###Output
AK (-30.69,-2.57), AK (-20.84,-18.21), AK (10.17,22.41), AK (-1.50,15.77), AK (-11.33,30.21), AK (23.56,-6.89), AK (18.77,-21.78), AK (4.67,-12.23),
AL (-29.10,-1.08), AL (-21.90,-19.61), AL (8.11,21.61), AL (-0.82,15.76), AL (-13.13,30.82), AL (24.13,-7.03), AL (16.07,-19.60), AL (5.98,-12.54),
AR (-28.59,-1.14), AR (-23.00,-20.38), AR (10.46,22.97), AR (-0.99,14.73), AR (-11.94,31.55), AR (22.85,-6.62), AR (16.28,-18.78), AR (11.37,-13.69),
AZ (-29.09,1.12), AZ (-22.54,-18.83), AZ (8.61,21.38), AZ (-0.12,14.45), AZ (-11.77,30.64), AZ (22.79,-5.29), AZ (17.15,-14.86), AZ (6.55,-11.95),
32 data pts
|
notebooks/api_guide/waveform_examples.ipynb | ###Markdown
Square Wave
###Code
%%timeit
cpwm = signal.square(ct, duty=0.5)
%%timeit
gpwm = cusignal.square(gt, duty=0.5)
###Output
8.68 ms ± 2.17 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Gaussian Modulated Sinusoid
###Code
%%timeit
ci, cq, ce = signal.gausspulse(ct, fc=5, retquad=True, retenv=True)
%%timeit
gi, gq, ge = cusignal.gausspulse(gt, fc=5, retquad=True, retenv=True)
###Output
3.13 ms ± 417 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Chirp
###Code
%%timeit
cw = signal.chirp(ct, f0=6, f1=1, t1=10, method='linear')
%%timeit
gw = cusignal.chirp(gt, f0=6, f1=1, t1=10, method='linear')
%%timeit
cw = signal.chirp(ct, f0=1500, f1=250, t1=10, method='quadratic')
%%timeit
gw = cusignal.chirp(gt, f0=1500, f1=250, t1=10, method='quadratic')
###Output
2.8 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Unit Impulse
###Code
%%timeit
cimp = signal.unit_impulse(int(1e8), 'mid')
%%timeit
gimp = cusignal.unit_impulse(int(1e8), 'mid')
###Output
1.35 ms ± 32.9 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
|
codes/cnn_pli_final_new_.ipynb | ###Markdown
###Code
from keras import backend as K
K.set_image_dim_ordering('tf')
import os
import tensorflow as tf
import numpy as np
import scipy.io
import time
import datetime
import pandas as pd
from sklearn.model_selection import train_test_split, StratifiedKFold
from scipy.interpolate import griddata
from sklearn.preprocessing import scale
from functools import reduce
from keras.layers import Conv3D, MaxPool3D, Flatten, Dense, Conv2D, MaxPooling2D, Conv1D, MaxPool1D
from keras.models import Sequential
from keras.layers import Dropout, Input, BatchNormalization
from sklearn.metrics import confusion_matrix, accuracy_score
# from plotly.offline import iplot, init_notebook_mode
from keras.losses import categorical_crossentropy
from keras.optimizers import Adadelta
# import plotly.graph_objs as go
# from matplotlib.pyplot import cm
# from keras.models import Model
import numpy as np
import keras
# import h5py
from keras.utils import to_categorical
from sklearn.model_selection import cross_val_score
from keras.wrappers.scikit_learn import KerasClassifier
def load(datafile):
try:
dataMat = scipy.io.loadmat(datafile, mat_dtype=True)  # load the file passed in rather than a hardcoded name
print("Data loading complete. Shape is %r" % (dataMat['images_plv'].shape))
except:
try:
dataMat = pd.read_csv(datafile, index=False, header= None)
except:
dataMat=pd.read_excel(datafile, index=False, header= None)
try:
return dataMat['images_plv']
except:
return dataMat
def reformatInput(data, labels):
indices = np.random.permutation(147200)
trainIndices = [indices[:int(147200*.8)]]
validIndices = [indices[int(147200*.8):]]
if data.ndim == 3:
return [(data[trainIndices], np.squeeze(labels[trainIndices]).astype(np.int32)),
(data[validIndices], np.squeeze(labels[validIndices]).astype(np.int32))]
# (data[testIndices], np.squeeze(labels[testIndices]).astype(np.int32))]
elif data.ndim == 5:
return [(data[:, trainIndices], np.squeeze(labels[trainIndices]).astype(np.int32)),
(data[:, validIndices], np.squeeze(labels[validIndices]).astype(np.int32))]
def make_matrix(df):
# mat=np.array(df[1,:])
return df.values
# df=pd.read_csv('plv_csv.csv', header=None)
df=pd.read_csv('drive/My Drive/EEG/PLV_final_dist2.csv', header=None)
ldf=pd.read_csv('drive/My Drive/EEG/arousal_label_total.csv', header= None)
mat=make_matrix(df)
ldf=make_matrix(ldf)
ldf=np.asarray(ldf)
mat.shape
finalmat=[]
ldf.shape
for i in range(len(mat)):
finalmat.append(mat[i,:].reshape(32,32))
train=np.asarray(finalmat)
(X_train, y_train), (X_test, y_test) = reformatInput(train, ldf)
# y_train=to_categorical(y_train)
# y_test=to_categorical(y_test)
X_train = X_train.astype(float).reshape(117760,32,32,1)
X_test = X_test.astype(float).reshape(147200-117760,32,32,1)
y_train
from keras import backend as K
K.set_image_dim_ordering('tf')
def make_model():
num_category = 2
# t_train=y_train
# y_test=y_val
# y_train = keras.utils.to_categorical(y_train, num_category)
# y_test = keras.utils.to_categorical(y_val, num_category)
model = Sequential()
#convolutional layer with rectified linear unit activation
model.add(Conv2D(32, kernel_size=3,activation='tanh',input_shape=(32,32,1), ))
#32 convolution filters used each of size 3x3
#again
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.2))
model.add(Conv2D(64, 3, activation=keras.activations.tanh))
model.add(Conv2D(128, kernel_size=3,activation='tanh'))
# #64 convolution filters used each of size 3x3
# #choose the best features via pooling
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
# model.add(Dropout(0.25))
# model.add(Conv2D(128, kernel_size=3,activation='relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(BatchNormalization())
# model.add(Dropout(0.2))
# randomly turn neurons on and off to improve convergence
# model.add(Dropout(0.25))
# model.add(Conv2D(256, (3, 3), activation='relu'))
# #64 convolution filters used each of size 3x3
# #choose the best features via pooling
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(BatchNormalization())
# model.add(Dropout(0.2))
# model.add(Conv2D(512, kernel_size=(3, 3),
# activation='relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(BatchNormalization())
# model.add(Dropout(0.1))
# # randomly turn neurons on and off to improve convergence
# model.add(Dropout(0.25))
# # flatten since too many dimensions, we only want a classification output
model.add(Flatten())
#fully connected to get all relevant data
model.add(Dense(128, activation='tanh'))
#one more dropout for convergence' sake :)
# model.add(Dropout(0.5))
#output a softmax to squash the matrix into output probabilities
model.add(Dense(2, activation='softplus'))
print(model.summary())
# model.compile(loss=keras.losses.binary_crossentropy,
# optimizer=keras.optimizers.Adam(.0001),
# metrics=['accuracy'])
return model
model=make_model()
# print(y_train.shape)
# model.compile(loss=keras.losses.sparse_categorical_crossentropy,
# optimizer=keras.optimizers.Adadelta(),
# metrics=['accuracy'])
# batch_size = 256
# num_epoch = 100
# #model training
# model_log = model.fit(X_train, y_train,
# batch_size=batch_size,
# epochs=num_epoch,
# verbose=1,
# validation_data=(X_test, y_test))
print(y_train.shape)
model.compile(loss=keras.losses.sparse_categorical_crossentropy,
optimizer=keras.optimizers.SGD(.00001, decay=10**-6, momentum=0.9, nesterov=True),
metrics=['accuracy'])
batch_size = 256
num_epoch = 1000
#model training
model_log = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=num_epoch,
verbose=1,
validation_data=(X_test, y_test))
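# Added illustrative follow-up (a sketch, not part of the original notebook): score the
# held-out split after training and inspect the learning curves recorded by fit().
# Metric names depend on the Keras version (e.g. 'acc' vs 'accuracy').
val_loss, val_acc = model.evaluate(X_test, y_test, verbose=0)
print("validation loss: {:.4f} | validation accuracy: {:.4f}".format(val_loss, val_acc))
print("history keys:", list(model_log.history.keys()))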
# a=(model.predict(X_train))
# accuracy_score(y_pred=a,y_true=y_test)
# neural_network = KerasClassifier(build_fn=model,
# epochs=1000,
# batch_size=100,
# verbose=0)
# kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
# cross_val_score(model, train, ldf, cv=kfold, scoring="accuracy")
# y_test
# a
# def fold(k):
# folds = list(StratifiedKFold(n_splits=k, shuffle=True, random_state=1).split(train, ldf))
# return folds, X_train, y_train
# k = 7
# folds, X_train, y_train = fold(k=7)
# for j, (train_idx, val_idx) in enumerate(folds):
# print('\nFold ',j)
# X_train_cv = X_train[train_idx]
# y_train_cv = y_train[train_idx]
# X_valid_cv = X_train[val_idx]
# y_valid_cv= y_train[val_idx]
# # name_weights = "final_model_fold" + str(j) + "_weights.h5"
# # callbacks = get_callbacks(name_weights = name_weights, patience_lr=10)
# # generator = gen.flow(X_train_cv, y_train_cv, batch_size = batch_size)
# # model = get_model()
# model.fit(
# # generator,
# steps_per_epoch=len(X_train_cv)/batch_size,
# epochs=15,
# shuffle=True,
# verbose=1,
# validation_data = (X_valid_cv, y_valid_cv),
# callbacks = callbacks)
# print(model.evaluate(X_valid_cv, y_valid_cv))
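# A hedged, runnable sketch of the k-fold idea commented out above (the fold count and
# epoch budget are illustrative choices, not from the original notebook). It reuses
# `train`, `ldf` and make_model() defined earlier and compiles a fresh model per fold.
from sklearn.model_selection import StratifiedKFold
X_all = train.astype(float).reshape(-1, 32, 32, 1)
y_all = np.squeeze(ldf).astype(np.int32)
fold_acc = []
for fold_no, (tr_idx, va_idx) in enumerate(StratifiedKFold(n_splits=5, shuffle=True, random_state=1).split(X_all, y_all)):
    cv_model = make_model()
    cv_model.compile(loss=keras.losses.sparse_categorical_crossentropy,
                     optimizer=keras.optimizers.SGD(.00001, decay=10**-6, momentum=0.9, nesterov=True),
                     metrics=['accuracy'])
    cv_model.fit(X_all[tr_idx], y_all[tr_idx], batch_size=256, epochs=5, verbose=0)
    fold_acc.append(cv_model.evaluate(X_all[va_idx], y_all[va_idx], verbose=0)[1])
    print("Fold {}: validation accuracy {:.4f}".format(fold_no, fold_acc[-1]))
print("Mean CV accuracy: {:.4f}".format(np.mean(fold_acc)))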
###Output
_____no_output_____ |
MLP_and_Back_Propagation_demo.ipynb | ###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
print ("hello")
print ("Third commit!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
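# Added sanity check (an illustrative sketch): compare the hand-written sigmoid and
# sigmoid_diff against torch's built-in sigmoid and autograd on a small random tensor.
_x = torch.randn(4, 3)
print(torch.allclose(sigmoid(_x), torch.sigmoid(_x)))       # forward values agree
_x.requires_grad_(True)
torch.sigmoid(_x).sum().backward()
print(torch.allclose(_x.grad, sigmoid_diff(_x.detach())))   # derivative agrees with autograd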
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
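# Added sketch (not part of the original notebook): the markdown above mentions
# optimizer.step(); the same training loop can let torch.optim apply the updates instead
# of the manual W.data subtraction. Layer sizes and hyper-parameters reuse the values
# defined earlier, and the MSE objective is kept for comparability.
mlp = nn.Sequential(nn.Linear(input_dim, num_of_hidden_nodes), nn.Sigmoid(),
                    nn.Linear(num_of_hidden_nodes, output_dim), nn.Sigmoid()).cuda()
optimizer = optim.SGD(mlp.parameters(), lr=learning_rate)
for epoch in range(num_epochs):
    for x_batch, y_batch in train_loader:
        x_batch, y_batch = x_batch.view(-1, 784).cuda(), y_batch.cuda()
        one_hot = torch.zeros(y_batch.size(0), output_dim, device=x_batch.device)
        one_hot.scatter_(1, y_batch[:, None], 1)
        loss = (mlp(x_batch) - one_hot).pow(2).sum().mul(0.5)
        optimizer.zero_grad()   # clear gradients from the previous step
        loss.backward()         # autograd fills in .grad for every parameter
        optimizer.step()        # SGD update, replacing the manual W.data -= ... lines
    print("Epoch {}: last-batch loss {:.3f}".format(epoch, loss.item()))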
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
print("Hello, how are you?")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 'one'
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = "1"
a + b
#This is code for reference, the error has been resolved
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
print("check module")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = no of classes in th dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Basics of MLP and Backpropagation on MNIST datasetIn this tutorial we will understand how to implement a Multilayer perceptron architecture with one hidden layer. This tutorial has two parts: (a) Implementing Back-propagation from scratch (b) Using the in-built 'Autograd' module to train the MLP network.To make data loading simple, we would use the torchvision package created as part of PyTorch which has data loaders for standard datasets such as ImageNet, CIFAR10, MNIST. Import all the required packages
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
print("Importing done!")
###Output
_____no_output_____
###Markdown
Initialize the variables
###Code
batch_size = 32 # Batch size
input_dim = 784 # Input dimension (For MNIST dataset each image is of size 28 x 28 = 784)
num_of_hidden_nodes = 100 # number of hidden nodes in hidden layer
output_dim = 10 # Number of output nodes = number of classes in the dataset. In this case it is 10
learning_rate = 0.1
num_epochs = 5
a = 'hello'
b = 1
a + b
###Output
_____no_output_____
###Markdown
Load the MNIST data. The dataset is downloaded into the current working directory the first time this cell runs, so the argument download is set to True. We then normalize the images with the standard MNIST mean and standard deviation.
###Code
train_loader = torch.utils.data.DataLoader(datasets.MNIST('.', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size= batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Sigmoid activation function and its derivative$\sigma(x)=\frac{1}{1+e^{-x}}$$\sigma^{'}(x) = \sigma(x)(1-\sigma(x))$
###Code
def sigmoid(x):
return 1/torch.exp(x.mul(-1)).add(1)
def sigmoid_diff(x):
return torch.mul(sigmoid(x), sigmoid(x).mul(-1).add(1))
# tensor = torch.FloatTensor([[1,2,3],[1,2,3]])
# print(sigmoid(tensor)) # You can use it for debugging
# torch.sigmoid(tensor)
###Output
_____no_output_____
###Markdown
Initialize the weight matrices with some random values$W_1 \in \mathbb{R}^{784 x 100}$$W_2 \in \mathbb{R}^{100 x 10}$
###Code
# Initialize the weights
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor) # Weights between input and hidden layer
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor) # Weights between hidden layer and output
###Output
_____no_output_____
###Markdown
The training loop with manual backpropagationIn each epoch, we will have several batches of data. We take each of the batches and do the forward pass. Then based on the error we back-propagate.![alt text](images/mlp.png "MLP with 3-layers")Assume, batch_size = 1, matrix multiplication $*$ and element-wise multiplication $.$ Mean-Squared Loss Function:$L = 0.5*(output - true\_output)^2$ Forward Pass:$Z = \sigma(W_1^{T}X)$ [$\mathbb{R}^{1 x 100}$]$output = \sigma(W_2^{T}Z)$ [$\mathbb{R}^{1 x 10}$] Backward Pass:Derivative of loss: $diff = (output - true\_output)$ [$\mathbb{R}^{1 x 10}$]$\frac{\partial L}{\partial W_2} = Z^{T}*(diff.\sigma^{'}(output))$ [$\mathbb{R}^{100 x 10}$]$\frac{\partial L}{\partial W_1} = X^{T} *((diff.\sigma^{'}(output))*W_2^{T}).\sigma^{'}(Z)$ [$\mathbb{R}^{784 x 100}$] Parameter Update:$W_1 = W_1 - \eta \frac{\partial L}{\partial W_1}$$W_2 = W_2 - \eta \frac{\partial L}{\partial W_2}$
###Code
for epoch in range(0, num_epochs):
correct = 0
loss = 0
y_batch_onehot = torch.FloatTensor(batch_size, output_dim)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Forward Pass
x_batch = x_batch.view(-1, 784)
hidden_state_output = sigmoid(torch.mm(x_batch, W_1))
output = sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.zero_()
y_batch_onehot.scatter_(1, y_batch[:, None], 1)
# Loss (Mean-Squared error)
loss += (output - y_batch_onehot).pow(2).sum()*0.5
_, predicted_class = output.max(1)
correct += predicted_class.eq(y_batch).sum()
#Backward Pass (Back-Propagation)
# Derivative of MSE Loss
diff = (output - y_batch_onehot)
grad_w2 = torch.mm(hidden_state_output.t(),torch.mul(diff, sigmoid_diff(output))) # 100 x 10 dimensional
grad_w1 = torch.mm(x_batch.t(),torch.mul(torch.mm(torch.mul(diff, sigmoid_diff(output)), W_2.t())
,sigmoid_diff(hidden_state_output))) # 784 x 100
# Perform gradient descent
W_1 -= learning_rate*grad_w1
W_2 -= learning_rate*grad_w2
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____
###Markdown
Using in-built Autograd functionloss.backward(): calculates the gradients of the loss function w.r.t all the parameters in the networkoptimizer.step(): updates all the parameters of the networks
###Code
# import pdb
learning_rate = 0.1
W_1 = torch.randn(input_dim, num_of_hidden_nodes).type(torch.FloatTensor).cuda()
W_2 = torch.randn(num_of_hidden_nodes, output_dim).type(torch.FloatTensor).cuda()
W_1.requires_grad=True
W_2.requires_grad=True
y_batch_onehot = torch.FloatTensor(batch_size, output_dim).cuda()
for epoch in range(0, num_epochs):
correct = 0
total_loss = 0
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
x_batch = x_batch.view(-1,784).cuda()
y_batch = y_batch.cuda()
# Forward Pass
hidden_state_output = torch.sigmoid(torch.mm(x_batch, W_1))
output = torch.sigmoid(torch.mm(hidden_state_output, W_2))
# Convert the labels to one hot encoded format
y_batch_onehot.data.zero_()
y_batch_onehot.data.scatter_(1, y_batch[:, None].data, 1)
# Loss (Mean-Squared error)
# pdb.set_trace()
loss = (output - y_batch_onehot).pow(2).sum().mul(0.5)
total_loss += loss.item()
loss.backward()
# Calculate no of correct classifications
_, predicted_class = output.max(1)
correct += predicted_class.data.eq(y_batch.data).sum()
W_1.data -= learning_rate * W_1.grad.data
W_2.data -= learning_rate * W_2.grad.data
# Manually zero the gradients before running the backward pass
W_1.grad.data.zero_()
W_2.grad.data.zero_()
print("Epoch: {0} | loss: {1} | accuracy: {2}".format(epoch, total_loss/len(train_loader)
, correct/float(len(train_loader.dataset))))
###Output
_____no_output_____ |
14-Strings-and-Regular-Expressions.ipynb | ###Markdown
*This notebook comes from [A Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas (OReilly Media, 2016). This content is licensed [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE). The full notebook listing is available at https://github.com/jakevdp/WhirlwindTourOfPython.* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
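###Markdown
As a small aside, both methods also accept a tuple of candidates, returning ``True`` if any one of them matches:
###Code
# a tuple of prefixes is tested left to right; True if any prefix matches
line.startswith(('the', 'a'))
###Output
_____no_output_____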
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method. Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
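###Markdown
If you only want some of the occurrences replaced, ``replace()`` also accepts an optional count as its third argument:
###Code
# only the first two occurrences of 'o' are replaced
line.replace('o', '--', 2)
###Output
_____no_output_____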
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](#Flexible-Pattern-Matching-with-Regular-Expressions).

Splitting and partitioning strings

If you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for. Both will return a sequence of substrings. The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string. The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between. The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
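###Markdown
``split()`` also accepts an explicit separator, which is handy for delimited data; for example, splitting on a comma:
###Code
# with an explicit separator, split() no longer splits on whitespace
'1,2,3'.split(',')
###Output
_____no_output_____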
###Markdown
A related method is ``splitlines()``, which splits on newline characters. Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format Strings

In the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats. Another use of string methods is to manipulate string *representations* of values of other types. Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted. Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there. If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string. For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
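###Markdown
The format specification supports much more than precision; as a couple of further illustrations, you can set a minimum field width with alignment, or group digits with commas:
###Code
# '>10.2f' right-aligns the number in a field of width 10 with two decimals;
# ',' inserts commas as a thousands separator
print("{0:>10.2f}".format(pi))
print("{0:,}".format(1000000))
###Output
_____no_output_____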
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted. The "``:``" marks that format codes will follow. The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.

This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available. For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation.

Flexible Pattern Matching with Regular Expressions

The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data. But even more powerful tools are available in Python's built-in *regular expression* module. Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection. My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python. I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions).

Fundamentally, regular expressions are a means of *flexible pattern matching* in strings. If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard. For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
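###Markdown
Note that the ``re`` module also provides module-level functions that take the pattern directly, so an explicit ``compile()`` step is optional for one-off uses:
###Code
# the pattern is compiled (and cached) behind the scenes
re.split(r'\s+', line)
###Output
_____no_output_____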
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string. Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern. In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it. Thus, the regular expression matches any substring consisting of one or more spaces. The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``). We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
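###Markdown
As one more quick comparison, anchoring a pattern with ``"^"`` ties it to the start of the string, giving behavior much like ``str.startswith()``:
###Code
# '^the' only matches if the string begins with 'the'
print(line.startswith('the'))
print(bool(re.search(r'^the', line)))
###Output
_____no_output_____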
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions.

A more sophisticated example

But, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods? The advantage is that regular expressions offer *far* more flexibility. Here we'll consider a more complicated example: the common task of matching email addresses. I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on. Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses:
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido). We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple. For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes. So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful! If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here!

Basics of regular expression syntax

The syntax of regular expressions is much too large a topic for this short section. Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more. My hope is that the following quick primer will enable you to use these resources effectively.

Simple strings are matched directly

If you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meanings

While simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:

```
. ^ $ * + ? { } [ ] \ | ( )
```

We will discuss the meaning of some of these momentarily. In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters. For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string.

Special characters can match character groups

Just as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning. These special characters match specified groups of characters, and we've seen them before. In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*. Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
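###Markdown
Other markers work the same way; for instance, ``"\d"`` matches any digit, so ``"\d+"`` pulls runs of digits out of mixed text:
###Code
# r'\d+' finds every maximal run of digits in the string
re.findall(r'\d+', 'the fox is 9 years old, not 12')
###Output
_____no_output_____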
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups

If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``. For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated characters

If you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``. Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
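###Markdown
The curly-brace syntax can also take a range of repetition counts; for example, ``"{2,3}"`` matches between two and three repetitions of what precedes it (the variable name below is purely illustrative):
###Code
# 'ab{2,3}' matches an 'a' followed by two or three 'b' characters
regex_range = re.compile(r'ab{2,3}')
regex_range.findall('a ab abb abbb abbbb')
###Output
_____no_output_____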
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters. If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period. With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?).

Parentheses indicate *groups* to extract

For compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
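###Markdown
A single named group can also be retrieved directly with ``group()``:
###Code
# look up one named group from the match object
match.group('user')
###Output
_____no_output_____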
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching sytaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile('\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile('\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:| Character | Description || Character | Description ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit || ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace || ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.htmlre-syntax). Square brackets match custom character groupsIf the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:| Character | Description | Example ||-----------|-------------|---------|| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` || ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` || ``{n}`` | Match ``n`` repetitions of preeeding | ``"ab{2}"`` matches ``"abb"`` || ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` | With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address.We can go a bit further and *name* the extracted components using the ``"(?P )"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
###Markdown
String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching sytaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile('\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile('\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
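# As a quick extra illustration, "\d" (listed in the table below) matches any single digit:
re.compile(r'\d').findall('the fox is 9 years old')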
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:| Character | Description || Character | Description ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit || ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace || ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax). Square brackets match custom character groupsIf the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
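# The '?' marker (see the table below) matches zero or one repetition of what precedes it:
re.compile(r'colou?r').findall('color colour')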
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:| Character | Description | Example ||-----------|-------------|---------|| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` || ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` || ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` || ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` | With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('barack.obama@whitehouse.gov')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address.We can go a bit further and *name* the extracted components using the ``"(?P )"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('guido@python.org')
match.groupdict()
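# A single named group can also be pulled out directly:
match.group('user')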
###Output
_____no_output_____
###Markdown
String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations; they are functionally equivalent:
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting Strings: Adjusting CasePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting Strings: Adding and Removing SpacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and Replacing SubstringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ValueError:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
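# Both methods also accept a tuple of alternatives to test against:
line.startswith(('a', 'the'))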
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
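# An optional third argument limits how many occurrences are replaced:
line.replace('o', '--', 2)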
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions below. Splitting and Partitioning StringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
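# rpartition(), mentioned in the next paragraph, splits at the *last* occurrence instead,
# searching from the right of the string:
line.rpartition('o')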
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the above methods we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in Section X.X:
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
Here the "``0``", as above, refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are [entire books](http://shop.oreilly.com/product/9780596528126.do) written on the topic, so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems which might be addressed using regular expressions, as well as a basic idea of how to use them in Python.Below I'll suggest some references for learning more.Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e. files with extension ``.ipynb``) with ``"Python"`` in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile('\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character which matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character which indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from above:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A More Sophisticated ExampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile('\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('--@--.--', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the above regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters which end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('barack.obama@whitehouse.gov')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions which will match *all* valid emails, but beware: they are much more involved than the simple expression used above! Basics of Regular Expression SyntaxThe Syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. 1. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
2. Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters which have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these below.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. 3. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp above, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:| Character | Description || Character | Description ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | match any digit || ``"\D"`` | match any non-digit || ``"\s"`` | match any whitespace || ``"\S"`` | match any non-whitespace || ``"\w"`` | match any alphanumeric char || ``"\W"`` | match any non-alphanumeric char |This is **not** a comprehensive list or description; for more details see, e.g., Python's [regular expression syntax](https://docs.python.org/3/library/re.html#re-syntax) documentation. 4. Square brackets match custom character groupsIf the built-in character groups above aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For example, you may need to extract specific numerical codes in a document which consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
5. Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, e.g. ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions: curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions: for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
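# Curly braces can also give a range (see the table below): between two and three repetitions here
re.compile(r'ab{2,3}').findall('ab abb abbb abbbb')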
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:| Character | Description | Example ||-----------|-------------|---------|| ``?`` | match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` || ``*`` | match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` || ``{n}`` | match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` || ``{m,n}`` | match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` | With the above basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('barack.obama@whitehouse.gov')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?) 6. Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using ``"()"`` to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address.We can go a bit further and *name* the extracted components using the ``"(?P )"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('guido@python.org')
match.groupdict()
###Output
_____no_output_____
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
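# Unlike rjust(), zfill() knows how to handle a leading sign:
'-435'.zfill(10)
'-435'.rjust(10, '0')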
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
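# rindex() behaves the same way, but raises a ValueError if the substring is absent:
line.rindex('a')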
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile('\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile('\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('--@--.--', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('barack.obama@whitehouse.gov')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:| Character | Description || Character | Description ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit || ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace || ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax). Square brackets match custom character groupsIf the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
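# findall() with the same pattern returns the vowels themselves rather than the pieces between them:
regex.findall('consequential')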
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:| Character | Description | Example ||-----------|-------------|---------|| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` || ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` || ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` || ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` | With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('barack.obama@whitehouse.gov')
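# One shortcoming remains: the suffix must still be exactly three letters, so a two-letter
# domain such as the (made-up) address below is still missed:
email2.findall('guido@python.io')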
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address.We can go a bit further and *name* the extracted components using the ``"(?P )"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('guido@python.org')
match.groupdict()
###Output
_____no_output_____
###Markdown
*This notebook is an adaptation by J. Rafael Rodríguez Galván of Jake VanderPlas's "[Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp)"; both the [original content](https://github.com/jakevdp/WhirlwindTourOfPython) and the [current adaptation](https://github.com/rrgalvan/PythonIntroMasterMatemat) are available on GitHub.**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
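###Markdown
As a quick sketch of the ``rpartition()`` method mentioned above: with a separator that occurs more than once, ``partition()`` splits around the first occurrence while ``rpartition()`` splits around the last:
###Code
# partition() splits around the first occurrence of the separator...
line.partition('o')
# ...while rpartition() splits around the last occurrence (the 'o' in "dog")
line.rpartition('o')
###Output
_____no_output_____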
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted. The "``:``" marks that format codes will follow. The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format. This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available. For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation. Flexible Pattern Matching with Regular Expressions The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data. But even more powerful tools are available in Python's built-in *regular expression* module. Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection. My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python. I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions). Fundamentally, regular expressions are a means of *flexible pattern matching* in strings. If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard. For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
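###Markdown
One difference worth noting (a small sketch): while ``str.find()`` signals a missing substring with ``-1``, ``regex.search()`` returns ``None``, so the result is usually checked before calling methods like ``start()``:
###Code
# str.find() returns -1 when the substring is absent
print(line.find('bear'))
# re's search() returns None when the pattern is absent
print(regex.search('the quick brown bear'))
###Output
-1
None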
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups
If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
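###Markdown
The predefined classes from the table above (``"\d"``, ``"\s"``, ``"\w"`` and their negations) can be used in the same way; as a small sketch, ``"\d"`` alone pulls out individual digits, and it can also be combined with a custom range inside square brackets:
###Code
# "\d" matches any single digit
re.compile(r'\d').findall('G2, H6, and room 101')
# predefined classes also work inside square brackets: runs of digits or capital letters
re.compile(r'[\dA-Z]+').findall('G2, H6, and room 101')
###Output
_____no_output_____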
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
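###Markdown
Even this version has limits; as a quick sketch using a made-up address, the pattern still insists on exactly three lower-case letters after the final dot, so a two-letter country-code suffix is missed entirely:
###Code
# returns an empty list: '.uk' has only two letters after the final dot
email2.findall('[email protected]')
###Output
_____no_output_____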
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).* *The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings. This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*. Such string manipulation patterns come up often in the context of data science work, and are one big perk of Python in this context. Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
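###Markdown
As a quick sketch of the ``rpartition()`` method mentioned above: with a separator that occurs more than once, ``partition()`` splits around the first occurrence while ``rpartition()`` splits around the last:
###Code
# partition() splits around the first occurrence of the separator...
line.partition('o')
# ...while rpartition() splits around the last occurrence (the 'o' in "dog")
line.rpartition('o')
###Output
_____no_output_____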
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Since the release of Python 3.6, *f-strings* or *'formatted string literals'* are a nicer alternative to format strings. You can read more about them in [PEP 498](https://www.python.org/dev/peps/pep-0498/). With an *f-string*, the line above becomes:
###Code
f"The value of pi is {pi}"
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
With *f-strings* the above statement can be rewritten as:
###Code
first, last = 'A', 'Z'
f"""First letter: {first}. Last letter: {last}."""
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use either of the following:
###Code
print("pi = {0:.3f}".format(pi))
print(f"pi = {pi:.3f}")
###Output
pi = 3.142
pi = 3.142
###Markdown
In both statements, the "``:``" marks that format codes will follow. The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format. An alternative *f-string* representation, available in Python 3.8 and later, is shown below; when debugging with *print* statements, it gives a terse yet clear view of the variables being tracked.
###Code
print(f"{pi=}")
###Output
pi=3.14159
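###Markdown
As a further sketch of the format-specification mini-language (standard codes: a comma for thousands separators, ``"%"`` for percentages, ``"b"`` for binary):
###Code
print(f"{1234567.891:,.2f}")   # thousands separators plus two decimal places
print(f"{0.25:.1%}")           # a fraction rendered as a percentage
print(f"{42:08b}")             # an integer rendered as zero-padded binary
###Output
1,234,567.89
25.0%
00101010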
###Markdown
These two styles of format specification (*.format()* and *f-string*) are very flexible, and the examples here barely scratch the surface of the formatting options available. For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation. Flexible Pattern Matching with Regular Expressions The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data. But even more powerful tools are available in Python's built-in *regular expression* module. Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection. My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python. I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions). Fundamentally, regular expressions are a means of *flexible pattern matching* in strings. If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard. For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
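###Markdown
One difference worth noting (a small sketch): while ``str.find()`` signals a missing substring with ``-1``, ``regex.search()`` returns ``None``, so the result is usually checked before calling methods like ``start()``:
###Code
# str.find() returns -1 when the substring is absent
print(line.find('bear'))
# re's search() returns None when the pattern is absent
print(regex.search('the quick brown bear'))
###Output
-1
None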
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups
If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
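###Markdown
The predefined classes from the table above (``"\d"``, ``"\s"``, ``"\w"`` and their negations) can be used in the same way; as a small sketch, ``"\d"`` alone pulls out individual digits, and it can also be combined with a custom range inside square brackets:
###Code
# "\d" matches any single digit
re.compile(r'\d').findall('G2, H6, and room 101')
# predefined classes also work inside square brackets: runs of digits or capital letters
re.compile(r'[\dA-Z]+').findall('G2, H6, and room 101')
###Output
_____no_output_____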
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
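###Markdown
Even this version has limits; as a quick sketch using a made-up address, the pattern still insists on exactly three lower-case letters after the final dot, so a two-letter country-code suffix is missed entirely:
###Code
# returns an empty list: '.uk' has only two letters after the final dot
email2.findall('[email protected]')
###Output
_____no_output_____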
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
###Markdown
*This notebook comes from [A Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas (O'Reilly Media, 2016). This content is licensed [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE). The full notebook listing is available at https://github.com/jakevdp/WhirlwindTourOfPython.* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings. This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*. Such string manipulation patterns come up often in the context of data science work, and are one big perk of Python in this context. Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
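###Markdown
As a quick sketch of the ``rpartition()`` method mentioned above: with a separator that occurs more than once, ``partition()`` splits around the first occurrence while ``rpartition()`` splits around the last:
###Code
# partition() splits around the first occurrence of the separator...
line.partition('o')
# ...while rpartition() splits around the last occurrence (the 'o' in "dog")
line.rpartition('o')
###Output
_____no_output_____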
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted. The "``:``" marks that format codes will follow. The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format. This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available. For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation. Flexible Pattern Matching with Regular Expressions The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data. But even more powerful tools are available in Python's built-in *regular expression* module. Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection. My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python. I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions). Fundamentally, regular expressions are a means of *flexible pattern matching* in strings. If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard. For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
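###Markdown
One difference worth noting (a small sketch): while ``str.find()`` signals a missing substring with ``-1``, ``regex.search()`` returns ``None``, so the result is usually checked before calling methods like ``start()``:
###Code
# str.find() returns -1 when the substring is absent
print(line.find('bear'))
# re's search() returns None when the pattern is absent
print(regex.search('the quick brown bear'))
###Output
-1
None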
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
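###Markdown
When the literal text is not known in advance, escaping by hand becomes tedious; as a brief sketch, the standard-library helper ``re.escape()`` builds the escaped pattern automatically:
###Code
import re

# re.escape() inserts backslashes before all regex metacharacters for us
pattern = re.escape('the cost is $20 (approx.)')
regex = re.compile(pattern)
regex.findall('I was told the cost is $20 (approx.) at most')
###Output
_____no_output_____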
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
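###Markdown
As one more quick sketch combining markers from the table that follows, ``"\d"`` and ``"\S"`` can be chained just like ``"\w"`` and ``"\s"``:
###Code
import re

# one or more digits, a whitespace character, then a run of non-whitespace
regex = re.compile(r'\d+\s\S+')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____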
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups

If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
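###Markdown
For comparison (a brief sketch), the zero-or-more marker ``"*"`` described in the table below also accepts a bare ``"a"``, while ``"+"`` does not:
###Code
import re

# 'ab+' needs at least one 'b'; 'ab*' is satisfied with none at all
print(re.findall(r'ab+', 'a ab abb b'))
print(re.findall(r'ab*', 'a ab abb b'))
###Output
['ab', 'abb']
['a', 'ab', 'abb']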
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
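###Markdown
One such shortcoming, sketched with a made-up address: since ``"[\w.]+"`` treats the period like any other character, the pattern happily accepts consecutive dots that no mail server would:
###Code
import re

email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
# the doubled dot is not a valid address, but the pattern accepts it anyway
email2.findall('[email protected]')
###Output
_____no_output_____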
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
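###Markdown
For comparison (a brief sketch), the anonymous groups of ``email3`` can be read positionally from a match object using ``groups()``:
###Code
import re

email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
match = email3.match('[email protected]')
# groups() returns the captured sub-components as a tuple
match.groups()
###Output
_____no_output_____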
###Markdown
String Manipulation and Regular Expressions 字符串操作和正则表达式 > One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Python语言对于字符串的操作是其一大亮点。本章会讨论Python的一些內建的字符串操作和格式化方法。在这之后,我们会简单讨论一下一个非常有用的话题*正则表达式*。这类字符串的操作经常会在数据科学中出现,因此也是Python中很重要的一节。> Strings in Python can be defined using either single or double quotations (they are functionally equivalent):Python中的字符串可以使用单引号或双引号定义(它们的功能是一致的):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
> In addition, it is possible to define multi-line strings using a triple-quote syntax:除此之外,还可以使用连续的三个引号定义多行的字符串:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
> With this, let's take a quick tour of some of Python's string manipulation tools.好了,接下来我们来快速的看一下Python的字符串操作工具。 Simple String Manipulation in Python Python的简单字符串操作> For basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper对于基本的字符串操作来说,Python內建的字符串方法使用起来非常方便。如果你有在C或其他底层语言的编程经历的话,你会发现Python的字符串操作非常简单。我们之前介绍了Python的字符串类型和一些方法;下面我们稍微深入的了解一下。 Formatting strings: Adjusting case 格式化字符串:转换大小写> Python makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:Python对字符串进行大小写转换非常容易。我们将会看到`upper()`,`lower()`,`capitalize()`,`title()`和`swapcase()`方法,下面我们用一个大小写混乱的字符串作为例子来说明:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
> To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:想要将整个字符串转为大写或者小写,使用`upper()`或者`lower()`方法:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
> A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:还有一个很常见的格式化需求,将每个单词的首字母编程大写,或者每个句子的首字母变为大写。可以使用`title()`和`capitalize()`方法:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
> The cases can be swapped using the ``swapcase()`` method:可以使用`swapcase()`方法切换大小写:
###Code
fox.swapcase()
###Output
_____no_output_____
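###Markdown
A closely related method, shown here as a brief sketch, is ``casefold()``: a more aggressive ``lower()`` intended for caseless comparisons:
###Code
# casefold() handles characters such as the German eszett that lower() leaves alone
print("Straße".lower())
print("Straße".casefold() == "strasse".casefold())
###Output
straße
True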
###Markdown
Formatting strings: Adding and removing spaces 格式化字符串:增加和去除空格> Another common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:另外一个常见的需求是在字符串开头或结束为止去除空格(或者其他字符)。`strip()`方法可以去除开头和结尾的空白。
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
> To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:如果需要去除右边或者左边的空格,可以使用`rstrip()`或`lstrip()`方法:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
> To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:想要去除非空格的其他字符,你可以将你想要去除的字符作为参数传给`strip()`方法:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
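###Markdown
Note that the argument to ``strip()`` is treated as a *set* of characters rather than as a substring; a brief sketch with a made-up string:
###Code
messy = "$$ total: $45 $$"
# every leading or trailing character that is '$' or ' ' is removed,
# in any order, until a character outside the set is reached
messy.strip('$ ')
###Output
_____no_output_____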
###Markdown
> The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.与strip相反的操作,往字符串中加入空格或其他字符,可以使用`center()`,`ljust()`,`rjust()`方法。> For example, we can use the ``center()`` method to center a given string within a given number of spaces:例如,我们可以使用`center()`方法在给定长度的空格中居中:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
> Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:同理,`ljust()`和`rjust()`让字符串在给定长度的空格中居左或居右:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
> All these methods additionally accept any character which will be used to fill the space.For example:上述的方法都可以接收一个额外的参数用来取代空白字符,例如:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
> Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:因为0填充也经常需要用到,因此Python提供了`zfill()`方法来直接提供0填充的功能:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substrings 查找和替换子串> If you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.如果你想要在字符串中查找特定的子串,內建的`find()`/`rfind()`,`index()`/`rindex()`以及`replace()`方法是最合适的选择。> ``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:`find()`和`index()`是非常相似的,它们都是查找子串在字符串中第一个出现的位置,返回位置的序号值:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
> The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:两个方法唯一的区别在于如果找不到子串情况下的处理方式;`find()`会返回-1,而`index()`会生成一个`ValueError`异常:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
> The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:相应的`rfind()`和`rindex()`方法很类似,区别是这两个方法查找的是子串在字符串中最后出现的位置。
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
> For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:对于需要检查字符串是否以某个子串开始或者结束,Python提供了`startswith()`和`endswith()`方法:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
> To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:要将字符串中的某个子串替换成新的子串的内容,可以使用`replace()`方法。下例中将`'brown'`替换成`'red'`:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
> The ``replace()`` function returns a new string, and will replace all occurrences of the input:`replace()`方法会返回一个新的字符串,并将里面所有找到的子串替换:
###Code
line.replace('o', '--')
###Output
_____no_output_____
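###Markdown
If only some occurrences should change, ``replace()`` also accepts an optional count that limits the number of substitutions, working from the left (a brief sketch):
###Code
line = 'the quick brown fox jumped over a lazy dog'
# only the first two occurrences of 'o' are replaced
line.replace('o', '--', 2)
###Output
_____no_output_____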
###Markdown
> For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions).想要更加灵活的使用`replace()`方法,参见[使用正则表达式进行模式匹配](Flexible-Pattern-Matching-with-Regular-Expressions)。 Splitting and partitioning strings 分割字符串> If you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.如果需要查找一个子串*并且*根据找到的子串的位置将字符串进行分割,`partition()`和/或`split()`方法正是你想要的。> The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:`partition()`方法返回三个元素的一个元组:查找的子串前面的子串,查找的子串本身和查找的子串后面的子串:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
> The ``rpartition()`` method is similar, but searches from the right of the string.`rpartition()`方法类似,不过是从字符串右边开始查找。> The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:`split()`方法可能更加有用;它会查找所有子串出现的位置,然后返回这些位置之间的内容列表。默认的子串会是任何的空白字符,返回字符串中所有的单词:
###Code
line.split()
###Output
_____no_output_____
###Markdown
> A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:还有一个`splitlines()`方法,会按照换行符分割字符串。我们以日本17世纪诗人松尾芭蕉的俳句为例:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
> Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:如果你需要撤销`split()`方法,可以使用`join()`方法,使用一个特定字符串将一个迭代器串联起来:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
> A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:使用换行符`"\n"`将刚才拆开的诗句连起来,恢复成原来的字符串:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format Strings 格式化字符串> In the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:在前面介绍的方法中,我们学习到了怎样从字符串中提取值和如果将字符串本身操作成需要的格式。对于字符串来说,还有一个重要的需求,就是将其他类型的值使用字符串*表达出来*。当然,你总是可以使用`str()`函数将其他类型的值转换为字符串,例如:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
> For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):对于更加复杂的格式,你可能试图使用在[Python语法: 操作符](04-Semantics-Operators.ipynb)介绍过的字符串运算来实现:
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
> A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:但是我们又一个更灵活的方式来处理格式化,那就是使用*格式化字符串*,也就是在字符串中含有特殊的标记代表格式(这个特殊标记指的是花括号),然后将需要表达的值插入到字符串的相应位置上。例如:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
> Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:在花括号`{}`之间,你可以加入需要的信息。例如你可以在花括号中加入数字,表示该位置插入的参数的序号:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
> If you include a string, it will refer to the key of any keyword argument:如果你在花括号中加入字符串,表示的是该位置插入的关键字参数的名称:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
> Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:最后,对于数字输入,你可以在花括号中加入格式化的代码控制数值转换为字符串的格式。例如,将一个浮点数转换为字符串,并且保留小数点后3位,可以这样写:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
> As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.如前所述,`"0"`表示参数位置序号。`":"`表示格式化代码分隔符。`".3f"`表示浮点数格式化的代码,小数点后保留3位。> This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation.这样的格式定义非常灵活,我们这里的例子仅仅是一个简单的介绍。想要查阅更多有关格式化字符串的语法内容,请参见Python在线文档有关[格式化定义](https://docs.python.org/3/library/string.htmlformatspec)的章节。 fstring(译者添加)Python3.6之后,提供了另外一种灵活高效的格式化字符串方法,叫做`fstring`。可以直接将变量值插入到格式化字符串中输出。如前面pi的例子:
###Code
f"The value of pi is {pi}"
###Output
_____no_output_____
###Markdown
An `fstring` is written by prefixing the string literal with `f`; the content to format is again specified with curly braces, which now contain the variable names directly. Another example:
###Code
first = 'A'
last = 'Z'
f"First letter: {first}. Last letter: {last}."
###Output
_____no_output_____
###Markdown
Likewise, numeric formatting works the same way: simply use `":"` after the variable name to separate it from the format code. Revisiting the floating-point formatting example above:
###Code
f"pi = {pi:.3f}"
###Output
_____no_output_____
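###Markdown
As a brief additional sketch (it assumes Python 3.6+ f-strings, as above), the format specification can itself contain replacement fields, so the width and precision can be supplied by variables:
###Code
pi = 3.14159
width, prec = 10, 4
# the outer braces format pi; the inner braces fill in the spec at run time
f"pi = {pi:{width}.{prec}f}"
###Output
_____no_output_____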
###Markdown
Flexible Pattern Matching with Regular Expressions 使用正则表达式实现模式匹配> The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.Python的`str`类型的內建方法提供了一整套强大的格式化、分割和操作字符串的工具。Python內建的*正则表达式*模块提供了更为强大的字符串操作工具。正则表达式是一个巨大的课题;在这个课题上可以写一本书来详细介绍(包括Jeffrey E.F. Friedl写的[*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)),所以期望在一个小节中介绍完它是不现实的。> My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).作者期望通过这个小节的介绍,能够让读者对于什么情况下需要使用正则表达式以及在Python中最基本的正则表达式使用方法有初步的了解。作者建议在[更多的正则表达式资源](Further-Resources-on-Regular-Expressions)中进一步拓展阅读和学习。> Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:从最基础上来说,正则表达式其实就是一种在字符串中进行*灵活模式匹配*的方法。如果你经常使用命令行,你可能已经习惯了这种灵活匹配机制,比方说`"*"`号,就是一个典型的通配符。我们来看一个例子,我们可以列示所有的IPython notebook(扩展名为*.ipynb*),然后文件名中含有"Python"的文件列表。
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
> Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching sytaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:正则表达式就是一种泛化了的"通配符",使用标准的语法对字符串进行模式匹配。Python中的正则表达式功能包含在`re`內建模块;作为一个简单的例子,我们使用`re`里面的`split()`方法来实现字符串`str`的字符串分割功能:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
> Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.本例中,我们首先*编译了*一个正则表达式,然后我们用这个表达式对字符串进行*分割*。就像`str`的`split()`方法会使用空白字符切割字符串一样,正则表达式的`split()`方法也会返回所有匹配输入的模式的字符串切割出来的字符串列表。> In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.在这个例子里,输入的模式是`"\s+"`:`"\s"`是正则表达式里面的一个特殊的字符,代表着任何空白字符(空格,制表符,换行等),`"+"`号代表前面匹配到的字符出现了*一次或多次*。因此,这个正则表达式的意思是匹配任何一个或多个的空白符号。> The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:这里的`split()`方法是一个在*模式匹配*之上的字符串分割方法;对于正则表达式来说,更加基础的可能是`match()`方法,它会返回字符串是否成功匹配到了某种模式:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
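###Markdown
The difference is worth a brief sketch: ``match()`` only checks the beginning of the string, while ``search()`` scans the whole string, so ``"abc "`` fails the former but satisfies the latter:
###Code
import re

regex = re.compile(r'\s+')
print(regex.match('abc '))   # None: the string does not *start* with whitespace
print(regex.search('abc '))  # a match object for the trailing space
###Output
_____no_output_____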
###Markdown
> Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:就像`split()`,正则表达式中也有相应的方法能够找到首个匹配位置(就像`str.index()`或者`str.find()`一样)或者是查找和替换(就像`str.replace()`)。我们还是以前面的那行字符串为例:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
> With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:可以使用`regex.search()`方法像`str.index()`或者`str.find()`那样查找模式位置:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
> Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:类似的,`regex.sub()`方法就像`str.replace()`那样替换字符串:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
> With a bit of thought, other native string operations can also be cast as regular expressions.其他的原始字符串操作也可以转换为正则表达式操作。 A more sophisticated example 一个更加复杂的例子> But, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.于是,你就会问,既然如此,为什么我们要用复杂的正则表达式的方法,而不用简单的字符串方法呢?原因就是正则表达式提供了更多的灵活性。> Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:下面我们来考虑一个更加复杂的例子:匹配电子邮件地址。作者会使用一个简单的(但又难以理解的)正则表达式,然后我们看看这个过程中发生了什么。如下:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
> Using this, if we're given a line from a document, we can quickly extract things that look like email addresses使用这个正则表达式,我们可以很快地在一行文本中提取出来所有的电子邮件地址:
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
> (Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).(请注意这两个地址都是编撰的;肯定有更好的方式能够联系上Guido,译者注:Guido是Python的创始人)。> We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:我们可以做更多的处理,比方说将电子邮件地址替换成另一个字符串,此处做了一个脱敏处理:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
> Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:最后,如果你需要匹配*任何*的电子邮件地址,那么上面的正则表达式还远远不够。它只允许地址由字母数字组成并且一级域名仅能支持少数的通用域名。因为下面的地址含有点`.`,因此只能匹配到一部分的电子邮件地址。
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
> This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here!这表明了如果你不小心的话,正则表达式会发生多奇怪的错误。如果你在网上搜索的话,你可以发现一些能够匹配*所有*的电子邮件地址的正则表达式,但是,它们比我们这个简单的版本难理解多了。 Basics of regular expression syntax 正则表达式基本语法> The syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively.正则表达式的语法对于这个小节的内容来说显得太庞大了。然而,了解一些基础的内容能够让读者走的更远:作者会在这里简单介绍一些最基本的结构,然后列出一个完整的资源以供读者继续深入研究和学习。作者希望通过这些简单的基础内容能让读者更加有效的阅读那些额外的资源。 Simple strings are matched directly 简单的字符串会直接匹配> If you build a regular expression on a simple string of characters or digits, it will match that exact string:如果你的正则表达式只包括简单的字符和数字的组合,那么它将匹配自身:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meanings 特殊含义的字符> While simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```> We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:普通的字符和数字会直接匹配,然后正则表达式中包含很多的特殊字符,他们是:```shell. ^ $ * + ? { } [ ] \ | ( )```一会我们会稍微详细的介绍其中的部分。同时,你需要知道的是,如果你希望直接匹配上述的特殊字符的话,你需要使用反斜杠`"\"`来转义他们:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
> The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:上面的正则表达式中的前缀`r`是说明改字符串是一个*原始字符串*; 在标准的Python字符串中,反斜杠用来转义并表示一个特殊字符。例如,制表符写成字符串的形式为`"\t"`:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
> Such substitutions are not made in a raw string:这种转义不会出现在原始字符串中:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
> For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string.因此,当你需要在正则表达式中使用反斜杠时,使用原始字符串是一个好的选择。 Special characters can match character groups 特殊字符能匹配一组字符> Just as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.就像反斜杠在正则表达式中能转义特殊字符那样,反斜杠也能将一些普通字符转义成特殊字符。这些特殊字符能代表一组或一类的字符组合,就像我们在前面的例子当中看到的那样。在电子邮件地址的正则表达式中,我们使用了字符`"\w"`,这个特殊字符代表着*所有的字母数字符号*。同样的,在前面的`split()`例子中,`"\s"`代表着*所有的空白字符*。> Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:把这两个特殊符号放在一起,我们就可以构造一个*任意两个字母或数字之间含有一个空格*的正则表达式:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
> This example begins to hint at the power and flexibility of regular expressions.这个例子已经开始展示正则表达式的力量和灵活性了。 > The following table lists a few of these characters that are commonly useful:> | Character | Description || Character | Description ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit || ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace || ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |下表列出了常用的特殊符号:| 特殊符号 | 描述 || 特殊符号 | 描述 ||-----------|-----------------------------||-----------|---------------------------------|| ``"\d"`` | 任意数字 || ``"\D"`` | 任意非数字 || ``"\s"`` | 任意空白符号 || ``"\S"`` | 任意非空白符号 || ``"\w"`` | 任意字符或数字 || ``"\W"`` | 任意非字符或数字 |> This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.htmlre-syntax).这张表很不完整;需要详细描述,请参见:[正则表达式语法文档](https://docs.python.org/3/library/re.htmlre-syntax)。 Square brackets match custom character groups 中括号匹配自定义的字符组> If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:如果內建的字符组并不满足你的要求,你可以使用中括号来指定你需要的字符组。例如,下例中的正则表达式匹配任意小写元音字母:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
> Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:你还可以使用横线`"-"`来指定字符组的范围:例如,`"[a-z]"`匹配任意小写字母,`"[1-3]"`匹配`"1"`,`"2"`或`"3"`。例如,你希望从某个文档中提取出特定的数字代码,该代码由一个大写字母后面跟一个数字组成。你可以这样写:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated characters 通配符匹配重复次数的字符> If you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:如果你想要匹配一个字符串包含3个字符或数字,当然你可以这样写`"\w\w\w"`。但是因为这个需求太普遍了,因此正则表达式将它做成了重复次数的规则 - 使用花括号中的数字表示重复的次数:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
> There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:当然还有一些标记能够匹配任意次数的重复 - 例如,`"+"`号代表前面匹配到的字符重复*一次或多次*:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
> The following is a table of the repetition markers available for use in regular expressions:> | Character | Description | Example ||-----------|-------------|---------|| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` || ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` || ``{n}`` | Match ``n`` repetitions of preeeding | ``"ab{2}"`` matches ``"abb"`` || ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |下表列示了正则表达式中可用的重复标记:| 特殊字符 | 描述 | 例子 ||-----------|-------------|---------|| ``?`` | 匹配0次或1次 | ``"ab?"`` 匹配 ``"a"`` 或 ``"ab"`` || ``*`` | 匹配0次或多次 | ``"ab*"`` 匹配 ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... || ``+`` | 匹配1次或多次 | ``"ab+"`` 匹配 ``"ab"``, ``"abb"``, ``"abbb"``... 但不匹配 ``"a"`` || ``{n}`` | 匹配正好n次 | ``"ab{2}"`` 匹配 ``"abb"`` || ``{m,n}`` | 匹配最小m次最大n次 | ``"ab{2,3}"`` 匹配 ``"abb"`` 或 ``"abbb"`` | > With these basics in mind, let's return to our email address matcher:了解了上述基础只是后,让我们回到我们的电子邮件地址的例子:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
> We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.现在我们能理解这个表达式了:我们首先需要一个或多个字母数字字符`"\w+"`,然后需要字符`"@"`,然后需要一个或多个字母数字字符`"\w+"`,然后需要一个`"\."`(注意这里使用了反斜杠,因此这个点没有特殊含义),最后我们需要正好三个小写字母。> If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:如果我们需要修改这个正则表达式,让它可以匹配奥巴马的电子邮件地址的话,我们可以使用中括号写法:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
###Markdown
> We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?).上面我们将`"\w+"`改成了`"[\w.]+"`,因此我们可以在这里匹配上任意的字母数字*或*点号。经过这一修改后,这一正则表达式能够匹配更多的电子邮件地址了(虽然还不是全部 - 你能举例说明哪些电子邮件地址不能匹配到吗?)译者注:`"[\w.]+"`不需要写成`"[\w\.]+"`,原因是在正则表达式的中括号中,除了`^, -, ], \`这几个符号之外,所有其他的符号都没有特殊含义。 Parentheses indicate *groups* to extract 使用小括号进行分组匹配> For compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:对于像上面的电子邮件地址匹配那样复杂的正则表达式来说,我们通常希望提取他们的部分内容而非完全匹配。这可以使用小括号进行分组匹配来完成:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
> As we see, this grouping actually extracts a list of the sub-components of the email address.正如结果所示,这个分组后的正则表达式将电子邮件地址的各个部分分别提取了出来。> We can go a bit further and *name* the extracted components using the ``"(?P )"`` syntax, in which case the groups can be extracted as a Python dictionary:更进一步,我们可以给提取出来的各个部分*命名*,这可以通过使用`"(?P)"`的语法实现,在这种情况下,匹配的分组将会提取到Python的字典结构当中:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
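###Markdown
As a brief sketch, ``splitlines()`` and ``join()`` are near-inverses of one another, which is easy to verify directly:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
# joining the split lines with newlines reproduces the original string
"\n".join(haiku.splitlines()) == haiku
###Output
_____no_output_____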
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
ls: cannot access '*Python*.ipynb': No such file or directory
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
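###Markdown
A related trick, shown as a brief sketch: if the pattern passed to ``split()`` contains a capturing group, the separators themselves are kept in the result:
###Code
import re

line = 'the quick brown fox jumped over a lazy dog'
# the parentheses make the whitespace a capturing group, so it appears in the output
re.split(r'(\s+)', line)
###Output
_____no_output_____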
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
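###Markdown
``sub()`` can also take a function instead of a replacement string (a brief sketch): the function receives each match object and returns the text to insert:
###Code
import re

line = 'the quick brown fox jumped over a lazy dog'
regex = re.compile('fox')
# upper-case whatever the pattern matched instead of inserting fixed text
regex.sub(lambda m: m.group(0).upper(), line)
###Output
_____no_output_____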
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
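###Markdown
The truncation can also happen on the other side of the period; with another made-up address, a longer suffix such as ``.info`` is silently cut down to its first three letters:
###Code
import re

email = re.compile(r'\w+@\w+\.[a-z]{3}')
# a fictitious address: only '[email protected]' is reported, the final 'o' is dropped
email.findall('[email protected]')
###Output
_____no_output_____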
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meanings

While simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:

```
. ^ $ * + ? { } [ ] \ | ( )
```

We will discuss the meaning of some of these momentarily. In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description || Character | Description |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"`` | Match any digit || ``"\D"`` | Match any non-digit |
| ``"\s"`` | Match any whitespace || ``"\S"`` | Match any non-whitespace |
| ``"\w"`` | Match any alphanumeric char || ``"\W"`` | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups

If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
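###Markdown
Combining a character range with a repetition marker (a brief sketch) lets the same idea handle codes that contain more than one digit:
###Code
import re

regex = re.compile('[A-Z][0-9]+')
regex.findall('G2, H66, X123, 1043879')
###Output
_____no_output_____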
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
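###Markdown
As a small extension of this (the sample text below is invented for illustration), the same named groups can be read individually with ``match.group()``, and ``finditer()`` walks every address in a longer string:
###Code
# A minimal sketch reusing the email4 pattern defined above; the addresses are made up
text = 'write to alice@example.org or bob.smith@example.com'
for m in email4.finditer(text):
    print(m.group('user'), m.group('domain'), m.group('suffix'))
###Output
_____no_output_____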
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* 文字列操作と正規表現 String Manipulation and Regular Expressions Python言語が本当に優れている1つの場所は、文字列の操作です。このセクションでは、非常に役立つ*正規表現*の主題へのクイックガイドに進む前に、Pythonの組み込みの文字列メソッドとフォーマット操作のいくつかについて説明します。このような文字列操作パターンは、データサイエンスの作業のコンテキストで頻繁に発生し、このコンテキストでのPythonの大きな特典の1つです。Pythonの文字列は、単一引用符または二重引用符を使用して定義できます(これらは機能的に同等です)。One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
さらに、三重引用符の構文を使用して複数行の文字列を定義することが可能です。In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
これで、Pythonの文字列操作ツールのいくつかを簡単に見ていきましょう。With this, let's take a quick tour of some of Python's string manipulation tools. Pythonでの単純な文字列操作文字列の基本的な操作には、Pythonの組み込みの文字列メソッドが非常に便利です。Cまたは他の低水準言語で作業しているバックグラウンドがある場合、Pythonのメソッドのシンプルさが非常に更新されていることに気付くでしょう。以前にPythonの文字列型とこれらのメソッドのいくつかを紹介しました。 ここで少し詳しく説明します Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper 文字列のフォーマット:大文字小文字の調整Pythonでは、文字列の大文字と小文字を簡単に調整できます。ここでは、``upper()``、``lower()``、``capitalize()``、``title()``、``swapcase()``メソッドを使用して、 例として次の乱雑な文字列: Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
文字列全体を大文字または小文字に変換するには、それぞれ``upper()``または``lower()``メソッドを使用できます:To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
一般的な書式設定の必要性は、各単語の最初の文字だけ、またはおそらく各文の最初の文字を大文字にすることです。これは `` title() ``と `` capitalize() ``メソッドで行うことができます:A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
ケースは `` swapcase() ``メソッドを使用して大文字を小文字へ、小文字を大文字へ交換できます:The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
文字列のフォーマット:スペースの追加と削除別の一般的なニーズは、文字列の最初または最後からスペース(または他の文字)を削除することです。文字を削除する基本的な方法は `` strip() ``メソッドで、行の最初と最後から空白を取り除きます: Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
右または左のスペースだけを削除するには、それぞれ `` rstrip() ``または `` lstrip() ``を使用します。To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
スペース以外の文字を削除するには、目的の文字を `` strip() ``メソッドに渡します:To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
この操作とは逆に、スペースやその他の文字を追加するには、``center()``、``ljust()``、``rjust()``メソッドを使用します。たとえば、 `` center() ``メソッドを使用して、指定した数のスペース内で指定した文字列を中央に配置できます。The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
同様に、``ljust()``と``rjust()``は、指定された長さのスペース内で文字列を左揃えまたは右揃えします。Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
これらすべてのメソッドは、スペースを埋めるために使用される任意の文字をさらに受け入れます。例えば:All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
ゼロ充填は非常に一般的なニーズであるため、Pythonは `` zfill() ``も提供します。これは、文字列にゼロを右詰めする特殊な方法です。Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
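###Markdown
One small additional property worth knowing (shown on a made-up value): ``zfill()`` is sign-aware, so the padding is inserted after a leading ``'+'`` or ``'-'``:
###Code
# zfill() inserts the zeros after the sign, not before it
'-435'.zfill(10)   # -> '-000000435'
###Output
_____no_output_____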
###Markdown
部分文字列の検索と置換文字列内の特定の文字の出現を検索する場合は、``find()``/``rfind()``、``index()``/``rindex()``、および`` replace()``メソッドは最高の組み込みメソッドです。``find()``と `` index()``は、文字列内の最初の文字または部分文字列を検索し、部分文字列のインデックスを返すという点で非常に似ています: Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
``find()``と ``index()``の唯一の違いは、検索文字列が見つからない場合の動作です。 ``find()``は ``-1``を返しますが、 ``index()``は `` ValueError``を送出します:The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
###Markdown
関連する ``rfind()``と ``rindex()``も同様に機能しますが、文字列の先頭ではなく、末尾から最初の出現を検索します:The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
line.rindex('a')
###Output
_____no_output_____
###Markdown
文字列の最初または最後で部分文字列をチェックする特別な場合のために、Pythonは ``startswith()``および ``endswith()``メソッドを提供しています:For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
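###Markdown
A convenient detail that is easy to miss (sketched here with the same ``line`` as above): both ``startswith()`` and ``endswith()`` also accept a tuple of candidates and return ``True`` if any of them matches:
###Code
# Check several possible prefixes at once; True because line starts with 'the '
line.startswith(('a ', 'the '))
###Output
_____no_output_____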
###Markdown
さらに一歩進んで、指定された部分文字列を新しい文字列に置き換えるには、 ``replace()``メソッドを使用できます。ここで、`` '茶色'``を`` '赤色'``に置き換えましょう:To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
``replace()``関数は新しい文字列を返し、すべての入力を置き換えます:The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
この ``replace()``機能へのより柔軟なアプローチについては、[正規表現による柔軟なパターンマッチング](#Flexible-Pattern-Matching-with-Regular-Expressions)の正規表現の説明を参照してください。For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). 文字列の分割と分割部分文字列を検索し、その位置に基づいて文字列を分割する場合は、``partition()``または``split()``メソッドが探しているものです。どちらも部分文字列のシーケンスを返します。``partition()``メソッドは3つの要素を持つタプルを返します:分割ポイントの最初のインスタンスの前の部分文字列、分割ポイント自体、そしてその後の部分文字列: Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
``rpartition()``メソッドも同様ですが、文字列の右側から検索します。``split()``メソッドはおそらくもっと便利です。 分割ポイントの **すべての** インスタンスを見つけ、その間の部分文字列を返します。デフォルトでは、空白で分割され、文字列内の個々の単語のリストが返されます。The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
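###Markdown
As a side note (on an invented comma-separated string): when ``split()`` is given an explicit separator it keeps empty fields, unlike the default whitespace splitting, which simply discards runs of whitespace:
###Code
# With an explicit separator, adjacent separators produce empty strings
'1,2,,3,'.split(',')   # -> ['1', '2', '', '3', '']
###Output
_____no_output_____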
###Markdown
関連するメソッドは ``splitlines()``で、改行文字で分割されます。17世紀の詩人である松尾芭蕉によく知られている俳句を使ってみましょう。A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
``split()``を元に戻したい場合は、 ``join()``メソッドを使用できます。これは、スプリットポイントと反復可能オブジェクトから構築された文字列を返します:Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
print("\t".join(['aaaa','bbbb','cccc']))
###Output
aaaa bbbb cccc
###Markdown
一般的なパターンは、特殊文字 `` "\ n" ``(改行)を使用して、以前に分割された行を結合し、入力を回復することです。A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
フォーマット文字列これまでの方法では、文字列から値を抽出する方法と、文字列自体を目的の形式に操作する方法を学びました。文字列メソッドのもう1つの用途は、他のタイプの値の文字列 **表現** を操作することです。もちろん、文字列表現は常に ``str()``関数を使用して見つけることができます。 例えば: Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
より複雑な形式の場合、[基本的なPythonセマンティクス:演算子](04-Semantics-Operators.ipynb)で説明されているように、文字列演算を使用したくなるかもしれません。For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
これを行うためのより柔軟な方法は、**フォーマット文字列**を使用することです。これは、文字列形式の値が挿入される特別なマーカー(中括弧で示されます)が付いた文字列です。基本的な例を次に示します。A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
`` {} ``マーカー内には、そこに表示したい*何*かに関する情報を含めることもできます。数値を含める場合、挿入する引数のインデックスを参照します。Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
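###Markdown
Because the number is an index, the same argument can be referred to more than once; a tiny illustrative example:
###Code
# The index 0 is reused, so 'abra' appears twice
"{0}{1}{0}".format('abra', 'cad')   # -> 'abracadabra'
###Output
_____no_output_____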
###Markdown
文字列を含めると、キーワード引数のキーを参照します。If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
最後に、数値入力の場合、値を文字列に変換する方法を制御するフォーマットコードを含めることができます。たとえば、数値を小数点以下3桁の浮動小数点数として出力するには、以下を使用できます。Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
前と同じように、ここで"``0``"は挿入される値のインデックスを指します。"``::``"は、フォーマットコードが続くことを示します。"``.3f``"は必要な精度をエンコードします:小数点を超える3桁の浮動小数点形式です。このフォーマット指定のスタイルは非常に柔軟であり、ここでの例は、使用可能なフォーマットオプションの表面をかろうじて引っ掻きます。これらのフォーマット文字列の構文の詳細については、Pythonのオンラインドキュメントの[フォーマット仕様](https://docs.python.org/3/library/string.htmlformatspec)セクションを参照してください。As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.htmlformatspec) section of Python's online documentation. 正規表現による柔軟なパターンマッチングPythonの `` str``型のメソッドは、文字列データをフォーマット、分割、操作するための強力なツールセットを提供します。しかし、Pythonの組み込み **正規表現** モジュールでは、さらに強力なツールを利用できます。正規表現は大きなトピックです。このトピックについて書かれた本はすべてある(Jeffrey EF Friedlの[ **Mastering Regular Expressions、3rd Edition** ](http://shop.oreilly.com/product/9780596528126.do)を含む)ので、それは難しいでしょう単一のサブセクション内で正義を行います。ここでの私の目標は、正規表現を使用して対処できる問題のタイプと、それらをPythonで使用する方法の基本的な考え方を説明することです。[正規表現に関するその他のリソース](Further-Resources-on-Regular-Expressions)でさらに学習するための参照をいくつか提案します。基本的に、正規表現は文字列の**柔軟なパターンマッチング**の手段です。コマンドラインを頻繁に使用する場合、ワイルドカードとして機能する"``*``"文字を使用したこのタイプの柔軟なマッチングに慣れていると思います。たとえば、ファイル名に "Python" が含まれるすべてのIPythonノートブック(拡張子が**.ipynb**のファイル)を一覧表示するには、 "``*``" ワイルドカードを使用して、次の文字を照合します。 Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
正規表現は、この「ワイルドカード」の考えを、柔軟な文字列マッチング構文の広い範囲に一般化します。正規表現へのPythonインターフェースは組み込みの `` re``モジュールに含まれています。 簡単な例として、それを使って文字列 `` split() ``メソッドの機能を複製しましょう:Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
ここでは、まず正規表現を**コンパイル**してから、それを使用して文字列を**分割**しました。Pythonの `` split() ``メソッドが空白の間のすべての部分文字列のリストを返すのと同じように、正規表現 `` split() ``メソッドは、入力パターンに一致するすべての部分文字列のリストを返します。この場合、入力は `` "\ s +" ``です: "``\ s``"は空白(スペース、タブ、改行など)に一致する特殊文字で、 "``+``"は、その前にあるエンティティの**1つ以上**を示す文字です。したがって、正規表現は、1つ以上のスペースで構成される部分文字列に一致します。ここでの `` split() ``メソッドは、基本的にこの*パターンマッチング*動作に基づいて構築された便利なルーチンです。 より基本的なのは `` match() ``メソッドで、文字列の先頭がパターンに一致するかどうかを教えてくれます:Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
`` split() ``と同様に、最初の一致を見つける( `` str.index() ``や `` str.find() ``など)、または検索して置換する( `` str.replace()``)。前の行を再び使用します。Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
これにより、``regex.search()``メソッドが``str.index()``または``str.find()``とよく似ていることがわかります:With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
同様に、 `` regex.sub() ``メソッドは `` str.replace() ``のように動作します:Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
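###Markdown
The payoff of ``regex.sub()`` over ``str.replace()`` appears once the thing being replaced is a pattern rather than a fixed string; for example (on a deliberately messy, made-up string), collapsing any run of whitespace to a single space:
###Code
# \s+ matches runs of spaces, tabs, and newlines; replace each run with one space
re.sub(r'\s+', ' ', 'the   quick\tbrown\n fox')
###Output
_____no_output_____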
###Markdown
少し考えれば、他のネイティブ文字列操作も正規表現としてキャストできます。With a bit of thought, other native string operations can also be cast as regular expressions. より洗練された例しかし、もっと直感的で単純な文字列メソッドではなく、なぜ正規表現のより複雑で冗長な構文を使用したいのでしょうか。利点は、正規表現が**はるかに**高い柔軟性を提供することです。ここでは、より複雑な例、つまりメールアドレスの照合という一般的なタスクについて考えます。まず、(やや判読できない)正規表現を記述してから、何が行われているのかを説明します。ここに行く: A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
これを使用して、ドキュメントから行が与えられた場合、電子メールアドレスのように見えるものをすばやく抽出できますUsing this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(これらのアドレスは完全に構成されていることに注意してください。Guidoと連絡を取るためのより良い方法がおそらくあるでしょう)。これらの電子メールアドレスを別の文字列に置き換えるなど、さらに出力操作でアドレスを非表示にすることもできます。(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
最後に、本当に**任意の**メールアドレスに一致させたい場合、前述の正規表現は単純すぎることに注意してください。たとえば、いくつかの一般的なドメインサフィックスのいずれかで終わる英数字で構成されるアドレスのみが許可されます。したがって、たとえば、ここで使用されるピリオドは、住所の一部のみを見つけることを意味します。Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
これは、注意を怠った場合に許容できない正規表現がどれほどあり得るかを示しています。オンラインで検索すると、**すべての**有効なメールに一致する正規表現の候補が見つかりますが、ここで使用されている単純な表現よりもはるかに複雑です。This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! 正規表現構文の基本正規表現の構文は、この短いセクションではトピックが大きすぎます。それでも、多少の親しみは長い道のりを歩むことができます。ここでは、基本的な構成のいくつかについて説明し、さらに詳細を学習できるいくつかのより完全なリソースをリストします。以下のクイックプライマーがこれらのリソースを効果的に使用できるようになることを願っています。 Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. 単純な文字列は直接照合されます文字または数字の単純な文字列で正規表現を作成すると、その正確な文字列と一致します。 Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
一部の文字には特別な意味があります単純な文字または数字は直接一致しますが、正規表現内で特別な意味を持つ文字がいくつかあります。 彼らです:```. ^ $ * +? { } [ ] \ | ( )```これらのいくつかの意味を一時的に説明します。それまでの間、これらの文字のいずれかに直接一致させたい場合は、バックスラッシュで**エスケープ**できることを知っておく必要があります。 Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
`` r'\$' ``の `` r``序文は**生の文字列**を示します。 標準のPython文字列では、バックスラッシュは特殊文字を示すために使用されます。たとえば、タブは `` "\t" ``で示されます。The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
このような置換は生の文字列では行われません。Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
このため、正規表現でバックスラッシュを使用する場合は常に、未加工の文字列を使用することをお勧めします。For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. 特殊文字は文字グループと一致できます正規表現内の``"\"``文字が特殊文字をエスケープして通常の文字に変換できるように、通常の文字に特別な意味を与えるために使用することもできます。これらの特殊文字は、指定された文字のグループと一致し、以前に見たことがある。以前のメールアドレスの正規表現では、「任意の英数字」に一致する特別なマーカーである文字``"\w"``を使用しました。 同様に、単純な ``split()``の例では、``空白文字``を示す特別なマーカーである``\s``も見ました。これらをまとめると、**任意の2つの文字/数字とその間に空白を含む**に一致する正規表現を作成できます。 Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
この例は、正規表現の力と柔軟性をほのめかし始めています。This example begins to hint at the power and flexibility of regular expressions. 次の表に、一般的に役立つこれらの文字のいくつかを示します。The following table lists a few of these characters that are commonly useful:

| Character | Description                 || Character | Description                     |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"``  | Match any digit             || ``"\D"``  | Match any non-digit             |
| ``"\s"``  | Match any whitespace        || ``"\S"``  | Match any non-whitespace        |
| ``"\w"``  | Match any alphanumeric char || ``"\W"``  | Match any non-alphanumeric char |

これは包括的なリストや説明ではありません。 詳細については、Pythonの[正規表現構文のドキュメント](https://docs.python.org/3/library/re.html#re-syntax)をご覧ください。This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax). 角括弧はカスタム文字グループと一致します組み込みの文字グループが十分に具体的でない場合は、角かっこを使用して、興味のある任意の文字セットを指定できます。たとえば、次の例はすべての小文字の母音と一致します。 Square brackets match custom character groupsIf the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
同様に、ダッシュを使用して範囲を指定できます。たとえば、``"[a-z]"``はすべての小文字に一致し、``"[1-3]"`` は ``"1"``、``"2"``または``"3"``。たとえば、大文字の後に数字が続く特定の数値コードをドキュメントから抽出する必要がある場合があります。 これは次のように行うことができます。Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
ワイルドカードは繰り返し文字と一致しますたとえば、3つの英数字を続けて文字列を照合する場合は、``"\w\w\w"``のように書くことができます。これは非常に一般的なニーズであるため、繰り返しと一致する特定の構文があります。 Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
任意の数の繰り返しに一致するマーカーも利用できます。たとえば、``"+"``文字は、その前の**1つ以上の**繰り返しに一致します。There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
以下は、正規表現で使用できる繰り返しマーカーの表です。The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

これらの基本事項を念頭に置いて、メールアドレスマッチャーに戻りましょう。With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
これが何を意味するのか理解できました:1つ以上の英数字(``"\w+"``)の後に*記号*(``"@"``)が続き、その後に1つ以上の英数字(``"\w+"``)、ピリオド(``"\."`` –バックスラッシュエスケープの必要性に注意)、その後に小文字3文字が続きます。これを変更して、Obamaの電子メールアドレスが一致するようにするには、角かっこ表記を使用して変更できます。We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected] [email protected]')
###Output
_____no_output_____
###Markdown
``"\w+"``を ``"[\w.]+"``に変更したため、任意の英数字またはピリオドに一致します。このより柔軟な表現により、より幅広い範囲の電子メールアドレスを照合できます(ただし、すべてではありません。この表現の他の欠点を特定できますか?)。We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). 括弧は抽出する**グループ**を示しますメールマッチャーのような複合正規表現の場合、完全一致ではなくコンポーネントを抽出することがよくあります。 これは、括弧を使用して結果を「グループ化」することができます。 Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
ご覧のように、このグループ化は実際には電子メールアドレスのサブコンポーネントのリストを抽出します。少し進んで、抽出されたコンポーネントに ``"(?P<name>)"``構文を使用して**名前**を付けることができます。この場合、グループをPython辞書として抽出できます。As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____
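###Markdown
The group names can also be referred to in a replacement string via ``\g<name>`` backreferences; a minimal sketch (the address below is invented) that keeps the domain but masks the user:
###Code
# Keep the domain and suffix groups from email4, replace only the user part
email4.sub(r'xxxx@\g<domain>.\g<suffix>', 'please write to someone@example.org')
###Output
_____no_output_____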
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = "a string"
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = " this is the content "
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip("0")
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
"435".rjust(10, "0")
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
"435".zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = "the quick brown fox jumped over a lazy dog"
line.find("fox")
line.index("fox")
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find("bear")
line.index("bear")
###Output
_____no_output_____
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind("a")
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith("dog")
line.startswith("fox")
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace("brown", "red")
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace("o", "--")
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition("fox")
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
"--".join(["1", "2", "3"])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(["matsushima-ya", "aah matsushima-ya", "matsushima-ya"]))
###Output
_____no_output_____
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
# f-strings (Python 3.6+)
f"The value of pi is {pi}"
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format("A", "Z")
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last="Z", first="A")
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
# "pi = {0:.3f}".format(pi)
# f-strings (Python 3.6+)
f"pi = {pi:.3f}"
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation.

Flexible Pattern Matching with Regular Expressions

The methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl’s [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](#Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
_____no_output_____
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes. The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile("\s+")
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
_____no_output_____
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = "the quick brown fox jumped over a lazy dog"
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index("fox")
regex = re.compile("fox")
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace("fox", "BEAR")
regex.sub("BEAR", line)
###Output
_____no_output_____
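###Markdown
Another comparison in the same spirit (purely illustrative, using anchors and alternation not yet covered here): the effect of ``str.strip()`` can be reproduced with a regular expression that removes whitespace at the start (``^``) or end (``$``) of the string:
###Code
# Equivalent of "   hello   ".strip(): delete leading or trailing whitespace
re.sub(r"^\s+|\s+$", "", "   hello   ")
###Output
_____no_output_____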
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile("\w+@\w+\.[a-z]{3}")
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] " "or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub("[email protected]", text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall("[email protected]")
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile("ion")
regex.findall("Great Expectations")
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r"\$")
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print("a\tb\tc")
###Output
_____no_output_____
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r"a\tb\tc")
###Output
_____no_output_____
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r"\w\s\w")
regex.findall("the fox is 9 years old")
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description                 || Character | Description                     |
|-----------|-----------------------------||-----------|---------------------------------|
| ``"\d"``  | Match any digit             || ``"\D"``  | Match any non-digit             |
| ``"\s"``  | Match any whitespace        || ``"\S"``  | Match any non-whitespace        |
| ``"\w"``  | Match any alphanumeric char || ``"\W"``  | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax).

Square brackets match custom character groups

If the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in. For example, the following will match any lower-case vowel:
###Code
regex = re.compile("[aeiou]")
regex.split("consequential")
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile("[A-Z][0-9]")
regex.findall("1043879, G2, H6")
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r"\w{3}")
regex.findall("The quick brown fox")
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r"\w+")
regex.findall("The quick brown fox")
###Output
_____no_output_____
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r"\w+@\w+\.[a-z]{3}")
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r"[\w.]+@\w+\.[a-z]{3}")
email2.findall("[email protected]")
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r"([\w.]+)@(\w+)\.([a-z]{3})")
text = "To email Guido, try [email protected] " "or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address. We can go a bit further and *name* the extracted components using the ``"(?P<name>)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match("[email protected]")
match.groupdict()
###Output
_____no_output_____
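###Markdown
One practical caveat worth a quick, illustrative check: ``match()`` returns ``None`` when the beginning of the string does not fit the pattern, so it is worth guarding before calling ``groupdict()`` (the strings below are made up):
###Code
# match() only succeeds if the pattern fits at the start of the string
m = email4.match("no address in this text")
print(m)                                    # None
m = email4.match("user.name@example.org")
print(m.groupdict() if m else "no match")
###Output
_____no_output_____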
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* String Manipulation and Regular Expressions One place where the Python language really shines is in the manipulation of strings.This section will cover some of Python's built-in string methods and formatting operations, before moving on to a quick guide to the extremely useful subject of *regular expressions*.Such string manipulation patterns come up often in the context of data science work, and is one big perk of Python in this context.Strings in Python can be defined using either single or double quotations (they are functionally equivalent):
###Code
x = 'a string'
y = "a string"
x == y
###Output
_____no_output_____
###Markdown
In addition, it is possible to define multi-line strings using a triple-quote syntax:
###Code
multiline = """
one
two
three
"""
###Output
_____no_output_____
###Markdown
With this, let's take a quick tour of some of Python's string manipulation tools. Simple String Manipulation in PythonFor basic manipulation of strings, Python's built-in string methods can be extremely convenient.If you have a background working in C or another low-level language, you will likely find the simplicity of Python's methods extremely refreshing.We introduced Python's string type and a few of these methods earlier; here we'll dive a bit deeper Formatting strings: Adjusting casePython makes it quite easy to adjust the case of a string.Here we'll look at the ``upper()``, ``lower()``, ``capitalize()``, ``title()``, and ``swapcase()`` methods, using the following messy string as an example:
###Code
fox = "tHe qUICk bROWn fOx."
###Output
_____no_output_____
###Markdown
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
###Code
fox.upper()
fox.lower()
###Output
_____no_output_____
###Markdown
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.This can be done with the ``title()`` and ``capitalize()`` methods:
###Code
fox.title()
fox.capitalize()
###Output
_____no_output_____
###Markdown
The cases can be swapped using the ``swapcase()`` method:
###Code
fox.swapcase()
###Output
_____no_output_____
###Markdown
Formatting strings: Adding and removing spacesAnother common need is to remove spaces (or other characters) from the beginning or end of the string.The basic method of removing characters is the ``strip()`` method, which strips whitespace from the beginning and end of the line:
###Code
line = ' this is the content '
line.strip()
###Output
_____no_output_____
###Markdown
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
###Code
line.rstrip()
line.lstrip()
###Output
_____no_output_____
###Markdown
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
###Code
num = "000000000000435"
num.strip('0')
###Output
_____no_output_____
###Markdown
The opposite of this operation, adding spaces or other characters, can be accomplished using the ``center()``, ``ljust()``, and ``rjust()`` methods.For example, we can use the ``center()`` method to center a given string within a given number of spaces:
###Code
line = "this is the content"
line.center(30)
###Output
_____no_output_____
###Markdown
Similarly, ``ljust()`` and ``rjust()`` will left-justify or right-justify the string within spaces of a given length:
###Code
line.ljust(30)
line.rjust(30)
###Output
_____no_output_____
###Markdown
All these methods additionally accept any character which will be used to fill the space.For example:
###Code
'435'.rjust(10, '0')
###Output
_____no_output_____
###Markdown
Because zero-filling is such a common need, Python also provides ``zfill()``, which is a special method to right-pad a string with zeros:
###Code
'435'.zfill(10)
###Output
_____no_output_____
###Markdown
Finding and replacing substringsIf you want to find occurrences of a certain character in a string, the ``find()``/``rfind()``, ``index()``/``rindex()``, and ``replace()`` methods are the best built-in methods.``find()`` and ``index()`` are very similar, in that they search for the first occurrence of a character or substring within a string, and return the index of the substring:
###Code
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
###Output
_____no_output_____
###Markdown
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
###Code
line.find('bear')
line.index('bear')
###Output
_____no_output_____
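###Markdown
As a quick aside, ``index()`` raises a ``ValueError`` when the substring is absent, so the cell above stops with an exception. Here is a minimal sketch of two common ways to guard against that, reusing the ``line`` defined above:
###Code
# Sketch: two ways to cope with a substring that may not be present.
if 'bear' in line:  # a cheap membership test avoids the exception entirely
    print(line.index('bear'))
else:
    print('substring not found')
try:
    line.index('bear')
except ValueError as err:  # index() raises ValueError when the substring is absent
    print('index() raised:', err)
###Output
_____no_output_____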
###Markdown
The related ``rfind()`` and ``rindex()`` work similarly, except they search for the first occurrence from the end rather than the beginning of the string:
###Code
line.rfind('a')
###Output
_____no_output_____
###Markdown
For the special case of checking for a substring at the beginning or end of a string, Python provides the ``startswith()`` and ``endswith()`` methods:
###Code
line.endswith('dog')
line.startswith('fox')
###Output
_____no_output_____
###Markdown
To go one step further and replace a given substring with a new string, you can use the ``replace()`` method.Here, let's replace ``'brown'`` with ``'red'``:
###Code
line.replace('brown', 'red')
###Output
_____no_output_____
###Markdown
The ``replace()`` function returns a new string, and will replace all occurrences of the input:
###Code
line.replace('o', '--')
###Output
_____no_output_____
###Markdown
For a more flexible approach to this ``replace()`` functionality, see the discussion of regular expressions in [Flexible Pattern Matching with Regular Expressions](Flexible-Pattern-Matching-with-Regular-Expressions). Splitting and partitioning stringsIf you would like to find a substring *and then* split the string based on its location, the ``partition()`` and/or ``split()`` methods are what you're looking for.Both will return a sequence of substrings.The ``partition()`` method returns a tuple with three elements: the substring before the first instance of the split-point, the split-point itself, and the substring after:
###Code
line.partition('fox')
###Output
_____no_output_____
###Markdown
The ``rpartition()`` method is similar, but searches from the right of the string.The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.The default is to split on any whitespace, returning a list of the individual words in a string:
###Code
line.split()
###Output
_____no_output_____
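###Markdown
For illustration, here is a minimal sketch of ``rpartition()``, mentioned above, together with ``split()`` called with an explicit separator:
###Code
# Sketch: rpartition() searches from the right of the string,
# and split() accepts an explicit separator instead of splitting on whitespace.
print(line.rpartition('o'))  # splits around the last 'o' in the sentence
print('1,2,3'.split(','))    # split on commas rather than whitespace
###Output
_____no_output_____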
###Markdown
A related method is ``splitlines()``, which splits on newline characters.Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
###Code
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
###Output
_____no_output_____
###Markdown
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
###Code
'--'.join(['1', '2', '3'])
###Output
_____no_output_____
###Markdown
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
###Code
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
###Output
matsushima-ya
aah matsushima-ya
matsushima-ya
###Markdown
Format StringsIn the preceding methods, we have learned how to extract values from strings, and to manipulate strings themselves into desired formats.Another use of string methods is to manipulate string *representations* of values of other types.Of course, string representations can always be found using the ``str()`` function; for example:
###Code
pi = 3.14159
str(pi)
###Output
_____no_output_____
###Markdown
For more complicated formats, you might be tempted to use string arithmetic as outlined in [Basic Python Semantics: Operators](04-Semantics-Operators.ipynb):
###Code
"The value of pi is " + str(pi)
###Output
_____no_output_____
###Markdown
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.Here is a basic example:
###Code
"The value of pi is {}".format(pi)
###Output
_____no_output_____
###Markdown
Inside the ``{}`` marker you can also include information on exactly *what* you would like to appear there.If you include a number, it will refer to the index of the argument to insert:
###Code
"""First letter: {0}. Last letter: {1}.""".format('A', 'Z')
###Output
_____no_output_____
###Markdown
If you include a string, it will refer to the key of any keyword argument:
###Code
"""First letter: {first}. Last letter: {last}.""".format(last='Z', first='A')
###Output
_____no_output_____
###Markdown
Finally, for numerical inputs, you can include format codes which control how the value is converted to a string.For example, to print a number as a floating point with three digits after the decimal point, you can use the following:
###Code
"pi = {0:.3f}".format(pi)
###Output
_____no_output_____
###Markdown
As before, here the "``0``" refers to the index of the value to be inserted.The "``:``" marks that format codes will follow.The "``.3f``" encodes the desired precision: three digits beyond the decimal point, floating-point format.This style of format specification is very flexible, and the examples here barely scratch the surface of the formatting options available.For more information on the syntax of these format strings, see the [Format Specification](https://docs.python.org/3/library/string.html#formatspec) section of Python's online documentation. Flexible Pattern Matching with Regular ExpressionsThe methods of Python's ``str`` type give you a powerful set of tools for formatting, splitting, and manipulating string data.But even more powerful tools are available in Python's built-in *regular expression* module.Regular expressions are a huge topic; there are entire books written on the topic (including Jeffrey E.F. Friedl's [*Mastering Regular Expressions, 3rd Edition*](http://shop.oreilly.com/product/9780596528126.do)), so it will be hard to do justice within just a single subsection.My goal here is to give you an idea of the types of problems that might be addressed using regular expressions, as well as a basic idea of how to use them in Python.I'll suggest some references for learning more in [Further Resources on Regular Expressions](Further-Resources-on-Regular-Expressions).Fundamentally, regular expressions are a means of *flexible pattern matching* in strings.If you frequently use the command-line, you are probably familiar with this type of flexible matching with the "``*``" character, which acts as a wildcard.For example, we can list all the IPython notebooks (i.e., files with extension *.ipynb*) with "Python" in their filename by using the "``*``" wildcard to match any characters in between:
###Code
!ls *Python*.ipynb
###Output
01-How-to-Run-Python-Code.ipynb 02-Basic-Python-Syntax.ipynb
###Markdown
Regular expressions generalize this "wildcard" idea to a wide range of flexible string-matching syntaxes.The Python interface to regular expressions is contained in the built-in ``re`` module; as a simple example, let's use it to duplicate the functionality of the string ``split()`` method:
###Code
import re
regex = re.compile(r'\s+')
regex.split(line)
###Output
_____no_output_____
###Markdown
Here we've first *compiled* a regular expression, then used it to *split* a string.Just as Python's ``split()`` method returns a list of all substrings between whitespace, the regular expression ``split()`` method returns a list of all substrings between matches to the input pattern.In this case, the input is ``"\s+"``: "``\s``" is a special character that matches any whitespace (space, tab, newline, etc.), and the "``+``" is a character that indicates *one or more* of the entity preceding it.Thus, the regular expression matches any substring consisting of one or more spaces.The ``split()`` method here is basically a convenience routine built upon this *pattern matching* behavior; more fundamental is the ``match()`` method, which will tell you whether the beginning of a string matches the pattern:
###Code
for s in [" ", "abc ", " abc"]:
if regex.match(s):
print(repr(s), "matches")
else:
print(repr(s), "does not match")
###Output
' ' matches
'abc ' does not match
' abc' matches
###Markdown
Like ``split()``, there are similar convenience routines to find the first match (like ``str.index()`` or ``str.find()``) or to find and replace (like ``str.replace()``).We'll again use the line from before:
###Code
line = 'the quick brown fox jumped over a lazy dog'
###Output
_____no_output_____
###Markdown
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
###Code
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
###Output
_____no_output_____
###Markdown
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
###Code
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
###Output
_____no_output_____
###Markdown
With a bit of thought, other native string operations can also be cast as regular expressions. A more sophisticated exampleBut, you might ask, why would you want to use the more complicated and verbose syntax of regular expressions rather than the more intuitive and simple string methods?The advantage is that regular expressions offer *far* more flexibility.Here we'll consider a more complicated example: the common task of matching email addresses.I'll start by simply writing a (somewhat indecipherable) regular expression, and then walk through what is going on.Here it goes:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
Using this, if we're given a line from a document, we can quickly extract things that look like email addresses
###Code
text = "To email Guido, try [email protected] or the older address [email protected]."
email.findall(text)
###Output
_____no_output_____
###Markdown
(Note that these addresses are entirely made up; there are probably better ways to get in touch with Guido).We can do further operations, like replacing these email addresses with another string, perhaps to hide addresses in the output:
###Code
email.sub('[email protected]', text)
###Output
_____no_output_____
###Markdown
Finally, note that if you really want to match *any* email address, the preceding regular expression is far too simple.For example, it only allows addresses made of alphanumeric characters that end in one of several common domain suffixes.So, for example, the period used here means that we only find part of the address:
###Code
email.findall('[email protected]')
###Output
_____no_output_____
###Markdown
This goes to show how unforgiving regular expressions can be if you're not careful!If you search around online, you can find some suggestions for regular expressions that will match *all* valid emails, but beware: they are much more involved than the simple expression used here! Basics of regular expression syntaxThe syntax of regular expressions is much too large a topic for this short section.Still, a bit of familiarity can go a long way: I will walk through some of the basic constructs here, and then list some more complete resources from which you can learn more.My hope is that the following quick primer will enable you to use these resources effectively. Simple strings are matched directlyIf you build a regular expression on a simple string of characters or digits, it will match that exact string:
###Code
regex = re.compile('ion')
regex.findall('Great Expectations')
###Output
_____no_output_____
###Markdown
Some characters have special meaningsWhile simple letters or numbers are direct matches, there are a handful of characters that have special meanings within regular expressions. They are:```. ^ $ * + ? { } [ ] \ | ( )```We will discuss the meaning of some of these momentarily.In the meantime, you should know that if you'd like to match any of these characters directly, you can *escape* them with a back-slash:
###Code
regex = re.compile(r'\$')
regex.findall("the cost is $20")
###Output
_____no_output_____
###Markdown
The ``r`` preface in ``r'\$'`` indicates a *raw string*; in standard Python strings, the backslash is used to indicate special characters.For example, a tab is indicated by ``"\t"``:
###Code
print('a\tb\tc')
###Output
a b c
###Markdown
Such substitutions are not made in a raw string:
###Code
print(r'a\tb\tc')
###Output
a\tb\tc
###Markdown
For this reason, whenever you use backslashes in a regular expression, it is good practice to use a raw string. Special characters can match character groupsJust as the ``"\"`` character within regular expressions can escape special characters, turning them into normal characters, it can also be used to give normal characters special meaning.These special characters match specified groups of characters, and we've seen them before.In the email address regexp from before, we used the character ``"\w"``, which is a special marker matching *any alphanumeric character*. Similarly, in the simple ``split()`` example, we also saw ``"\s"``, a special marker indicating *any whitespace character*.Putting these together, we can create a regular expression that will match *any two letters/digits with whitespace between them*:
###Code
regex = re.compile(r'\w\s\w')
regex.findall('the fox is 9 years old')
###Output
_____no_output_____
###Markdown
This example begins to hint at the power and flexibility of regular expressions. The following table lists a few of these characters that are commonly useful:

| Character | Description                 | Character | Description                     |
|-----------|-----------------------------|-----------|---------------------------------|
| ``"\d"``  | Match any digit             | ``"\D"``  | Match any non-digit             |
| ``"\s"``  | Match any whitespace        | ``"\S"``  | Match any non-whitespace        |
| ``"\w"``  | Match any alphanumeric char | ``"\W"``  | Match any non-alphanumeric char |

This is *not* a comprehensive list or description; for more details, see Python's [regular expression syntax documentation](https://docs.python.org/3/library/re.html#re-syntax). Square brackets match custom character groupsIf the built-in character groups aren't specific enough for you, you can use square brackets to specify any set of characters you're interested in.For example, the following will match any lower-case vowel:
###Code
regex = re.compile('[aeiou]')
regex.split('consequential')
###Output
_____no_output_____
###Markdown
Similarly, you can use a dash to specify a range: for example, ``"[a-z]"`` will match any lower-case letter, and ``"[1-3]"`` will match any of ``"1"``, ``"2"``, or ``"3"``.For instance, you may need to extract from a document specific numerical codes that consist of a capital letter followed by a digit. You could do this as follows:
###Code
regex = re.compile('[A-Z][0-9]')
regex.findall('1043879, G2, H6')
###Output
_____no_output_____
###Markdown
Wildcards match repeated charactersIf you would like to match a string with, say, three alphanumeric characters in a row, it is possible to write, for example, ``"\w\w\w"``.Because this is such a common need, there is a specific syntax to match repetitions – curly braces with a number:
###Code
regex = re.compile(r'\w{3}')
regex.findall('The quick brown fox')
###Output
_____no_output_____
###Markdown
There are also markers available to match any number of repetitions – for example, the ``"+"`` character will match *one or more* repetitions of what precedes it:
###Code
regex = re.compile(r'\w+')
regex.findall('The quick brown fox')
###Output
_____no_output_____
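###Markdown
Before the summary table below, here is a minimal sketch of the other repetition markers – ``"?"``, ``"*"``, and ``"{m,n}"``:
###Code
# Sketch of the remaining repetition markers summarized in the table that follows.
print(re.compile(r'ab?').findall('a ab abb'))        # zero or one 'b' -> ['a', 'ab', 'ab']
print(re.compile(r'ab*').findall('a ab abb'))        # zero or more 'b' -> ['a', 'ab', 'abb']
print(re.compile(r'ab{2,3}').findall('ab abb abbbb'))  # between two and three 'b's
###Output
_____no_output_____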
###Markdown
The following is a table of the repetition markers available for use in regular expressions:

| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |

With these basics in mind, let's return to our email address matcher:
###Code
email = re.compile(r'\w+@\w+\.[a-z]{3}')
###Output
_____no_output_____
###Markdown
We can now understand what this means: we want one or more alphanumeric character (``"\w+"``) followed by the *at sign* (``"@"``), followed by one or more alphanumeric character (``"\w+"``), followed by a period (``"\."`` – note the need for a backslash escape), followed by exactly three lower-case letters.If we want to now modify this so that the Obama email address matches, we can do so using the square-bracket notation:
###Code
email2 = re.compile(r'[\w.]+@\w+\.[a-z]{3}')
email2.findall('[email protected]')
###Output
_____no_output_____
###Markdown
We have changed ``"\w+"`` to ``"[\w.]+"``, so we will match any alphanumeric character *or* a period.With this more flexible expression, we can match a wider range of email addresses (though still not all – can you identify other shortcomings of this expression?). Parentheses indicate *groups* to extractFor compound regular expressions like our email matcher, we often want to extract their components rather than the full match. This can be done using parentheses to *group* the results:
###Code
email3 = re.compile(r'([\w.]+)@(\w+)\.([a-z]{3})')
text = "To email Guido, try [email protected] or the older address [email protected]."
email3.findall(text)
###Output
_____no_output_____
###Markdown
As we see, this grouping actually extracts a list of the sub-components of the email address.We can go a bit further and *name* the extracted components using the ``"(?P<name>...)"`` syntax, in which case the groups can be extracted as a Python dictionary:
###Code
email4 = re.compile(r'(?P<user>[\w.]+)@(?P<domain>\w+)\.(?P<suffix>[a-z]{3})')
match = email4.match('[email protected]')
match.groupdict()
###Output
_____no_output_____ |
6.4-sequence-processing-with-convnets.ipynb | ###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs, allowing us to add one or more `Dense` layers to the model, for classification or regression.One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 4s - loss: 0.7713 - acc: 0.5287 - val_loss: 0.6818 - val_acc: 0.5970
Epoch 2/10
20000/20000 [==============================] - 3s - loss: 0.6631 - acc: 0.6775 - val_loss: 0.6582 - val_acc: 0.6646
Epoch 3/10
20000/20000 [==============================] - 3s - loss: 0.6142 - acc: 0.7580 - val_loss: 0.5987 - val_acc: 0.7118
Epoch 4/10
20000/20000 [==============================] - 3s - loss: 0.5156 - acc: 0.8124 - val_loss: 0.4936 - val_acc: 0.7736
Epoch 5/10
20000/20000 [==============================] - 3s - loss: 0.4029 - acc: 0.8469 - val_loss: 0.4123 - val_acc: 0.8358
Epoch 6/10
20000/20000 [==============================] - 3s - loss: 0.3455 - acc: 0.8653 - val_loss: 0.4040 - val_acc: 0.8382
Epoch 7/10
20000/20000 [==============================] - 3s - loss: 0.3078 - acc: 0.8634 - val_loss: 0.4059 - val_acc: 0.8240
Epoch 8/10
20000/20000 [==============================] - 3s - loss: 0.2812 - acc: 0.8535 - val_loss: 0.4147 - val_acc: 0.8098
Epoch 9/10
20000/20000 [==============================] - 3s - loss: 0.2554 - acc: 0.8334 - val_loss: 0.4296 - val_acc: 0.7878
Epoch 10/10
20000/20000 [==============================] - 3s - loss: 0.2356 - acc: 0.8052 - val_loss: 0.4296 - val_acc: 0.7600
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your exact configuration). At that point, we could re-train this model for the right number of epochs (8), and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
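###Markdown
As suggested above, we could now retrain for the chosen number of epochs and check the test set. The following is a minimal sketch of that step, assuming `max_features`, `max_len`, and the IMDB arrays from the cells above are still in scope and using 8 epochs as read off the validation curves:
###Code
# Sketch only: rebuild the same architecture, train on the full training set for the
# number of epochs chosen from the validation curves, then evaluate on the test set.
final_model = Sequential()
final_model.add(layers.Embedding(max_features, 128, input_length=max_len))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.MaxPooling1D(5))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.GlobalMaxPooling1D())
final_model.add(layers.Dense(1))
final_model.compile(optimizer=RMSprop(lr=1e-4),
                    loss='binary_crossentropy',
                    metrics=['acc'])
final_model.fit(x_train, y_train, epochs=8, batch_size=128)
test_loss, test_acc = final_model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
###Output
_____no_output_____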
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to evidence this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to produce good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/ubuntu/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 124s - loss: 0.4189 - val_loss: 0.4521
Epoch 2/20
500/500 [==============================] - 11s - loss: 0.3629 - val_loss: 0.4545
Epoch 3/20
500/500 [==============================] - 11s - loss: 0.3399 - val_loss: 0.4527
Epoch 4/20
500/500 [==============================] - 11s - loss: 0.3229 - val_loss: 0.4721
Epoch 5/20
500/500 [==============================] - 11s - loss: 0.3122 - val_loss: 0.4712
Epoch 6/20
500/500 [==============================] - 11s - loss: 0.3030 - val_loss: 0.4705
Epoch 7/20
500/500 [==============================] - 11s - loss: 0.2935 - val_loss: 0.4870
Epoch 8/20
500/500 [==============================] - 11s - loss: 0.2862 - val_loss: 0.4676
Epoch 9/20
500/500 [==============================] - 11s - loss: 0.2817 - val_loss: 0.4738
Epoch 10/20
500/500 [==============================] - 11s - loss: 0.2775 - val_loss: 0.4896
Epoch 11/20
500/500 [==============================] - 11s - loss: 0.2715 - val_loss: 0.4765
Epoch 12/20
500/500 [==============================] - 11s - loss: 0.2683 - val_loss: 0.4724
Epoch 13/20
500/500 [==============================] - 11s - loss: 0.2644 - val_loss: 0.4842
Epoch 14/20
500/500 [==============================] - 11s - loss: 0.2606 - val_loss: 0.4910
Epoch 15/20
500/500 [==============================] - 11s - loss: 0.2558 - val_loss: 0.5000
Epoch 16/20
500/500 [==============================] - 11s - loss: 0.2539 - val_loss: 0.4960
Epoch 17/20
500/500 [==============================] - 11s - loss: 0.2516 - val_loss: 0.4875
Epoch 18/20
500/500 [==============================] - 11s - loss: 0.2501 - val_loss: 0.4884
Epoch 19/20
500/500 [==============================] - 11s - loss: 0.2444 - val_loss: 0.5024
Epoch 20/20
500/500 [==============================] - 11s - loss: 0.2444 - val_loss: 0.4821
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, where the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 1440  # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following-up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
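###Markdown
As a final check, here is a minimal sketch of evaluating the combined model on the held-out test split, assuming `test_gen`, `test_steps`, and `std` from the cells above are still in scope:
###Code
# Sketch only: evaluate the Conv1D + GRU model on the test generator.
test_mae = model.evaluate_generator(test_gen, steps=test_steps)
# Targets were normalized, so rescale by the temperature std (column 1) to get degrees Celsius.
print('Test MAE: {:.3f} (about {:.2f} degrees C)'.format(test_mae, test_mae * std[1]))
###Output
_____no_output_____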
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of the Korean edition of [Deep Learning with Python](https://tensorflow.blog/%EC%BC%80%EB%9D%BC%EC%8A%A4-%EB%94%A5%EB%9F%AC%EB%8B%9D/). The book contains far more content and figures; this notebook includes only the explanations related to the source code. Implementing a 1D convnetIn Keras, a 1D convnet is built with the `Conv1D` layer, whose interface is similar to `Conv2D`. It takes 3D tensors of shape `(samples, time, features)` as input and returns similarly shaped 3D tensors. The convolution window is a 1D window over the temporal axis, that is, the second axis of the input tensor.Let's build a simple two-layer 1D convnet and apply it to the familiar IMDB sentiment classification task.As a reminder, here is the code that loads and preprocesses the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # length of text to keep (only the max_features most frequent words are used)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in much the same way as the 2D convnets used in Chapter 5: a stack of `Conv1D` and `MaxPooling1D` layers, ending in either a global pooling layer or a `Flatten` layer. This turns the 3D outputs into 2D outputs, so one or more `Dense` layers can be added to the model for classification or regression.One difference is that we can afford larger convolution windows with 1D convnets. In a 2D convolution layer a 3 × 3 window considers 3 × 3 = 9 features, but in a 1D convolution layer a window of size 3 considers only 3 features, so we can easily use 1D windows of size 7 or 9.The following is an example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 3s 165us/step - loss: 0.8337 - acc: 0.5088 - val_loss: 0.6875 - val_acc: 0.5638
Epoch 2/10
20000/20000 [==============================] - 2s 96us/step - loss: 0.6700 - acc: 0.6399 - val_loss: 0.6642 - val_acc: 0.6582
Epoch 3/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.6235 - acc: 0.7547 - val_loss: 0.6076 - val_acc: 0.7436
Epoch 4/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.5253 - acc: 0.8096 - val_loss: 0.4848 - val_acc: 0.8062
Epoch 5/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.4114 - acc: 0.8485 - val_loss: 0.4244 - val_acc: 0.8316
Epoch 6/10
20000/20000 [==============================] - 2s 103us/step - loss: 0.3486 - acc: 0.8666 - val_loss: 0.4164 - val_acc: 0.8376
Epoch 7/10
20000/20000 [==============================] - 2s 101us/step - loss: 0.3108 - acc: 0.8640 - val_loss: 0.4502 - val_acc: 0.8200
Epoch 8/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.2802 - acc: 0.8519 - val_loss: 0.4321 - val_acc: 0.8038
Epoch 9/10
20000/20000 [==============================] - 2s 97us/step - loss: 0.2531 - acc: 0.8361 - val_loss: 0.4445 - val_acc: 0.7868
Epoch 10/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.2307 - acc: 0.8103 - val_loss: 0.5007 - val_acc: 0.7538
###Markdown
Figures 6-27 and 6-28 show the training and validation results. Validation accuracy is somewhat lower than that of the LSTM, but the model runs faster on both CPU and GPU (the exact speedup varies a lot with your configuration). At this point we could retrain the model for the appropriate number of epochs (4) and evaluate it on the test set. This example shows that, for word-level sentiment classification, a fast and cheap 1D convnet can stand in for a recurrent network.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond the scale of the convolution window), unlike RNNs. Of course, many convolution and pooling layers can be stacked to recognize longer-term patterns, so that the upper layers see long chunks of the original input, but that is still a fairly weak way to detect order. Let's confirm this by applying a 1D convnet to the temperature forecasting problem, where order sensitivity is required to produce good predictions. The following reuses the previously defined float_data, train_gen, val_gen, and val_steps:
###Code
import os
import numpy as np
data_dir = './datasets/jena_climate/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# How many steps to draw from val_gen in order to see the whole validation set
val_steps = (300000 - 200001 - lookback) // batch_size
# How many steps to draw from test_gen in order to see the whole test set
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 10s 21ms/step - loss: 0.4194 - val_loss: 0.4395
Epoch 2/20
500/500 [==============================] - 10s 20ms/step - loss: 0.3637 - val_loss: 0.4604
Epoch 3/20
500/500 [==============================] - 10s 20ms/step - loss: 0.3391 - val_loss: 0.4559
Epoch 4/20
500/500 [==============================] - 10s 20ms/step - loss: 0.3223 - val_loss: 0.4638
Epoch 5/20
500/500 [==============================] - 10s 20ms/step - loss: 0.3080 - val_loss: 0.4446
Epoch 6/20
500/500 [==============================] - 10s 19ms/step - loss: 0.3008 - val_loss: 0.4486
Epoch 7/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2921 - val_loss: 0.4983
Epoch 8/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2865 - val_loss: 0.4869
Epoch 9/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2787 - val_loss: 0.4539
Epoch 10/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2741 - val_loss: 0.4568
Epoch 11/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2708 - val_loss: 0.4796
Epoch 12/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2669 - val_loss: 0.4892
Epoch 13/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2615 - val_loss: 0.4874
Epoch 14/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2575 - val_loss: 0.4907
Epoch 15/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2564 - val_loss: 0.4733
Epoch 16/20
500/500 [==============================] - 10s 19ms/step - loss: 0.2535 - val_loss: 0.5018
Epoch 17/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2503 - val_loss: 0.4870
Epoch 18/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2470 - val_loss: 0.4639
Epoch 19/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2444 - val_loss: 0.4718
Epoch 20/20
500/500 [==============================] - 10s 20ms/step - loss: 0.2447 - val_loss: 0.4726
###Markdown
Here are the training and validation MAEs:
###Code
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: with the small convnet we cannot even beat the common-sense baseline. Again, this is because the convnet looks for patterns anywhere in the input timeseries and has no knowledge of the temporal position of the patterns it sees (toward the beginning, toward the end, and so on). Because recent data points should be interpreted differently from older ones in this forecasting problem, the convnet fails to produce meaningful results. This limitation of convnets is not an issue on the IMDB data, because the importance of a keyword pattern associated with positive or negative sentiment does not depend on where it appears in the input sequence.One strategy for combining the speed and lightness of convnets with the order sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially helpful with sequences that are realistically too long for an RNN to process, such as sequences with thousands of steps. The convnet turns the long input sequence into a much shorter (downsampled) sequence of higher-level features, and this sequence of extracted features becomes the input to the RNN part of the network. This technique does not appear often in research papers or practical applications, perhaps because it is not widely known; it is effective and deserves to be more common. Let's apply it to the temperature forecasting problem. Because this strategy can handle much longer sequences, we could look at data from further back (by increasing the generator's `lookback` parameter) or look at the timeseries at a higher resolution (by decreasing the generator's `step` parameter). Here we will simply halve `step`: the temperature data is then sampled at one point every 30 minutes, so the resulting timeseries is twice as long. We reuse the generator function defined earlier.
###Code
# This was previously 6 (one point per hour); now it is 3 (one point per 30 min)
step = 3
lookback = 1440 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This model places two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counter-parts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs, allowing to add one or more `Dense` layers to the model, for classification or regression.One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 7s 342us/step - loss: 0.9889 - acc: 0.4986 - val_loss: 0.6917 - val_acc: 0.5404
Epoch 2/10
20000/20000 [==============================] - 3s 126us/step - loss: 0.6771 - acc: 0.6166 - val_loss: 0.6723 - val_acc: 0.6352
Epoch 3/10
20000/20000 [==============================] - 2s 125us/step - loss: 0.6371 - acc: 0.7493 - val_loss: 0.6275 - val_acc: 0.7270
Epoch 4/10
20000/20000 [==============================] - 2s 124us/step - loss: 0.5552 - acc: 0.8054 - val_loss: 0.5134 - val_acc: 0.7876
Epoch 5/10
20000/20000 [==============================] - 2s 124us/step - loss: 0.4270 - acc: 0.8423 - val_loss: 0.4298 - val_acc: 0.8246
Epoch 6/10
20000/20000 [==============================] - 3s 125us/step - loss: 0.3532 - acc: 0.8682 - val_loss: 0.4042 - val_acc: 0.8392
Epoch 7/10
20000/20000 [==============================] - 2s 125us/step - loss: 0.3145 - acc: 0.8728 - val_loss: 0.4072 - val_acc: 0.8400
Epoch 8/10
20000/20000 [==============================] - 3s 125us/step - loss: 0.2797 - acc: 0.8746 - val_loss: 0.4048 - val_acc: 0.8214
Epoch 9/10
20000/20000 [==============================] - 2s 124us/step - loss: 0.2566 - acc: 0.8524 - val_loss: 0.4123 - val_acc: 0.8034
Epoch 10/10
20000/20000 [==============================] - 3s 125us/step - loss: 0.2327 - acc: 0.8300 - val_loss: 0.4517 - val_acc: 0.7762
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your exact configuration). At that point, we could re-train this model for the right number of epochs (8), and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to evidence this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to produce good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = './'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 15s 29ms/step - loss: 0.4188 - val_loss: 0.4506
Epoch 2/20
500/500 [==============================] - 14s 28ms/step - loss: 0.3628 - val_loss: 0.4464
Epoch 3/20
500/500 [==============================] - 14s 28ms/step - loss: 0.3363 - val_loss: 0.4665
Epoch 4/20
500/500 [==============================] - 14s 28ms/step - loss: 0.3201 - val_loss: 0.4674
Epoch 5/20
500/500 [==============================] - 14s 28ms/step - loss: 0.3069 - val_loss: 0.4640
Epoch 6/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2995 - val_loss: 0.4860
Epoch 7/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2921 - val_loss: 0.4732
Epoch 8/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2868 - val_loss: 0.5081
Epoch 9/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2787 - val_loss: 0.4811
Epoch 10/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2753 - val_loss: 0.4747
Epoch 11/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2721 - val_loss: 0.4754
Epoch 12/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2665 - val_loss: 0.5038
Epoch 13/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2630 - val_loss: 0.4876
Epoch 14/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2593 - val_loss: 0.4829
Epoch 15/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2574 - val_loss: 0.4901
Epoch 16/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2542 - val_loss: 0.4983
Epoch 17/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2508 - val_loss: 0.4777
Epoch 18/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2487 - val_loss: 0.4828
Epoch 19/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2460 - val_loss: 0.4776
Epoch 20/20
500/500 [==============================] - 14s 28ms/step - loss: 0.2456 - val_loss: 0.4871
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
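###Markdown
Before discussing these curves, here is a sketch (not in the original text) of how the trained convnet could be scored on the test split, reusing `test_gen` and `test_steps` defined above. Column 1 of `float_data` is the temperature the targets come from, so multiplying the normalized MAE by `std[1]` converts it back to degrees Celsius.
###Code
# Sketch only: evaluate the temperature convnet on the test generator.
test_mae = model.evaluate_generator(test_gen, steps=test_steps)
print('Test MAE (normalized):', test_mae)
print('Test MAE (degrees Celsius):', test_mae * std[1])
###Output
_____no_output_____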
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails to produce meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords associated with a positive or a negative sentiment are informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, with the weather data sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 1440 # Unchanged, so halving `step` doubles the sequence length
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
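###Markdown
To make the downsampling role of the convolutional front end concrete, here is a small arithmetic sketch (not part of the original text). It assumes the `Conv1D` default of 'valid' padding and reuses the `lookback` and `step` values set above.
###Code
# Length of each input sample on the time axis: lookback // step timesteps.
n = lookback // step
after_conv1 = n - 5 + 1            # Conv1D(32, 5) with 'valid' padding trims window - 1 steps
after_pool1 = after_conv1 // 3     # MaxPooling1D(3) with stride 3
after_conv2 = after_pool1 - 5 + 1  # second Conv1D(32, 5)
print(n, '->', after_conv1, '->', after_pool1, '->', after_conv2)
# The GRU therefore sees `after_conv2` feature vectors instead of the `n` raw timesteps.
###Output
_____no_output_____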
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer that turns the 3D outputs into 2D outputs, allowing you to add one or more `Dense` layers to the model for classification or regression.One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
WARNING:tensorflow:From /home/farid/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:497: calling conv1d (from tensorflow.python.ops.nn_ops) with data_format=NHWC is deprecated and will be removed in a future version.
Instructions for updating:
`NHWC` for data_format is deprecated, use `NWC` instead
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 86s 4ms/step - loss: 0.8336 - acc: 0.5095 - val_loss: 0.6874 - val_acc: 0.5650
Epoch 2/10
20000/20000 [==============================] - 80s 4ms/step - loss: 0.6699 - acc: 0.6393 - val_loss: 0.6642 - val_acc: 0.6580
Epoch 3/10
20000/20000 [==============================] - 70s 3ms/step - loss: 0.6236 - acc: 0.7529 - val_loss: 0.6082 - val_acc: 0.7432
Epoch 4/10
20000/20000 [==============================] - 76s 4ms/step - loss: 0.5259 - acc: 0.8077 - val_loss: 0.4846 - val_acc: 0.8054
Epoch 5/10
20000/20000 [==============================] - 77s 4ms/step - loss: 0.4100 - acc: 0.8481 - val_loss: 0.4386 - val_acc: 0.8306
Epoch 6/10
20000/20000 [==============================] - 79s 4ms/step - loss: 0.3476 - acc: 0.8664 - val_loss: 0.4153 - val_acc: 0.8354
Epoch 7/10
20000/20000 [==============================] - 77s 4ms/step - loss: 0.3066 - acc: 0.8667 - val_loss: 0.4376 - val_acc: 0.8246
Epoch 8/10
20000/20000 [==============================] - 78s 4ms/step - loss: 0.2760 - acc: 0.8552 - val_loss: 0.4282 - val_acc: 0.8100
Epoch 9/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.2514 - acc: 0.8353 - val_loss: 0.4426 - val_acc: 0.7898
Epoch 10/10
20000/20000 [==============================] - 72s 4ms/step - loss: 0.2285 - acc: 0.8264 - val_loss: 0.4478 - val_acc: 0.7756
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
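###Markdown
The paragraph above picks the number of epochs by eye from the curves. As an aside that is not in the original text, Keras callbacks can automate that choice; the cell below is a minimal sketch, and the checkpoint filename is just an example.
###Code
from keras.callbacks import EarlyStopping, ModelCheckpoint

# In practice you would rebuild the model from scratch before re-fitting it.
callbacks_list = [
    EarlyStopping(monitor='val_acc', patience=2),
    ModelCheckpoint('imdb_conv1d.h5', monitor='val_acc', save_best_only=True),
]
model.fit(x_train, y_train,
          epochs=10,
          batch_size=128,
          validation_split=0.2,
          callbacks=callbacks_list)
###Output
_____no_output_____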
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = './jena_climate'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 44s 89ms/step - loss: 0.4210 - val_loss: 0.4388
Epoch 2/20
500/500 [==============================] - 46s 92ms/step - loss: 0.3622 - val_loss: 0.4501
Epoch 3/20
500/500 [==============================] - 43s 85ms/step - loss: 0.3359 - val_loss: 0.4837
Epoch 4/20
500/500 [==============================] - 51s 102ms/step - loss: 0.3197 - val_loss: 0.4629
Epoch 5/20
500/500 [==============================] - 44s 88ms/step - loss: 0.3060 - val_loss: 0.4620
Epoch 6/20
500/500 [==============================] - 48s 97ms/step - loss: 0.2969 - val_loss: 0.4725
Epoch 7/20
500/500 [==============================] - 41s 83ms/step - loss: 0.2887 - val_loss: 0.4488
Epoch 8/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2837 - val_loss: 0.4650
Epoch 9/20
500/500 [==============================] - 40s 81ms/step - loss: 0.2764 - val_loss: 0.4615
Epoch 10/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2719 - val_loss: 0.5040
Epoch 11/20
500/500 [==============================] - 41s 82ms/step - loss: 0.2695 - val_loss: 0.4644
Epoch 12/20
500/500 [==============================] - 51s 103ms/step - loss: 0.2646 - val_loss: 0.4691
Epoch 13/20
500/500 [==============================] - 43s 87ms/step - loss: 0.2605 - val_loss: 0.4727
Epoch 14/20
500/500 [==============================] - 44s 89ms/step - loss: 0.2572 - val_loss: 0.4736
Epoch 15/20
500/500 [==============================] - 44s 87ms/step - loss: 0.2553 - val_loss: 0.4792
Epoch 16/20
500/500 [==============================] - 44s 88ms/step - loss: 0.2521 - val_loss: 0.4825
Epoch 17/20
500/500 [==============================] - 52s 105ms/step - loss: 0.2491 - val_loss: 0.4660
Epoch 18/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2470 - val_loss: 0.4685
Epoch 19/20
500/500 [==============================] - 46s 93ms/step - loss: 0.2446 - val_loss: 0.4836
Epoch 20/20
500/500 [==============================] - 54s 108ms/step - loss: 0.2432 - val_loss: 0.4878
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
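###Markdown
The discussion below compares this convnet with the common-sense baseline from the previous section. As a reminder, and as a sketch only (it assumes `val_gen`, `val_steps` and `std` as defined above), that baseline simply predicts that the temperature 24 hours from now equals the current temperature:
###Code
import numpy as np

def evaluate_naive_method():
    batch_maes = []
    for _ in range(val_steps):
        samples, targets = next(val_gen)
        # The last timestep's (normalized) temperature, column 1, is the naive prediction.
        preds = samples[:, -1, 1]
        batch_maes.append(np.mean(np.abs(preds - targets)))
    return np.mean(batch_maes)

naive_mae = evaluate_naive_method()
print(naive_mae, '->', naive_mae * std[1], 'degrees Celsius')
###Output
_____no_output_____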
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails to produce meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords associated with a positive or a negative sentiment are informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, with the weather data sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 1440 # Unchanged, so halving `step` doubles the sequence length
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
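###Markdown
Before building the model, it can help to check what one batch from the new generators looks like; the cell below is a small sketch that is not part of the original text.
###Code
# Draw a single batch from the training generator and inspect its shape:
# (batch_size, lookback // step, number of features).
sample_batch, target_batch = next(train_gen)
print(sample_batch.shape, target_batch.shape)
###Output
_____no_output_____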
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
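###Markdown
The reviews above are encoded as sequences of integer word indices. As a quick sketch that is not part of the original text, they can be mapped back to words with the index that `keras.datasets.imdb` provides; indices are offset by 3 because 0, 1 and 2 are reserved for padding, start-of-sequence and unknown tokens.
###Code
word_index = imdb.get_word_index()
reverse_word_index = {value: key for key, value in word_index.items()}
# Skip the zero padding added by pad_sequences and shift indices back by 3.
decoded_review = ' '.join(reverse_word_index.get(i - 3, '?') for i in x_train[0] if i != 0)
print(decoded_review[:200])
###Output
_____no_output_____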
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer that turns the 3D outputs into 2D outputs, allowing you to add one or more `Dense` layers to the model for classification or regression.One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 4s - loss: 0.7713 - acc: 0.5287 - val_loss: 0.6818 - val_acc: 0.5970
Epoch 2/10
20000/20000 [==============================] - 3s - loss: 0.6631 - acc: 0.6775 - val_loss: 0.6582 - val_acc: 0.6646
Epoch 3/10
20000/20000 [==============================] - 3s - loss: 0.6142 - acc: 0.7580 - val_loss: 0.5987 - val_acc: 0.7118
Epoch 4/10
20000/20000 [==============================] - 3s - loss: 0.5156 - acc: 0.8124 - val_loss: 0.4936 - val_acc: 0.7736
Epoch 5/10
20000/20000 [==============================] - 3s - loss: 0.4029 - acc: 0.8469 - val_loss: 0.4123 - val_acc: 0.8358
Epoch 6/10
20000/20000 [==============================] - 3s - loss: 0.3455 - acc: 0.8653 - val_loss: 0.4040 - val_acc: 0.8382
Epoch 7/10
20000/20000 [==============================] - 3s - loss: 0.3078 - acc: 0.8634 - val_loss: 0.4059 - val_acc: 0.8240
Epoch 8/10
20000/20000 [==============================] - 3s - loss: 0.2812 - acc: 0.8535 - val_loss: 0.4147 - val_acc: 0.8098
Epoch 9/10
20000/20000 [==============================] - 3s - loss: 0.2554 - acc: 0.8334 - val_loss: 0.4296 - val_acc: 0.7878
Epoch 10/10
20000/20000 [==============================] - 3s - loss: 0.2356 - acc: 0.8052 - val_loss: 0.4296 - val_acc: 0.7600
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/ubuntu/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
cnn_history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 11s - loss: 0.4106 - val_loss: 0.4331
Epoch 2/20
500/500 [==============================] - 10s - loss: 0.3604 - val_loss: 0.4252
Epoch 3/20
500/500 [==============================] - 10s - loss: 0.3364 - val_loss: 0.4371
Epoch 4/20
500/500 [==============================] - 10s - loss: 0.3224 - val_loss: 0.4271
Epoch 5/20
500/500 [==============================] - 10s - loss: 0.3085 - val_loss: 0.4420
Epoch 6/20
500/500 [==============================] - 10s - loss: 0.2988 - val_loss: 0.4774
Epoch 7/20
500/500 [==============================] - 10s - loss: 0.2916 - val_loss: 0.4484
Epoch 8/20
500/500 [==============================] - 10s - loss: 0.2853 - val_loss: 0.4446
Epoch 9/20
500/500 [==============================] - 10s - loss: 0.2788 - val_loss: 0.4728
Epoch 10/20
500/500 [==============================] - 10s - loss: 0.2709 - val_loss: 0.4370
Epoch 11/20
500/500 [==============================] - 10s - loss: 0.2694 - val_loss: 0.4472
Epoch 12/20
500/500 [==============================] - 10s - loss: 0.2633 - val_loss: 0.4499
Epoch 13/20
500/500 [==============================] - 10s - loss: 0.2598 - val_loss: 0.4428
Epoch 14/20
500/500 [==============================] - 10s - loss: 0.2572 - val_loss: 0.4471
Epoch 15/20
500/500 [==============================] - 10s - loss: 0.2543 - val_loss: 0.4597
Epoch 16/20
500/500 [==============================] - 10s - loss: 0.2512 - val_loss: 0.4420
Epoch 17/20
500/500 [==============================] - 10s - loss: 0.2481 - val_loss: 0.4462
Epoch 18/20
500/500 [==============================] - 10s - loss: 0.2450 - val_loss: 0.4486
Epoch 19/20
500/500 [==============================] - 10s - loss: 0.2417 - val_loss: 0.4558
Epoch 20/20
500/500 [==============================] - 10s - loss: 0.2399 - val_loss: 0.4512
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
loss = cnn_history.history['loss']
val_loss = cnn_history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
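###Markdown
To put the curves above in physical units, the one-cell sketch below (not in the original text) converts the best validation MAE back to degrees Celsius via the standard deviation of the temperature column used during normalization.
###Code
# Column 1 of float_data is the (normalized) temperature that the targets come from.
best_val_mae = min(cnn_history.history['val_loss'])
print('Best validation MAE:', best_val_mae)
print('In degrees Celsius:', best_val_mae * std[1])
###Output
_____no_output_____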
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails to produce meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords associated with a positive or a negative sentiment are informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, with the weather data sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 1440 # Unchanged, so halving `step` doubles the sequence length
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code examples from Chapter 6, Section 4 of [Deep Learning with Python](https://tensorflow.blog/deep-learning-with-python/). The book contains much more content and many more figures; this notebook only includes explanations related to the source code. The explanations in this notebook are written for Keras version 2.2.2. The notebook is re-tested whenever a new Keras version is released, so the explanations and code results may differ slightly. Implementing a 1D convnetIn Keras, a 1D convnet is built with the `Conv1D` layer, which has an interface similar to `Conv2D`. It takes 3D tensors of shape `(samples, time, features)` as input and returns similarly shaped 3D tensors. The convolution window is a 1D window on the time axis, that is, the second axis of the input tensor.Let's build a simple two-layer 1D convnet and apply it to the familiar IMDB sentiment classification task.As a reminder, here is the code that loads and preprocesses the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000  # number of words to consider as features
max_len = 500  # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as the 2D convnets you used in Chapter 5: you stack `Conv1D` and `MaxPooling1D` layers and finish with a global pooling layer or a `Flatten` layer. Because this structure turns the 3D input into a 2D output, you can add one or more `Dense` layers to the model for classification or regression.One difference is that you can afford larger convolution windows with 1D convnets. In a 2D convolution layer, a 3 × 3 convolution window considers 3 × 3 = 9 features, but in a 1D convolution layer a window of size 3 considers only 3 features. You can therefore easily use 1D convolution windows of size 7 or 9.The following is an example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 3s 167us/step - loss: 0.8337 - acc: 0.5093 - val_loss: 0.6874 - val_acc: 0.5636
Epoch 2/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.6700 - acc: 0.6381 - val_loss: 0.6642 - val_acc: 0.6572
Epoch 3/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.6237 - acc: 0.7527 - val_loss: 0.6082 - val_acc: 0.7426
Epoch 4/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.5262 - acc: 0.8076 - val_loss: 0.4830 - val_acc: 0.8052
Epoch 5/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.4130 - acc: 0.8475 - val_loss: 0.4334 - val_acc: 0.8298
Epoch 6/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3518 - acc: 0.8677 - val_loss: 0.4160 - val_acc: 0.8356
Epoch 7/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3095 - acc: 0.8705 - val_loss: 0.4423 - val_acc: 0.8248
Epoch 8/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.2795 - acc: 0.8608 - val_loss: 0.4166 - val_acc: 0.8156
Epoch 9/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.2556 - acc: 0.8433 - val_loss: 0.4560 - val_acc: 0.7890
Epoch 10/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.2330 - acc: 0.8257 - val_loss: 0.4794 - val_acc: 0.7672
###Markdown
Figures 6-27 and 6-28 show the training and validation results. Validation accuracy is somewhat lower than the LSTM's, but runtime is faster on both CPU and GPU (the exact speedup varies a lot depending on your environment). At this point, you could re-train this model for the right number of epochs (4) and evaluate it on the test set. This example shows that, for a word-level sentiment classification task, a fast and cheap 1D convnet can take the place of a recurrent network.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond the local scale of the convolution windows), unlike RNNs. Of course, to recognize longer-term patterns you can stack many convolution and pooling layers, so that upper layers see long chunks of the original input, but that is still a fairly weak way to induce order-sensitivity. Let's verify this by applying a 1D convnet to the temperature forecasting problem, where order-sensitivity is key to producing good predictions. The following reuses float_data, train_gen, val_gen, and val_steps defined earlier:
###Code
import os
import numpy as np
data_dir = './datasets/jena_climate/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# How many steps to draw from val_gen in order to see the whole validation set
val_steps = (300000 - 200001 - lookback) // batch_size
# How many steps to draw from test_gen in order to see the whole test set
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 7s 14ms/step - loss: 0.4196 - val_loss: 0.4319
Epoch 2/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3658 - val_loss: 0.4310
Epoch 3/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3421 - val_loss: 0.4689
Epoch 4/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3242 - val_loss: 0.4615
Epoch 5/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3112 - val_loss: 0.4529
Epoch 6/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3017 - val_loss: 0.4641
Epoch 7/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2934 - val_loss: 0.4665
Epoch 8/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2872 - val_loss: 0.4761
Epoch 9/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2798 - val_loss: 0.4660
Epoch 10/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2760 - val_loss: 0.4629
Epoch 11/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2728 - val_loss: 0.4748
Epoch 12/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2675 - val_loss: 0.4693
Epoch 13/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2626 - val_loss: 0.5308
Epoch 14/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2613 - val_loss: 0.5010
Epoch 15/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2583 - val_loss: 0.4917
Epoch 16/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2547 - val_loss: 0.5058
Epoch 17/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2518 - val_loss: 0.4791
Epoch 18/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2489 - val_loss: 0.4735
Epoch 19/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2475 - val_loss: 0.4751
Epoch 20/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2460 - val_loss: 0.5052
###Markdown
Here are the training and validation MAEs:
###Code
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the 0.40s: the small convnet cannot even beat the common-sense baseline. Again, this is because the convnet looks for patterns anywhere in the input timeseries and has no knowledge of the temporal position of the patterns it sees (toward the beginning, toward the end, and so on). Because more recent datapoints should be interpreted differently from older ones in this forecasting problem, the convnet fails to produce meaningful results. This limitation of convnets is not a problem on the IMDB data, because the importance of a keyword pattern associated with positive or negative sentiment does not depend on where it appears in the input sequence.One strategy for combining the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially helpful for sequences that are too long to realistically process with an RNN, such as sequences with thousands of steps. The convnet turns the long input sequence into a much shorter (downsampled) sequence of higher-level features, and this sequence of extracted features becomes the input to the RNN part of the network. This technique does not appear often in research papers or practical applications, probably because it is not widely known, but it is effective and deserves to be used more. Let's apply it to the temperature forecasting problem. Because this strategy lets us handle much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator) or look at the timeseries at a higher resolution (by decreasing the `step` parameter of the generator). Here we will simply halve `step`. Since the temperature data is then sampled once every 30 minutes, the resulting timeseries are twice as long. We reuse the generator function defined earlier.
###Code
# This was previously 6 (one point per hour); now it is 3 (one point per 30 min)
step = 3
lookback = 1440 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This model starts with two `Conv1D` layers and follows them with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
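###Markdown
Before defining the full model, here is a tiny sketch (not part of the original text) illustrating the `(samples, time, features)` convention mentioned above: a `Conv1D` layer maps a 3D tensor to another 3D tensor, shrinking the time axis by `window - 1` steps with the default 'valid' padding.
###Code
import numpy as np
from keras.models import Sequential
from keras import layers

demo = Sequential()
demo.add(layers.Conv1D(16, 7, activation='relu', input_shape=(500, 32)))
print(demo.output_shape)                                   # (None, 494, 16)
print(demo.predict(np.random.random((2, 500, 32))).shape)  # (2, 494, 16)
###Output
_____no_output_____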
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer that turns the 3D outputs into 2D outputs, allowing you to add one or more `Dense` layers to the model for classification or regression.One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 98, 32) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d (Global (None, 32) 0
_________________________________________________________________
dense (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
157/157 [==============================] - 17s 65ms/step - loss: 7.7306 - acc: 0.4988 - val_loss: 7.6168 - val_acc: 0.5062
Epoch 2/10
157/157 [==============================] - 6s 37ms/step - loss: 6.5748 - acc: 0.4911 - val_loss: 0.6947 - val_acc: 0.5108
Epoch 3/10
157/157 [==============================] - 6s 37ms/step - loss: 0.6864 - acc: 0.5577 - val_loss: 0.6799 - val_acc: 0.5712
Epoch 4/10
157/157 [==============================] - 6s 37ms/step - loss: 0.6574 - acc: 0.7022 - val_loss: 0.6489 - val_acc: 0.6966
Epoch 5/10
157/157 [==============================] - 6s 37ms/step - loss: 0.6063 - acc: 0.7875 - val_loss: 0.5687 - val_acc: 0.7628
Epoch 6/10
157/157 [==============================] - 6s 37ms/step - loss: 0.4959 - acc: 0.8331 - val_loss: 0.4439 - val_acc: 0.8256
Epoch 7/10
157/157 [==============================] - 6s 37ms/step - loss: 0.3729 - acc: 0.8683 - val_loss: 0.4088 - val_acc: 0.8404
Epoch 8/10
157/157 [==============================] - 6s 37ms/step - loss: 0.3048 - acc: 0.8894 - val_loss: 0.4035 - val_acc: 0.8584
Epoch 9/10
157/157 [==============================] - 6s 37ms/step - loss: 0.2729 - acc: 0.9038 - val_loss: 0.4139 - val_acc: 0.8618
Epoch 10/10
157/157 [==============================] - 6s 37ms/step - loss: 0.2339 - acc: 0.9172 - val_loss: 0.4118 - val_acc: 0.8686
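###Markdown
The claim above that larger windows are affordable in 1D can be checked by hand against the 28,704 parameters reported for the first `Conv1D` layer in the summary; the arithmetic sketch below was added for illustration and is not part of the original notebook.
###Code
# Parameter count of a Conv1D layer: window * in_channels * filters + filters.
# For the size-7 window over 128 embedding features with 32 filters used above:
conv1d_params = 7 * 128 * 32 + 32
print(conv1d_params)  # 28704, matching the model summary
# A 7x7 window in a 2D convolution over the same 128 channels would cost far more:
conv2d_params = 7 * 7 * 128 * 32 + 32
print(conv2d_params)  # 200736
###Output
_____no_output_____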
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
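###Markdown
As noted above, re-training for the chosen number of epochs (8) and then scoring on the test set would look roughly like the following sketch; it simply rebuilds the architecture from the earlier cells (assuming `x_train`, `y_train`, `x_test`, `y_test`, `max_features`, and `max_len` are still in scope) and is not executed here.
###Code
# Hedged sketch (not executed): rebuild the same 1D convnet, train for 8 epochs,
# then evaluate on the IMDB test split.
final_model = Sequential()
final_model.add(layers.Embedding(max_features, 128, input_length=max_len))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.MaxPooling1D(5))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.GlobalMaxPooling1D())
final_model.add(layers.Dense(1))
final_model.compile(optimizer=RMSprop(lr=1e-4),
                    loss='binary_crossentropy',
                    metrics=['acc'])
final_model.fit(x_train, y_train, epochs=8, batch_size=128)
test_loss, test_acc = final_model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
###Output
_____no_output_____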
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, to recognize longer-term patterns one could stack many convolution and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/vi/sources/nn/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
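    # Yields batches (samples, targets), where:
    #   samples has shape (batch_size, lookback // step, num_features) -- windows of past data
    #   targets holds the (normalized) temperature (column 1) `delay` timesteps after each window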
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py:1844: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, in which the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
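###Markdown
With these settings each sample spans `lookback // step` timesteps. The quick arithmetic sketch below (added for illustration, using the layer sizes of the model defined next) traces how the convolutional front-end shrinks that time axis before the `GRU` ever sees it:
###Code
# Arithmetic sketch: how many timesteps the GRU actually has to process.
timesteps = 720 // 3            # lookback=720, step=3 -> 240 steps per sample
after_conv1 = timesteps - 4     # Conv1D window of 5, no padding -> 236
after_pool1 = after_conv1 // 3  # MaxPooling1D(3) -> 78
after_conv2 = after_pool1 - 4   # second Conv1D window of 5 -> 74
print(after_conv2)              # the GRU sees ~74 downsampled steps instead of 240
###Output
_____no_output_____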
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
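###Markdown
The `test_gen`/`test_steps` pair defined earlier is never consumed in this notebook; under the same assumptions, scoring the trained Conv1D+GRU model on the test split could look like the sketch below (same generator-based API as above, not run here):
###Code
# Hedged sketch (not executed): evaluate the Conv1D+GRU model on the held-out test generator.
test_mae = model.evaluate_generator(test_gen, steps=test_steps)
print('Test MAE (normalized units):', test_mae)
###Output
_____no_output_____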
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
import numpy as np
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# restore np.load for future normal usage
np.load = np_load_old
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression.One difference, though, is that we can afford to use larger convolution windows with 1D convnets. With a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 contains only 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:3376: The name tf.log is deprecated. Please use tf.math.log instead.
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\tensorflow\python\ops\nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From C:\Users\gaborstefanics\Anaconda3\envs\TensorFlow\lib\site-packages\keras\backend\tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 87s 4ms/step - loss: 0.8337 - acc: 0.5091 - val_loss: 0.6874 - val_acc: 0.5648
Epoch 2/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.6699 - acc: 0.6390 - val_loss: 0.6641 - val_acc: 0.6590
Epoch 3/10
20000/20000 [==============================] - 83s 4ms/step - loss: 0.6235 - acc: 0.7538 - val_loss: 0.6078 - val_acc: 0.7438
Epoch 4/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.5254 - acc: 0.8078 - val_loss: 0.4841 - val_acc: 0.8066
Epoch 5/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.4124 - acc: 0.8480 - val_loss: 0.4178 - val_acc: 0.8364
Epoch 6/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.3521 - acc: 0.8690 - val_loss: 0.4157 - val_acc: 0.8362
Epoch 7/10
20000/20000 [==============================] - 85s 4ms/step - loss: 0.3143 - acc: 0.8628 - val_loss: 0.4447 - val_acc: 0.8184
Epoch 8/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.2829 - acc: 0.8463 - val_loss: 0.4389 - val_acc: 0.8016
Epoch 9/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.2550 - acc: 0.8328 - val_loss: 0.4653 - val_acc: 0.7836
Epoch 10/10
20000/20000 [==============================] - 84s 4ms/step - loss: 0.2322 - acc: 0.8098 - val_loss: 0.4684 - val_acc: 0.7692
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, to recognize longer-term patterns one could stack many convolution and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = 'C:/Users/gaborstefanics/Documents/GitHub/deep-learning-with-python-notebooks/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 73s 147ms/step - loss: 0.4218 - val_loss: 0.4370
Epoch 2/20
500/500 [==============================] - 71s 143ms/step - loss: 0.3674 - val_loss: 0.4455
Epoch 3/20
500/500 [==============================] - 73s 145ms/step - loss: 0.3458 - val_loss: 0.4650
Epoch 4/20
500/500 [==============================] - 71s 142ms/step - loss: 0.3284 - val_loss: 0.4519
Epoch 5/20
500/500 [==============================] - 72s 145ms/step - loss: 0.3141 - val_loss: 0.4528
Epoch 6/20
500/500 [==============================] - 69s 139ms/step - loss: 0.3063 - val_loss: 0.4601
Epoch 7/20
500/500 [==============================] - 67s 134ms/step - loss: 0.2976 - val_loss: 0.4826
Epoch 8/20
500/500 [==============================] - 78s 155ms/step - loss: 0.2927 - val_loss: 0.4598
Epoch 9/20
500/500 [==============================] - 98s 196ms/step - loss: 0.2856 - val_loss: 0.4545
Epoch 10/20
500/500 [==============================] - 96s 191ms/step - loss: 0.2818 - val_loss: 0.4863
Epoch 11/20
500/500 [==============================] - 95s 191ms/step - loss: 0.2777 - val_loss: 0.4954
Epoch 12/20
500/500 [==============================] - 98s 195ms/step - loss: 0.2735 - val_loss: 0.5049
Epoch 13/20
500/500 [==============================] - 73s 147ms/step - loss: 0.2678 - val_loss: 0.4650
Epoch 14/20
500/500 [==============================] - 74s 147ms/step - loss: 0.2662 - val_loss: 0.4916
Epoch 15/20
500/500 [==============================] - 70s 141ms/step - loss: 0.2626 - val_loss: 0.4759
Epoch 16/20
500/500 [==============================] - 72s 143ms/step - loss: 0.2591 - val_loss: 0.5046
Epoch 17/20
500/500 [==============================] - 72s 144ms/step - loss: 0.2553 - val_loss: 0.4774
Epoch 18/20
500/500 [==============================] - 69s 139ms/step - loss: 0.2534 - val_loss: 0.4813
Epoch 19/20
500/500 [==============================] - 71s 141ms/step - loss: 0.2510 - val_loss: 0.4875
Epoch 20/20
500/500 [==============================] - 72s 144ms/step - loss: 0.2493 - val_loss: 0.4971
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
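###Markdown
To relate the curves above to actual temperatures, a small conversion sketch (added for illustration; `std[1]` is the standard deviation of the temperature column used for normalization earlier in this notebook):
###Code
# Convert the best (lowest) validation MAE back into degrees Celsius.
best_val_mae = min(history.history['val_loss'])
print('Best val MAE (normalized):', best_val_mae)
print('Best val MAE (deg C):', best_val_mae * std[1])
###Output
_____no_output_____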
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, in which the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression.One difference, though, is that we can afford to use larger convolution windows with 1D convnets. With a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 contains only 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, to recognize longer-term patterns one could stack many convolution and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/ubuntu/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 124s - loss: 0.4189 - val_loss: 0.4521
Epoch 2/20
500/500 [==============================] - 11s - loss: 0.3629 - val_loss: 0.4545
Epoch 3/20
500/500 [==============================] - 11s - loss: 0.3399 - val_loss: 0.4527
Epoch 4/20
500/500 [==============================] - 11s - loss: 0.3229 - val_loss: 0.4721
Epoch 5/20
500/500 [==============================] - 11s - loss: 0.3122 - val_loss: 0.4712
Epoch 6/20
500/500 [==============================] - 11s - loss: 0.3030 - val_loss: 0.4705
Epoch 7/20
500/500 [==============================] - 11s - loss: 0.2935 - val_loss: 0.4870
Epoch 8/20
500/500 [==============================] - 11s - loss: 0.2862 - val_loss: 0.4676
Epoch 9/20
500/500 [==============================] - 11s - loss: 0.2817 - val_loss: 0.4738
Epoch 10/20
500/500 [==============================] - 11s - loss: 0.2775 - val_loss: 0.4896
Epoch 11/20
500/500 [==============================] - 11s - loss: 0.2715 - val_loss: 0.4765
Epoch 12/20
500/500 [==============================] - 11s - loss: 0.2683 - val_loss: 0.4724
Epoch 13/20
500/500 [==============================] - 11s - loss: 0.2644 - val_loss: 0.4842
Epoch 14/20
500/500 [==============================] - 11s - loss: 0.2606 - val_loss: 0.4910
Epoch 15/20
500/500 [==============================] - 11s - loss: 0.2558 - val_loss: 0.5000
Epoch 16/20
500/500 [==============================] - 11s - loss: 0.2539 - val_loss: 0.4960
Epoch 17/20
500/500 [==============================] - 11s - loss: 0.2516 - val_loss: 0.4875
Epoch 18/20
500/500 [==============================] - 11s - loss: 0.2501 - val_loss: 0.4884
Epoch 19/20
500/500 [==============================] - 11s - loss: 0.2444 - val_loss: 0.5024
Epoch 20/20
500/500 [==============================] - 11s - loss: 0.2444 - val_loss: 0.4821
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, in which the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
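# Note: input_shape=(None, n_features) leaves the time axis unspecified, so this
# convolutional front-end accepts input sequences of any length, independent of
# the `lookback`/`step` choices made when building the generators.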
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnetsThis notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Implementing a 1D convnetIn Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression.One difference, though, is that we can afford to use larger convolution windows with 1D convnets. With a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 contains only 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_3 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_4 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_2 (Glob (None, 32) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 4s 182us/step - loss: 0.8337 - acc: 0.5093 - val_loss: 0.6874 - val_acc: 0.5662
Epoch 2/10
20000/20000 [==============================] - 3s 154us/step - loss: 0.6699 - acc: 0.6392 - val_loss: 0.6641 - val_acc: 0.6566
Epoch 3/10
20000/20000 [==============================] - 3s 152us/step - loss: 0.6235 - acc: 0.7530 - val_loss: 0.6076 - val_acc: 0.7426
Epoch 4/10
20000/20000 [==============================] - 3s 162us/step - loss: 0.5254 - acc: 0.8082 - val_loss: 0.4842 - val_acc: 0.8064
Epoch 5/10
20000/20000 [==============================] - 3s 152us/step - loss: 0.4093 - acc: 0.8489 - val_loss: 0.4305 - val_acc: 0.8300
Epoch 6/10
20000/20000 [==============================] - 3s 152us/step - loss: 0.3466 - acc: 0.8671 - val_loss: 0.4163 - val_acc: 0.8354
Epoch 7/10
20000/20000 [==============================] - 3s 154us/step - loss: 0.3070 - acc: 0.8640 - val_loss: 0.4473 - val_acc: 0.8178
Epoch 8/10
20000/20000 [==============================] - 3s 153us/step - loss: 0.2767 - acc: 0.8502 - val_loss: 0.4237 - val_acc: 0.8062
Epoch 9/10
20000/20000 [==============================] - 3s 153us/step - loss: 0.2517 - acc: 0.8293 - val_loss: 0.4404 - val_acc: 0.7848
Epoch 10/10
20000/20000 [==============================] - 3s 157us/step - loss: 0.2264 - acc: 0.8132 - val_loss: 0.4835 - val_acc: 0.7684
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (though the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequencesBecause 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, to recognize longer-term patterns one could stack many convolution and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that is still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = 'C:/git/pythonTest/'  # directory containing the CSV; the filename is joined below
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 12s 25ms/step - loss: 0.4209 - val_loss: 0.4286
Epoch 2/20
500/500 [==============================] - 12s 23ms/step - loss: 0.3643 - val_loss: 0.4483
Epoch 3/20
500/500 [==============================] - 12s 23ms/step - loss: 0.3385 - val_loss: 0.4595
Epoch 4/20
500/500 [==============================] - 12s 23ms/step - loss: 0.3208 - val_loss: 0.4488
Epoch 5/20
500/500 [==============================] - 12s 24ms/step - loss: 0.3081 - val_loss: 0.4496
Epoch 6/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2998 - val_loss: 0.4536
Epoch 7/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2913 - val_loss: 0.5009
Epoch 8/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2865 - val_loss: 0.4783
Epoch 9/20
500/500 [==============================] - 12s 25ms/step - loss: 0.2790 - val_loss: 0.4528
Epoch 10/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2738 - val_loss: 0.4809
Epoch 11/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2718 - val_loss: 0.4725
Epoch 12/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2673 - val_loss: 0.4706
Epoch 13/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2610 - val_loss: 0.4512
Epoch 14/20
500/500 [==============================] - 12s 25ms/step - loss: 0.2581 - val_loss: 0.4777
Epoch 15/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2565 - val_loss: 0.4787
Epoch 16/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2523 - val_loss: 0.4527
Epoch 17/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2492 - val_loss: 0.4600
Epoch 18/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2470 - val_loss: 0.4545
Epoch 19/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2452 - val_loss: 0.4545
Epoch 20/20
500/500 [==============================] - 12s 24ms/step - loss: 0.2433 - val_loss: 0.4415
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, in which the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720  # Unchanged in the original listing (note that earlier cells used lookback = 1440)
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
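###Markdown
As a quick sanity check (this cell is not part of the original notebook), we can draw one batch from the new `train_gen` and look at its shape. With `lookback = 720` and `step = 3` each sample spans 720 // 3 = 240 timesteps, which is the same length as before (1440 // 6 = 240); keeping `lookback = 1440` would be needed to actually get sequences twice as long.
###Code
# Illustrative sanity check: inspect one batch from the generator.
sample_batch, target_batch = next(train_gen)
print(sample_batch.shape)  # (batch_size, lookback // step, n_features), e.g. (128, 240, 14) for the Jena data
print(target_batch.shape)  # (batch_size,)
###Output
_____no_output_____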
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
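###Markdown
`test_gen` and `test_steps` were defined above but never used; here is a minimal sketch (not in the original notebook) of evaluating the trained model on the test set:
###Code
# Evaluate the Conv1D + GRU model on the held-out test split.
# The returned value is the MAE on normalized data, comparable to val_loss above.
test_mae = model.evaluate_generator(test_gen, steps=test_steps)
print('Test MAE (normalized):', test_mae)
###Output
_____no_output_____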
###Markdown
Sequence processing with convnets

This notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.

Implementing a 1D convnet

In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.

Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.

As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
# Work around newer numpy versions, where `imdb.load_data` fails unless
# `np.load` is called with `allow_pickle=True`: temporarily patch `np.load`
# around the call, then restore it.
import numpy as np
np_load_old = np.load  # save the original np.load
np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
np.load = np_load_old  # restore the original np.load
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
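###Markdown
Before building the full model, here is a small standalone check (not part of the original notebook) of the `Conv1D` input/output shapes described above; the layer sizes match the model below, and the random input is purely illustrative.
###Code
import numpy as np
from keras.models import Sequential
from keras import layers

# A lone Conv1D layer: input is (samples, time, features),
# output is (samples, new_time, filters). With the default 'valid'
# padding, new_time = time - window_size + 1 = 500 - 7 + 1 = 494.
shape_demo = Sequential()
shape_demo.add(layers.Conv1D(32, 7, activation='relu', input_shape=(500, 128)))
fake_batch = np.random.random((2, 500, 128)).astype('float32')
print(shape_demo.predict(fake_batch).shape)  # expected: (2, 494, 32)
###Output
_____no_output_____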
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing you to add one or more `Dense` layers to the model, for classification or regression.

One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.

This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tensorenviron\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tensorenviron\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tensorenviron\lib\site-packages\tensorflow\python\ops\math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 93s 5ms/step - loss: 0.8337 - acc: 0.5090 - val_loss: 0.6875 - val_acc: 0.5630
Epoch 2/10
20000/20000 [==============================] - 92s 5ms/step - loss: 0.6700 - acc: 0.6390 - val_loss: 0.6642 - val_acc: 0.6578
Epoch 3/10
20000/20000 [==============================] - 96s 5ms/step - loss: 0.6237 - acc: 0.7530 - val_loss: 0.6084 - val_acc: 0.7446
Epoch 4/10
20000/20000 [==============================] - 94s 5ms/step - loss: 0.5263 - acc: 0.8081 - val_loss: 0.4831 - val_acc: 0.8062
Epoch 5/10
20000/20000 [==============================] - 85s 4ms/step - loss: 0.4127 - acc: 0.8477 - val_loss: 0.4323 - val_acc: 0.8300
Epoch 6/10
20000/20000 [==============================] - 86s 4ms/step - loss: 0.3496 - acc: 0.8677 - val_loss: 0.4163 - val_acc: 0.8380
Epoch 7/10
20000/20000 [==============================] - 85s 4ms/step - loss: 0.3085 - acc: 0.8649 - val_loss: 0.4401 - val_acc: 0.8208
Epoch 8/10
20000/20000 [==============================] - 88s 4ms/step - loss: 0.2793 - acc: 0.8511 - val_loss: 0.4328 - val_acc: 0.8022
Epoch 9/10
20000/20000 [==============================] - 87s 4ms/step - loss: 0.2547 - acc: 0.8284 - val_loss: 0.4458 - val_acc: 0.7840
Epoch 10/10
20000/20000 [==============================] - 85s 4ms/step - loss: 0.2305 - acc: 0.8035 - val_loss: 0.5143 - val_acc: 0.7462
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set (a short sketch of this follows the plots below). This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
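###Markdown
A sketch (not in the original notebook) of the re-training step suggested above: rebuild the same architecture, train it for the chosen number of epochs, and evaluate on the test set. The epoch count follows the suggestion in the text and should be adjusted to whatever the curves above indicate for your run.
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop

# Same architecture as above, retrained from scratch for a fixed number of epochs.
final_model = Sequential()
final_model.add(layers.Embedding(max_features, 128, input_length=max_len))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.MaxPooling1D(5))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.GlobalMaxPooling1D())
final_model.add(layers.Dense(1))
final_model.compile(optimizer=RMSprop(lr=1e-4),
                    loss='binary_crossentropy',
                    metrics=['acc'])
final_model.fit(x_train, y_train, epochs=8, batch_size=128)
test_loss, test_acc = final_model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
###Output
_____no_output_____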
###Markdown
Combining CNNs and RNNs to process long sequences

Because 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = 'C:/Users/FEM/Documents/python/deep-learning-with-python-notebooks/data/jena_climate'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 79s 158ms/step - loss: 0.4197 - val_loss: 0.4364
Epoch 2/20
500/500 [==============================] - 77s 153ms/step - loss: 0.3637 - val_loss: 0.4515
Epoch 3/20
500/500 [==============================] - 76s 153ms/step - loss: 0.3399 - val_loss: 0.4672
Epoch 4/20
500/500 [==============================] - 76s 151ms/step - loss: 0.3205 - val_loss: 0.4387
Epoch 5/20
500/500 [==============================] - 77s 155ms/step - loss: 0.3071 - val_loss: 0.4528
Epoch 6/20
500/500 [==============================] - 76s 151ms/step - loss: 0.2989 - val_loss: 0.4619
Epoch 7/20
500/500 [==============================] - 75s 150ms/step - loss: 0.2915 - val_loss: 0.4718
Epoch 8/20
500/500 [==============================] - 77s 154ms/step - loss: 0.2856 - val_loss: 0.4866
Epoch 9/20
500/500 [==============================] - 76s 151ms/step - loss: 0.2795 - val_loss: 0.4615
Epoch 10/20
500/500 [==============================] - 74s 148ms/step - loss: 0.2734 - val_loss: 0.4628
Epoch 11/20
500/500 [==============================] - 75s 151ms/step - loss: 0.2713 - val_loss: 0.4695
Epoch 12/20
500/500 [==============================] - 73s 146ms/step - loss: 0.2668 - val_loss: 0.4722
Epoch 13/20
500/500 [==============================] - 74s 148ms/step - loss: 0.2614 - val_loss: 0.4596
Epoch 14/20
500/500 [==============================] - 78s 155ms/step - loss: 0.2591 - val_loss: 0.4746
Epoch 15/20
500/500 [==============================] - 77s 154ms/step - loss: 0.2562 - val_loss: 0.4758
Epoch 16/20
500/500 [==============================] - 76s 153ms/step - loss: 0.2550 - val_loss: 0.4641
Epoch 17/20
500/500 [==============================] - 76s 152ms/step - loss: 0.2488 - val_loss: 0.4634
Epoch 18/20
500/500 [==============================] - 76s 151ms/step - loss: 0.2482 - val_loss: 0.4594
Epoch 19/20
500/500 [==============================] - 75s 151ms/step - loss: 0.2455 - val_loss: 0.4635
Epoch 20/20
500/500 [==============================] - 75s 150ms/step - loss: 0.2439 - val_loss: 0.4818
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
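###Markdown
For reference, this is a re-derivation (not part of the original notebook) of the common-sense baseline from the previous section: always predict that the temperature `delay` steps from now equals the current temperature. Its MAE is the number the convnet fails to beat.
###Code
import numpy as np

def evaluate_naive_method():
    batch_maes = []
    for _ in range(val_steps):
        samples, targets = next(val_gen)
        preds = samples[:, -1, 1]  # temperature (column 1) at the last timestep of each sample
        batch_maes.append(np.mean(np.abs(preds - targets)))
    return np.mean(batch_maes)

print('Common-sense baseline MAE (normalized):', evaluate_naive_method())
###Output
_____no_output_____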
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.

One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known, but it is effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, with the weather data sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720  # Unchanged in the original listing (note that earlier cells used lookback = 1440)
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
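###Markdown
It is worth spelling out how much the two `Conv1D` / `MaxPooling1D` stages shorten the sequence the `GRU` has to process (arithmetic only, assuming the layers' defaults of 'valid' padding and a pool stride equal to the pool size, which is what the model above uses):
###Code
# Sequence length seen by each stage of the Conv1D + GRU model.
timesteps = lookback // step       # 720 // 3 = 240 input timesteps
after_conv1 = timesteps - 5 + 1    # Conv1D, window 5, 'valid' padding -> 236
after_pool = after_conv1 // 3      # MaxPooling1D(3) -> 78
after_conv2 = after_pool - 5 + 1   # second Conv1D -> 74
print('GRU input length:', after_conv2)
###Output
_____no_output_____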
###Markdown
Sequence processing with convnets

This notebook contains the code examples from Chapter 6, Section 4 of [Deep Learning with Python](https://tensorflow.blog/deep-learning-with-python/). The book has far more content and figures; this notebook includes only the explanations that relate to the source code. The explanations here are written for Keras version 2.2.2; because the notebook is re-tested whenever a new Keras version is released, the text and the code results may differ slightly.

Implementing a 1D convnet

In Keras, a 1D convnet is implemented with the `Conv1D` layer, which has an interface similar to `Conv2D`. It takes 3D tensors of shape `(samples, time, features)` as input and returns similarly shaped 3D tensors. The convolution window is a 1D window along the time axis, i.e. the second axis of the input tensor.

Let's build a simple two-layer 1D convnet and apply it to the familiar IMDB sentiment classification task.

As a reminder, this is the code that loads and preprocesses the data:
###Code
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 10000  # number of words to consider as features
max_len = 500  # cut texts after this number of words (among the max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured much like the 2D convnets used in Chapter 5: a stack of `Conv1D` and `MaxPooling1D` layers, finished with a global pooling layer or a `Flatten` layer. Because this structure turns the 3D input into a 2D output, one or more `Dense` layers can be added to the model for classification or regression.

One difference is that 1D convnets can afford larger convolution windows. In a 2D convolution layer, a 3 × 3 window considers 3 × 3 = 9 features, but in a 1D convolution layer a window of size 3 considers only 3 features, so 1D windows of size 7 or 9 are easily affordable.

The following is an example 1D convnet for the IMDB dataset:
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 3s 167us/step - loss: 0.8337 - acc: 0.5093 - val_loss: 0.6874 - val_acc: 0.5636
Epoch 2/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.6700 - acc: 0.6381 - val_loss: 0.6642 - val_acc: 0.6572
Epoch 3/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.6237 - acc: 0.7527 - val_loss: 0.6082 - val_acc: 0.7426
Epoch 4/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.5262 - acc: 0.8076 - val_loss: 0.4830 - val_acc: 0.8052
Epoch 5/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.4130 - acc: 0.8475 - val_loss: 0.4334 - val_acc: 0.8298
Epoch 6/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3518 - acc: 0.8677 - val_loss: 0.4160 - val_acc: 0.8356
Epoch 7/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3095 - acc: 0.8705 - val_loss: 0.4423 - val_acc: 0.8248
Epoch 8/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.2795 - acc: 0.8608 - val_loss: 0.4166 - val_acc: 0.8156
Epoch 9/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.2556 - acc: 0.8433 - val_loss: 0.4560 - val_acc: 0.7890
Epoch 10/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.2330 - acc: 0.8257 - val_loss: 0.4794 - val_acc: 0.7672
###Markdown
Figures 6-27 and 6-28 show the training and validation results. Validation accuracy is somewhat lower than with the LSTM, but the model runs faster on both CPU and GPU (the exact speedup varies a lot with your setup). At this point you could re-train the model for the appropriate number of epochs (4) and evaluate it on the test set. This example shows that, for a word-level sentiment classification task, a fast and cheap 1D convnet can replace a recurrent network.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequences

Because a 1D convnet processes input patches independently, it is not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution window), unlike an RNN. Of course, you can stack many convolution and pooling layers to recognize longer-term patterns, so that the upper layers see long chunks of the original input, but that is still a fairly weak way to induce order sensitivity. We can verify this by applying a 1D convnet to the temperature forecasting problem, where order sensitivity is required to produce good predictions. The following reuses the previously defined float_data, train_gen, val_gen, and val_steps:
###Code
import os
import numpy as np
data_dir = './datasets/jena_climate/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# How many steps to draw from val_gen in order to see the whole validation set
val_steps = (300000 - 200001 - lookback) // batch_size
# How many steps to draw from test_gen in order to see the whole test set
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 7s 14ms/step - loss: 0.4196 - val_loss: 0.4319
Epoch 2/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3658 - val_loss: 0.4310
Epoch 3/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3421 - val_loss: 0.4689
Epoch 4/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3242 - val_loss: 0.4615
Epoch 5/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3112 - val_loss: 0.4529
Epoch 6/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3017 - val_loss: 0.4641
Epoch 7/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2934 - val_loss: 0.4665
Epoch 8/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2872 - val_loss: 0.4761
Epoch 9/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2798 - val_loss: 0.4660
Epoch 10/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2760 - val_loss: 0.4629
Epoch 11/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2728 - val_loss: 0.4748
Epoch 12/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2675 - val_loss: 0.4693
Epoch 13/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2626 - val_loss: 0.5308
Epoch 14/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2613 - val_loss: 0.5010
Epoch 15/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2583 - val_loss: 0.4917
Epoch 16/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2547 - val_loss: 0.5058
Epoch 17/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2518 - val_loss: 0.4791
Epoch 18/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2489 - val_loss: 0.4735
Epoch 19/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2475 - val_loss: 0.4751
Epoch 20/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2460 - val_loss: 0.5052
###Markdown
Here are the training and validation MAEs:
###Code
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
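###Markdown
A small sketch (not part of the original notebook): since column 1 of `float_data` is the temperature and the data was normalized with the training-set statistics, multiplying the normalized MAE by `std[1]` converts it back to degrees Celsius.
###Code
# Best validation MAE of the convnet, expressed in degrees Celsius.
best_val_mae = min(val_loss)
print('Best validation MAE: {:.2f} degrees C'.format(best_val_mae * std[1]))
###Output
_____no_output_____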
###Markdown
The validation MAE stays in the 0.40s: with the small convnet we cannot even beat the common-sense baseline. This is because the convnet looks for patterns anywhere in the input timeseries and has no notion of where along the time axis a pattern occurs (near the beginning, near the end, and so on). Since the most recent datapoints should be interpreted differently from older ones in this forecasting problem, the convnet fails to produce meaningful results. This limitation is not a problem on the IMDB data, because the importance of keyword patterns associated with positive or negative sentiment does not depend on where they appear in the input sequence.

One strategy for combining the speed and lightness of convnets with the order sensitivity of RNNs is to use a 1D convnet as a preprocessing step in front of an RNN. This is especially helpful with sequences that are too long to realistically process with an RNN, such as sequences of thousands of steps. The convnet turns the long input sequence into much shorter (downsampled) sequences of higher-level features, and that sequence of extracted features becomes the input to the RNN part of the network. The technique does not appear often in research papers or practical applications, probably because it is not widely known, but it is effective and deserves to be used more. Let's apply it to the temperature forecasting problem. Because this strategy lets us handle much longer sequences, we could either look further back in time (by increasing the generator's `lookback` parameter) or look at higher-resolution timeseries (by decreasing the generator's `step` parameter). Here we simply halve `step`: the temperature data is then sampled at one point every 30 minutes, so the resulting timeseries are twice as long. We reuse the generator function defined earlier.
###Code
# This was previously 6 (one point per hour); now 3 (one point per 30 min)
step = 3
lookback = 1440 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This model starts with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnets

This notebook contains the code examples from Chapter 6, Section 4 of [Deep Learning with Python](https://tensorflow.blog/deep-learning-with-python/). The book has far more content and figures; this notebook includes only the explanations that relate to the source code. The explanations here are written for Keras version 2.2.2; because the notebook is re-tested whenever a new Keras version is released, the text and the code results may differ slightly.

Implementing a 1D convnet

In Keras, a 1D convnet is implemented with the `Conv1D` layer, which has an interface similar to `Conv2D`. It takes 3D tensors of shape `(samples, time, features)` as input and returns similarly shaped 3D tensors. The convolution window is a 1D window along the time axis, i.e. the second axis of the input tensor.

Let's build a simple two-layer 1D convnet and apply it to the familiar IMDB sentiment classification task.

As a reminder, this is the code that loads and preprocesses the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000  # number of words to consider as features
max_len = 500  # cut texts after this number of words (among the max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured much like the 2D convnets used in Chapter 5: a stack of `Conv1D` and `MaxPooling1D` layers, finished with a global pooling layer or a `Flatten` layer. Because this structure turns the 3D input into a 2D output, one or more `Dense` layers can be added to the model for classification or regression.

One difference is that 1D convnets can afford larger convolution windows. In a 2D convolution layer, a 3 × 3 window considers 3 × 3 = 9 features, but in a 1D convolution layer a window of size 3 considers only 3 features, so 1D windows of size 7 or 9 are easily affordable.

The following is an example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 3s 167us/step - loss: 0.8337 - acc: 0.5093 - val_loss: 0.6874 - val_acc: 0.5636
Epoch 2/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.6700 - acc: 0.6381 - val_loss: 0.6642 - val_acc: 0.6572
Epoch 3/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.6237 - acc: 0.7527 - val_loss: 0.6082 - val_acc: 0.7426
Epoch 4/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.5262 - acc: 0.8076 - val_loss: 0.4830 - val_acc: 0.8052
Epoch 5/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.4130 - acc: 0.8475 - val_loss: 0.4334 - val_acc: 0.8298
Epoch 6/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3518 - acc: 0.8677 - val_loss: 0.4160 - val_acc: 0.8356
Epoch 7/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.3095 - acc: 0.8705 - val_loss: 0.4423 - val_acc: 0.8248
Epoch 8/10
20000/20000 [==============================] - 2s 102us/step - loss: 0.2795 - acc: 0.8608 - val_loss: 0.4166 - val_acc: 0.8156
Epoch 9/10
20000/20000 [==============================] - 2s 99us/step - loss: 0.2556 - acc: 0.8433 - val_loss: 0.4560 - val_acc: 0.7890
Epoch 10/10
20000/20000 [==============================] - 2s 100us/step - loss: 0.2330 - acc: 0.8257 - val_loss: 0.4794 - val_acc: 0.7672
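###Markdown
Instead of reading the number of epochs off the curves and re-training by hand, Keras callbacks can stop training when the validation loss stops improving and keep the best weights on disk. A minimal sketch (not in the original notebook; the checkpoint filename is just an example), meant to be run on a freshly built copy of the model above rather than on the already-trained one:
###Code
from keras.callbacks import EarlyStopping, ModelCheckpoint

# (in practice, rebuild and recompile the model first so training starts from fresh weights)
callbacks_list = [
    # Stop once val_loss has not improved for 2 consecutive epochs.
    EarlyStopping(monitor='val_loss', patience=2),
    # Keep only the weights of the best epoch (example filename).
    ModelCheckpoint(filepath='imdb_conv1d.h5', monitor='val_loss', save_best_only=True),
]
history = model.fit(x_train, y_train,
                    epochs=20,
                    batch_size=128,
                    validation_split=0.2,
                    callbacks=callbacks_list)
###Output
_____no_output_____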
###Markdown
Figures 6-27 and 6-28 show the training and validation results. Validation accuracy is somewhat lower than with the LSTM, but the model runs faster on both CPU and GPU (the exact speedup varies a lot with your setup). At this point you could re-train the model for the appropriate number of epochs (4) and evaluate it on the test set. This example shows that, for a word-level sentiment classification task, a fast and cheap 1D convnet can replace a recurrent network.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequences

Because a 1D convnet processes input patches independently, it is not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution window), unlike an RNN. Of course, you can stack many convolution and pooling layers to recognize longer-term patterns, so that the upper layers see long chunks of the original input, but that is still a fairly weak way to induce order sensitivity. We can verify this by applying a 1D convnet to the temperature forecasting problem, where order sensitivity is required to produce good predictions. The following reuses the previously defined float_data, train_gen, val_gen, and val_steps:
###Code
import os
import numpy as np
data_dir = './datasets/jena_climate/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# How many steps to draw from val_gen in order to see the whole validation set
val_steps = (300000 - 200001 - lookback) // batch_size
# How many steps to draw from test_gen in order to see the whole test set
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 7s 14ms/step - loss: 0.4196 - val_loss: 0.4319
Epoch 2/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3658 - val_loss: 0.4310
Epoch 3/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3421 - val_loss: 0.4689
Epoch 4/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3242 - val_loss: 0.4615
Epoch 5/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3112 - val_loss: 0.4529
Epoch 6/20
500/500 [==============================] - 7s 13ms/step - loss: 0.3017 - val_loss: 0.4641
Epoch 7/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2934 - val_loss: 0.4665
Epoch 8/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2872 - val_loss: 0.4761
Epoch 9/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2798 - val_loss: 0.4660
Epoch 10/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2760 - val_loss: 0.4629
Epoch 11/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2728 - val_loss: 0.4748
Epoch 12/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2675 - val_loss: 0.4693
Epoch 13/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2626 - val_loss: 0.5308
Epoch 14/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2613 - val_loss: 0.5010
Epoch 15/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2583 - val_loss: 0.4917
Epoch 16/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2547 - val_loss: 0.5058
Epoch 17/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2518 - val_loss: 0.4791
Epoch 18/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2489 - val_loss: 0.4735
Epoch 19/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2475 - val_loss: 0.4751
Epoch 20/20
500/500 [==============================] - 7s 13ms/step - loss: 0.2460 - val_loss: 0.5052
###Markdown
Here are the training and validation MAEs:
###Code
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the 0.40s: with the small convnet we cannot even beat the common-sense baseline. This is because the convnet looks for patterns anywhere in the input timeseries and has no notion of where along the time axis a pattern occurs (near the beginning, near the end, and so on). Since the most recent datapoints should be interpreted differently from older ones in this forecasting problem, the convnet fails to produce meaningful results. This limitation is not a problem on the IMDB data, because the importance of keyword patterns associated with positive or negative sentiment does not depend on where they appear in the input sequence.

One strategy for combining the speed and lightness of convnets with the order sensitivity of RNNs is to use a 1D convnet as a preprocessing step in front of an RNN. This is especially helpful with sequences that are too long to realistically process with an RNN, such as sequences of thousands of steps. The convnet turns the long input sequence into much shorter (downsampled) sequences of higher-level features, and that sequence of extracted features becomes the input to the RNN part of the network. The technique does not appear often in research papers or practical applications, probably because it is not widely known, but it is effective and deserves to be used more. Let's apply it to the temperature forecasting problem. Because this strategy lets us handle much longer sequences, we could either look further back in time (by increasing the generator's `lookback` parameter) or look at higher-resolution timeseries (by decreasing the generator's `step` parameter). Here we simply halve `step`: the temperature data is then sampled at one point every 30 minutes, so the resulting timeseries are twice as long. We reuse the generator function defined earlier.
###Code
# This was previously 6 (one point per hour); now 3 (one point per 30 min)
step = 3
lookback = 1440 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This model starts with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnets

This notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.

Implementing a 1D convnet

In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor.

Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with.

As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing you to add one or more `Dense` layers to the model, for classification or regression.

One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9.

This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
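###Markdown
As a side note (not in the original notebook), the parameter counts in the summary above can be checked by hand: an `Embedding` layer has `vocabulary_size * embedding_dim` weights, a `Conv1D` layer has `filters * (window * input_channels) + filters` weights, and a `Dense` layer has `inputs * outputs + outputs`.
###Code
# Recomputing the parameter counts shown in the model summary.
embedding_params = 10000 * 128          # 1,280,000
conv1_params = 32 * (7 * 128) + 32      # 28,704
conv2_params = 32 * (7 * 32) + 32       # 7,200
dense_params = 32 * 1 + 1               # 33
print(embedding_params + conv1_params + conv2_params + dense_params)  # 1,315,937
###Output
_____no_output_____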
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (albeit the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequences

Because 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/ubuntu/data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
_____no_output_____
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.

One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known, but it is effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, with the weather data sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720  # Unchanged in the original listing (note that earlier cells used lookback = 1440)
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers followed by a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnets
This notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
Implementing a 1D convnet
In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor. Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with. As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression. One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9. This is our example 1D convnet for the IMDB dataset:
###Code
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 5s 248us/step - loss: 0.8337 - acc: 0.5091 - val_loss: 0.6875 - val_acc: 0.5642
Epoch 2/10
20000/20000 [==============================] - 1s 55us/step - loss: 0.6699 - acc: 0.6395 - val_loss: 0.6641 - val_acc: 0.6582
Epoch 3/10
20000/20000 [==============================] - 1s 55us/step - loss: 0.6232 - acc: 0.7563 - val_loss: 0.6074 - val_acc: 0.7418
Epoch 4/10
20000/20000 [==============================] - 1s 57us/step - loss: 0.5251 - acc: 0.8091 - val_loss: 0.4846 - val_acc: 0.8068
Epoch 5/10
20000/20000 [==============================] - 1s 57us/step - loss: 0.4114 - acc: 0.8477 - val_loss: 0.4368 - val_acc: 0.8286
Epoch 6/10
20000/20000 [==============================] - 1s 57us/step - loss: 0.3482 - acc: 0.8619 - val_loss: 0.4257 - val_acc: 0.8322
Epoch 7/10
20000/20000 [==============================] - 1s 57us/step - loss: 0.3105 - acc: 0.8579 - val_loss: 0.4436 - val_acc: 0.8176
Epoch 8/10
20000/20000 [==============================] - 1s 59us/step - loss: 0.2798 - acc: 0.8435 - val_loss: 0.4367 - val_acc: 0.7980
Epoch 9/10
20000/20000 [==============================] - 1s 58us/step - loss: 0.2524 - acc: 0.8193 - val_loss: 0.4384 - val_acc: 0.7786
Epoch 10/10
20000/20000 [==============================] - 1s 57us/step - loss: 0.2282 - acc: 0.8008 - val_loss: 0.4895 - val_acc: 0.7554
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (although the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set; a sketch of that step follows the plot below. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
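###Markdown
As a hedged illustration of the re-training step mentioned above (my own sketch, not part of the original notebook; the epoch count of 8 is simply read off the validation curve), one could rebuild the same architecture, fit it for 8 epochs, and score it on the held-out test set:
###Code
# Illustrative sketch only: rebuild the same model, train for the epoch count
# suggested by the validation curve, then evaluate on the test data.
final_model = Sequential()
final_model.add(layers.Embedding(max_features, 128, input_length=max_len))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.MaxPooling1D(5))
final_model.add(layers.Conv1D(32, 7, activation='relu'))
final_model.add(layers.GlobalMaxPooling1D())
final_model.add(layers.Dense(1))
final_model.compile(optimizer=RMSprop(lr=1e-4),
                    loss='binary_crossentropy',
                    metrics=['acc'])
final_model.fit(x_train, y_train, epochs=8, batch_size=128)
test_loss, test_acc = final_model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
###Output
_____no_output_____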
###Markdown
Combining CNNs and RNNs to process long sequences
Because 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os, platform
import numpy as np
sys_name = platform.system().lower()
data_dir = ""
if ( sys_name == "windows" ) :
data_dir = 'E:/Datasets'
elif ( sys_name == "linux" ) :
data_dir = '/home/ubuntu/data'
else:
data_dir = '/Users/vinnys/Downloads'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
Epoch 1/20
500/500 [==============================] - 12s 23ms/step - loss: 0.4201 - val_loss: 0.4392
Epoch 2/20
500/500 [==============================] - 11s 22ms/step - loss: 0.3647 - val_loss: 0.4421
Epoch 3/20
500/500 [==============================] - 11s 22ms/step - loss: 0.3395 - val_loss: 0.4554
Epoch 4/20
500/500 [==============================] - 11s 22ms/step - loss: 0.3247 - val_loss: 0.4497
Epoch 5/20
500/500 [==============================] - 11s 22ms/step - loss: 0.3117 - val_loss: 0.4526
Epoch 6/20
500/500 [==============================] - 11s 22ms/step - loss: 0.3043 - val_loss: 0.4679
Epoch 7/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2953 - val_loss: 0.4530
Epoch 8/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2877 - val_loss: 0.4905
Epoch 9/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2815 - val_loss: 0.4589
Epoch 10/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2757 - val_loss: 0.4703
Epoch 11/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2721 - val_loss: 0.4611
Epoch 12/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2678 - val_loss: 0.4729
Epoch 13/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2623 - val_loss: 0.5133
Epoch 14/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2595 - val_loss: 0.4733
Epoch 15/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2579 - val_loss: 0.4809
Epoch 16/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2543 - val_loss: 0.4749
Epoch 17/20
500/500 [==============================] - 11s 23ms/step - loss: 0.2497 - val_loss: 0.4724
Epoch 18/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2494 - val_loss: 0.4853
Epoch 19/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2452 - val_loss: 0.4795
Epoch 20/20
500/500 [==============================] - 11s 22ms/step - loss: 0.2445 - val_loss: 0.4808
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.
One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, where the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Sequence processing with convnets
This notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
Implementing a 1D convnet
In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor. Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with. As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression. One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9. This is our example 1D convnet for the IMDB dataset:
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 98, 32) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d (Global (None, 32) 0
_________________________________________________________________
dense (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 5s 236us/sample - loss: 0.7344 - acc: 0.5167 - val_loss: 0.6867 - val_acc: 0.5542
Epoch 2/10
20000/20000 [==============================] - 3s 164us/sample - loss: 0.6712 - acc: 0.6493 - val_loss: 0.6690 - val_acc: 0.6344
Epoch 3/10
20000/20000 [==============================] - 4s 183us/sample - loss: 0.6363 - acc: 0.7341 - val_loss: 0.6296 - val_acc: 0.6876
Epoch 4/10
20000/20000 [==============================] - 4s 197us/sample - loss: 0.5618 - acc: 0.7872 - val_loss: 0.5229 - val_acc: 0.7848
Epoch 5/10
20000/20000 [==============================] - 4s 201us/sample - loss: 0.4401 - acc: 0.8321 - val_loss: 0.4350 - val_acc: 0.8268
Epoch 6/10
20000/20000 [==============================] - 4s 196us/sample - loss: 0.3602 - acc: 0.8683 - val_loss: 0.4282 - val_acc: 0.8442
Epoch 7/10
20000/20000 [==============================] - 4s 185us/sample - loss: 0.3137 - acc: 0.8878 - val_loss: 0.4151 - val_acc: 0.8534
Epoch 8/10
20000/20000 [==============================] - 4s 209us/sample - loss: 0.2779 - acc: 0.9006 - val_loss: 0.4162 - val_acc: 0.8602
Epoch 9/10
20000/20000 [==============================] - 4s 196us/sample - loss: 0.2488 - acc: 0.9126 - val_loss: 0.4151 - val_acc: 0.8638
Epoch 10/10
20000/20000 [==============================] - 4s 197us/sample - loss: 0.2218 - acc: 0.9224 - val_loss: 0.4356 - val_acc: 0.8674
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (although the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequences
Because 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
data_dir = '/home/yr19/Data/'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
WARNING:tensorflow:From <ipython-input-8-3b2330b0f5ba>:20: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 500 steps, validate for 769 steps
Epoch 1/20
500/500 [==============================] - 6s 13ms/step - loss: 0.4199 - val_loss: 0.4382
Epoch 2/20
500/500 [==============================] - 6s 12ms/step - loss: 0.3629 - val_loss: 0.4635
Epoch 3/20
500/500 [==============================] - 6s 12ms/step - loss: 0.3379 - val_loss: 0.4823
Epoch 4/20
500/500 [==============================] - 6s 12ms/step - loss: 0.3219 - val_loss: 0.4572
Epoch 5/20
500/500 [==============================] - 6s 13ms/step - loss: 0.3096 - val_loss: 0.4621
Epoch 6/20
500/500 [==============================] - 6s 12ms/step - loss: 0.3005 - val_loss: 0.4587
Epoch 7/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2915 - val_loss: 0.4724
Epoch 8/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2852 - val_loss: 0.4969
Epoch 9/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2784 - val_loss: 0.4843
Epoch 10/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2737 - val_loss: 0.4902
Epoch 11/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2682 - val_loss: 0.4641
Epoch 12/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2647 - val_loss: 0.4735
Epoch 13/20
500/500 [==============================] - 7s 14ms/step - loss: 0.2616 - val_loss: 0.4761
Epoch 14/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2567 - val_loss: 0.4798
Epoch 15/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2545 - val_loss: 0.4924
Epoch 16/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2527 - val_loss: 0.4684
Epoch 17/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2469 - val_loss: 0.4869
Epoch 18/20
500/500 [==============================] - 6s 12ms/step - loss: 0.2470 - val_loss: 0.4686
Epoch 19/20
500/500 [==============================] - 6s 13ms/step - loss: 0.2440 - val_loss: 0.4733
Epoch 20/20
500/500 [==============================] - 7s 14ms/step - loss: 0.2410 - val_loss: 0.4895
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.
One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, where the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
###Code
import tensorflow
tensorflow.keras.__version__
###Output
_____no_output_____
###Markdown
Sequence processing with convnets
This notebook contains the code samples found in Chapter 6, Section 4 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
Implementing a 1D convnet
In Keras, you would use a 1D convnet via the `Conv1D` layer, which has a very similar interface to `Conv2D`. It takes as input 3D tensors with shape `(samples, time, features)` and also returns similarly-shaped 3D tensors. The convolution window is a 1D window on the temporal axis, axis 1 in the input tensor. Let's build a simple 2-layer 1D convnet and apply it to the IMDB sentiment classification task that you are already familiar with. As a reminder, this is the code for obtaining and preprocessing the data:
###Code
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 10000 # number of words to consider as features
max_len = 500 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step
###Markdown
1D convnets are structured in the same way as their 2D counterparts that you have used in Chapter 5: they consist of a stack of `Conv1D` and `MaxPooling1D` layers, eventually ending in either a global pooling layer or a `Flatten` layer, turning the 3D outputs into 2D outputs and allowing us to add one or more `Dense` layers to the model, for classification or regression. One difference, though, is the fact that we can afford to use larger convolution windows with 1D convnets. Indeed, with a 2D convolution layer, a 3x3 convolution window contains 3*3 = 9 feature vectors, but with a 1D convolution layer, a convolution window of size 3 would only contain 3 feature vectors. We can thus easily afford 1D convolution windows of size 7 or 9. This is our example 1D convnet for the IMDB dataset:
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 128) 1280000
_________________________________________________________________
conv1d_2 (Conv1D) (None, 494, 32) 28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 92, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
157/157 [==============================] - 6s 34ms/step - loss: 0.6912 - acc: 0.6312 - val_loss: 0.6729 - val_acc: 0.7012
Epoch 2/10
157/157 [==============================] - 5s 31ms/step - loss: 0.3878 - acc: 0.8708 - val_loss: 0.4003 - val_acc: 0.8734
Epoch 3/10
157/157 [==============================] - 5s 31ms/step - loss: 0.3099 - acc: 0.9107 - val_loss: 0.4901 - val_acc: 0.8784
Epoch 4/10
157/157 [==============================] - 5s 31ms/step - loss: 0.2257 - acc: 0.9432 - val_loss: 0.5641 - val_acc: 0.8700
Epoch 5/10
157/157 [==============================] - 5s 31ms/step - loss: 0.1781 - acc: 0.9656 - val_loss: 0.6231 - val_acc: 0.8804
Epoch 6/10
157/157 [==============================] - 5s 31ms/step - loss: 0.1415 - acc: 0.9789 - val_loss: 0.7025 - val_acc: 0.8776
Epoch 7/10
157/157 [==============================] - 5s 31ms/step - loss: 0.1119 - acc: 0.9882 - val_loss: 1.1488 - val_acc: 0.8508
Epoch 8/10
157/157 [==============================] - 5s 31ms/step - loss: 0.1038 - acc: 0.9881 - val_loss: 0.9914 - val_acc: 0.8670
Epoch 9/10
157/157 [==============================] - 5s 31ms/step - loss: 0.0915 - acc: 0.9926 - val_loss: 1.2253 - val_acc: 0.8564
Epoch 10/10
157/157 [==============================] - 5s 31ms/step - loss: 0.0897 - acc: 0.9917 - val_loss: 1.0673 - val_acc: 0.8752
###Markdown
Here are our training and validation results: validation accuracy is somewhat lower than that of the LSTM we used two sections ago, but runtime is faster, both on CPU and GPU (although the exact speedup will vary greatly depending on your configuration). At this point, we could re-train this model for the right number of epochs (8) and run it on the test set. This is a convincing demonstration that a 1D convnet can offer a fast, cheap alternative to a recurrent network on a word-level sentiment classification task.
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining CNNs and RNNs to process long sequences
Because 1D convnets process input patches independently, they are not sensitive to the order of the timesteps (beyond a local scale, the size of the convolution windows), unlike RNNs. Of course, in order to be able to recognize longer-term patterns, one could stack many convolution layers and pooling layers, resulting in upper layers that would "see" long chunks of the original inputs -- but that's still a fairly weak way to induce order-sensitivity. One way to demonstrate this weakness is to try 1D convnets on the temperature forecasting problem from the previous section, where order-sensitivity was key to producing good predictions. Let's see:
###Code
!gdown --id 1-sm1SThDT-PJQJA_egRpGSH-dOi7r6Tx
# We reuse the following variables defined in the last section:
# float_data, train_gen, val_gen, val_steps
import os
import numpy as np
#data_dir = '/home/Data/'
#fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')
fname = "jena_climate_2009_2016.csv"
f = open(fname)
data = f.read()
f.close()
lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]
float_data = np.zeros((len(lines), len(header) - 1))
for i, line in enumerate(lines):
values = [float(x) for x in line.split(',')[1:]]
float_data[i, :] = values
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std
def generator(data, lookback, delay, min_index, max_index,
shuffle=False, batch_size=128, step=6):
if max_index is None:
max_index = len(data) - delay - 1
i = min_index + lookback
while 1:
if shuffle:
rows = np.random.randint(
min_index + lookback, max_index, size=batch_size)
else:
if i + batch_size >= max_index:
i = min_index + lookback
rows = np.arange(i, min(i + batch_size, max_index))
i += len(rows)
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j] + delay][1]
yield samples, targets
lookback = 1440
step = 6
delay = 144
batch_size = 128
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step,
batch_size=batch_size)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step,
batch_size=batch_size)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step,
batch_size=batch_size)
# This is how many steps to draw from `val_gen`
# in order to see the whole validation set:
val_steps = (300000 - 200001 - lookback) // batch_size
# This is how many steps to draw from `test_gen`
# in order to see the whole test set:
test_steps = (len(float_data) - 300001 - lookback) // batch_size
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
###Output
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1940: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
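###Markdown
The deprecation warning above points at `Model.fit`, which accepts Python generators directly in TF 2.x. Below is a minimal sketch of the equivalent call (my own illustration, not part of the original notebook; `history_alt` is an assumed name so the earlier `history` is not overwritten):
###Code
# Hedged sketch: the same training loop expressed with Model.fit,
# which supports generators directly (per the warning above).
history_alt = model.fit(train_gen,
                        steps_per_epoch=500,
                        epochs=20,
                        validation_data=val_gen,
                        validation_steps=val_steps)
###Output
_____no_output_____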
###Markdown
Here are our training and validation Mean Absolute Errors:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The validation MAE stays in the low 0.40s: we cannot even beat our common-sense baseline using the small convnet. Again, this is because our convnet looks for patterns anywhere in the input timeseries, and has no knowledge of the temporal position of a pattern it sees (e.g. towards the beginning, towards the end, etc.). Since more recent datapoints should be interpreted differently from older datapoints in the case of this specific forecasting problem, the convnet fails at producing meaningful results here. This limitation of convnets was not an issue on IMDB, because patterns of keywords that are associated with a positive or a negative sentiment will be informative independently of where they are found in the input sentences.
One strategy to combine the speed and lightness of convnets with the order-sensitivity of RNNs is to use a 1D convnet as a preprocessing step before an RNN. This is especially beneficial when dealing with sequences that are so long that they couldn't realistically be processed with RNNs, e.g. sequences with thousands of steps. The convnet will turn the long input sequence into much shorter (downsampled) sequences of higher-level features. This sequence of extracted features then becomes the input to the RNN part of the network. This technique is not seen very often in research papers and practical applications, possibly because it is not very well known. It is very effective and ought to be more common. Let's try this out on the temperature forecasting dataset. Because this strategy allows us to manipulate much longer sequences, we could either look at data from further back (by increasing the `lookback` parameter of the data generator), or look at high-resolution timeseries (by decreasing the `step` parameter of the generator). Here, we will choose (somewhat arbitrarily) to use a `step` half as large, resulting in timeseries twice as long, where the weather data is sampled at a rate of one point per 30 minutes.
###Code
# This was previously set to 6 (one point per hour).
# Now 3 (one point per 30 min).
step = 3
lookback = 720 # Unchanged
delay = 144 # Unchanged
train_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=0,
max_index=200000,
shuffle=True,
step=step)
val_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=200001,
max_index=300000,
step=step)
test_gen = generator(float_data,
lookback=lookback,
delay=delay,
min_index=300001,
max_index=None,
step=step)
val_steps = (300000 - 200001 - lookback) // 128
test_steps = (len(float_data) - 300001 - lookback) // 128
###Output
_____no_output_____
###Markdown
This is our model, starting with two `Conv1D` layers and following up with a `GRU` layer:
###Code
model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
#model.add(layers.GRU(32, dropout=0.1, recurrent_dropout=0.5))
model.add(layers.GRU(32))
model.add(layers.Dense(1))
model.summary()
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
steps_per_epoch=500,
epochs=20,
validation_data=val_gen,
validation_steps=val_steps)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
onl/tutorials/mod6_7_final_analysis.ipynb | ###Markdown
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/worldbank/OpenNightLights/blob/master/onl/tutorials/mod6_7_final_analysis.ipynb) Statistical inference
We will use the data and model approach we have finalized to infer built-up land cover over the entire time period of 2016 through 2019.
Fit model
This just executes the code to integrate our data and train our model (with the "optimal" final hyperparameters) as we developed previously:
###Code
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
# reminder: if you are installing libraries in a Google Colab instance you will be prompted to restart your kernel
try:
import geemap, ee
import seaborn as sns
import matplotlib.pyplot as plt
except ModuleNotFoundError:
if 'google.colab' in str(get_ipython()):
print("package not found, installing w/ pip in Google Colab...")
!pip install geemap seaborn matplotlib
else:
print("package not found, installing w/ conda...")
!conda install mamba -c conda-forge -y
!mamba install geemap -c conda-forge -y
!conda install seaborn matplotlib -y
import geemap, ee
import seaborn as sns
import matplotlib.pyplot as plt
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# define some functions and variables
def se2mask(image):
quality_band = image.select('QA60')
cloudmask = 1 << 10
cirrusmask = 1 << 11
    # combine the cloud and cirrus tests with ee.Image.And (the Python `and`
    # operator would silently keep only the second mask)
    mask = quality_band.bitwiseAnd(cloudmask).eq(0).And(quality_band.bitwiseAnd(cirrusmask).eq(0))
return image.updateMask(mask).divide(10000)
se2bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7','B8','B8A']
trainingbands = se2bands + ['avg_rad']
label = 'smod_code'
scaleFactor=1000
# create training data
roi = ee.FeatureCollection("FAO/GAUL/2015/level2").filter(ee.Filter.eq('ADM2_NAME','Bagmati')).geometry()
se2 = ee.ImageCollection('COPERNICUS/S2').filterDate(
"2015-07-01","2015-12-31").filterBounds(roi).filter(
ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE",20)).map(se2mask).median().select(se2bands).clip(roi)
viirs = ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate(
"2015-07-01","2019-12-31").filterBounds(roi).median().select('avg_rad').clip(roi)
fused = se2.addBands(viirs)
# create and overlay labels to training data
ghsl = ee.ImageCollection('JRC/GHSL/P2016/SMOD_POP_GLOBE_V1').filter(ee.Filter.date(
'2015-01-01', '2015-12-31')).select(label).median().gte(2)
points = ghsl.sample(**{"region":roi, "scale":scaleFactor,"seed":0,'geometries':True})
data = fused.select(trainingbands).sampleRegions(collection=points,
properties=[label],
scale=scaleFactor)
# fit classifier on entire dataset
new_params = {"numberOfTrees":500,
"variablesPerSplit":None,
"minLeafPopulation":1,
"bagFraction":0.5,
"maxNodes":None,
"seed":0}
clf = ee.Classifier.smileRandomForest(**new_params).train(data, label, trainingbands)
###Output
_____no_output_____
###Markdown
Prep new data
In order to run inference we need to prep (including fusing) the unseen data just as we did with the training data, but we'll do this for each year. For the scope of this exercise, we're doing this at an annual level, but you could do this to produce a monthly time series. Try it yourself! A minimal sketch of a monthly variant follows the next cell.
###Code
def img_prep(se2collection,
viirscollection,
year,
se2bands,
roi,
se2maskfunc,
scaleFactor):
se2 = se2collection.filterDate(f"{year}-01-01",f"{year}-12-31").filterBounds(roi).filter(
ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE",20)).map(se2maskfunc).median().select(se2bands).clip(roi)
viirs = viirscollection.filterDate(
f"{year}-01-01",f"{year}-12-31").filterBounds(roi).median().select('avg_rad').clip(roi)
return se2.addBands(viirs)
###Output
_____no_output_____
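###Markdown
As a minimal sketch of the monthly variant mentioned above (my own illustration; the helper name `img_prep_month` is assumed and is not part of the original notebook), the same prep logic can be narrowed to a single month:
###Code
# Hedged sketch: same fusion logic as img_prep, but filtered to one month,
# so inference could be run month by month instead of per year.
def img_prep_month(se2collection, viirscollection, year, month,
                   se2bands, roi, se2maskfunc, scaleFactor):
    start = ee.Date.fromYMD(year, month, 1)
    end = start.advance(1, 'month')
    se2 = se2collection.filterDate(start, end).filterBounds(roi).filter(
        ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)).map(se2maskfunc).median().select(se2bands).clip(roi)
    viirs = viirscollection.filterDate(start, end).filterBounds(roi).median().select('avg_rad').clip(roi)
    return se2.addBands(viirs)
###Output
_____no_output_____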
###Markdown
Run inference on all years (2016-2019)
###Code
allyears = []
for year in ['2016','2017','2018','2019']:
img = img_prep(se2collection=ee.ImageCollection('COPERNICUS/S2'),
viirscollection=ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG"),
year=year,
se2bands=se2bands,
roi=roi,
se2maskfunc=se2mask,
scaleFactor=scaleFactor)
allyears.append(img.classify(clf))
###Output
_____no_output_____
###Markdown
Plotting trends
We can plot histograms {doc}`mod4_2_histograms` or time series {doc}`mod4_1_time_series_charts` as you've learned. But since our values are binary and we are comparing just a few years, a simple bar graph will do. If you produce a monthly time series, you might try some other plots. As with the data structure transformations in those earlier modules (refer to them for a refresher), extracting our data into a numerical array that we can plot takes a couple of steps. We'll do this for each year we have predicted data.
###Code
allyears_arrs = [img.sample(region=roi, scale=scaleFactor, numPixels=1000) for img in allyears]
allyears_arrs = [np.asarray(arr.reduceColumns(ee.Reducer.toList(1),
['classification']).values().get(0).getInfo()) for arr in allyears_arrs]
###Output
_____no_output_____
###Markdown
Now we'll transform this to a Pandas dataframe for convenience and visualization. Note that our dataframe across all years will have some missing values for a few years (apparently some pixels were masked for data quality or had other issues). We'll drop those missing values and standardize our data so instead of a direct count of built-up pixels, we'll look at the ratio of built-up for the particular year-sample.
###Code
df = pd.DataFrame([arr.flatten() for arr in allyears_arrs], index=['2016','2017','2018','2019']).T
df = df/df.sum(axis=0)
df = df.melt()
df = df.dropna()
df.columns =['year','built-up ratio']
df.groupby('year').count()
fig, ax = plt.subplots(1, figsize=(10,7))
sns.set_theme(style="whitegrid")
ax = sns.barplot(x='year',y='built-up ratio',data=df)
plt.title('Ratio of built-up pixels (per total) by year');
###Output
_____no_output_____
###Markdown
We see two important things here:
- 2019 has a lower ratio of built-up land than 2016
- but 2016 seems like an outlier among a trend that is steadily growing from 2017 to 2019
Remember in our exploratory analysis when we saw bright lights East of Kathmandu? Perhaps those are an outlier in our dataset? It might be worth revisiting the cleaning process to improve the nighttime lights signal. Or maybe omit nighttime lights and see if that changes things in terms of classifier performance. Or try running inference on a monthly (rather than annual) time series to get more temporal information. Or compare this to additional provinces in Nepal (i.e. more data). Our classifier's performance left much to be desired, so extra steps may be needed to validate it before we draw any strong conclusions here. But aside from that, is there anything we can tell right now? We might consider 2016 an outlier worth looking into, but could communicate that there does seem to be a steady growth trend from 2016 to 2019. We do see very large error bars in 2016 relative to the other data that justify it being an outlier. These are directly related to the sample size and, as noted earlier, it is possible that data quality issues (including cloud masking?) reduced the number of observations for a given year.
Hypothesis test
Let's conduct a t-test of means comparing 2016 and 2019 to find out if this is a statistically significant difference. We might also look at the comparison of 2017 and 2019 to capture change in that 3-year period.
Change from 2016 to 2019
###Code
yrA = '2016'
yrB = '2019'
col = 'built-up ratio'
ttest_ind(df.loc[df['year']==yrA,col], df.loc[df['year']==yrB,col])
###Output
_____no_output_____
###Markdown
We do not see a significant difference (p is well over our preset alpha=0.05). So, even though it appears there is a reduction in growth, there's too much noise to say this is significant. **HINT:** you can usually tell when a means t-test will fail to reject the null hypothesis when the error bars of the samples being compared overlap, as they do for 2016 and 2019. This might actually give us some relief that we are not actually saying economic growth was reduced... but the noisy data indicates we should do some work to clean this up as well as improve our classifier. OK, but how about 2017 and 2019?
###Code
yrA = '2017'
yrB = '2019'
col = 'built-up ratio'
ttest_ind(df.loc[df['year']==yrA,col], df.loc[df['year']==yrB,col])
###Output
_____no_output_____
###Markdown
Here again we fail to reject the null hypothesis (p > 0.05), although the comparison is cleaner (lower p). Let's take a look at 2016 versus 2019 spatially by differencing our images.
###Code
# initialize our map
map1 = geemap.Map()
map1.centerObject(roi, 9)
map1.addLayer(allyears[-1].subtract(allyears[0]), {"min":-1.0, "max":1.0}, 'diff')
map1.addLayerControl()
map1
###Output
_____no_output_____ |
Day 7/7_3.exercise_with_kaggle_dataset.ipynb | ###Markdown
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SLCFLAB/Data-Science-Python/blob/main/Day%207/7_3.exercise_with_kaggle_dataset.ipynb) Part 3: Exercise with Kaggle Dataset
Welcome to the third part of today's lab session. You have just learned some visualization and clustering skills, but so far you have not applied them yourself. In this part, you will practice producing visualizations and trying out clustering on different Kaggle data.
Import libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
###Output
_____no_output_____
###Markdown
Prepare data We will use New York taxi data for data visualization and penguin data for clustering. You can check out the descriptions for each dataset in the url below. Taxi: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page Penguins: https://github.com/allisonhorst/penguins Taxis
###Code
taxis = sns.load_dataset('taxis')
taxis.shape
taxis.info()
taxis.head()
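# Example starter plot for the exercise (an addition, not part of the original notebook):
# relate trip distance to the total amount paid
sns.scatterplot(data=taxis, x="distance", y="total", alpha=0.3)
plt.show()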
###Output
_____no_output_____
###Markdown
Penguins
###Code
penguins = sns.load_dataset('penguins')
penguins.shape
penguins.info()
penguins.head()
penguins.species.unique()
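# Example starter for the clustering part (an addition for illustration; assumes scikit-learn is installed)
from sklearn.cluster import KMeans
numeric_cols = ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]
features = penguins.dropna(subset=numeric_cols)[numeric_cols]
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(pd.Series(kmeans.labels_).value_counts())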
###Output
_____no_output_____ |
socks_and_skeets.ipynb | ###Markdown
Socks, Skeets, and Space Invaders---------------------------------This notebook contains code from my blog, [Probably Overthinking It](http://allendowney.blogspot.com/)Copyright 2016 Allen DowneyMIT License: http://opensource.org/licenses/MIT
###Code
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
from thinkbayes2 import Pmf, Hist, Beta
import thinkbayes2
import thinkplot
###Output
_____no_output_____
###Markdown
SocksThe sock drawer problemPosed by Yuzhong Huang:> There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair (same color) but we don't know the color of these socks. What is the chance that we picked the first drawer? Now I'll solve the problem more generally using a Jupyter notebook.I'll represent the sock drawers with `Hist` objects, defined in the `thinkbayes2` library:
###Code
drawer1 = Hist(dict(W=40, B=10), label='Drawer 1')
drawer2 = Hist(dict(W=20, B=30), label='Drawer 2')
drawer1.Print()
###Output
B 10
W 40
###Markdown
Now I can make a `Pmf` that represents the two hypotheses:
###Code
pmf = Pmf([drawer1, drawer2])
pmf.Print()
###Output
Drawer 2 0.5
Drawer 1 0.5
###Markdown
This function computes the likelihood of the data for a given hypothesis:
###Code
def likelihood(data, hypo):
"""Likelihood of the data under the hypothesis.
data: string 'same' or 'different'
hypo: Hist object with the number of each color
returns: float likelihood
"""
probs = Pmf(hypo)
prob_same = probs['W']**2 + probs['B']**2
if data == 'same':
return prob_same
else:
return 1-prob_same
###Output
_____no_output_____
###Markdown
Now we can update `pmf` with these likelihoods
###Code
data = 'same'
pmf[drawer1] *= likelihood(data, drawer1)
pmf[drawer2] *= likelihood(data, drawer2)
pmf.Normalize()
###Output
_____no_output_____
###Markdown
The return value from Normalize is the total probability of the data, the denominator of Bayes's theorem, also known as the normalizing constant.And here's the posterior distribution:
###Code
pmf.Print()
###Output
Drawer 2 0.433333333333
Drawer 1 0.566666666667
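###Markdown
As a quick sanity check (an addition, not part of the original post), we can reproduce this posterior by hand with plain Python: the likelihood of a pair is 0.8**2 + 0.2**2 = 0.68 for Drawer 1 and 0.4**2 + 0.6**2 = 0.52 for Drawer 2, so the normalizing constant is 0.5*0.68 + 0.5*0.52 = 0.6.
###Code
prior = {'Drawer 1': 0.5, 'Drawer 2': 0.5}
like = {'Drawer 1': 0.8**2 + 0.2**2,   # probability of a pair from Drawer 1 (40 white, 10 black)
        'Drawer 2': 0.4**2 + 0.6**2}   # probability of a pair from Drawer 2 (20 white, 30 black)
norm = sum(prior[h] * like[h] for h in prior)              # total probability of the data
posterior = {h: prior[h] * like[h] / norm for h in prior}  # matches the Pmf update above
print(norm, posterior)
###Output
_____no_output_____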
###Markdown
The likelihood of getting a pair is higher in Drawer 1, which is 40:10, than in Drawer 2, which is 30:20. In general, the probability of getting a pair is highest if the drawer contains only one color of sock, and lowest if the proportion is 50:50. So getting a pair is evidence that the drawer is more likely to have a high (or low) proportion of one color, and less likely to be balanced. The Alien Blaster problem In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, x. Based on previous tests, the distribution of x in the population of designs is roughly uniform between 10% and 40%. To approximate this distribution, we'll assume that x is either 10%, 20%, 30%, or 40% with equal probability. Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: ``The same number of targets were hit in the two tests, so we have reason to think this new design is consistent.'' Is this data good or bad; that is, does it increase or decrease your estimate of x for the Alien Blaster 10K? I'll start by creating a `Pmf` that represents the four hypothetical values of `x`:
###Code
pmf = Pmf([0.1, 0.2, 0.3, 0.4])
pmf.Print()
###Output
0.1 0.25
0.2 0.25
0.3 0.25
0.4 0.25
###Markdown
Before seeing the data, the mean of the distribution, which is the expected effectiveness of the blaster, is 0.25.
###Code
pmf.Mean()
###Output
_____no_output_____
###Markdown
Here's how we compute the likelihood of the data. If each blaster takes two shots, there are three ways they can get a tie: they both get 0, 1, or 2. If the probability that either blaster gets a hit is x, the probabilities of these outcomes are: both 0: (1-x)**4 both 1: (2 * x * (1-x))**2 both 2: x**4 Here's the likelihood function that computes the total probability of the three outcomes:
###Code
def likelihood(hypo, data):
"""Likelihood of the data under hypo.
hypo: probability of a hit, x
data: 'tie' or 'no tie'
"""
x = hypo
like = x**4 + (2 * x * (1-x))**2 + (1-x)**4
if data == 'tie':
return like
else:
return 1-like
###Output
_____no_output_____
###Markdown
To see what the likelihood function looks like, I'll print the likelihood of a tie for the four hypothetical values of `x`:
###Code
data = 'tie'
for hypo in sorted(pmf):
like = likelihood(hypo, data)
print(hypo, like)
###Output
0.1 0.6886
0.2 0.5136
0.3 0.4246
0.4 0.3856
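###Markdown
As a sanity check (an addition), the x = 0.1 entry in the table above can be reproduced directly from the three outcomes listed earlier:
###Code
x = 0.1
both_zero = (1 - x)**4             # both blasters miss both shots
both_one = (2 * x * (1 - x))**2    # each blaster gets exactly one hit
both_two = x**4                    # both blasters hit both shots
print(both_zero + both_one + both_two)  # 0.6886, matching the table above
###Output
_____no_output_____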
###Markdown
If we multiply each likelihood by the corresponding prior, we get the unnormalized posteriors:
###Code
for hypo in sorted(pmf):
unnorm_post = pmf[hypo] * likelihood(hypo, data)
print(hypo, pmf[hypo], unnorm_post)
###Output
0.1 0.25 0.17215
0.2 0.25 0.1284
0.3 0.25 0.10615
0.4 0.25 0.0964
###Markdown
Finally, we can do the update by multiplying the priors in `pmf` by the likelihoods:
###Code
for hypo in pmf:
pmf[hypo] *= likelihood(hypo, data)
###Output
_____no_output_____
###Markdown
And then normalizing `pmf`. The result is the total probability of the data.
###Code
pmf.Normalize()
###Output
_____no_output_____
###Markdown
And here are the posteriors.
###Code
pmf.Print()
###Output
0.1 0.342178493341
0.2 0.255217650566
0.3 0.210991850527
0.4 0.191612005565
###Markdown
The lower values of `x` are more likely, so this evidence makes us downgrade our expectation about the effectiveness of the blaster. The posterior mean is 0.225, a bit lower than the prior mean, 0.25.
###Code
pmf.Mean()
###Output
_____no_output_____
###Markdown
A tie is evidence in favor of extreme values of `x`. The Skeet Shooting problem At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. After 25 shots, they were tied, sending the match into sudden death. In each round of sudden death, each competitor shoots at two targets. In the first three rounds, Rhode and Wei hit the same number of targets. Finally in the fourth round, Rhode hit more targets, so she won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games. Based on this information, should we infer that Rhode and Wei had an unusually good or bad day? As background information, you can assume that anyone in the Olympic final has about the same probability of hitting 13, 14, 15, or 16 out of 25 targets. To compute the likelihood function, I'll use `binom.pmf`, which computes the Binomial PMF. In the following example, the probability of hitting `k=10` targets in `n=25` attempts, with probability `p=13/25` of hitting each target, is about 8%.
###Code
from scipy.stats import binom
k = 10
n = 25
p = 13/25
binom.pmf(k, n, p)
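# Note (added): with p = 13/25 the single most likely count is k = 13, and binom.pmf(13, n, p) is roughly 0.16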
###Output
_____no_output_____
###Markdown
The following function computes the likelihood of `tie` or `no tie` after a given number of shots, `n`, given the hypothetical value of `p`.It loops through the possible values of `k` from 0 to `n` and uses `binom.pmf` to compute the probability that each shooter hits `k` targets. To get the probability that BOTH shooters hit `k` targets, we square the result.To get the total likelihood of the outcome, we add up the probability for each value of `k`.
###Code
def likelihood(data, hypo):
"""Likelihood of data under hypo.
data: tuple of (number of shots, 'tie' or 'no tie')
hypo: hypothetical number of hits out of 25
"""
p = hypo / 25
n, outcome = data
like = sum([binom.pmf(k, n, p)**2 for k in range(n+1)])
return like if outcome=='tie' else 1-like
###Output
_____no_output_____
###Markdown
Now we can see what that looks like for `n=2`
###Code
data = 2, 'tie'
hypos = range(0, 26)
likes = [likelihood(data, hypo) for hypo in hypos]
thinkplot.Plot(hypos, likes)
thinkplot.Config(xlabel='Probability of a hit (out of 25)',
ylabel='Likelihood of a tie',
ylim=[0, 1])
###Output
_____no_output_____
###Markdown
As we saw in the Sock Drawer problem and the Alien Blaster problem, the probability of a tie is highest for extreme values of `p`, and minimized when `p=0.5`.The result is similar when `n=25`:
###Code
data = 25, 'tie'
hypos = range(0, 26)
likes = [likelihood(data, hypo) for hypo in hypos]
thinkplot.Plot(hypos, likes)
thinkplot.Config(xlabel='Probability of a hit (out of 25)',
ylabel='Likelihood of a tie',
ylim=[0, 1])
###Output
_____no_output_____
###Markdown
In the range we care about (13 through 16) this curve is pretty flat, which means that a tie after the round of 25 doesn't discriminate strongly among the hypotheses.We could use this likelihood function to run the update, but just for purposes of demonstration, I'll do it using the Suite class from `thinkbayes2`:
###Code
from thinkbayes2 import Suite
class Skeet(Suite):
def Likelihood(self, data, hypo):
"""Likelihood of data under hypo.
data: tuple of (number of shots, 'tie' or 'no tie')
hypo: hypothetical number of hits out of 25
"""
p = hypo / 25
n, outcome = data
like = sum([binom.pmf(k, n, p)**2 for k in range(n+1)])
return like if outcome=='tie' else 1-like
###Output
_____no_output_____
###Markdown
Now I'll create the prior.
###Code
suite = Skeet([13, 14, 15, 16])
suite.Print()
###Output
13 0.25
14 0.25
15 0.25
16 0.25
###Markdown
The prior mean is 14.5.
###Code
suite.Mean()
###Output
_____no_output_____
###Markdown
Here's the update after the round of 25.
###Code
suite.Update((25, 'tie'))
suite.Print()
###Output
13 0.245787744767
14 0.247411480833
15 0.250757985003
16 0.256042789397
###Markdown
The higher values are a little more likely, but the effect is pretty small.Interestingly, the rounds of `n=2` provide more evidence in favor of the higher values of `p`.
###Code
suite.Update((2, 'tie'))
suite.Print()
suite.Update((2, 'tie'))
suite.Print()
suite.Update((2, 'tie'))
suite.Print()
###Output
13 0.228830701632
14 0.236427057892
15 0.253007722855
16 0.28173451762
###Markdown
After three rounds of sudden death, we are more inclined to think that the shooters are having a good day.The fourth round, which ends with no tie, provides a small amount of evidence in the other direction.
###Code
suite.Update((2, 'no tie'))
suite.Print()
###Output
13 0.2323322732
14 0.23878553469
15 0.252684685857
16 0.276197506253
###Markdown
And the posterior mean, after all updates, is a little higher than 14.5, where we started.
###Code
suite.Mean()
###Output
_____no_output_____ |
dna_problem/ASW_ESN.ipynb | ###Markdown
Connect to Google Drive
###Code
%tensorflow_version 1.x
from google.colab import drive
import os, natsort as nsrt, numpy as np, re
from scipy.sparse import coo_matrix, csgraph, csr_matrix
import matplotlib.pyplot as plt
import cv2 as cv
import scipy
!pip install -U scikit-learn
import sklearn
import math
drive.mount('/content/drive')
PATH_PROJECT='/content/drive/My Drive/DL_DATA_GRAPH/'
PATH_CNN_REPO=PATH_PROJECT + 'BUILD/cnn_graph/'
os.chdir(PATH_CNN_REPO)
from lib import models, graph, coarsening, utils
%ls
# !git clone https://github.com/mdeff/cnn_graph
!git pull origin master
os.chdir(PATH_PROJECT)
%ls
%matplotlib inline
###Output
TensorFlow 1.x selected.
Requirement already up-to-date: scikit-learn in /usr/local/lib/python3.6/dist-packages (0.22.2.post1)
Requirement already satisfied, skipping upgrade: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (1.4.1)
Requirement already satisfied, skipping upgrade: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (1.18.3)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn) (0.14.1)
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
[0m[01;34mcheckpoints[0m/ makefile README.md [01;34mtrials[0m/
[01;34mlib[0m/ [01;34mnips2016[0m/ requirements.txt usage.ipynb
LICENSE.txt rcv1.ipynb [01;34msummaries[0m/
From https://github.com/allnes/cnn_graph
* branch master -> FETCH_HEAD
Already up to date.
[0m[01;34mBUILD[0m/ [01;34mDATA[0m/
###Markdown
Preprocessing data
###Code
flag_save_dump = False
name_region = 'ASW_ESN'
def save_matrix(zip_sz):
PATH_CONVERTED_SAVE_DATA = PATH_PROJECT + 'DATA/DNA_DATA/raw/{}.npz'.format(name_region)
npzfile = np.load(PATH_CONVERTED_SAVE_DATA)
X_full = npzfile['arr_0'].astype(np.float32)
y_full = npzfile['arr_1']
X = []
for graph in X_full:
graph_size = int(math.sqrt(graph.shape[0]))
new_graph = np.copy(graph).reshape(graph_size, graph_size)
new_graph = np.pad(new_graph, pad_width=1, mode='constant', constant_values=0)
X.append(cv.resize(new_graph,
dsize=(zip_sz, zip_sz),
interpolation=cv.INTER_CUBIC))
X = np.array(X)
X = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))
print(X.shape)
PATH_CONVERTED_SAVE_DATA = PATH_PROJECT + 'DATA/DNA_DATA/resize/{}_{}.npz'.format(name_region, zip_sz)
np.savez(PATH_CONVERTED_SAVE_DATA, X, y_full)
zip_size = 128
if flag_save_dump:
save_matrix(zip_size)
PATH_CONVERTED_DATA = PATH_PROJECT + 'DATA/DNA_DATA/resize/{}_{}.npz'.format(name_region, zip_size)
npzfile = np.load(PATH_CONVERTED_DATA)
print(npzfile.files)
X = npzfile['arr_0'].astype(np.float32)
y = npzfile['arr_1']
num_samples = X.shape[0]
print(X.shape)
print(y.shape)
from sklearn.utils import shuffle
X, y = shuffle(X, y)
##########################################################
print('--> Reshape data')
n_train = (num_samples * 5) // 6
n_val = num_samples // 20
X_train = X[:n_train]
X_val = X[n_train:n_train+n_val]
X_test = X[n_train+n_val:]
y_train = y[:n_train]
y_val = y[n_train:n_train+n_val]
y_test = y[n_train+n_val:]
plt.title("y = {}".format(y.shape[0]))
plt.hist(y, len(np.unique(y)))
plt.show()
plt.title("y_train = {}".format(y_train.shape[0]))
plt.hist(y_train, len(np.unique(y_train)))
plt.show()
plt.title("y_test = {}".format(y_test.shape[0]))
plt.hist(y_test, len(np.unique(y_test)))
plt.show()
print(np.unique(y))
##########################################################
def save_dump():
print('--> Get distance graph')
def distance_sklearn_metrics(z, k=6, metric='euclidean'):
"""Compute exact pairwise distances."""
d = sklearn.metrics.pairwise.pairwise_distances(
z, metric=metric, n_jobs=-2)
# k-NN graph.
idx = np.argsort(d)[:, 1:k+1]
d.sort()
d = d[:, 1:k+1]
return d, idx
dist, idx = distance_sklearn_metrics(X_train.T)
A = graph.adjacency(dist, idx).astype(np.float32)
PATH_DUMP_DATA = PATH_PROJECT + 'DATA/DNA_DATA/dump/{}_dump.npz'.format(name_region)
scipy.sparse.save_npz(PATH_DUMP_DATA, A)
if flag_save_dump:
save_dump()
PATH_DUMP_LOAD_DATA = PATH_PROJECT + 'DATA/DNA_DATA/dump/{}_dump.npz'.format(name_region)
A = scipy.sparse.load_npz(PATH_DUMP_LOAD_DATA)
print('d = |V| = {}, k|V| < |E| = {}'.format(zip_size, A.nnz))
plt.spy(A, markersize=2, color='black');
###Output
['arr_0', 'arr_1']
(160, 16384)
(160,)
--> Reshape data
###Markdown
Train
###Code
print('--> Get laplacian matrix')
graphs, perm = coarsening.coarsen(A, levels=3, self_connections=True)
X_train = coarsening.perm_data(X_train, perm)
print(X_train.shape)
X_val = coarsening.perm_data(X_val, perm)
print(X_val.shape)
X_test = coarsening.perm_data(X_test, perm)
print(X_test.shape)
L = [graph.laplacian(A, normalized=True) for A in graphs]
params = dict()
params['dir_name'] = 'demo'
params['num_epochs'] = 32
params['batch_size'] = 6
params['eval_frequency'] = 100
# Building blocks.
params['filter'] = 'chebyshev2'
params['brelu'] = 'b2relu'
params['pool'] = 'mpool1'
# Number of classes.
C = y.max() + 1
assert C == np.unique(y).size
# Architecture.
params['F'] = [32, 32] # Number of graph convolutional filters.
params['K'] = [16, 16] # Polynomial orders.
params['p'] = [4, 2] # Pooling sizes.
params['M'] = [2000, C] # Output dimensionality of fully connected layers.
# Optimization.
params['regularization'] = 5e-4
params['dropout'] = 1
params['learning_rate'] = 1e-3
params['decay_rate'] = 0.95
params['momentum'] = 0
params['decay_steps'] = n_train / params['batch_size']
model = models.cgcnn(L, **params)
accuracy, loss, t_step = model.fit(X_train, y_train, X_val, y_val)
fig, ax1 = plt.subplots(figsize=(15, 5))
ax1.plot(accuracy, 'b.-')
ax1.set_ylabel('validation accuracy', color='b')
ax2 = ax1.twinx()
ax2.plot(loss, 'g.-')
ax2.set_ylabel('training loss', color='g')
plt.show()
print('Time per step: {:.2f} ms'.format(t_step*1000))
print(X_test.shape, y_test.shape)
acc_per_class = {}
for id_class in np.unique(y):
acc_per_class[id_class] = []
for graph, label in zip(X_test, y_test):
acc_per_class[label].append(graph)
for id_class in np.unique(y):
acc_per_class[id_class] = np.array(acc_per_class[id_class])
acc_hape = acc_per_class[id_class].shape
labels = np.empty(acc_hape[0])
labels.fill(id_class)
print("############ Class {}".format(id_class))
print(acc_hape)
print(model.evaluate(acc_per_class[id_class], labels)[0])
res = model.evaluate(X_test, y_test)
print(res[0])
###Output
INFO:tensorflow:Restoring parameters from /content/drive/My Drive/DL_DATA_GRAPH/BUILD/cnn_graph/lib/../checkpoints/demo/model-709
accuracy: 47.37 (9 / 19), f1 (weighted): 45.80, loss: 2.50e+03
time: 33s (wall 9s)
|
nbs/dl1/lesson4-collab_finished.ipynb | ###Markdown
Collaborative filtering example `collab` models use data in a `DataFrame` of users, items, and ratings.
###Code
user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
###Output
_____no_output_____
###Markdown
That's all we need to create and train a model:
###Code
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
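# (added note) the upper bound sits a bit above the true maximum rating of 5 so the final sigmoid can actually reach 5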
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
###Output
_____no_output_____
###Markdown
Movielens 100k Let's try with the full Movielens 100k dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
###Code
# from google.colab import files
# uploaded = files.upload()
# for fn in uploaded.keys():
# print('User uploaded file "{name}" with length {length} bytes'.format(
# name=fn, length=len(uploaded[fn])))
from google.colab import drive
drive.mount('/content/gdrive')
# !ls /content
# !mv 'ml-100k.zip' gdrive/'My Drive'
# !unzip /content/gdrive/'My Drive'/'ml-100k.zip'
!mv 'ml-100k' gdrive/'My Drive'
!ls gdrive/'My Drive'/ml-100k
path="gdrive/My Drive/ml-100k"
ratings = pd.read_csv(path + '/u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path +'/u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
###Output
_____no_output_____
###Markdown
Here are [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec system for collaborative filtering. They show best results based on an RMSE of 0.91, which corresponds to an MSE of `0.91**2 = 0.83`. Interpretation Setup
###Code
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
###Output
_____no_output_____
###Markdown
Movie bias
###Code
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
###Output
_____no_output_____
###Markdown
Movie weights
###Code
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
###Output
_____no_output_____ |
notebooks/sampling documents for annotation.ipynb | ###Markdown
All available documents
###Code
full_paths = sorted(glob(os.path.join(input_dir, '*', 'dev', '*' + suffix)) +
glob(os.path.join(input_dir, '*', 'test', '*' + suffix)))
paths_df = paths2df(full_paths)
paths_df.head(3)
paths_df.groupby('transformation').agg({'path': 'count'})
_ = paths_df.groupby('genre').agg({'path': 'count'}).plot.bar()
###Output
_____no_output_____
###Markdown
CLTL pilot on 23 Jan 2019
###Code
conf_cltl_unmasked = ConfigFactory.parse_file('../mturk/configs/cltl-2019-01-23-unmasked.conf')
conf_cltl_masked = ConfigFactory.parse_file('../mturk/configs/cltl-2019-01-23-masked.conf')
cltl_pilot = paths2df(conf_cltl_unmasked.get_list('input_paths') + conf_cltl_masked.get_list('input_paths'))
cltl_pilot.head(3)
_ = cltl_pilot.groupby('genre')['path'].count().plot.bar()
_ = cltl_pilot.groupby('transformation')['path'].count().plot.bar()
len(cltl_pilot.base_doc.unique())
cltl_docs = set(cltl_pilot.base_doc.values)
###Output
_____no_output_____
###Markdown
Student1's documents Student1 is exposed to CLTL pilot documents (both masked and unmasked), so we need to sample other documents for her. Given the estimate of 11 min per document (measured during the CLTL pilot), she would be able to finish more documents than are available, so we just picked one version per base document.
###Code
student1_conf = ConfigFactory.parse_file('../mturk/configs/student1-2019-03-14.conf')
student1_paths = paths2df(student1_conf.get_list('input_paths'), 'student1')
check_assignments_one_annotator(student1_paths)
print("Number of documents: %d" % len(student1_paths))
def compare_genre_distribution(df):
fig, axes = plt.subplots(ncols=2, figsize=(8,4))
df.groupby('genre')['path'].count().plot.bar(ax=axes[0])
axes[0].title.set_text('Student')
paths_df.groupby('genre').agg({'path': 'count'}).plot.bar(ax=axes[1])
axes[1].title.set_text('Reference')
compare_genre_distribution(student1_paths)
_ = student1_paths.groupby('transformation')['path'].count().plot.bar()
student1_docs = set(student1_paths.base_doc.values)
###Output
_____no_output_____
###Markdown
Student2's documents Student 2 will work on some of the same documents as student 1 and can also work on the documents that CLTL members have seen.
###Code
student2_practice_conf = ConfigFactory.parse_file('../mturk/configs/student2-practice.conf')
student2_practice_paths = paths2df(student2_practice_conf.get_list('input_paths'))
student2_conf = ConfigFactory.parse_file('../mturk/configs/student2-2019-03-29.conf')
student2_paths = paths2df(student2_conf.get_list('input_paths'), 'student2')
check_assignments_one_annotator(student2_practice_paths, student2_paths)
len(student2_paths)
student2_paths.tail(3)
compare_genre_distribution(student2_paths)
_ = student2_paths.groupby('transformation')['path'].count().plot.bar()
###Output
_____no_output_____
###Markdown
Student3's documents This student will work on the same documents that student 1 and 2 have worked on, so that each document is annotated by 2 workers.
###Code
student3_conf = ConfigFactory.parse_file('../mturk/configs/student3-2019-03-29.conf')
student3_paths = paths2df(student3_conf.get_list('input_paths'), 'student3')
check_assignments_one_annotator(student3_paths)
len(student3_paths)
student3_paths.head(3)
compare_genre_distribution(student3_paths)
_ = student3_paths.groupby('transformation')['path'].count().plot.bar()
###Output
_____no_output_____
###Markdown
First author's documents
###Code
author1_conf = ConfigFactory.parse_file('../mturk/configs/author1.conf')
author1_paths = paths2df(author1_conf.get_list('input_paths'), 'author1')
check_assignments_one_annotator(author1_paths)
len(author1_paths)
author1_paths.head(3)
_ = author1_paths.groupby('genre')['path'].count().plot.bar()
_ = author1_paths.groupby('transformation')['path'].count().plot.bar()
###Output
_____no_output_____
###Markdown
Overview of all assignments
###Code
all_assigments = pd.concat([student1_paths, student2_paths, student3_paths, author1_paths])
_ = all_assigments.groupby('genre')['path'].count().plot.bar()
_ = all_assigments.groupby('transformation')['path'].count().plot.bar()
all_assigments.groupby('path').agg({'experiment': 'count'}).describe()
###Output
_____no_output_____
###Markdown
back up: saved for next student
###Code
assigned_docs = set(student2_paths.base_doc)
candidate_pool = paths_df[~paths_df.base_doc.isin(assigned_docs) &
paths_df.transformation.isin(['men_100', 'nonmen_100', 'no-name', 'orig'])]
sampled_paths = candidate_pool.groupby('base_doc').apply(lambda x: x.sample(n=1)).sample(frac=1)
len(sampled_paths)
sampled_paths
sampled_paths.head(3)
check_assignments_one_annotator(student2_paths, sampled_paths)
compare_genre_distribution(pd.concat([student2_paths, sampled_paths]))
copy_conf_string_clipboard(sampled_paths)
###Output
_____no_output_____ |
_notebooks/2021-08-29-effective-pandas.ipynb | ###Markdown
Code for the Effective Pandas talk by Matt Harrison tl;dr This code was inspired by Matt Harrison's presentation, [Effective Pandas](https://youtu.be/UURvPeczxJI). In the presentation, Matt shows the following: 1. Reducing the memory cost using `Casting` [[1](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html)], a method for converting the data type of a column into another data type. In Matt's experiment, this saved `65.17% (10.97MB)` compared to the original data, and it could be improved even further by casting other columns. 2. Writing `Pandas` operations in a cleaner and more efficient way using ~~[Dot Notation](https://www.dataschool.io/pandas-dot-notation-vs-brackets/)~~ [Chaining](https://changhsinlee.com/pyjanitor/). 3. Comparing the `.apply` method with other ways to broadcast operations.
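###Markdown
A minimal, self-contained sketch of the chaining style from point 2 (added for illustration; it uses a tiny toy frame, not the vehicles data loaded below):
###Code
import pandas as pd

toy = pd.DataFrame({"city08": ["19", "25", None], "make": ["Ford", "Tesla", "Ford"]})
cleaned = (
    toy
    .assign(city08=lambda df_: pd.to_numeric(df_.city08).fillna(0).astype("int16"),
            make=lambda df_: df_.make.astype("category"))
    .query("city08 > 0")
)
print(cleaned.dtypes)
###Output
_____no_output_____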
###Code
%matplotlib inline
from IPython.display import display
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.min_rows = 20 # show more rows
print(plt.style.available) # show available themes
plt.style.use('seaborn-dark-palette') # select a theme
# https://seaborn.pydata.org/generated/seaborn.set_context.html
sns.set_context('paper') # talk, paper, notebook, poster
plt.plot(range(10)) # to view for myself
plt.show()
data_source = "https://github.com/mattharrison/datasets/raw/master/data/vehicles.csv.zip"
autos = pd.read_csv(data_source)
cols = ['city08','comb08', 'cylinders', 'displ', 'drive', 'eng_dscr', 'trany', 'fuelCost08', 'highway08','make','range','year','createdOn']
print("show dtypes")
print(autos[cols].dtypes)
# int64 == means no missing data
# float64 == means 1. all float numbers with no missing data 2. all float numbers with missing data 3. or all int numbers with missing data
# object == means can't tell if it is int or not (not super fast as it points to python objects)
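# tiny illustration of the note above (added): an integer column with a missing value is stored as float64
print(pd.Series([1, 2, None]).dtype)  # float64
print(pd.Series([1, 2, 3]).dtype)     # int64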
display((autos
[cols]
.select_dtypes(int)
.describe()
))
old_mem = autos[cols].memory_usage(deep=True).sum()
print(f"{(old_mem)/1000000:.2f}MB")
###Output
show dtypes
city08 int64
comb08 int64
cylinders float64
displ float64
drive object
eng_dscr object
trany object
fuelCost08 int64
highway08 int64
make object
range int64
year int64
createdOn object
dtype: object
###Markdown
Casting as Integer
###Code
# cast highway08 as int8 and city08 & comb08 as int16
print("show int8 and int16 range")
print(np.iinfo(np.int8))
print(np.iinfo(np.int16))
print("cast highway08, city08, and comb08 as int")
display((
autos
[cols]
.astype({'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16'})
# .select_dtypes([int, 'int8'])
.select_dtypes(['integer']) # select integer like
.describe()
))
new_mem = (autos
[cols]
.astype({'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16'})
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show int8 and int16 range
Machine parameters for int8
---------------------------------------------------------------
min = -128
max = 127
---------------------------------------------------------------
Machine parameters for int16
---------------------------------------------------------------
min = -32768
max = 32767
---------------------------------------------------------------
cast highway08, city08, and comb08 as int
###Markdown
Remove NaN
###Code
# remove NaN from cylinders & displ and cast cylinders as int8
print("show columns with dtype as float")
display((
autos
[cols]
.select_dtypes('float')
))
print("show cylinders' summary")
print(autos.cylinders.describe()) # it has missing values (41144 != 40938)
print("show cylinders' values (including NaN)")
print(autos.cylinders.value_counts(dropna=False)) # show NaN
# where are they missing?
print("show rows with NaN values for cylinders")
display((
autos
[cols]
.query('cylinders.isna()',engine='python')
))
print("remove NaN values in cylinders & displ and cast cylinders as int")
# add cylinders and displ columns
display((
autos
[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'), # updating columns with new values, cylinderes is filled with 0 when NaN and then converted to int8
displ=autos.displ.fillna(0)) # displ is filled with 0 when NaN
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16',})
.describe()
))
new_mem = (
autos
[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'), # updating columns with new values, cylinderes is filled with 0 when NaN and then converted to int8
displ=autos.displ.fillna(0)) # displ is filled with 0 when NaN
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16',})
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show columns with dtype as float
###Markdown
Casting as Float
###Code
# cast displ as float
print("show float16 range")
print(np.finfo(np.float16))
print("cast displ as float")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'))
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16',})
))
new_mem = (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'))
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16',})
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show float16 range
Machine parameters for float16
---------------------------------------------------------------
precision = 3 resolution = 1.00040e-03
machep = -10 eps = 9.76562e-04
negep = -11 epsneg = 4.88281e-04
minexp = -14 tiny = 6.10352e-05
maxexp = 16 max = 6.55040e+04
nexp = 5 min = -max
---------------------------------------------------------------
cast displ as float
###Markdown
Casting Objects as Category
###Code
# show objects
print("show columns with dtype as object")
(
autos[cols]
.select_dtypes(object) # object that could be turned to categorical
)
# show drive's values
print("show drive's values (including NaN)")
(
autos
.drive.value_counts(dropna=False)
)
# show NaN
print("show rows with NaN values for drive")
(
autos[cols]
.query('drive.isna()',engine='python')
)
# show unique values based on year
print("show show unique values based on year")
(
autos[cols]
.groupby('year')
.drive
.nunique()
)
# # drive and make (in .astype) to category
# # converting two columns to categorical column
print("remove NaN values in drive and cast drive as category")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'))
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.describe()
))
new_mem = (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'))
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show columns with dtype as object
show drive's values (including NaN)
show rows with NaN values for drive
show show unique values based on year
remove NaN values in drive and cast drive as category
###Markdown
Casting as Category
###Code
# cast trany as category of automatic & speeds
print("show trany's values (including NaN)")
display((
autos
.trany.value_counts(dropna=False)
))
# drive and make (in .astype) to category
# converting two columns to categorical column
print("create two new columns: automatic and speeds")
print("authomatic: values that contain 'Auto' from trany column")
print("speeds: decimal values from trany column and fill NaN then cast as int")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8') # pull the digits from trany column
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.describe()
))
new_mem = (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8') # pull the digits from trany column
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show trany's values (including NaN)
###Markdown
Casting as Date
###Code
# cast createdOn as date
# add createdOn (Python doesn't like EST/EDT format)
print("cast createdOn as Date, but Python doesn't like EST/EDT format")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn).dt.tz_localize('America/New_York').dt.tz_convert('UTC')
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.describe()
))
# fix date warning
print("cast createdOn as Date with the right fix")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True))
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.describe()
))
new_mem = (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True))
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
cast createdOn as Date, but Python doesn't like EST/EDT format
###Markdown
Casting as Category for columns with multiple values
###Code
# cast eng_dscr as category of ffs
print("show eng_dscr's values (including NaN)")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True)),
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany'])
.eng_dscr
.value_counts(dropna=False)
))
# add ffs (Feedback fuel system), drop eng_dscr
print("create a new column: ffs")
print("ffs: values that contain 'FFS' from eng_dscr column")
display((
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True)),
ffs=autos.eng_dscr.str.contains('FFS'),
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany', 'eng_dscr'])
.describe()
))
new_mem = (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True)),
ffs=autos.eng_dscr.str.contains('FFS'),
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany', 'eng_dscr'])
.memory_usage(deep=True)
.sum()
)
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")
###Output
show eng_dscr's values (including NaN)
###Markdown
Cleaning up everything
###Code
# cleaning up everything
def tweak_autos(autos):
cols = ['city08','comb08', 'cylinders', 'displ', 'drive', 'eng_dscr', 'trany', 'fuelCost08', 'highway08','make','range','year','createdOn']
return (
autos[cols]
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True)),
ffs=autos.eng_dscr.str.contains('FFS'),
)
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany', 'eng_dscr'])
)
tweak_autos(autos)
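# (added check) memory saving of the cleaned-up frame, using the same old_mem baseline computed earlier
new_mem = tweak_autos(autos).memory_usage(deep=True).sum()
print(f"{100 - new_mem/old_mem * 100:.2f}% ({(old_mem-new_mem)/1000000:.2f}MB)")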
# adding more elements to display and store in variables
from IPython.display import display
def get_var(df, var_name):
globals()[var_name] = df.copy(deep=True)
return df
def tweak_autos(autos):
cols = ['city08','comb08', 'cylinders', 'displ', 'drive', 'eng_dscr', 'trany', 'fuelCost08', 'highway08','make','range','year','createdOn']
return (
autos[cols]
.pipe(get_var, 'old_df') # store an old copy of df
.assign(cylinders=autos.cylinders.fillna(0).astype('int8'),
displ=autos.displ.fillna(0).astype('float16'),
drive=autos.drive.fillna('Other').astype('category'),
automatic=autos.trany.str.contains('Auto'),
speeds=autos.trany.str.extract(r'(\d)+').fillna('20').astype('int8'), # pull the digits from trany column
createdOn=pd.to_datetime(autos.createdOn.replace({' EDT': '-04:00', ' EST': '-05:00'}, regex=True)),
ffs=autos.eng_dscr.str.contains('FFS'),
)
# debug
.pipe(lambda df: display(df) or df) # display while continuing doing chaining
.astype({ 'highway08': 'int8', 'city08': 'int16', 'comb08': 'int16',
'fuelCost08': 'int16', 'range': 'int16', 'year': 'int16', 'make': 'category'})
.drop(columns=['trany', 'eng_dscr'])
.pipe(get_var, 'processed_df') # store a processed copy of df
)
autos2 = tweak_autos(autos)
# can access those variables (old_df) and (processed_df)
# don't mutate (inplace doesn't save anything so you can't chain anything and shows more warnings)
# don't use apply when dealing with numbers
print("number")
def to_lper100km(val):
return 235.215/val
%timeit autos2.city08.apply(to_lper100km) # 50% slower
%timeit 235.215/autos2.city08 # leverage modern CPU architecture
# even when data is categorical, don't use apply
print("categorical")
def is_american(val):
return val in {'Chevrolet', 'Ford', 'Dodge', 'GMC', 'Tesla'}
%timeit autos2.make.apply(is_american)
%timeit autos2.make.isin({'Chevrolet', 'Ford', 'Dodge', 'GMC', 'Tesla'})
# however, when dealing with strings, apply could be faster
# strings in pandas aren't optimized for speed
# categorical makes it faster as you do the mapping from value to a category, whereas strings store the entire value
print("string")
%timeit autos2.make.astype(str).apply(is_american)
%timeit autos2.make.astype(str).isin({'Chevrolet', 'Ford', 'Dodge', 'GMC', 'Tesla'})
def country(val):
if val in {'Chevrolet', 'Ford', 'Dodge', 'GMC', 'Tesla'}:
return 'USA'
return 'Other'
values = {'Chevrolet', 'Ford', 'Dodge', 'GMC', 'Tesla'}
%%timeit
(
autos2
.assign(country=autos2.make.apply(country))
)
%%timeit
# if operating on number, it will be faster
(
autos2
.assign(country='US')
.assign(country=lambda df_:df_.country.where(df_.make.isin(values), 'Other'))
)
%%timeit
(
autos2
.assign(country=np.select(
[autos2.make.isin(values)], ['US'], 'Other'
))
)
%%timeit
(
autos2
.groupby('year')
[['comb08', 'speeds']]
.mean()
)
%%timeit
# order of column filtering/aggregation
(
autos2
.groupby('year')
.mean()
[['comb08', 'speeds']]
)
# can test multiple stuff easily
(
autos2
.groupby('year')
[['comb08', 'speeds']]
# .mean()
# .median()
# .quantile(.99) # 99% quantile
# .std()
.var()
.plot()
)
def second_to_last(set):
return set.iloc[-2]
(
autos2
.assign(country=autos2.make.apply(country))
.groupby(['year', 'country']) # two axises + frequency
.agg(['min', 'mean', second_to_last])
)
# has an issue with labeling due to multiple axis
# (
# autos2
# .assign(country=autos2.make.apply(country))
# .groupby(['year', 'country'])
# .mean()
# .plot()
# )
# solves it by unstacking
print("solves it by unstacking")
display((
autos2
.assign(country=autos2.make.apply(country))
.groupby(['year', 'country'])
.mean()
.unstack() # rotate country (unstacking) by sticking it to the columns
))
(
autos2
.assign(country=autos2.make.apply(country))
.groupby(['year', 'country'])
.mean()
.unstack() # rotate country (unstacking) by sticking it to the columns
.city08 # get the city08 column
.plot()
.legend(bbox_to_anchor=(1,1))
)
# smoothing the plot
print("smoothing the plot")
(
autos2
.assign(country=autos2.make.apply(country))
.groupby(['year', 'country'])
.mean()
.unstack() # rotate country (unstacking) by sticking it to the columns
.city08 # get the city08 column
.rolling(2) # rolling window of 2, to smooth out the curve
.mean()
.plot()
.legend(bbox_to_anchor=(1,1))
)
# someone's question: HAVING-like filtering of aggregated results in SQL that might not be directly available in Pandas
def vals_gt(df_, num):
return df_[df_.gt(num)].dropna()
(
autos2
.assign(country=autos2.make.apply(country))
.groupby(['year', 'country'])
.count()
.pipe(vals_gt, 700)
)
###Output
_____no_output_____ |
CitibikeProject.ipynb | ###Markdown
Description For the project, we will download data automatically from Citibike and store it into a database. We will be using all of the data available from Citibike. In order to do that, we will need to download and save around 17 GB of data. Due to the large amount of data, we can't really import the data directly into a pandas dataframe; instead, we have two options: 1. Import the data in chunks as dataframes and process them in batches. 2. Import all the data into a SQLite3 database on the hard drive and process it through SQL. I chose the second one due to the flexibility of the database model, since everything is stored locally and it provides the possibility to append at any time without any repercussions. The data can also be queried by other programs (e.g. a website). 1. Libraries We are importing all the required libraries for the project
###Code
import pylab as pl #plot package
import pandas as pd #powerful dataframe package
import numpy as np # math package
import os #file management package
import sqlite3 # Database management package
import requests #For downloading files
from zipfile import ZipFile
import re # for filtering text
import numpy as np # For pretty printing
import glob
###Output
_____no_output_____
###Markdown
2. Data preparation In this step we are setting up the data to be analysed. 2.1 Directory management We are creating a directory named data in the root directory. We create a data folder in order to have a specific place where we do file manipulation and data management. This is so we don't have all kinds of files in our root directory and it looks pretty :)
###Code
# We check if we are in the data folder
current_dir=os.getcwd()
if 'data' in current_dir:
os.chdir('..')
# We are setting the home directory to the Jupyter Root directory
HOME_DIR=os.getcwd()
print (HOME_DIR)
# We are saving the location of the data directory
DATA_DIR=HOME_DIR+'/data'
print(DATA_DIR)
# We check if the directory exists, if not,
try:
os.makedirs(DATA_DIR)
print("Creating directory" + DATA_DIR)
except FileExistsError :
print("Directory {} already exists".format(DATA_DIR))
os.chdir(DATA_DIR)
print("Changing directory to {}".format(DATA_DIR))
###Output
/home/jovyan
/home/jovyan/data
Creating directory/home/jovyan/data
Changing directory to /home/jovyan/data
###Markdown
2.2 Database setup We are setting up a SQLite database in order to store the imported trip data locally and query it with SQL.
###Code
# Creating or connecting to the data sqlite db and setting the cursor there
database_connection = sqlite3.connect('ImportedData.db')
database_cursor = database_connection.cursor()
# The cursor is the object we use to execute SQL statements and fetch their results over this connection
# Creating a table that tracks downloaded files so we don't have to import them again
database_cursor.execute("CREATE TABLE IF NOT EXISTS file_data(file_name TEXT UNIQUE)")
database_connection.commit()
pd.read_sql_query("Select * from file_data",database_connection)
###Output
_____no_output_____
###Markdown
2.3 Querying the Citibike data We are checking the data available on the Citibike website. The data is hosted on an Amazon AWS S3 bucket named tripdata. We fetch the bucket listing XML from S3 and parse it for the file names marked up in between `<Key>` tags. We filter for files that start with `2`, so that we don't get any files from Jersey City, since those start with `JC` and all the others start with `2`.
###Code
# Fetching a file list of all available files on the Citybike data server
s3_url = "https://s3.amazonaws.com/tripdata"
s3_xml_file = requests.get(s3_url).text
files_on_s3 = re.findall (r'<Key>(2.*?zip)</Key>', s3_xml_file)
files_on_s3.sort()
print ('We have found the following files on S3 ')
print ("\n".join(files_on_s3))
###Output
We have found the following files on S3
201306-citibike-tripdata.zip
201307-201402-citibike-tripdata.zip
201307-citibike-tripdata.zip
201308-citibike-tripdata.zip
201309-citibike-tripdata.zip
201310-citibike-tripdata.zip
201311-citibike-tripdata.zip
201312-citibike-tripdata.zip
201401-citibike-tripdata.zip
201402-citibike-tripdata.zip
201403-citibike-tripdata.zip
201404-citibike-tripdata.zip
201405-citibike-tripdata.zip
201406-citibike-tripdata.zip
201407-citibike-tripdata.zip
201408-citibike-tripdata.zip
201409-citibike-tripdata.zip
201410-citibike-tripdata.zip
201411-citibike-tripdata.zip
201412-citibike-tripdata.zip
201501-citibike-tripdata.zip
201502-citibike-tripdata.zip
201503-citibike-tripdata.zip
201504-citibike-tripdata.zip
201505-citibike-tripdata.zip
201506-citibike-tripdata.zip
201507-citibike-tripdata.zip
201508-citibike-tripdata.zip
201509-citibike-tripdata.zip
201510-citibike-tripdata.zip
201511-citibike-tripdata.zip
201512-citibike-tripdata.zip
201601-citibike-tripdata.zip
201602-citibike-tripdata.zip
201603-citibike-tripdata.zip
201604-citibike-tripdata.zip
201605-citibike-tripdata.zip
201606-citibike-tripdata.zip
201607-citibike-tripdata.zip
201608-citibike-tripdata.zip
201609-citibike-tripdata.zip
201610-citibike-tripdata.zip
201611-citibike-tripdata.zip
201612-citibike-tripdata.zip
201701-citibike-tripdata.csv.zip
201702-citibike-tripdata.csv.zip
201703-citibike-tripdata.csv.zip
201704-citibike-tripdata.csv.zip
201705-citibike-tripdata.csv.zip
201706-citibike-tripdata.csv.zip
201707-citibike-tripdata.csv.zip
201708-citibike-tripdata.csv.zip
201709-citibike-tripdata.csv.zip
201710-citibike-tripdata.csv.zip
201711-citibike-tripdata.csv.zip
201712-citibike-tripdata.csv.zip
201801-citibike-tripdata.csv.zip
201802-citibike-tripdata.csv.zip
201803-citibike-tripdata.csv.zip
201804-citibike-tripdata.csv.zip
201805-citibike-tripdata.csv.zip
201806-citibike-tripdata.csv.zip
201807-citibike-tripdata.csv.zip
201808-citibike-tripdata.csv.zip
201809-citibike-tripdata.csv.zip
201810-citibike-tripdata.csv.zip
201811-citibike-tripdata.csv.zip
201812-citibike-tripdata.csv.zip
201901-citibike-tripdata.csv.zip
201902-citibike-tripdata.csv.zip
201903-citibike-tripdata.csv.zip
201904-citibike-tripdata.csv.zip
201905-citibike-tripdata.csv.zip
201906-citibike-tripdata.csv.zip
201907-citibike-tripdata.csv.zip
201908-citibike-tripdata.csv.zip
201909-citibike-tripdata.csv.zip
201910-citibike-tripdata.csv.zip
###Markdown
2.4 Setup methods for file management Here we define all the helper methods we will use when adding data into the database: one for downloading, one for unzipping, one for deleting temporary files, one for checking whether a file has already been imported, one for recording an imported file name, and one for importing a CSV file's contents into the database.
###Code
def download_file(filename):
delete_files_ending_with('zip')
download_url = s3_url + '/' + filename
print("Downloading file: " + download_url)
file_data = requests.get(download_url)
open(filename, 'wb').write(file_data.content)
def unzipping_file(filename):
file = ZipFile(filename,'r')
print("Extracting... ")
file.extractall()
unzipped_file = glob.glob('*.csv')
print("Extracted " + unzipped_file[0] + " from " + filename + ". Deleting archive to save space...")
delete_files_ending_with('zip')
return unzipped_file[0]
def delete_files_ending_with(extension):
for file in glob.glob('*'+ extension):
os.remove(file)
def file_has_been_added_to_the_database(filename):
database_cursor.execute("SELECT file_name FROM file_data WHERE file_name = ?",(filename,))
if database_cursor.fetchone() is None:
return False
else:
return True
def save_file_name_into_database(filename):
database_cursor.execute("INSERT INTO file_data (file_name) VALUES(?)",(filename,))
database_connection.commit()
def import_data_from_file_into_database(filename):
print("Adding "+ filename + "into the database")
file_data = pd.read_csv(extracted_file, names=['tripduration','starttime','stoptime','start_station_id','start_station_name',
'start_station_latitude', 'start_station_longitude', 'end_station_id',
'end_station_name', 'end_station_latitude', 'end_station_longitude', 'bikeid',
'usertype', 'birth_year', 'gender'], parse_dates = ['starttime','stoptime'], infer_datetime_format=True, header=None, skiprows=[0])
file_data['starttime'] = file_data['starttime'].dt.strftime("%Y-%m-%d %H:%M:%S")
file_data['stoptime'] = file_data['stoptime'].dt.strftime("%Y-%m-%d %H:%M:%S")
file_data.to_sql('trip_data', database_connection, if_exists='append')
database_connection.commit()
print("Data from "+filename+" added to the database")
###Output
_____no_output_____
###Markdown
2.5 Inserting data into the database Now we compare the data we have with whatever is on the server. With the methods from above, we download, extract, and save the data into the database, record the file name in the database once it has been parsed, and delete temporary files along the way.
###Code
file_count = 0
for file in files_on_s3:
if not file_has_been_added_to_the_database(file):
try:
download_file(file)
extracted_file = unzipping_file(file)
import_data_from_file_into_database(extracted_file)
save_file_name_into_database(file)
delete_files_ending_with('csv')
            file_count += 1  # count how many files we added in this run
        except Exception:
            print("Close any open files and try again")
print('We imported ' + str(file_count) + ' files into imported_data')
sql_to_be_processed = """SELECT bikeid, tripduration, gender,starttime, stoptime ,usertype, birth_year,
cast(strftime('%Y', stoptime) as int) as stop_year,
cast(strftime('%Y', starttime) as int) as start_year,
case
when birth_year != 'NaN'
then strftime('%Y','now') - cast(birth_year as int)
else null
end age
FROM trip_data"""
pd.read_sql_query(sql_to_be_processed+ " limit 50", database_connection)
database_cursor.execute("CREATE TABLE bike_data as SELECT bikeid " +
" , sum(tripduration) as bike_lifecycle"
" , max(stop_year) as last_year_of_use"
" , min(start_year) as first_year_of_use"
" , count(bikeid) as number_of_uses"
" , count(case when gender = '1' then 1 else null end) as number_of_men_per_bike"
" , count(case when gender = '2' then 1 else null end) as number_of_women_per_bike"
" , count(case when usertype = 'Subscriber' then 1 else null end) as number_subscribers"
" , count(case when age <= 30 then 1 else null end) as under_30"
" , count(case when age > 31 and age <= 40 then 1 else null end) as beween_31_40"
" , count(case when age > 41 and age <= 50 then 1 else null end) as beween_41_50"
" , count(case when age > 50 then 1 else null end) as over_50"
" FROM (" + sql_to_be_processed +")"
"GROUP BY bikeid ")
database_connection.commit()
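# Illustrative sanity check (assumes the CREATE TABLE above succeeded): peek at the
# most heavily used bikes in the newly created bike_data table.
pd.read_sql_query("SELECT * FROM bike_data ORDER BY number_of_uses DESC LIMIT 5",
                  database_connection)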
###Output
_____no_output_____ |
cloud_build_tfx.ipynb | ###Markdown
References* https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training* https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai/ Setting up
###Code
!gcloud init
from google.colab import auth
auth.authenticate_user()
GOOGLE_CLOUD_PROJECT = "fast-ai-exploration"
GOOGLE_CLOUD_REGION = "us-central1"
GCS_BUCKET_NAME = "vertex-tfx-mlops"
PIPELINE_NAME = "penguin-vertex-training"
DATA_ROOT = "gs://{}/data/{}".format(GCS_BUCKET_NAME, PIPELINE_NAME)
MODULE_ROOT = "gs://{}/pipeline_module/{}".format(GCS_BUCKET_NAME, PIPELINE_NAME)
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error("Please set all required parameters.")
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
###Output
_____no_output_____
###Markdown
Training module for TFX
###Code
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified run_fn() to add distribution_strategy.
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
_FEATURE_KEYS = [
"culmen_length_mm",
"culmen_depth_mm",
"flipper_length_mm",
"body_mass_g",
]
_LABEL_KEY = "species"
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),
}
def _input_fn(
file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int,
) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema,
).repeat()
def _make_keras_model(learning_rate: float) -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation="relu")(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
optimizer = keras.optimizers.Adam(learning_rate)
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
model.summary(print_fn=logging.info)
return model
# NEW: Read `use_gpu` from the custom_config of the Trainer.
# if it uses GPU, enable MirroredStrategy.
def _get_distribution_strategy(fn_args: tfx.components.FnArgs):
if fn_args.custom_config.get("use_gpu", False):
logging.info("Using MirroredStrategy with one GPU.")
return tf.distribute.MirroredStrategy(devices=["device:GPU:0"])
return None
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
hyperparameters = fn_args.hyperparameters
logging.info("Hyperparameters:")
logging.info(hyperparameters)
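# Note: `fn_args.hyperparameters` is populated when the Trainer component is given a
# `hyperparameters` input (e.g. from a Tuner or an importer node); this module expects
# it to contain at least `learning_rate` and `num_epochs`.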
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=_TRAIN_BATCH_SIZE
)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=_EVAL_BATCH_SIZE
)
# NEW: If we have a distribution strategy, build a model in a strategy scope.
strategy = _get_distribution_strategy(fn_args)
if strategy is None:
model = _make_keras_model(hyperparameters["learning_rate"])
else:
with strategy.scope():
model = _make_keras_model(hyperparameters["learning_rate"])
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
epochs=hyperparameters["num_epochs"],
)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format="tf")
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
###Output
_____no_output_____
###Markdown
Cloud Build configurations
###Code
REPO_URL = "https://github.com/sayakpaul/CI-CD-for-Model-Training"
BRANCH = "dev"
PIPELINE_ROOT = "gs://{}/pipeline_root/{}".format(GCS_BUCKET_NAME, PIPELINE_NAME)
SERVING_MODEL_DIR = "gs://{}/serving_model/{}".format(GCS_BUCKET_NAME, PIPELINE_NAME)
VERSION = "1.0.0"
CICD_IMAGE_URI = f"gcr.io/tfx-oss-public/tfx:{VERSION}"
TFX_IMAGE_URI = f"gcr.io/{GOOGLE_CLOUD_PROJECT}/{PIPELINE_NAME}:{VERSION}"
SUBSTITUTIONS=f"""\
_REPO_URL='{REPO_URL}',\
_BRANCH={BRANCH},\
_PROJECT={GOOGLE_CLOUD_PROJECT},\
_REGION={GOOGLE_CLOUD_REGION},\
_PIPELINE_NAME={PIPELINE_NAME},\
_PIPELINE_ROOT={PIPELINE_ROOT},\
_MODULE_ROOT={MODULE_ROOT},\
_DATA_ROOT={DATA_ROOT},\
_SERVING_MODEL_DIR={SERVING_MODEL_DIR},\
_CICD_IMAGE_URI={CICD_IMAGE_URI},\
_TFX_IMAGE_URI={TFX_IMAGE_URI}
"""
!echo $SUBSTITUTIONS
###Output
_____no_output_____
###Markdown
Submit to Cloud BuildThe output of Cloud Build, in this case, is a compiled pipeline uploaded to GCS Bucket.
###Code
!git clone https://github.com/sayakpaul/CI-CD-for-Model-Training --quiet
!gcloud builds submit --no-source --timeout=60m \
--config CI-CD-for-Model-Training/build/pipeline-deployment.yaml \
--substitutions {SUBSTITUTIONS} \
--machine-type=e2-highcpu-8
###Output
_____no_output_____
###Markdown
Output:```shellID CREATE_TIME DURATION SOURCE IMAGES STATUS1619041e-a192-4de0-91f5-6799afa647ca 2021-08-24T08:16:37+00:00 7M45S - - SUCCESS```
###Code
!gsutil ls -lh {PIPELINE_ROOT}/
###Output
_____no_output_____
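Once the compiled pipeline spec is in `PIPELINE_ROOT` (or wherever the Cloud Build step writes it), it can be submitted to Vertex AI Pipelines. A minimal sketch, assuming the build produced a JSON spec named `{PIPELINE_NAME}.json` — the exact file name and location depend on `pipeline-deployment.yaml`:

```python
from google.cloud import aiplatform

aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)

job = aiplatform.PipelineJob(
    display_name=PIPELINE_NAME,
    # Assumed output path of the Cloud Build step; adjust to match the build config.
    template_path=f"{PIPELINE_ROOT}/{PIPELINE_NAME}.json",
    pipeline_root=PIPELINE_ROOT,
)
job.submit()
```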
###Markdown
References* https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training* https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai/ Setting up
###Code
!gcloud init
from google.colab import auth
auth.authenticate_user()
GOOGLE_CLOUD_PROJECT = 'fast-ai-exploration'
GOOGLE_CLOUD_REGION = 'us-central1'
GCS_BUCKET_NAME = 'vertex-tfx-mlops'
PIPELINE_NAME = 'penguin-vertex-training'
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
###Output
Copying gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv [Content-Type=application/octet-stream]...
/ [1 files][ 25.0 KiB/ 25.0 KiB]
Operation completed over 1 objects/25.0 KiB.
###Markdown
Training module for TFX
###Code
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified run_fn() to add distribution_strategy.
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
}, _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# NEW: Read `use_gpu` from the custom_config of the Trainer.
# if it uses GPU, enable MirroredStrategy.
def _get_distribution_strategy(fn_args: tfx.components.FnArgs):
if fn_args.custom_config.get('use_gpu', False):
logging.info('Using MirroredStrategy with one GPU.')
return tf.distribute.MirroredStrategy(devices=['device:GPU:0'])
return None
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
# NEW: If we have a distribution strategy, build a model in a strategy scope.
strategy = _get_distribution_strategy(fn_args)
if strategy is None:
model = _make_keras_model()
else:
with strategy.scope():
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
epochs=1)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
###Output
Copying file://penguin_trainer.py [Content-Type=text/x-python]...
/ [0 files][ 0.0 B/ 4.4 KiB]
/ [1 files][ 4.4 KiB/ 4.4 KiB]
Operation completed over 1 objects/4.4 KiB.
###Markdown
Cloud Build configurations
###Code
REPO_URL = "https://github.com/sayakpaul/CI-CD-for-Model-Training"
BRANCH = "main"
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
CICD_IMAGE_URI = 'gcr.io/tfx-oss-public/tfx:1.0.0'
SUBSTITUTIONS=f"""\
_REPO_URL='{REPO_URL}',\
_BRANCH={BRANCH},\
_PROJECT={GOOGLE_CLOUD_PROJECT},\
_REGION={GOOGLE_CLOUD_REGION},\
_PIPELINE_NAME={PIPELINE_NAME},\
_PIPELINE_ROOT={PIPELINE_ROOT},\
_MODULE_ROOT={MODULE_ROOT},\
_DATA_ROOT={DATA_ROOT},\
_SERVING_MODEL_DIR={SERVING_MODEL_DIR},\
_CICD_IMAGE_URI={CICD_IMAGE_URI}
"""
!echo $SUBSTITUTIONS
###Output
_REPO_URL=https://github.com/sayakpaul/CI-CD-for-Model-Training,_BRANCH=dev,_PROJECT=fast-ai-exploration,_REGION=us-central1,_PIPELINE_NAME=penguin-vertex-training,_PIPELINE_ROOT=gs://vertex-tfx-mlops/pipeline_root/penguin-vertex-training,_MODULE_ROOT=gs://vertex-tfx-mlops/pipeline_module/penguin-vertex-training,_DATA_ROOT=gs://vertex-tfx-mlops/data/penguin-vertex-training,_SERVING_MODEL_DIR=gs://vertex-tfx-mlops/serving_model/penguin-vertex-training,_CICD_IMAGE_URI=gcr.io/tfx-oss-public/tfx:1.0.0
###Markdown
Submit to Cloud BuildThe output of Cloud Build, in this case, is a compiled pipeline uploaded to GCS Bucket.
###Code
!git clone https://github.com/sayakpaul/CI-CD-for-Model-Training --quiet
!gcloud builds submit --no-source --timeout=60m \
--config CI-CD-for-Model-Training/build/pipeline-deployment.yaml \
--substitutions {SUBSTITUTIONS} --machine-type=e2-highcpu-8
###Output
_____no_output_____ |
formative_ipynb/2 Mechanics/2.1.ipynb | ###Markdown
Topic 2.1 - Motion__Formula booklet:__ four SUVAT equations*velocity* $$v = u + at $$*displacement* $$s = ut + \frac{1}{2}at^2$$*timeless* $$v^2 = u^2 + 2as$$*average displacement* $$s = \frac{(v + u)t}{2} $$ Question 1A fly travels along the x-axis. His starting point is $x = -8.0 m$ and his ending point is $x = -16 m$. His flight lasts $2.0$ seconds. What is his velocity? __Given__- $x_i = -8.0 m$- $x_f = -16 m$- $t = 2 s$__Formula__- $\Delta x = x_f - x_i$- $v = \frac{\Delta x}{t}$__Solution__- $\Delta x = x_f - x_i = -16 - (-8) = -8m$- $v = \frac{\Delta x}{t} = \frac{-8}{2} = -4 \frac{m}{s}$__Answer:__ The velocity of the fly is $-4 \frac{m}{s}$.
###Code
x_i = -8.0 # initial point in m
x_f = -16 # final point in m
t = 2 # time to travel the distance in s
x = x_f - x_i # displacement in m
v = x / t # velocity
print('The velocity of the fly is', v, 'm/s.')
###Output
The velocity of the fly is -4.0 m/s.
###Markdown
Question 2A car traveling at $48 ms^{-1}$ is brought to a stop in $3.0$ seconds. What is its acceleration?__Given__- $u = 48 \frac{m}{s}$- $t = 3 s$- $v = 0$__Formula__ velocity- $v = u + at$__Solution__- Since $v = 0$ the formula rearranges:- $-u = at$ or $a = -\frac{u}{t} = -\frac{48}{3} = -16\frac{m}{s^2}$__Answer:__ The acceleration of the car is $-16\frac{m}{s^2}$.
###Code
v = 0 # final velocity - implicit - stop or zero
u = 48 # initial velocity
t = 3 # time to stop
a = -u / t # acceleration is change in velocity over time
print('The acceleration of the car is',a,'m/s²')
###Output
The acceleration of the car is -16.0 m/s²
|
2.Model Implementation/0. DNN/3. DNN with pytorch/.ipynb_checkpoints/1. MLP-MNIST-exp2(n-layer,act)-checkpoint.ipynb | ###Markdown
1. model
###Code
class MLP(nn.Module):
def __init__(self, in_dim, out_dim, hid_dim, n_layer, act, batch_normal, dropout_p, weight_init):
super(MLP,self).__init__()
self.in_dim = in_dim
self.out_dim = out_dim
self.hid_dim = hid_dim
self.n_layer = n_layer # n_layer = hid_dim(1) + ... + hid_dim(n-1) + out_dim(n)
self.act = act
self.batch_normal = batch_normal
self.dropout = dropout_p
#===Create sequence space===#
self.linears = nn.ModuleList()
self.batch_normals = nn.ModuleList()
self.fc1 = nn.Linear(self.in_dim, self.hid_dim)
for idx in range(n_layer-1):
self.linears.append(nn.Linear(self.hid_dim, self.hid_dim))
# apply batch normalization per layer -> repeat [linear - BN - activation]
if self.batch_normal == True:
self.batch_normals.append(nn.BatchNorm1d(hid_dim))
self.fc2 = nn.Linear(self.hid_dim, self.out_dim)
#===Create Activation Function===#
if self.act == 'sigmoid':
self.act = nn.Sigmoid()
elif self.act == 'relu':
self.act = nn.ReLU()
elif self.act == 'tanh':
self.act = nn.Tanh()
elif self.act == 'leaky_relu':
self.act = nn.LeakyReLU()
else:
raise ValueError("no valid activation function selected(sigmoid, relu, leaky_relu, tanh)")
#===Create Regularization layer===#
# dropout
self.dropout = nn.Dropout(self.dropout)
# weight_initialization
if weight_init == 'xavier':
self.xavier_init()
elif weight_init == 'he':
self.he_init()
else:
raise ValueError("no valid weight_initializer selected(xavier, he)")
def xavier_init(self):
for linear in self.linears:
nn.init.xavier_normal_(linear.weight)
linear.bias.data.fill_(0.01)
def he_init(self):
for linear in self.linears:
torch.nn.init.kaiming_normal_(linear.weight)
linear.bias.data.fill_(0.01)
def forward(self,x):
out = self.act(self.fc1(x))
#===hidden layer===#
# apply batch normalization per hidden layer -> repeat [weight_init - linear - BN - activation - dropout]
# batch_norm and dropout are active only in model.train() mode
# batch_norm and dropout must be applied to the hidden layers only!
for idx in range(len(self.linears)):
out = self.linears[idx](out)
if self.batch_normals:
out = self.batch_normals[idx](out)
out = self.act(out)
out = self.dropout(out)
#===hidden layer===#
out = self.fc2(out)
return out
# __init__(self, in_dim, out_dim, hid_dim, n_layer, act, batch_normal, dropout_p, weight_init):
model = MLP(3072,10,100,4,'leaky_relu',batch_normal=True,dropout_p=0.1,weight_init='he')
###Output
_____no_output_____
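A quick shape check of the model defined above (illustrative; the 3072-dimensional input corresponds to a flattened 32x32x3 image):

```python
import torch

x = torch.randn(8, 3072)     # a dummy batch of 8 flattened images
model.eval()                 # use running stats for BatchNorm, disable dropout
with torch.no_grad():
    out = model(x)
print(out.shape)             # expected: torch.Size([8, 10])
```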
###Markdown
2. Train
###Code
def train(model, partition, optimizer, criterion, args):
# input data preparation
trainloader = torch.utils.data.DataLoader(partition['train'],
batch_size=args.train_batch_size,
shuffle=True, num_workers=2)
model.train()
train_loss = 0.0
accuracy_batch = 0.0
total_sample = 0
for i, samples in enumerate(trainloader):
x_data, y_label = samples
x_data = x_data.view(-1,3072)
x_data = x_data.to(device)
y_label = y_label.to(device)
# forward
output = model(x_data)
cost = criterion(output, y_label)
# backward
optimizer.zero_grad()
cost.backward()
optimizer.step()
train_loss += cost.item()
_, predicted_label = torch.max(output, dim=1)
correct = predicted_label == y_label
accuracy_batch += correct.float().sum().item()
total_sample += y_label.size(0)
# average loss per batch (len(trainloader) == number of batches)
train_loss_batch = train_loss / len(trainloader)
# average accuracy over all samples
train_acc_batch = (accuracy_batch / total_sample)*100
# return the trained model so it can be reused afterwards (e.g. for validation)
return model, train_loss_batch, train_acc_batch
###Output
_____no_output_____
###Markdown
3. Validate
###Code
def validate(model, partition, criterion, args):
valloader = torch.utils.data.DataLoader(partition['val'],
batch_size=args.test_batch_size,
shuffle=False, num_workers=2)
model.eval()
val_loss = 0.0
accuracy_batch = 0.0
total_sample = 0
with torch.no_grad():
for samples in valloader:
x_data, y_label = samples
x_data = x_data.view(-1,3072)
x_data = x_data.to(device)
y_label = y_label.to(device)
# forward
output = model(x_data)
cost = criterion(output, y_label)
# backward (X)
val_loss += cost.item()
_, predicted_label = torch.max(output, dim=1)
correct = predicted_label == y_label
accuracy_batch += correct.float().sum().item()
total_sample += y_label.size(0)
val_loss_batch = val_loss / len(valloader)
val_acc_batch = (accuracy_batch / total_sample)*100
return val_loss_batch, val_acc_batch
###Output
_____no_output_____
###Markdown
4. Test
###Code
def test(model, partition, args):
testloader = torch.utils.data.DataLoader(partition['test'],
batch_size=args.test_batch_size,
shuffle=False, num_workers=2)
model.eval()
accuracy_batch = 0.0
total_sample = 0
with torch.no_grad():
for samples in testloader:
x_data, y_label = samples
x_data = x_data.view(-1,3072)
x_data = x_data.to(device)
y_label = y_label.to(device)
# forward (X)
# backward (X)
output = model(x_data)
_, predicted_label = torch.max(output, dim=1)
correct = predicted_label == y_label
accuracy_batch += correct.float().sum().item()
total_sample += y_label.size(0)
test_acc_batch = (accuracy_batch / total_sample)*100
return test_acc_batch
###Output
_____no_output_____
###Markdown
5. Experiment
###Code
def experiment(partition,args):
model = MLP(args.in_dim, args.out_dim, args.hid_dim,
args.n_layer, args.act,
args.batch_normal, args.dropout_p, args.weight_init)
model.to(device)
# Loss function
criterion = nn.CrossEntropyLoss()
# Optimizer
if args.optim == 'SGD':
optimizer = optim.SGD(model.parameters(), lr = args.lr, weight_decay = args.l2)
elif args.optim == 'RMSprop':
optimizer = optim.RMSprop(model.parameters(), lr = args.lr, weight_decay = args.l2)
elif args.optim == 'ADAM':
optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.l2)
else:
raise ValueError("no valid optimizer selected(SGD, RMSprop, ADAM)")
# Create loss, accuracy list for visualization(seaborn)
# epoch-wise loss, accuracy
train_losses, val_losses = [], []
train_accs, val_accs = [], []
# loop (train / val)
for epoch in range(args.epoch+1):
ts = time.time()
model, train_loss_batch, train_acc_batch = train(model, partition, optimizer, criterion, args)
val_loss_batch, val_acc_batch = validate(model, partition, criterion, args)
te = time.time()
train_losses.append(train_loss_batch)
val_losses.append(val_loss_batch)
train_accs.append(train_acc_batch)
val_accs.append(val_acc_batch)
print('Epoch {}, Acc(train/val): {:2.2f}/{:2.2f}, Loss(train/val) {:2.2f}/{:2.2f}.\
Took {:2.2f} sec'.format(epoch, train_acc_batch, val_acc_batch, train_loss_batch, val_loss_batch, te-ts))
test_acc_batch = test(model, partition, args)
# to keep track of the result of each experiment
result = {}
result['train_losses'] = train_losses
result['val_losses'] = val_losses
result['train_accs'] = train_accs
result['val_accs'] = val_accs
result['train_acc'] = train_acc_batch
result['val_acc'] = val_acc_batch
result['test_acc'] = test_acc_batch
# vars(object): returns the object's attributes as a dictionary
# (used here to return the experiment arguments together with the result)
return vars(args), result
###Output
_____no_output_____
###Markdown
6. Manage Results as a File
###Code
import hashlib
import json
from os import listdir
from os.path import isfile, join
import pandas as pd
def save_exp_result(setting, result):
exp_name = setting['exp_name']
del setting['epoch']
del setting['test_batch_size']
hash_key = hashlib.sha1(str(setting).encode()).hexdigest()[:6]
filename = './results/{}-{}.json'.format(exp_name, hash_key)
# .update merges dictionaries (result.update(setting): result dict + setting dict)
result.update(setting)
with open(filename, 'w') as f:
json.dump(result, f)
def load_exp_result(exp_name):
dir_path = './results'
filenames = [f for f in listdir(dir_path) if isfile(join(dir_path, f)) if '.json' in f]
list_result = []
for filename in filenames:
if exp_name in filename:
with open(join(dir_path, filename), 'r') as infile:
results = json.load(infile)
list_result.append(results)
df = pd.DataFrame(list_result) # .drop(columns=[])
return df
###Output
_____no_output_____
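Both helpers read and write under `./results`; creating the directory up front avoids a `FileNotFoundError` on the first save (illustrative):

```python
import os

os.makedirs('./results', exist_ok=True)
```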
###Markdown
7. Experiment
###Code
# ====== Random Seed Initialization ====== #
seed = 123
np.random.seed(seed)
torch.manual_seed(seed)
parser = argparse.ArgumentParser()
args = parser.parse_args("")
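# parse_args("") returns an empty Namespace that we can populate by hand below
# (a common trick for using argparse-style configs inside a notebook).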
args.exp_name = "exp1_n_layer_hid_dim"
# ====== Model Capacity ====== #
args.in_dim = 3072
args.out_dim = 10
args.hid_dim = 100
args.act = 'relu'
# ====== Regularization ======= #
args.dropout_p = 0.2
args.batch_normal = True
args.l2 = 0.00001
args.weight_init = 'he'
# ====== Optimizer & Training ====== #
args.optim = 'ADAM' #'RMSprop' #SGD, RMSprop, ADAM...
args.lr = 0.0015
args.epoch = 10
args.train_batch_size = 256
args.test_batch_size = 1024
# ====== Experiment Variable ====== #
name_var1 = 'n_layer'
name_var2 = 'act'
list_var1 = [3, 4, 5]
list_var2 = ['relu','leaky_relu']
for var1 in list_var1:
for var2 in list_var2:
setattr(args, name_var1, var1)
setattr(args, name_var2, var2)
print(args)
setting, result = experiment(partition, deepcopy(args))
save_exp_result(setting, result)
###Output
Namespace(act='relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=3, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 35.39/42.37, Loss(train/val) 1.81/1.62. Took 10.87 sec
Epoch 1, Acc(train/val): 43.41/46.50, Loss(train/val) 1.58/1.50. Took 10.75 sec
Epoch 2, Acc(train/val): 47.40/47.61, Loss(train/val) 1.48/1.47. Took 10.75 sec
Epoch 3, Acc(train/val): 49.23/49.26, Loss(train/val) 1.41/1.43. Took 10.77 sec
Epoch 4, Acc(train/val): 51.50/50.04, Loss(train/val) 1.36/1.41. Took 10.95 sec
Epoch 5, Acc(train/val): 53.11/50.68, Loss(train/val) 1.32/1.40. Took 10.71 sec
Epoch 6, Acc(train/val): 54.11/50.66, Loss(train/val) 1.28/1.39. Took 10.83 sec
Epoch 7, Acc(train/val): 55.39/51.44, Loss(train/val) 1.25/1.38. Took 10.84 sec
Epoch 8, Acc(train/val): 56.91/51.29, Loss(train/val) 1.21/1.38. Took 10.67 sec
Epoch 9, Acc(train/val): 57.44/51.35, Loss(train/val) 1.19/1.38. Took 10.83 sec
Epoch 10, Acc(train/val): 58.36/51.51, Loss(train/val) 1.16/1.37. Took 10.89 sec
Namespace(act='leaky_relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=3, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 35.17/42.14, Loss(train/val) 1.81/1.61. Took 10.89 sec
Epoch 1, Acc(train/val): 44.01/45.74, Loss(train/val) 1.58/1.51. Took 10.85 sec
Epoch 2, Acc(train/val): 47.23/47.54, Loss(train/val) 1.48/1.45. Took 10.80 sec
Epoch 3, Acc(train/val): 49.48/49.20, Loss(train/val) 1.41/1.42. Took 10.99 sec
Epoch 4, Acc(train/val): 51.48/50.05, Loss(train/val) 1.36/1.40. Took 10.76 sec
Epoch 5, Acc(train/val): 52.94/50.32, Loss(train/val) 1.32/1.39. Took 10.84 sec
Epoch 6, Acc(train/val): 54.10/51.18, Loss(train/val) 1.28/1.38. Took 10.98 sec
Epoch 7, Acc(train/val): 55.77/51.91, Loss(train/val) 1.24/1.36. Took 10.88 sec
Epoch 8, Acc(train/val): 56.47/50.57, Loss(train/val) 1.22/1.39. Took 10.76 sec
Epoch 9, Acc(train/val): 57.55/52.07, Loss(train/val) 1.19/1.36. Took 11.09 sec
Epoch 10, Acc(train/val): 58.54/51.48, Loss(train/val) 1.16/1.38. Took 10.71 sec
Namespace(act='relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=4, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 32.67/40.89, Loss(train/val) 1.87/1.64. Took 11.09 sec
Epoch 1, Acc(train/val): 41.79/44.37, Loss(train/val) 1.63/1.54. Took 11.20 sec
Epoch 2, Acc(train/val): 45.07/45.90, Loss(train/val) 1.54/1.49. Took 10.98 sec
Epoch 3, Acc(train/val): 47.57/47.43, Loss(train/val) 1.47/1.46. Took 11.06 sec
Epoch 4, Acc(train/val): 49.62/48.17, Loss(train/val) 1.42/1.44. Took 11.10 sec
Epoch 5, Acc(train/val): 51.48/49.13, Loss(train/val) 1.37/1.42. Took 11.21 sec
Epoch 6, Acc(train/val): 52.53/50.43, Loss(train/val) 1.33/1.39. Took 11.14 sec
Epoch 7, Acc(train/val): 53.42/49.77, Loss(train/val) 1.31/1.41. Took 11.27 sec
Epoch 8, Acc(train/val): 54.80/50.84, Loss(train/val) 1.27/1.39. Took 10.93 sec
Epoch 9, Acc(train/val): 55.59/51.70, Loss(train/val) 1.24/1.37. Took 11.05 sec
Epoch 10, Acc(train/val): 56.79/51.88, Loss(train/val) 1.22/1.37. Took 11.13 sec
Namespace(act='leaky_relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=4, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 32.48/41.10, Loss(train/val) 1.87/1.64. Took 11.17 sec
Epoch 1, Acc(train/val): 42.08/44.10, Loss(train/val) 1.63/1.56. Took 10.99 sec
Epoch 2, Acc(train/val): 45.58/46.94, Loss(train/val) 1.53/1.48. Took 11.22 sec
Epoch 3, Acc(train/val): 48.34/48.50, Loss(train/val) 1.45/1.43. Took 11.11 sec
Epoch 4, Acc(train/val): 50.56/49.07, Loss(train/val) 1.40/1.43. Took 11.01 sec
Epoch 5, Acc(train/val): 51.71/50.13, Loss(train/val) 1.36/1.40. Took 11.11 sec
Epoch 6, Acc(train/val): 52.88/50.35, Loss(train/val) 1.33/1.39. Took 11.29 sec
Epoch 7, Acc(train/val): 54.44/51.06, Loss(train/val) 1.29/1.37. Took 11.08 sec
Epoch 8, Acc(train/val): 55.52/50.80, Loss(train/val) 1.26/1.39. Took 10.98 sec
Epoch 9, Acc(train/val): 56.57/50.90, Loss(train/val) 1.23/1.39. Took 11.28 sec
Epoch 10, Acc(train/val): 57.47/51.48, Loss(train/val) 1.21/1.36. Took 11.11 sec
Namespace(act='relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=5, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 30.75/39.18, Loss(train/val) 1.91/1.68. Took 11.36 sec
Epoch 1, Acc(train/val): 40.24/44.13, Loss(train/val) 1.67/1.56. Took 11.40 sec
Epoch 2, Acc(train/val): 43.59/44.55, Loss(train/val) 1.57/1.54. Took 11.21 sec
Epoch 3, Acc(train/val): 45.96/46.85, Loss(train/val) 1.51/1.49. Took 11.41 sec
Epoch 4, Acc(train/val): 48.28/48.10, Loss(train/val) 1.46/1.45. Took 11.27 sec
Epoch 5, Acc(train/val): 50.15/48.76, Loss(train/val) 1.41/1.44. Took 11.27 sec
Epoch 6, Acc(train/val): 51.50/49.20, Loss(train/val) 1.37/1.42. Took 11.41 sec
Epoch 7, Acc(train/val): 52.65/50.30, Loss(train/val) 1.34/1.41. Took 11.12 sec
Epoch 8, Acc(train/val): 53.95/50.60, Loss(train/val) 1.31/1.39. Took 11.26 sec
Epoch 9, Acc(train/val): 54.89/51.54, Loss(train/val) 1.28/1.38. Took 11.22 sec
Epoch 10, Acc(train/val): 55.66/50.72, Loss(train/val) 1.26/1.39. Took 11.29 sec
Namespace(act='leaky_relu', batch_normal=True, dropout_p=0.2, epoch=10, exp_name='exp1_n_layer_hid_dim', hid_dim=100, in_dim=3072, l2=1e-05, lr=0.0015, n_layer=5, optim='ADAM', out_dim=10, test_batch_size=1024, train_batch_size=256, weight_init='he')
Epoch 0, Acc(train/val): 29.89/38.84, Loss(train/val) 1.93/1.68. Took 11.33 sec
Epoch 1, Acc(train/val): 40.30/43.51, Loss(train/val) 1.67/1.58. Took 11.52 sec
Epoch 2, Acc(train/val): 44.03/46.51, Loss(train/val) 1.58/1.51. Took 11.19 sec
Epoch 3, Acc(train/val): 46.49/46.78, Loss(train/val) 1.51/1.49. Took 11.36 sec
Epoch 4, Acc(train/val): 48.74/48.39, Loss(train/val) 1.45/1.46. Took 11.21 sec
Epoch 5, Acc(train/val): 49.89/48.96, Loss(train/val) 1.41/1.43. Took 11.53 sec
Epoch 6, Acc(train/val): 51.71/49.78, Loss(train/val) 1.37/1.41. Took 11.26 sec
Epoch 7, Acc(train/val): 52.60/51.13, Loss(train/val) 1.33/1.38. Took 11.42 sec
Epoch 8, Acc(train/val): 54.43/50.90, Loss(train/val) 1.30/1.40. Took 11.47 sec
Epoch 9, Acc(train/val): 55.34/51.10, Loss(train/val) 1.27/1.39. Took 11.37 sec
Epoch 10, Acc(train/val): 56.08/51.91, Loss(train/val) 1.25/1.38. Took 11.44 sec
###Markdown
8. Visualize
###Code
import seaborn as sns
import matplotlib.pyplot as plt
df = load_exp_result('exp1')
fig, ax = plt.subplots(1, 3)
fig.set_size_inches(15, 6)
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
sns.barplot(x='n_layer', y='train_acc', hue='act', data=df, ax=ax[0])
sns.barplot(x='n_layer', y='val_acc', hue='act', data=df, ax=ax[1])
sns.barplot(x='n_layer', y='test_acc', hue='act', data=df, ax=ax[2])
var1 = 'n_layer'
var2 = 'act'
df = load_exp_result('exp1')
list_v1 = df[var1].unique()
list_v2 = df[var2].unique()
list_data = []
for value1 in list_v1:
for value2 in list_v2:
row = df.loc[df[var1]==value1]
row = row.loc[df[var2]==value2]
train_losses = list(row.train_losses)[0]
val_losses = list(row.val_losses)[0]
for epoch, train_loss in enumerate(train_losses):
list_data.append({'type':'train', 'loss':train_loss, 'epoch':epoch, var1:value1, var2:value2})
for epoch, val_loss in enumerate(val_losses):
list_data.append({'type':'val', 'loss':val_loss, 'epoch':epoch, var1:value1, var2:value2})
df = pd.DataFrame(list_data)
g = sns.FacetGrid(df, row=var2, col=var1, hue='type', margin_titles=True, sharey=False)
g = g.map(plt.plot, 'epoch', 'loss', marker='.')
g.add_legend()
g.fig.suptitle('Train loss vs Val loss')
plt.subplots_adjust(top=0.89)
var1 = 'n_layer'
var2 = 'act'
df = load_exp_result('exp1')
list_v1 = df[var1].unique()
list_v2 = df[var2].unique()
list_data = []
for value1 in list_v1:
for value2 in list_v2:
row = df.loc[df[var1]==value1]
row = row.loc[df[var2]==value2]
train_accs = list(row.train_accs)[0]
val_accs = list(row.val_accs)[0]
test_acc = list(row.test_acc)[0]
for epoch, train_acc in enumerate(train_accs):
list_data.append({'type':'train', 'Acc':train_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2})
for epoch, val_acc in enumerate(val_accs):
list_data.append({'type':'val', 'Acc':val_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2})
df = pd.DataFrame(list_data)
g = sns.FacetGrid(df, row=var2, col=var1, hue='type', margin_titles=True, sharey=False)
g = g.map(plt.plot, 'epoch', 'Acc', marker='.')
def show_acc(x, y, metric, **kwargs):
plt.scatter(x, y, alpha=0.3, s=1)
metric = "Test Acc: {:1.3f}".format(list(metric.values)[0])
plt.text(0.05, 0.95, metric, horizontalalignment='left', verticalalignment='center', transform=plt.gca().transAxes, bbox=dict(facecolor='yellow', alpha=0.5, boxstyle="round,pad=0.1"))
g = g.map(show_acc, 'epoch', 'Acc', 'test_acc')
g.add_legend()
g.fig.suptitle('Train Accuracy vs Val Accuracy')
plt.subplots_adjust(top=0.89)
###Output
_____no_output_____ |
notebooks/mag/Mag_Induced2D.ipynb | ###Markdown
This is the Jupyter Notebook, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it. To use the notebook:- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)- You can alter variables and re-run cells- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook) This notebook uses code adapted from SimPEG- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
###Code
import numpy as np
from geoscilabs.mag import Mag, Simulator
from SimPEG import PF, Utils, Mesh
%matplotlib inline
###Output
_____no_output_____
###Markdown
How do we define direction of an earth magnetic field?Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system: - X (Easting), - Y (Northing), and - Z (Up). Here we consider an earth magnetic field ($\vec{B_0}$), of which intensity is one. To define this unit vector, we use inclinatino and declination:- Declination: An angle from geographic North (Ng) (positive clockwise)- Inclination: Vertical angle from the N-E plane (positive down) What's data: total field anomalyWe consider a typical form of magnetic data. To illustrate this we consider an suceptible object embedded in the earth. Based upon the earth magnetic field ($\vec{B}_0$), this object will generate anomalous magnetic field ($\vec{B}_A$). We define an unit vector $\hat{B}_0$ for the earth field as $$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$ We measure both earth and anomalous magnetic field such that$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$Total field anomaly, $\triangle \vec{B}$ can be defined as$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_E| $$ If $|\vec{B}|\ll|\vec{B}_E|$, then that is total field anomaly $\triangle \vec{B}$ is the projection of the anomalous field onto the direction of the earth field:$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|cos\theta$$ Define a 3D prismOur model is a rectangular prism. Parameters to define this prism are given below:- dx: length in Easting (x) direction (meter)- dy: length in Northing (y) direction (meter)- dz: length in Depth (z) direction (meter) below the receiver- depth: top boundary of the prism (meter)- pinc: inclination of the prism (reference is a unit northing vector; degree)- pdec: declination of the prism (reference is a unit northing vector; degree)You can also change the height of the survey grid above the ground- rx_h: height of the grid (meter)*Green dots show a plane where we measure data.*
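As a small illustration of the convention above (X-Easting, Y-Northing, Z-Up, inclination positive down, declination clockwise from geographic north), the unit vector of the earth field can be computed as follows — this helper is illustrative and not part of the lab code:

```python
import numpy as np

def inc_dec_to_unit_vector(inclination_deg, declination_deg):
    """Unit vector of the field in (Easting, Northing, Up) coordinates."""
    inc = np.deg2rad(inclination_deg)
    dec = np.deg2rad(declination_deg)
    b_east = np.cos(inc) * np.sin(dec)
    b_north = np.cos(inc) * np.cos(dec)
    b_up = -np.sin(inc)   # positive inclination points downward
    return np.array([b_east, b_north, b_up])

# e.g. inclination 83.8 deg, declination 25.4 deg (the angles appearing in B below)
print(inc_dec_to_unit_vector(83.8, 25.4))
```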
###Code
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey)
display(param)
param.result
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
display(model)
###Output
_____no_output_____
###Markdown
Magnetic appletBased on the prism that you made above, below Magnetic applet computes magnetic field at receiver locations, and provide both 2D map (left) and profile line (right). For the prism, you can alter:- sus: susceptibility of the prismParameters for the earth field are:- Einc: inclination of the earth field (degree)- Edec: declination of the earth field (degree)- Bigrf: intensity of the earth field (nT)For data, you can view:- tf: total field anomaly, - bx :x-component, - by :y-component, - bz :z-componentYou can simulate and view remanent magnetization effect with parameters:- irt: "induced", "remanent", or "total"- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)- rinc: inclination of the remanent magnetization (degree)- rdec: declination of the remanent magnetization (degree)
###Code
plotwidget = Simulator.PFSimulator(model, param)
display(plotwidget)
###Output
_____no_output_____
###Markdown
This is the Jupyter Notebook, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it. To use the notebook:- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)- You can alter variables and re-run cells- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook) This notebook uses code adapted from SimPEG- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
###Code
import numpy as np
from geoscilabs.mag import Mag, Simulator
from SimPEG.potential_fields import magnetics as mag
from SimPEG import utils, data
from discretize import TensorMesh
###Output
_____no_output_____
###Markdown
How do we define direction of an earth magnetic field?Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system: - X (Easting), - Y (Northing), and - Z (Up). Here we consider an earth magnetic field ($\vec{B_0}$), of which intensity is one. To define this unit vector, we use inclinatino and declination:- Declination: An angle from geographic North (Ng) (positive clockwise)- Inclination: Vertical angle from the N-E plane (positive down) What's data: total field anomalyWe consider a typical form of magnetic data. To illustrate this we consider an suceptible object embedded in the earth. Based upon the earth magnetic field ($\vec{B}_0$), this object will generate anomalous magnetic field ($\vec{B}_A$). We define an unit vector $\hat{B}_0$ for the earth field as $$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$ We measure both earth and anomalous magnetic field such that$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$Total field anomaly, $\triangle \vec{B}$ can be defined as$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_E| $$ If $|\vec{B}|\ll|\vec{B}_E|$, then that is total field anomaly $\triangle \vec{B}$ is the projection of the anomalous field onto the direction of the earth field:$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|cos\theta$$ Define a 3D prismOur model is a rectangular prism. Parameters to define this prism are given below:- dx: length in Easting (x) direction (meter)- dy: length in Northing (y) direction (meter)- dz: length in Depth (z) direction (meter) below the receiver- depth: top boundary of the prism (meter)- pinc: inclination of the prism (reference is a unit northing vector; degree)- pdec: declination of the prism (reference is a unit northing vector; degree)You can also change the height of the survey grid above the ground- rx_h: height of the grid (meter)*Green dots show a plane where we measure data.*
###Code
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
param
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
model
###Output
_____no_output_____
###Markdown
Magnetic appletBased on the prism that you made above, below Magnetic applet computes magnetic field at receiver locations, and provide both 2D map (left) and profile line (right). For the prism, you can alter:- sus: susceptibility of the prismParameters for the earth field are:- Einc: inclination of the earth field (degree)- Edec: declination of the earth field (degree)- Bigrf: intensity of the earth field (nT)For data, you can view:- tf: total field anomaly, - bx :x-component, - by :y-component, - bz :z-componentYou can simulate and view remanent magnetization effect with parameters:- irt: "induced", "remanent", or "total"- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)- rinc: inclination of the remanent magnetization (degree)- rdec: declination of the remanent magnetization (degree)
###Code
Simulator.PFSimulator(model, param)
###Output
_____no_output_____
###Markdown
This is the Jupyter Notebook, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it. To use the notebook:- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)- You can alter variables and re-run cells- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook) This notebook uses code adapted from SimPEG- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
###Code
import numpy as np
from geoscilabs.mag import Mag, Simulator
from SimPEG.potential_fields import magnetics as mag
from SimPEG import utils, data
from discretize import TensorMesh
%matplotlib inline
###Output
_____no_output_____
###Markdown
How do we define direction of an earth magnetic field?Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system: - X (Easting), - Y (Northing), and - Z (Up). Here we consider an earth magnetic field ($\vec{B_0}$), of which intensity is one. To define this unit vector, we use inclinatino and declination:- Declination: An angle from geographic North (Ng) (positive clockwise)- Inclination: Vertical angle from the N-E plane (positive down) What's data: total field anomalyWe consider a typical form of magnetic data. To illustrate this we consider an suceptible object embedded in the earth. Based upon the earth magnetic field ($\vec{B}_0$), this object will generate anomalous magnetic field ($\vec{B}_A$). We define an unit vector $\hat{B}_0$ for the earth field as $$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$ We measure both earth and anomalous magnetic field such that$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$Total field anomaly, $\triangle \vec{B}$ can be defined as$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_E| $$ If $|\vec{B}|\ll|\vec{B}_E|$, then that is total field anomaly $\triangle \vec{B}$ is the projection of the anomalous field onto the direction of the earth field:$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|cos\theta$$ Define a 3D prismOur model is a rectangular prism. Parameters to define this prism are given below:- dx: length in Easting (x) direction (meter)- dy: length in Northing (y) direction (meter)- dz: length in Depth (z) direction (meter) below the receiver- depth: top boundary of the prism (meter)- pinc: inclination of the prism (reference is a unit northing vector; degree)- pdec: declination of the prism (reference is a unit northing vector; degree)You can also change the height of the survey grid above the ground- rx_h: height of the grid (meter)*Green dots show a plane where we measure data.*
###Code
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
display(param)
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
display(model)
###Output
_____no_output_____
###Markdown
Magnetic appletBased on the prism that you made above, below Magnetic applet computes magnetic field at receiver locations, and provide both 2D map (left) and profile line (right). For the prism, you can alter:- sus: susceptibility of the prismParameters for the earth field are:- Einc: inclination of the earth field (degree)- Edec: declination of the earth field (degree)- Bigrf: intensity of the earth field (nT)For data, you can view:- tf: total field anomaly, - bx :x-component, - by :y-component, - bz :z-componentYou can simulate and view remanent magnetization effect with parameters:- irt: "induced", "remanent", or "total"- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)- rinc: inclination of the remanent magnetization (degree)- rdec: declination of the remanent magnetization (degree)
###Code
plotwidget = Simulator.PFSimulator(model, param)
display(plotwidget)
###Output
_____no_output_____
###Markdown
This is the Jupyter Notebook, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it. To use the notebook:- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)- You can alter variables and re-run cells- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook) This notebook uses code adapted from SimPEG- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
###Code
import numpy as np
from geoscilabs.mag import Mag, Simulator
from SimPEG.potential_fields import magnetics as mag
from SimPEG import utils, data
from discretize import TensorMesh
%matplotlib inline
###Output
_____no_output_____
###Markdown
How do we define direction of an earth magnetic field?Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system: - X (Easting), - Y (Northing), and - Z (Up). Here we consider an earth magnetic field ($\vec{B_0}$), of which intensity is one. To define this unit vector, we use inclinatino and declination:- Declination: An angle from geographic North (Ng) (positive clockwise)- Inclination: Vertical angle from the N-E plane (positive down) What's data: total field anomalyWe consider a typical form of magnetic data. To illustrate this we consider an suceptible object embedded in the earth. Based upon the earth magnetic field ($\vec{B}_0$), this object will generate anomalous magnetic field ($\vec{B}_A$). We define an unit vector $\hat{B}_0$ for the earth field as $$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$ We measure both earth and anomalous magnetic field such that$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$Total field anomaly, $\triangle \vec{B}$ can be defined as$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_E| $$ If $|\vec{B}|\ll|\vec{B}_E|$, then that is total field anomaly $\triangle \vec{B}$ is the projection of the anomalous field onto the direction of the earth field:$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|cos\theta$$ Define a 3D prismOur model is a rectangular prism. Parameters to define this prism are given below:- dx: length in Easting (x) direction (meter)- dy: length in Northing (y) direction (meter)- dz: length in Depth (z) direction (meter) below the receiver- depth: top boundary of the prism (meter)- pinc: inclination of the prism (reference is a unit northing vector; degree)- pdec: declination of the prism (reference is a unit northing vector; degree)You can also change the height of the survey grid above the ground- rx_h: height of the grid (meter)*Green dots show a plane where we measure data.*
###Code
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
display(param)
param.result
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
display(model)
###Output
_____no_output_____
###Markdown
Magnetic appletBased on the prism that you made above, below Magnetic applet computes magnetic field at receiver locations, and provide both 2D map (left) and profile line (right). For the prism, you can alter:- sus: susceptibility of the prismParameters for the earth field are:- Einc: inclination of the earth field (degree)- Edec: declination of the earth field (degree)- Bigrf: intensity of the earth field (nT)For data, you can view:- tf: total field anomaly, - bx :x-component, - by :y-component, - bz :z-componentYou can simulate and view remanent magnetization effect with parameters:- irt: "induced", "remanent", or "total"- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)- rinc: inclination of the remanent magnetization (degree)- rdec: declination of the remanent magnetization (degree)
###Code
plotwidget = Simulator.PFSimulator(model, param)
display(plotwidget)
###Output
_____no_output_____ |
MockGradedProject1/edge_detection_and_delineation/edge_detection_delineation.ipynb | ###Markdown
Exercise Session 3: Edge Detection and Delineation Introduction In Google earth we can visualize 3D reconstructions of mountainchains. An important feature to identify a mountain chain is its ridge (or edge) in theimage. Human vision is very good at detecting such structures but doing it automaticallyusing computer vision algorithms is not trivial. Here we will try to delineate the ridge ofa mountain as given in Fig. 1 using Dijkstra’s algorithm. Use 'mountain.png' inside folder images as input. Figure 1: (a) Input image. Start and end positions of the ridge are displayed onthe image. Pixel values for these positions are provided in the code. (b) Detectedridge. Ridge is detected between the start and end points using Dijkstra’s algorithmand overlayed on the image.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import cv2
plt.rcParams['figure.figsize'] = (15, 15)
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
Computing gradients We will first detect the edges in the image by simply computing the gradient image.This will then be used in the next exercise session for ridge delineation.* First read 'mountain.png' image and convert it to grayscale. * Smooth the images using a Gaussianfilter of size $7 \times 7$. Choose suitable $\sigma$. What is the trade-off of using bigger $\sigma$?* Compute the gradients in x- and y-directions using Sobel mask.* Compute a gradient-magnitude image.* Threshold gradient image to find the most pronounced edges and store as ```th_grad_img```. Output should look like the figure below. To obtain the below image, you need to replace the pronounced edge pixels with their magnitude in the thresholded image. The colormap used for visualization is 'jet' (```plt.imshow(th_grad_img,cmap='jet')```)
###Code
# Read Image
img = cv2.imread('images/mountain.png', cv2.IMREAD_GRAYSCALE)
# Smooth by Gaussian
# Tradeoff: Larger sigma will lead to a wider area influencing the smoothing result. (worse localization)
smooth_img = cv2.GaussianBlur(img, (7, 7), sigmaX=1)
# Compute gradient
grad_x = cv2.Sobel(smooth_img, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=3)
grad_y = cv2.Sobel(smooth_img, ddepth=cv2.CV_32F, dx=0, dy=1, ksize=3)
# Compute gradient magnitude image
grad_img = np.sqrt(np.power(grad_x, 2) + np.power(grad_y, 2))
plt.figure(1, figsize=(8, 6))
plt.imshow(grad_img, cmap='jet')
# Thresholding
thresh = 155
th_grad_img = grad_img.copy()
th_grad_img[th_grad_img <= thresh] = 0
plt.figure(2, figsize=(8, 6))
plt.imshow(th_grad_img, cmap='jet')
###Output
_____no_output_____
###Markdown
Delineating ridges Given the thresholded gradient image computed in the previous exercise, we will nowdelineate the ridge of a mountain using Dijkstra’s shortest path algorithm. Roughlyspeaking, starting from an initial node, this algorithm looks around the neighboringpixels and chooses the pixel with the shortest distance to the current pixel as the nextelement of the path. Particularly, in this example, we will find the shortest paths from astarting pixel to all the other pixels in the image. The distance of a pixel to a neighboring one can be defined by a cost value for the edgeconnecting these two pixels. For this particular problem, we will define the cost of theedge to pixel (i, j) from any of its 4-neighbors as ```cost(i,j) = C - thresholdedGrad(i, j)```,where C is a constant and thresholdedGrad(i,j) is the value of the thresholdedgradient image at pixel (i, j). With such definition of cost, the algorithm will assignlow cost values to high gradient regions, which are typically the regions in the imagethat belong to the ridges of the mountain, and therefore will choose pixels belongingto mountain ridges as the next element in the shortest path. In the end, this wouldeffectively delineate the ridge of the mountain.In particular, the steps of the algorithm can be summarized as follows:1. Assign to every pixel a distance value: zero for the initial pixel and infinity for allother pixels.2. Set the initial pixel as current and mark it visited. All the other pixels are initiallyunvisited. Create a set of all the visited pixels called the visited set (e.g., a binarymatrix of 1’s for visited pixels and 0’s for unvisited pixels).3. For the current pixel, consider all of its unvisited neighbors and calculate theirdistances. Use the edge cost defined above to measure the distance between apixel and its neighboring one. Compare the newly calculated tentative distance tothe current assigned value and assign the smaller one. For example, if the currentpixel is marked with a distance of 15, and the edge connecting it with a neighbor has value 2, then the distance between them will be 15 + 2 = 17. If the neighborwas previously marked with a distance greater than 17 then change it to 17 andupdate the previous pixel position. Otherwise, keep the current value.4. When we are done considering all of the neighbors of the current pixel, mark thecurrent node as visited. A visited pixel will never be checked again.5. Select the unvisited pixel that is marked with the smallest distance, set it as thenew "current pixel", and go back to step 3.The algorithm will terminate when all the pixels in the image are visited. Computing the shortest path * Fill in the functiondijkstra to implement the above algorithm. The input to the function should berespectively, the thresholded gradient image, the constant C, and starting position ofthe ridge. The algorithm should return a distance matrix that encodes the shortestpath from the starting point to each pixel in the image and a matrix that stores foreach pixel the position of the previous pixel that lies on the shortest path to thestarting point.
###Code
def dijkstra(thresholded_grad, C, ridge_start_row, ridge_start_col):
img_row, img_col = thresholded_grad.shape
# Assign distance values
distance_matrix = np.full((img_row, img_col), np.inf)
distance_matrix[ridge_start_row, ridge_start_col] = 0
# Calculate cost value map
cost = C - thresholded_grad
# Create visited set
visited_set = np.zeros((img_row, img_col))
previous_pixel = np.zeros((img_row, img_col, 2))
while np.any(visited_set == 0):
dist_map = distance_matrix.copy()
dist_map[visited_set == 1] = np.inf
curr_dist = np.min(dist_map)
row, col = np.unravel_index(np.argmin(dist_map), dist_map.shape)
for i, j in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
if 0 <= row+i < img_row and 0 <= col+j < img_col:
if not visited_set[row+i, col+j] and curr_dist + cost[row+i, col+j] < distance_matrix[row+i, col+j]:
distance_matrix[row+i, col+j] = curr_dist + cost[row+i, col+j]
previous_pixel[row+i, col+j] = [row, col]
visited_set[row, col] = 1
return distance_matrix, previous_pixel
###Output
_____no_output_____
###Markdown
* Run Dijkstra’s shortest path algorithm on the thresholded gradient image that you computed in the previous question and visualize the distance matrix.
###Code
ridge_start_row = 67
ridge_start_col = 15
C = np.max(th_grad_img)
dist_matrix, prev_pxl = dijkstra(th_grad_img, C, ridge_start_row, ridge_start_col)
fig, ax = plt.subplots()
img1 = ax.imshow(dist_matrix, cmap='jet')
fig.colorbar(img1, ax=ax)
ax.set_aspect('auto')
###Output
_____no_output_____
###Markdown
Ridge Delineation * Now use the matrix that stores the previous pixel positions on the shortest paths to the starting point to reconstruct the path from the end point back to the starting point.
###Code
def findRidge(ridge_start_row,ridge_start_col, ridge_end_row, ridge_end_col,parents):
# Implement your function here
ridge = []
ridge_end = [ridge_end_row, ridge_end_col]
ridge.append(ridge_end)
parents = parents.astype("int")
    while ((ridge_end[0] != ridge_start_row) or (ridge_end[1] != ridge_start_col)):  # stop only once both row and column match the start pixel
ridge_end = [parents[ridge_end[0], ridge_end[1], 0], parents[ridge_end[0], ridge_end[1], 1]]
ridge.append(ridge_end)
return ridge
###Output
_____no_output_____
###Markdown
* Visualize the recovered path by overlaying it on the image
###Code
ridge_end_row = 35
ridge_end_col = 149
extracted_ridge=findRidge(ridge_start_row,ridge_start_col,ridge_end_row,ridge_end_col,prev_pxl)
plt.imshow(img)
plt.scatter([x[1] for x in extracted_ridge], [x[0] for x in extracted_ridge])
###Output
_____no_output_____
###Markdown
Evaluating the edge cost * Try different values of $C$ in the Dijkstra algorithm. Visualize the distance matrix and the detected ridge for the given $C$ values. How do the distances and the final detected ridge change? Do you understand why?
###Code
constants = [np.max(th_grad_img), 1.5*np.max(th_grad_img), 2*np.max(th_grad_img)]
# As C increases, the distances in the distance matrix generally increase, and the detected ridge is biased more
# toward shorter pixel paths and less toward high-gradient pixels, because the constant term starts to dominate
# the gradient term in the edge cost C - thresholdedGrad(i, j).
for C in constants:
dist_matrix, prev_pxl = dijkstra(th_grad_img, C, ridge_start_row, ridge_start_col)
extracted_ridge=findRidge(ridge_start_row,ridge_start_col,ridge_end_row,ridge_end_col,prev_pxl)
fig, axes = plt.subplots(1, 2, figsize=(12, 8))
axes[0].imshow(img)
axes[0].scatter([x[1] for x in extracted_ridge], [x[0] for x in extracted_ridge])
im = axes[1].imshow(dist_matrix, cmap='jet')
    fig.colorbar(im, ax=axes[1])
    axes[1].set_title("C = {}".format(C))
###Output
_____no_output_____ |
Tony/ipynb/.ipynb_checkpoints/FA Visualizations Final-checkpoint.ipynb | ###Markdown
Fractional Anisotropy Maps - Steps and Results

On Thursday, we showed Greg the output of the first step of the CAPTURE pipeline - namely, after modifying the CAPTURE MATLAB pipeline to accept TIFF files (originally it only took TIFs), we were able to generate two structure tensors from a TIFF stack of Aut1367 originally for use in Ilastik analysis. The main steps for the generation of the structure tensors are explained in a separate viewer (we showed Greg this on Thursday): http://nbviewer.jupyter.org/github/NeuroDataDesign/seelviz/blob/gh-pages/Tony/ipynb/Generating%20Structure%20Tensors.ipynb

The CAPTURE pipeline generated two separate structure tensors - one was "DTK" (which could be used later in the Diffusion ToolKit process) and the other was "FSL" (an alternate file format). We realized at office hours that the structure tensors (which were 5000 x 5000 x 5 x 6) each held the "lower triangular" values of the tensors. From there, we first tried to use the DTK file directly inside Diffusion ToolKit, but were informed that the "file appeared to be corrupted/missing data". Only the FSL format seemed to have properly saved all the image data (likely because it was run first in the MATLAB script; generating the structure tensors froze Tony's computer, so the DTK file was corrupted). Thus, all analysis was done on the FSL file. From there, we followed the DiPy tutorial/ndmg code suitable for generating FA maps (as recommended by Greg).
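For reference, the fractional anisotropy that `dipy.reconst.dti.fractional_anisotropy` computes from the tensor eigenvalues $\lambda_1, \lambda_2, \lambda_3$ is the standard definition $FA = \sqrt{\tfrac{1}{2}}\,\sqrt{\dfrac{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}$, which is 0 for a perfectly isotropic tensor and approaches 1 for a highly anisotropic one.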
###Code
from dipy.reconst.dti import fractional_anisotropy, color_fa
from argparse import ArgumentParser
from scipy import ndimage
import os
import re
import numpy as np
import nibabel as nb
import sys
import matplotlib
matplotlib.use('Agg') # very important above pyplot import
import matplotlib.pyplot as plt
import vtk
from dipy.reconst.dti import from_lower_triangular
img = nb.load('../../../../../Desktop/result/dogsig1_gausig2.3/v100_ch0_tensorfsl_dogsig1_gausig2.3.nii')
data = img.get_data()
# Output is the structure tensor generated from a lower triangular structure tensor (which data is)
output = from_lower_triangular(data)
###Output
_____no_output_____
###Markdown
Subsampling: We added this step because computing the RGB map, eigenvalues, and eigenvectors took much too long on the full file. Even so, with small sizes like 25x25, the last VTK rendering step took a significant amount of time. In the pipeline we'll have to think of a more efficient way to compute these, and we're guessing we're missing something (why is this taking so long?).
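If the FA/eigen-decomposition itself (rather than the fvtk ellipsoid rendering) turns out to be the bottleneck, one thing worth trying is running it over the full lower-triangular volume in chunks so memory stays bounded. Below is a minimal, untested sketch, assuming `data` is the (X, Y, Z, 6) array loaded above and using only the DiPy helpers already used in this notebook; the helper name `fa_in_chunks` and the chunk size are purely illustrative.

```python
import numpy as np
from dipy.reconst.dti import (from_lower_triangular, decompose_tensor,
                              fractional_anisotropy)

def fa_in_chunks(lower_tri, chunk=250):
    """Compute an FA volume from an (X, Y, Z, 6) lower-triangular tensor array."""
    X, Y, Z, _ = lower_tri.shape
    fa = np.zeros((X, Y, Z), dtype=np.float32)
    for x0 in range(0, X, chunk):
        x1 = min(x0 + chunk, X)
        tensors = from_lower_triangular(lower_tri[x0:x1])  # (chunk, Y, Z, 3, 3)
        evals, evecs = decompose_tensor(tensors)           # evals: (chunk, Y, Z, 3)
        fa_chunk = fractional_anisotropy(evals)
        fa[x0:x1] = np.nan_to_num(np.clip(fa_chunk, 0, 1))
    return fa

# full_fa = fa_in_chunks(data)  # `data` from the nb.load cell above
```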
###Code
output_ds = output[4250:4300, 250:300, :, :, :]
print output.shape
print output_ds.shape
FA = fractional_anisotropy(output_ds)
FA = np.clip(FA, 0, 1)
FA[np.isnan(FA)] = 0
print FA.shape
from dipy.reconst.dti import decompose_tensor
evalues, evectors = decompose_tensor(output_ds)
print evectors[..., 0, 0].shape
print evectors.shape[-2:]
print FA[:, :, :, 0].shape
## To satisfy requirements for RGB
RGB = color_fa(FA[:, :, :, 0], evectors)
nb.save(nb.Nifti1Image(np.array(255 * RGB, 'uint8'), img.get_affine()), 'tensor_rgb_upper.nii.gz')
print('Computing tensor ellipsoids in a random part')
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
from dipy.viz import fvtk
ren = fvtk.ren()
evals = evalues[:, :, :]
evecs = evectors[:, :, :]
print "printing evals:"
print evals
print "printing evecs"
print evecs
cfa = RGB[:, :, :]
cfa = cfa / cfa.max()
print "printing cfa"
print cfa
fvtk.add(ren, fvtk.tensor(evals, evecs, cfa, sphere))
from IPython.display import Image
def vtk_show(renderer, width=400, height=300):
"""
Takes vtkRenderer instance and returns an IPython Image with the rendering.
"""
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(width, height)
renderWindow.Render()
windowToImageFilter = vtk.vtkWindowToImageFilter()
windowToImageFilter.SetInput(renderWindow)
windowToImageFilter.Update()
writer = vtk.vtkPNGWriter()
writer.SetWriteToMemory(1)
writer.SetInputConnection(windowToImageFilter.GetOutputPort())
writer.Write()
data = str(buffer(writer.GetResult()))
return Image(data)
###Output
_____no_output_____
###Markdown
Results:
###Code
# x = 4250:4300, y = 250:300, z = : on Tony's computer (doesn't show anything)
# Thus, all results were displayed after running on Albert's computer
vtk_show(ren)
###Output
_____no_output_____ |
test_pytorch.ipynb | ###Markdown
###Code
import torch
print("Using torch", torch.__version__)
# Note on Mac M1 this prints ... 'Using torch 1.9.1.post3'
gpu_avail = torch.cuda.is_available()
if gpu_avail:
print(f"Is the GPU available: {gpu_avail}")
print("GPU type: {}".format(torch.cuda.get_device_name(0)))
else:
print("GPU unavailable.")
import cv2
cv2.__version__
import torchvision
torchvision.__version__
import PIL
PIL.__version__
###Output
_____no_output_____
###Markdown
The following uses 'imgaug' for augmentation: https://medium.com/pytorch/ai-for-ag-production-machine-learning-for-agriculture-e8cfdb9849a1
###Code
from imgaug import augmenters as iaa
import os
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import FashionMNIST
from torchvision import datasets, transforms
DATA_DIR = './fashionMNIST/'
class CustomAugmentor:
def __init__(self):
self.aug = iaa.Sequential([iaa.flip.Fliplr(p=0.5),
iaa.GaussianBlur(sigma=(0.0, 0.1)),
iaa.Multiply((0.9, 1.1)),
iaa.Dropout((0, 0.05)),
iaa.AdditiveGaussianNoise(scale=(0, 0.05*255)) ])
def __call__(self, img):
img = np.array(img)
# Return a copy here to work around the error:
# ValueError: At least one stride in the given numpy array is negative,
# and tensors with negative strides are not currently supported.
return self.aug.augment_image(img).copy()
# transforms for images
transform=transforms.Compose([CustomAugmentor(), transforms.ToTensor()])
fmnist_train = FashionMNIST(DATA_DIR, train=True, download=True, transform=transform)
fmnist_test = FashionMNIST(DATA_DIR, train=False, download=True, transform=transforms.ToTensor())
fmnist_train, fmnist_val = random_split(fmnist_train, [55000, 5000])
train_dl = DataLoader(fmnist_train, batch_size=64)
val_dl = DataLoader(fmnist_val, batch_size=64)
test_dl = DataLoader(fmnist_test, batch_size=64)
###Output
_____no_output_____
###Markdown
NumPy
###Code
# https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
# -*- coding: utf-8 -*-
import numpy as np
import math
# Create random input and output data
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)
# Randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()
learning_rate = 1e-6
for t in range(2000):
# Forward pass: compute predicted y
# y = a + b x + c x^2 + d x^3
y_pred = a + b * x + c * x ** 2 + d * x ** 3
# Compute and print loss
loss = np.square(y_pred - y).sum()
if t % 100 == 99:
print(t, loss)
# Backprop to compute gradients of a, b, c, d with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_a = grad_y_pred.sum()
grad_b = (grad_y_pred * x).sum()
grad_c = (grad_y_pred * x ** 2).sum()
grad_d = (grad_y_pred * x ** 3).sum()
# Update weights
a -= learning_rate * grad_a
b -= learning_rate * grad_b
c -= learning_rate * grad_c
d -= learning_rate * grad_d
print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
###Output
99 355.402711321496
199 247.08211392449945
299 172.79908606549935
399 121.7998785839554
499 86.74657720737252
599 62.626348011701594
699 46.01077833724662
799 34.552386853146366
899 26.641975374311585
999 21.175177563990932
1099 17.39322400985033
1199 14.774205213958732
1299 12.958732850739251
1399 11.699059149253767
1499 10.824212000728028
1599 10.216077150753954
1699 9.792971561829166
1799 9.498348931703442
1899 9.29302525054646
1999 9.149821431849661
Result: y = 0.017533781583038425 + 0.8493210916678474 x + -0.0030248690864590404 x^2 + -0.09227499401486801 x^3
###Markdown
PyTorch
###Code
import torch
import math
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU
# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(2000):
# Forward pass: compute predicted y
y_pred = a + b * x + c * x ** 2 + d * x ** 3
# Compute and print loss
loss = (y_pred - y).pow(2).sum().item()
if t % 100 == 99:
print(t, loss)
# Backprop to compute gradients of a, b, c, d with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_a = grad_y_pred.sum()
grad_b = (grad_y_pred * x).sum()
grad_c = (grad_y_pred * x ** 2).sum()
grad_d = (grad_y_pred * x ** 3).sum()
# Update weights using gradient descent
a -= learning_rate * grad_a
b -= learning_rate * grad_b
c -= learning_rate * grad_c
d -= learning_rate * grad_d
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
###Output
99 3771.43994140625
199 2547.0380859375
299 1722.7314453125
399 1167.2752685546875
499 792.6326904296875
599 539.701171875
699 368.77044677734375
799 253.13890075683594
899 174.8350067138672
999 121.7524185180664
1099 85.7291030883789
1199 61.25576400756836
1299 44.610626220703125
1399 33.27702713012695
1499 25.551387786865234
1599 20.279043197631836
1699 16.67681884765625
1799 14.212850570678711
1899 12.525492668151855
1999 11.368684768676758
Result: y = -0.04120982810854912 + 0.825455904006958 x + 0.007109378930181265 x^2 + -0.08888037502765656 x^3
###Markdown
PyTorch neural network
###Code
import torch
import math
# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
torch.nn.Linear(3, 1),
torch.nn.Flatten(0, 1)
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-6
for t in range(2000):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
y_pred = model(xx)
# Compute and print loss. We pass Tensors containing the predicted and true
# values of y, and the loss function returns a Tensor containing the
# loss.
loss = loss_fn(y_pred, y)
if t % 100 == 99:
print(t, loss.item())
# Zero the gradients before running the backward pass.
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Tensors with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
# Update the weights using gradient descent. Each parameter is a Tensor, so
# we can access its gradients like we did before.
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]
# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
###Output
/home/vagrant/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/autograd/__init__.py:130: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
Variable._execution_engine.run_backward(
###Markdown
PyTorch custom made nn modules
###Code
import torch
import math
class Polynomial3(torch.nn.Module):
def __init__(self):
"""
In the constructor we instantiate four parameters and assign them as
member parameters.
"""
super().__init__()
self.a = torch.nn.Parameter(torch.randn(()))
self.b = torch.nn.Parameter(torch.randn(()))
self.c = torch.nn.Parameter(torch.randn(()))
self.d = torch.nn.Parameter(torch.randn(()))
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Tensors.
"""
return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
def string(self):
"""
Just like any class in Python, you can also define custom method on PyTorch modules
"""
return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3'
# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# Construct our model by instantiating the class defined above
model = Polynomial3()
# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the nn.Linear
# module which is members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
for t in range(2000):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
if t % 100 == 99:
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Result: {model.string()}')
###Output
99 240.88771057128906
199 169.7163543701172
299 120.47968292236328
399 86.38185119628906
499 62.744842529296875
599 46.343711853027344
699 34.95265579223633
799 27.034074783325195
899 21.5246639251709
999 17.688148498535156
1099 15.014398574829102
1199 13.14952564239502
1299 11.847841262817383
1399 10.938604354858398
1499 10.303049087524414
1599 9.858505249023438
1699 9.547356605529785
1799 9.329445838928223
1899 9.176749229431152
1999 9.069684982299805
Result: y = -0.015871429815888405 + 0.8516260981559753 x + 0.0027380886022001505 x^2 + -0.09260285645723343 x^3
|
06_Matplotlib_Exercises.ipynb | ###Markdown
**Plotting** **(a)** Produce a plot of the functions f(x) = e^(-x/10) * arctan(πx) and g(x) = e^(x/10) * cos(πx) over the interval [0, 10] on the same axes. Include labels for the x- and y-axes, and a legend explaining which line is which.
###Code
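# A possible solution sketch for exercise (a), not an official answer:
# plot f(x) = exp(-x/10) * arctan(pi*x) and g(x) = exp(x/10) * cos(pi*x),
# as written in the prompt, on the same axes over [0, 10].
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 500)
f = np.exp(-x / 10) * np.arctan(np.pi * x)
g = np.exp(x / 10) * np.cos(np.pi * x)

fig, ax = plt.subplots()
ax.plot(x, f, label='f(x) = exp(-x/10) * arctan(pi*x)')
ax.plot(x, g, label='g(x) = exp(x/10) * cos(pi*x)')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
plt.show()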
###Output
_____no_output_____
###Markdown
**(b)** The shape of a limacon can be defined parametrically as r = r0 + cos(a), x = r * cos(a), y = r * sin(a). Use this definition to plot the shape for r0 = 0.2, r0 = 0.5, and r0 = 0.8. Be sure to use enough points that the curve is closed and appears smooth. Use a legend to identify which curve is which.
###Code
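# A possible solution sketch for exercise (b), not an official answer:
# draw the limacon r = r0 + cos(a) for r0 = 0.2, 0.5 and 0.8 using the
# parametrisation x = r*cos(a), y = r*sin(a) given in the prompt.
import numpy as np
import matplotlib.pyplot as plt

a = np.linspace(0, 2 * np.pi, 1000)  # enough points for a smooth, closed curve
fig, ax = plt.subplots()
for r0 in (0.2, 0.5, 0.8):
    r = r0 + np.cos(a)
    ax.plot(r * np.cos(a), r * np.sin(a), label='r0 = {}'.format(r0))
ax.set_aspect('equal')
ax.legend()
plt.show()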
###Output
_____no_output_____
###Markdown
**(c)** Try to reproduce the following figure: [embedded reference image omitted]
Z0rZsmXNsDANtjgQQAABBKIvQAASfXOuiAACCERN4NKlS6JPRnRuhXbgP/vss8vXTikA0Scfet6cOXMun5f0G51grq+nNARL54ukdOj8EX1qkvTQIWL6Oy2XBi59+vQxQdPLL79sNhvUayxZsiRpEnPdSPYBSZY4yQ9adg14dL4KBwIIIIBA9AUIQKJvzhURQACBqAuEJqn36tXr8rVTCkCefvpp09G38gQkkgCkYsWKZoWtrVu3Xi5P6JtHH33UlMGtAESvc8stt5gAiA0JQ+p8RQABBKInQAASPWuuhAACCMRM4I033jCdet2ML3SkFICE5oCktEfGyZMnU50DEkkAok8gdIneKw99YlOhQgXXAxB9+qGrc+m8Fg4EEEAAgegKEIBE15urIYAAAq4ITJo0SRYuXJjiJPC9e/fKTTfdZDr1n3766eXrpxSAhFbB0gBhw4YNl8/Vb5599lmTx7UmoUcSgJQrV05y5Mghe/bsuXwNncAeelKj17DzBER3OT927NjlvEPfaICjywrrEC/dIZ4DAQQQQCD6AgQg0TfniggggIDjArrErHaqdRUsndjdt29f8+/++++XLFmymNeunCSeUgCiBZs4caIZnhTaB6R///5X7QOSdIWq0EaEqQUgWrakx6hRo0yZChYsKLqJoj6ZqVq1qhmWpZPn7c4Beeutt8xSwrrPiC65q0sIa/lKlSpl8i5ZsqTZNDFpmfgeAQQQQCA6AgQg0XHmKggggICrAhoQjBw50kzo1lWe9OmCbvSne2o0b97cBBVXFmDcuHEm0LhyI0I9b+7cuZd3Qs+dO7fZCX3btm0mLw0OdB+P0JFWAKIrZ105CV3T6vV1LoYGOroLeqtWrWTz5s3y0ksvmfPtPAHRfHr06GHyz5cvn1mFS+uhmyQOHjxYdDgZBwIIIIBAbAQIQGLjzlURQAAB3wlcvHjR7OmhQQ0HAggggAACVgUIQKzKkQ4BBBAIqIDOnfj999+T1U7nZ+iTCX368cQTTyR7jR8QQAABBBCIRIAAJBItzkUAAQTiQECHX+kQLp1LovtzdO/e3Qxl0uCjRIkScvDgwThQoIoIIIAAAm4JEIC4JUu+CCCAgE8FdE7HQw89JDpRW+dn6IpYuoqWTnTX3cQ5EEAAAQQQsCNAAGJHj7QIIIAAAggggAACCCAQkQABSERcnIwAAggggAACCCCAAAJ2BAhA7Og5nFbHVU+YMEGWL18u//73v/mHAW2ANkAboA3QBmgDtAGPtQHtp2l/jflw1jvCBCDW7RxPqY35uuuu4x8GtAHaAG2ANkAboA3QBjzeBrTfxmFNgADEmpsrqTSi1gBEGzRPQHgCRBsIVhtYu3atTJkyRbp27So333yzWc5W/79nzZpVqlevLp07dxbdvXv69OmydOlSS38DdOM+vcbw4cPNhPGGDRtKrly5zN8V3QiwfPny8uCDD8p7770na9assXQN2mWw2iX3k/tJG4i8DYQ+MNZ+G4c1AQIQa26upNI/Atoh0a8cCCDgfwHdO0P/P/fv398EHfr/WwOCDh06yJgxY2TTpk2im/u5eWgZtmzZIqNGjZL27dtL8eLFzd+ZQoUKSa9evWT16tWi53AggAACCIQnQH8tPKfUziIASU0nyq/RoKMMzuUQcElAl6odNGiQlCpVynT28+TJY55wzJkzR86dO+fSVcPLVoONVatWmeCjYMGCpnylS5eWAQMGyPbt28PLhLMQQACBOBagv2b/5hOA2Dd0LAcatGOUZIRATAQ2btwoXbp0MftmJCQkSKdOnWT+/Ply/vz5mJQnrYvq05dFixaZ4ChnzpxmWFjLli3NQhhppeV1BBBAIF4F6K/Zv/MEIPYNHcuBBu0YJRkhEDWBS5cuyRdffCF33HGHeZpQpEgRGTx4sO9WRzlz5owZFlauXDlTj9q1a8vMmTNF68eBAAIIIPA/Afpr/7Ow+h0BiFU5F9LRoF1AJUsEXBLQoUyff/65VKxY0XTYa9SoIZMmTfLs045wGTTg0HrVrVvX1KtMmTIyevRouXDhQrhZcB4CCCAQaAH6a/ZvLwGIfUPHcqBBO0ZJRgi4KrBy5crLHXRdaWrFihWuXi9WmetckdatW5uhWfpkZPbs2UxYj9XN4LoIIOAZAfpr9m8FAYh9Q8dyoEE7RklGCLgisHXrVrnvvvvMk4FKlSrJ3Llz46JDvm7dOmnUqJGptw41W79+vSu+ZIoAAgj4QYD+mv27RABi39CxHGjQjlGSEQKOChw5ckS6d+8u6dOnlxtuuEE++uijuJsbERpyVrZsWfNERPct+e233xx1JjMEEEDADwL01+zfJQIQ+4aO5UCDdoySjBBwREA73VOnThVdrjZHjhwybNgw0cna8Xzoil7vvvuu5M2b12yiqJseur2XSTx7U3cEEPCWgC6z3rhxY/NEWPttHNYECECsubmSigDEFVYyRcCSwK+//ir33HOPeZO59957Zffu3ZbyCWqio0ePSs+ePc3TkFq1apnNDoNaV+qFAAII6AdSkydPlnz58okuW87G0fbaBAGIPT9HUxOAOMpJZghYEtBVoEaOHCnZs2cX3S18+vTplvKJl0TLli0TXSkrc+bM8tprr7FaVrzceOqJQBwJ7NmzR/SDKA062rZtKwsXLiQAsXn/CUBsAjqZnADESU3yQiBygR07dkidOnXMG8ujjz4q+ik/R9oCp0+flr59+8r1118vVatWlQ0bNqSdiDMQQAABjwvoU4/x48dL7ty5pUCBApc/kKK/Zv/GEYDYN3QsBxq0Y5RkhEDEAhMmTJDExEQpXbq0LFmyJOL0JBBZvXq1VKhQQTJkyCCvvvpq3E3Upw0ggEBwBHSuR/Pmzc0HUu3bt5dDhw5drhz9tcsUlr8hALFM53xCGrTzpuSIQFoCJ0+elEceecS8yTz00ENy/PjxtJLweioCZ8+elX79+pm5IXfddZfs378/lbN5CQEEEPCegH4IVaRIEcmfP7/ZmPXKEtJfu1Ik8p8JQCI3cy0FDdo1WjJGIEWB7777zsxfyJYtm4wbNy4u9vRIEcKFX86fP9+8eRcuXFi+/vprF65AlggggICzAjoH8G9/+5sZTlq/fv1rLjVOf82+OwGIfUPHcqBBO0ZJRgikKqDjet9++23JlCmTVKlSRbZt25bq+bxoTUD3CWnQoIF5Mx80aBDL9VpjJBUCCERBQJ/W6lPbdOnSycCBA1NdUIP+mv0bQgBi39CxHGjQjlGSEQLXFNAhV61btzZDrnr16iU6ZIjDPQHdI+SFF14wb+p33nmn7Nu3z72LkTMCCCBgQUCf0urTWh1ytWDBgjRzoL+WJlGaJxCApEkUvRNo0NGz5krxKfDTTz9JpUqVzGTzmTNnxidCjGq9aNEis6Gjjqv+9ttvY1QKLosAAgj8T0Cfhr/11lvmKa0+rdWntuEc9NfCUUr9HAKQ1H2i+ioNOqrcXCzOBBYvXmx27y5VqpRs2rQpzmrvjerqWvq33XabJCQkyJQpU7xRKEqBAAJxKXDu3Dnp0qWLeRrep0+fiIaI0l+z32Q
IQOwbOpYDDdoxSjJC4LKAfsL17rvvSvr06UWHAB0+fPjya3wTfYEzZ86IrjamG3rp0Cyd9MmBAAIIRFNAl9itW7eumQf44YcfRnxp+msRk12VgADkKpLY/YIGHTt7rhxMAf2Eq1u3bqaz+9RTT6U6qTCYAt6slQaFgwcPNvfl/vvvl99//92bBaVUCCAQOAF9Al6yZEkz32P58uWW6kd/zRJbskQEIMk4YvsDDTq2/lw9WAJHjhyRevXqmU+4/vnPfwarcgGpjc7D0SWQb731Vvn1118DUiuqgQACXhX44osvzBxAnQu4c+dOy8Wkv2aZ7nJCApDLFLH/hgYd+3tACYIhsGvXLrMjd548ecTqJ1zBkPB+LdavXy/FixeXQoUKybp167xfYEqIAAK+FHjnnXfManwtW7YUXQ3RzkF/zY7ef9MSgNg3dCwHGrRjlGQUxwIbN26UokWLmkfsW7dujWMJ/1Rdl+atVq2aZM+eXXSxAA4EEEDAKQEd8tmvX7/Lk82dmHdGf83+3SEAsW/oWA40aMcoyShOBbTzmiNHDrO54N69e+NUwZ/V1k8kdRMw3Rxy6tSp/qwEpUYAAU8JnD9/Xjp27GiCj+HDhztWNvpr9ikJQOwbOpYDDdoxSjKKQwFd1lU7r40bN5YTJ07EoYD/q6yLBugKWboTsa5cxoEAAghYFdDFLZo3by4ZMmSQCRMmWM0mxXT011JkieiXBCARcbl7Mg3aXV9yD66AfrKly7p26NBBtBPL4V8BHR7Ru3dvcz8HDBggOnyCAwEEEIhE4NChQ1KrVi2zyMW8efMiSRrWufTXwmJK9SQCkFR5ovsiDTq63lzN/wLaOR04cKDprOoYXzqr/r+noRoMHTrU3NeuXbuyfHIIha8IIJCmwC+//CLly5eXfPnyyZo1a9I838oJ9NesqCVPQwCS3COmP9GgY8rPxX0moMFG6JPyN954w2elp7jhCHz00Udm+ITuFaJjuTkQQACB1AR27NghN9xwg5QoUUK2bduW2qm2XqO/ZovPJCYAsW/oWA40aMcoySjgAjpMp3v37uYT8hEjRgS8tvFdvVmzZpm5PS1atJCzZ8/GNwa1RwCBawps2bJFihQpImXKlJHdu3df8zwnXqC/Zl+RAMS+oWM50KAdoySjAAtcuHBBHnnkETNReezYsQGuKVULCcydO1cSEhKkSZMmcvr06dCv+YoAAvmulHQAACAASURBVAgYAd3dvECBAlKxYkWJxgqI9NfsNzwCEPuGjuVAg3aMkowCKqDDcNq2bSvp06eXSZMmBbSWVCslgUWLFkmWLFmkUaNGcurUqZRO4XcIIBCHAt99953kzZtXbrnlFjl48GBUBOiv2WcmALFv6FgONGjHKMkogAJnzpyRe+65xwzHmTlzZgBrSJXSEliyZIkkJibK7bffLsePH0/rdF5HAIGAC6xevVpy5col1atXl8OHD0ettvTX7FMTgNg3dCwHGrRjlGQUMAEd+6/rueswHB2OwxG/At98843kzJlTbrvtNjl69Gj8QlBzBOJcYPny5ZI9e3apXbu2HDt2LKoa9NfscxOA2Dd0LAcatGOUZBQgAd3XQ598ZM6cWRYsWBCgmlEVqwJr166V3LlzS7Vq1aLe8bBaZtIhgIBzAhp8ZMuWTRo0aCAnT550LuMwc6K/FiZUKqcRgKSCE+2XaNDRFud6XhfQOR/33nuvGXbFkw+v363olk/HfevQi5o1a7LzfXTpuRoCMRVYtWqVefJRv3590d3OY3HQX7OvTgBi39CxHGjQjlGSUQAENPho3bq1ZMyYUWbPnh2AGlEFpwV0k7EcOXJI3bp1mZjuNC75IeBBAX36qUMw69SpE5MnHyES+mshCetfCUCs2zmekgbtOCkZ+lRAl9rV1a40+Pj88899WguKHQ2BlStXmonpDRs2jNmnodGoJ9dAIN4F1q9fL3ny5DHzv2K9CAX9NfutkQDEvqFjOdCgHaMkIx8LXLx4Uf785z+bHbBnzJjh45pQ9GgJLF26VLJmzSp33XWX6GppHAggECyBzZs3S758+eTWW2/1xOIT9Nfsty8CEPuGjuVAg3aMkox8KqA7nOsmg7rPx7Rp03xaC4odC4HFixebVdJ0tTRduIADAQSCIbB161YpWLCgVK5cOapL7aamR38tNZ3wXiMACc8pKmfRoKPCzEU8KvCf//xHevToYXY4Z5NBj94kjxdr/vz5ZsECXbhAh/FxIICAvwV++OEHKVKkiNnh/MCBA56pDP01+7eCAMS+oWM50KAdoyQjHwo8//zzct1118moUaN8WHqK7BUBXbAgQ4YM0qFDB9EnahwIIOBPgd9++01uvPFGKVOmjOzbt89TlaC/Zv92EIDYN3QsBxq0Y5Rk5DOBIUOGmOBDv3IgYFdg8uTJ5klaz549RZ+scSCAgL8EDh06JBUqVJDixYvLrl27PFd4+mv2bwkBiH1Dx3KgQTtGSUY+Enj//fdN8DFgwAAflZqiel1An6TpE7WBAwd6vaiUDwEEkgicOHFCqlevLvnz55dt27YlecU739Jfs38vCEDsGzqWAw3aMUoy8omAzvVIly6d8Em1T26Yz4r5+uuvmyBk2LBhPis5xUUgPgV0FTtdUlv399HNRr160F+zf2cIQOwbOpYDDdoxSjLygYDu76GrXXXs2JGx+j64X34tYr9+/UwQMnbsWL9WgXIjEBcCuvlsixYtJEuWLLJs2TJP15n+mv3bQwBi39CxHGjQjlGSkccFdN+GhIQEadWqFasVefxe+b14Ogeke/fucv3117O0s99vJuUPrIAuGNG+fXuz+ezcuXM9X0/6a/ZvEQGIfUPHcqBBO0ZJRh4W2LBhg+TMmVMaNWokZ8+e9XBJKVpQBEKbW2bMmFEWLlwYlGpRDwQCIaAfEugwXB2O+8knn/iiTvTX7N8mAhD7ho7lQIN2jJKMPCrw008/SeHChc1utsePH/doKSlWEAV0eEfTpk0lMTFR1q5dG8QqUicEfCnw6quvmmGSuiCJXw76a/bvFAGIfUPHcqBBO0ZJRh4U2L9/v9x0001SunRpz63p7kEuiuSCwKlTp6RGjRpmdZ3t27e7cAWyRACBSARGjx5tgo+XX345kmQxP5f+mv1bQABi39CxHGjQjlGSkccEdFnFqlWrSqFCheTHH3/0WOkoTjwJHDx4UMqWLWs2ONu7d288VZ26IuApgVmzZpm5WY899pjv9uuhv2a/KRGA2Dd0LAcatGOUZOQhAZ3ncccdd5hlFdevX++hklGUeBXYuXOnFClSRCpXrizHjh2LVwbqjUDMBHSVK12IpHXr1qJztPx20F+zf8cIQOwbOpYDDdoxSjLyiIC+sbRp00YyZ84sS5Ys8UipKAYCIps2bZJcuXJJgwYNRPce4EAAgegIBOH/Hv01+22FAMS+oWM50KAdoyQjDwiEVjbR5U9nzJjhgRJRBASSC4SWg77//vt9+Sls8trwEwLeFwjK00f6a/bbGgGIfUPHcqBBO0ZJRh4QGDJkiO9WNvEAG0WIssBnn31mxqE/8cQTvh
uHHmUqLoeALYHDhw9LuXLlzPyrPXv22Mor1onpr9m/AwQg9g0dy4EG7RglGcVYYMKECSb4GDBgQIxLwuURSFvggw8+MO319ddfT/tkzkAAgYgFTp8+LXXq1JF8+fJJEFago78WcRO4KgEByFUksfsFDTp29lzZOQHd6E03fOvYsSOfKDvHSk4uCwwcONAEIR9//LHLVyJ7BOJLQOcCtmrVSrJkySKrVq0KROXpr9m/jQQg9g0dy4EG7RglGcVIYN26dZI9e3Zp0qSJ6MZvHAj4RUDnLHXq1EkyZMjAbul+uWmU0/MC+v+qR48eZpjj559/7vnyhltA+mvhSl37PAKQa9tE/RUadNTJuaCDAj///LPZ50P3+zh58qSDOZMVAtER0KBZg2cNojWY5kAAAXsCb7zxhnmyOGrUKHsZeSw1/TX7N4QAxL6hYznQoB2jJKMoCxw6dOjy5m779u2L8tW5HALOCWjwrEF04cKFRVfs4UAAAWsCEydODOxcQPpr1tpE0lQEIEk1Yvw9DTrGN4DLWxJIOrlw27ZtlvIgEQJeEtAg+sYbbzQr9ujKPRwIIBCZwKJFiwI9F5D+WmTtIaWzCUBSUonR72jQMYLnspYFLl26ZHay1cmF33zzjeV8SIiA1wQ0mM6bN6/cfvvtbFTotZtDeTwtsGHDBsmRI0eg5wLSX7PfBAlA7Bs6lgMN2jFKMoqSwFNPPWUmF+peChwIBE1Ag+qEhARp06aNaLDNgQACqQv8+uuvUrRoUalSpYqcOHEi9ZN9/Cr9Nfs3jwDEvqFjOdCgHaMkoygIvPXWW2Z877vvvhuFq3EJBGIjMGPGDEmXLp307t07NgXgqgj4RODYsWPyhz/8QUqUKCF+32gwLXL6a2kJpf06AUjaRlE7gwYdNWouZFNg2rRpplP2zDPP2MyJ5Ah4X2DEiBEm2H7nnXe8X1hKiEAMBM6dOyeNGjWS3Llzy/fffx+DEkT3kvTX7HsTgNg3dCwHGrRjlGTkosCyZcskc+bM8sADDzAsxUVnsvaWQJ8+fUzQPX36dG8VjNIgEGMB3eujffv2kilTJlm6dGmMSxOdy9Nfs+9MAGLf0LEcaNCOUZKRSwJbt26VPHnySP369eXs2bMuXYVsEfCegM4BadeunZkTsmLFCu8VkBIhECOB5557zjwhnDJlSoxKEP3L0l+zb04AYt/QsRxo0I5RkpELAnv37pWSJUtKhQoV5MiRIy5cgSwR8LbAmTNnpF69emZ1LJac9va9onTREdANBq+77jp58803o3NBj1yF/pr9G0EAYt/QsRxo0I5RkpHDAkk3Z9u1a5fDuZMdAv4R0H1Bypcvb/YJYdNN/9w3Suq8wOzZs80qiD179hQdhhVPB/01+3ebAMS+oWM50KAdoyQjBwUuXLggzZo1k8TERFm3bp2DOZMVAv4U0B3SCxUqJNWrV5dTp075sxKUGgEbAmvWrJGsWbPKvffeKxcvXrSRkz+T0l+zf98IQOwbOpYDDdoxSjJySEA/1Xr00Uclffr0Mm/ePIdyJRsE/C+gf6+zZcsmf/rTn0SDdA4E4kXgxx9/lAIFCkjNmjXl9OnT8VLtZPWkv5aMw9IPBCCW2NxJRIN2x5VcrQu8+uqrZnzvP//5T+uZkBKBgArMnTvXBOfdu3ePuyEoAb2lVCsNgUOHDkmZMmXkpptukgMHDqRxdnBfpr9m/94SgNg3dCwHGrRjlGTkgMDHH39sgo+XXnrJgdzIAoFgCowdO9b8P3nttdeCWUFqhcD/F9CnHbVr15b8+fPLDz/8ENcu9Nfs334CEPuGjuVAg3aMkoxsCnz55ZeSMWNG6dy5M5/s2rQkefAFXnjhBROETJw4MfiVpYZxKaDzPFq1aiVZsmSR1atXx6VB0krTX0uqYe17AhBrbq6kokG7wkqmEQps2LBBcuTIIXfddZecP38+wtScjkD8CehcqY4dO5qgffHixfEHQI0DL9CrVy+z4tWsWbMCX9dwKkh/LRyl1M8hAEndJ6qv0qCjys3FUhD45ZdfpGjRolKlShU5ceJECmfwKwQQSElAg3UN2jV437RpU0qn8DsEfCkwfPhw84Rv5MiRviy/G4Wmv2ZflQDEvqFjOdCgHaMkIwsCR48elYoVK0qJEiVkz549FnIgCQLxLXD8+HG55ZZbTBCvwTwHAn4X0N3NdaPBZ5991u9VcbT89NfscxKA2Dd0LAcatGOUZBShgO7wXL9+fcmTJ49s2bIlwtScjgACIQEN3jWI12Beg3oOBPwq8NVXX0mmTJmkffv2cunSJb9Ww5Vy01+zz0oAYt/QsRxo0I5RklEEAvrG0rZtW0lISJDly5dHkJJTEUAgJQEN4jWY16Beg3sOBPwmsHHjRsmZM6fceeedcu7cOb8V3/Xy0l+zT0wAYt/QsRxo0I5RklEEAk8//bSkS5dOZsyYEUEqTkUAgdQENJjXoF6Dez49Tk2K17wmEJoLqMMJdVghx9UC9NeuNon0NwQgkYq5eD4N2kVcsk5RIDS5cMSIESm+zi8RQMC6gAb1GtxrkM+BgB8Eks4F/O233/xQ5JiUkf6afXYCEPuGjuVAg3aMkozCEJg8ebLpHDG5MAwsTkHAosC7775rJvEOGzbMYg4kQyA6AswFDN+Z/lr4Vtc6kwDkWjIx+D0NOgbocXrJhQsXmj0LOnTowPCQOG0DVDt6Av379zdByKRJk6J3Ua6EQAQCutFgmzZtmAsYphn9tTChUjmNACQVnGi/RIOOtnh8Xk/bWWJiojRp0oSNBuOzCVDrKAvoRoUPP/ywCfoXLFgQ5atzOQRSF9D2+cQTT5iNBpkLmLpV6FX6ayEJ618JQKzbOZ6SBu04KRleIfDDDz9IgQIFpEaNGnLy5MkrXuVHBBBwS0A3KmzWrJlky5ZN1qxZ49ZlyBeBiAUGDRpkntB98MEHEaeN1wT01+zfeQIQ+4aO5UCDdoySjFIQ2Ldvn5QqVUrKlCkjBw8eTOEMfoUAAm4KnDp1SmrWrCn58uWTbdu2uXkp8kYgLIFRo0aZ4EODEI7wBeivhW91rTMJQK4lE4Pf06BjgB4nl9SlFKtUqSKFCxeWn3/+OU5qTTUR8J7AoUOHpHz58mazQlYZ8t79iacSTZ8+3Qy76tGjh+gwLI7wBeivhW91rTMJQK4lE4Pf06BjgB4Hlzx79qzccccdkiNHDtmwYUMc1JgqIuBtAd1noVixYvLHP/6R3dK9fasCW7qvv/5aMmfObPap0QnoHJEJ0F+LzCulswlAUlKJ0e9o0DGCD/BlQyub6BuNvuFwIICANwT+7//+z+yWXrduXTl9+rQ3CkUp4kJg/fr15gMp/WBKP6DiiFyA/lrkZlemIAC5UiSGP9OgY4gfwEvrI/WuXbtK+vTpZebMmQGsIVVCwN8CK1eulCxZskiLFi1Ykc7ft9I3pdeFSAoVKiRVq1aVEydO+KbcXiso/TX7d4QAxL6hYznQoB2jjPuMNPjo27evmVw4fvz4uPcAAAGvCsyZM0cyZMgg7du3Z08er96kgJRr9+7dUrJkSbMQyf79+
wNSq9hUg/6afXcCEPuGjuVAg3aMMu4zeu2110zw8fbbb8e9BQAIeF3gk08+kXTp0pm9GJgM7PW75c/y6cqHuvhB8eLFZdeuXf6shIdKTX/N/s0gALFv6FgONGjHKOM6o/fee88EHy+++GJcO1B5BPwkMHr0aPP/9vnnn/dTsSmrDwSOHTtmhlzpHlAs/+zMDaO/Zt+RAMS+oWM50KAdo4zbjCZPnmw+SX3yySdZVjFuWwEV96vAm2++aYKQIUOG+LUKlNtjAr///rvoQgc5c+YUnXzO4YwA/TX7jgQg9g0dy4EG7RhlXGb0r3/9y4wlf/jhhxlLHpctgEoHQWDAgAEmCNEN4jgQsCNw7tw5adq0qWTNmlVWrFhhJyvSXiFAf+0KEAs/EoBYQHMrCQ3aLdng57to0SKzpnvLli3lwoULwa8wNUQgoAI6B0Q3htM5IfpEkwMBKwK6BHu7du0kU6ZMsmDBAitZkCYVAfprqeCE+RIBSJhQ0TiNBh0N5eBdY8mSJWYpT/2kizXdg3d/qVH8CVy6dEn0SaaujsUS2vF3/+3WWNtPp06dzC7nuts5h/MC9NfsmxKA2Dd0LIdQg169erVjeZJRsAV0H4HExESz0zmbmQX7XlO7+BLQJ5lt27aVjBkzyhdffBFflae2lgU0+OjWrZt5gjZhwgTL+ZAwdYFQf02/clgTIACx5uZKqlCD1t1Jz58/78o1yDQ4At9++63ZzVYnGJ46dSo4FaMmCCBgBPR9oFWrVmYYzdy5c1FBIFUBHb732GOPmeBj3LhxqZ7Li/YEQv01AhDrjgQg1u0cTxlq0PrYXT/5Yiy/48SByXDdunWSO3duqVmzJrvZBuauUhEErhbQicT33HOPmePFWP6rffjNfwU0+NDVD6+77joZM2YMLC4LhPprBCDWoQlArNs5njLUoHUpRg1CHnjgAYIQx5X9n+GmTZskX758Zl33o0eP+r9C1AABBFIV0LldzZo1k4SEBFm8eHGq5/Ji/Alo8PHXv/7VBB+6DxSH+wKh/hoBiHVrAhDrdo6nTNqgdeJY+vTp5cEHHxRdzYIDARXYvHmzFCxYUCpVqiSHDx8GBQEE4kTgzJkz0rhxY7Okqi48wYGACmjw8eyzz5rgY8SIEaBESSBpfy1KlwzcZQhAPHRLr2zQ06ZNM0FI+/btCUI8dJ9iVRTdREqffFSuXFkOHDgQq2JwXQQQiJGAbirXsGFDyZYtmxCExOgmeOiyGnw899xzJvgYPny4h0oW/KJc2V8Lfo2dryEBiPOmlnNMqUF/8sknJgjp0KEDw7Esy/o/4dq1a82cj6pVq/Lkw/+3kxogYFlAF5zQICRLliyycOFCy/mQ0N8CGnz07t3bBB9Dhw71d2V8WPqU+ms+rEZMi0wAElP+5Be/VoPWzah0OFabNm1EJyRyxJfAqlWrJGfOnHLbbbcJcz7i695TWwRSEtAlt5s0aWImps+ePTulU/hdgAV0qV1d7UonnDPsKjY3+lr9tdiUxp9XJQDx0H1LrUHrZlS6o2nz5s2F/R48dNNcLsqyZcske/bsUqdOHTl+/LjLVyN7BBDwi4BOTG/ZsqXZJ4TN5vxy1+yXU+eE6iaD6dKlk9GjR9vPkBwsCaTWX7OUYRwmIgDx0E1Pq0HPmzfPPHbXx+8nT570UMkpihsCX331lRnrXb9+fe63G8DkiYDPBXSfkHbt2pkn5BMnTvR5bSh+WgJ6v3V1TB0RwSaDaWm5+3pa/TV3rx6M3AlAPHQfw2nQOvFQPxGvVasWw3E8dO+cLsqcOXNMsHnnnXeKTjzlQAABBFIS0E/EO3bsaD4RHzt2bEqn8LsACOgTr/vuu88s0f/pp58GoEb+rkI4/TV/19D90hOAuG8c9hXCbdBr1qwxE5JvueUWVkMKW9c/J3788cfmTaZFixYMt/PPbaOkCMRMIOmcgHfeeSdm5eDC7gjowgNNmzZlzo87vJZyDbe/ZinzOElEAOKhGx1Jg96wYYMUKFBAypcvL7/88ouHakFR7AjoUoo6sbBz586semYHkrQIxJmArorUt29f8/ejf//+Zn+IOCMIZHUPHjwoNWrUkMTERFY989AdjqS/5qFie6ooBCAeuh2RNuht27bJDTfcIEWLFpWNGzd6qCYUJVKBpJtJ9evXj85DpICcjwACRmDYsGEmCHnkkUdE5wxw+Ffgp59+kjJlypgPG3Updg7vCETaX/NOyb1TEgIQ79wLsdKgf/vtN9GhWDly5JAvv/zSQ7WhKOEKXLhwwTzx0CcfbCYVrhrnIYDAtQQmTZpkVsfSYTs6fIfDfwLr1q2TQoUKSenSpeWHH37wXwUCXmIr/bWAk0RcPQKQiMncS2C1QevyrI0bNzZvOKyE4t79cSNnXVJZ53pkyJBBdO4HBwIIIOCEgG5SqMN2qlevzlxBJ0CjmMeiRYvMYjPVqlWT/fv3R/HKXCpcAav9tXDzj4fzCEA8dJftNGh91K6P3PVT9Ndff50hPB66r9cqyt69e83mglmzZhVd9YoDAQQQcFLgu+++k4IFC8rNN98sP/74o5NZk5dLArrxcMaMGeXuu+9m+XWXjJ3I1k5/zYnrByEPAhAP3UW7DVrnEQwcONAEIY8//rjo8owc3hTQx+vFixeXIkWKyLfffuvNQlIqBBDwvYAGHjfddJPkz59fdGNTDm8K6Pv33/72N/P+3aFDB+bvePM2XS6V3f7a5Yzi+BsCEA/dfKca9AcffGA2KmrWrBl7hXjo/oaK8tlnn5kNBqtWrSq7d+8O/ZqvCCCAgCsCupKSbmiqn6yPGTPGlWuQqXUB3etJN5TUEQwvvvii6LLKHN4WcKq/5u1auls6AhB3fSPK3ckGPX/+fMmVK5d59P79999HVA5OdkdAP+HS4XHp0qWT1q1bs8GgO8zkigACKQicO3dO/vKXv5hObq9evVjmOwWjWPxKl9G/9dZbRYfiTps2LRZF4JoWBJzsr1m4fCCSEIB46DY63aB37NghFStWNJPZZs2a5aGaxl9RdBfb0Byd559/nk+44q8JUGMEPCHwj3/8wzwh14VLjhw54okyxWshVq5caebo6HL6OiyXwz8CTvfX/FNz50pKAOKcpe2c3GjQJ06ckPvuu8986vXKK6/Q8bV9lyLPQD/hqlWrltnFdsKECZFnQAoEEEDAQYHFixdLnjx5zNyQLVu2OJgzWYUr8OGHH0qmTJmkTp06rHQVLpqHznOjv+ah6kWlKAQgUWEO7yJuNWgdT6rBh44v1WBEgxKO6Ajo6lZ58+Y1E85XrVoVnYtyFQQQQCANAd1bokKFCmYPqU8++SSNs3nZKYEzZ87IE088Yd6Pu3TpIvp0nMN/Am711/wnYb3EBCDW7RxP6XaD1mFY2bNnl7Jly/K41/G7lzxD3Vywf//+5k1GFwM4dOhQ8hP4CQEEEIixgO4hFZr8rPNDdF8iDvcEtm3bZjYOzpw5s4wcOZLl8t2jdj1nt/trrlfAAxcgAPHATQgVIRoNWh+3687p+uj3rbfe4g9gCN/Br7o7fb169cw4a510zoomDuKS
FQIIOCqgi2PoyokJCQnyxz/+UVi0xFHey5l99NFHZvXDMmXK8AHgZRX/fhON/pp/dcIrOQFIeE5ROStaDVof+T711FPm0/mmTZsy/tTBu6u7D+t6+7q/x9KlSx3MmawQQAAB9wQ2btwo5cuXN6sx6fwEDUw47AucOnVKOnbsaN5vdX+PkydP2s+UHGIuEK3+Wswr6mIBCEBcxI0062g3aJ2fUKBAAbMKhy7by2FdQN9kevbsad5kdHWZ/fv3W8+MlAgggEAMBPTvWOfOnc3fsfbt24sO0eKwLrB+/XopV66cCerGjRtnPSNSek4g2v01zwE4UCACEAcQncoiFg163759cvfdd5s3nN69e7M3hYWb+fXXX0upUqUkS5Ys8vbbbzPkyoIhSRBAwDsCulpfYmKiFCtWTGbPnu2dgvmkJDrK4IUXXpAMGTJIpUqVhJXGfHLjIihmLPprERTPF6cSgHjoNsWqQeschWHDhpllYrUjzdOQ8BpF0qcet99+u2zfvj28hJyFAAIIeFxg586d0qRJE/Ph1IMPPigHDhzweIm9UTzd20OHsmnwoUEIq1x54744XYpY9decrkcs8yMAiaX+FdeOdYPWFToaNmx4+Q2HYURX3KAkP/LUIwkG3yKAQCAFdB6ITp7WPUPy5csnkydPZm7INe60zu3QHebTpUsn1atXF51TwxFcgVj314IgSwDiobvohQatbzg6VlX3rsidO7eMHTuWN5wkbURXuArtaM5TjyQwfIsAAoEV0KG6bdu2NR9O3XPPPaKbq3L8T0BHDZQsWdIMw9XRBBcvXvzfi3wXSAEv9Nf8DksA4qE76KUGrY/bdcUO3bywfv36cb9soK6PP2jQIDOZUD8JfO+995jr4aH/OxQFAQTcF5g5c6YULlzYLNmr+xwdO3bM/Yt6+AqbN28W3edJ3yd19IBu7sgRHwJe6q/5VZwAxEN3zosNWpeV1XXL9Q/sn//8Z9mxY4eHxNwvij4R0mEHxYsXl4wZM0qfPn3k6NGj7l+YKyCAAAIeFDhx4oQMGDDAfNqvH8aMGDFCzp8/78GSulckfRLetWtXuf7666V06dIydepURgq4x+3JnL3YX/MkVCqFIgBJBSfaL3m1Qeubi25UpXtb6MS67t27i/4BDvKhgceSJUukdu3aJvhq2bJl3AVfQb6/1A0BBOwJ7N692yzZq3Mebr75Zpk+fXrgO+EafA0cONA8Cdd5Mbrq4blz5+xBktqXAl7tr/kJkwDEQ3fL6w1ahyENHTrUzA3RJWf79esnhw4d8pCg/aLoimCzZs2SWrVqmcCjcuXKsmjRIvsZkwMCCCAQQAGdbB1aLatGjRoybdq0wM2BOHz4sAwePNjsm5U5c2Z59tlneRIewLYcSZW83l+LpC6xOpcAJFbyKVzXLw1ahyA9//zz5lOghIQE6dKli+iGS34+9CmPTr6vUKGCCTx0grmuf69PQjgQQAABBFIX0OG6DRo0lvrIDwAACJFJREFUMH8/b7zxRvN0QJ8Y+PnQpdUff/xx816ngUe3bt1k165dfq4SZXdIwC/9NYeq60o2BCCusFrL1G8NWieqv/rqq2azKp0jUrduXTMW1k/jgXUo2ZAhQ8wcD63Dn/70J1m2bJm1G0gqBBBAIM4F9H3soYceMsN1c+bMKX379pVff/3VNyqh4bc67FaHl+XPn19eeuklYVl639zCqBTUb/21qKBEeBECkAjB3Dzdrw36woUL5rG7BiDaiS9atKjZgEkfzXvxCYJ+Kjd+/Hi58847zRuMfrLVvn171m13s3GTNwIIxJWABh0afGgQopO177jjDjOX0KvDdnVFK53fUbZsWfM+pk/Dx4wZI2fOnImr+0ZlwxPwa38tvNpF5ywCkOg4h3WVIDRoHYqlQ7Jy5Mhh/ojrCiH6JqS7w+r8ilgdukmUDqnSHX2zZs1qyqbLC+sbDKtaxequcF0EEAi6gH7go4uYaACigYguZKJzRj788MOY/+3dunWrvPLKK1KxYkXznqDBku7zpPt6ePHDs6C3FT/VLwj9tVh7E4DE+g4kuX6QGvTZs2dlzpw5ZsysPsLWJyO6fvyjjz5q3nj00yY3N2vSgGPevHlmonzNmjUlffr0pgzly5c3kwkZx5uk4fEtAgggEAUB3dBw5MiRUq9ePfP0WZc215UG9UMq3WPEzWFOGlBs2bLFbK7buXPny086EhMTzQdTuviIvm9xIBCOQJD6a+HU141zCEDcULWYZ1AbtAYaS5culaefflrKlStn3ng0IMmWLZuZN9K7d2+ZOHGirFixwmzkdOrUqbAFjxw5It9++61MmTLFzEfRN5bbbrvNfMqm1yhUqJA88MAD8v7774t+2sWnWmHTciICCCDgmoAu46t7iOgO6zpsV/9e6z9d0lefQrz55ptmTuE333xjln0P9wm6DpnS/aoWL15shtq+/PLLZm6fLpur+eu8jkqVKsljjz0mn376qejqjhwIRCoQ1P5apA52zicAsaPncNrly5ebP5ATJkwQbdxB/af7a4waNUqefPJJady4sdlfJPTmE/qqq2sVK1bMvFFUq1ZNdDlcHZOrmyLqCiv6WmiYVyhN9uzZzTlNmzYV3aVX16Vfu3ZtYB2D2j6oV3D/73NvubcptQH9O61DZHVREw1IdB6GvgeE/rbrVx26pU/R9X1Ah0xd+U/T5M6dO1kaTZcrVy7zoZQ+fdenL/r+k1IZ+B1tM5I2oP00bV/ab+OwJkAAYs3NlVShBp30jy7f//dTMRxwoA3QBmgDtAHaAG3AS21A+20c1gQIQKy5uZLq4MGDoo1ZI+pIInHO5ZMb2gBtgDZAG6AN0AZoA9FpA9pP0/6a9ts4rAkQgFhzIxUCCCCAAAIIIIAAAghYECAAsYBGEgQQQAABBBBAAAEEELAmQABizY1UCCCAAAIIIIAAAgggYEGAAMQCGkkQQAABBBBAAAEEEEDAmgABiDU3UiGAAAIIIIAAAggggIAFAQIQC2gkQQABBBBAAAEEEEAAAWsCBCDW3EiFAAIIIIAAAggggAACFgQIQCygkQQBBBBAAAEEEEAAAQSsCRCAWHMjFQIIIIAAAggggAACCFgQIACxgEYSBBBAAAEEEEAAAQQQsCZAAGLNjVQIIIAAAggggAACCCBgQYAAxAIaSRBAAAEEEEAAAQQQQMCaAAGINTdSIYAAAggggAACCCCAgAUBAhALaCRBAAEEEEAAAQQQQAABawIEINbcSIUAAggggAACCCCAAAIWBAhALKCRBAEEEEAAAQQQQAABBKwJEIBYcyMVAggggAACCCCAAAIIWBAgALGARhIEEEAAAQQQQAABBBCwJkAAYs2NVAgggAACCCCAAAIIIGBBgADEAhpJEEAAAQQQQAABBBBAwJoAAYg1N1IhgAACCCCAAAIIIICABQECEAtoJEEAAQQQQAABBBBAAAFrAgQg1txIhQACCCCAAAIIIIAAAhYECEAsoJEEAQQQQAABBBBAAAEErAkQgFhzIxUCCCCAAAIIIIAAAghYECAAsYBGEgQQQAABBBBAAAE
EELAmQABizY1UCCCAAAIIIIAAAgggYEGAAMQCGkkQQAABBBBAAAEEEEDAmgABiDU3UiGAAAIIIIAAAggggIAFAQIQC2gkQQABBBBAAAEEEEAAAWsCBCDW3EiFAAIIIIAAAggggAACFgQIQCygkQQBBBBAAAEEEEAAAQSsCRCAWHMjFQIIIIAAAggggAACCFgQIACxgEYSBBBAAAEEEEAAAQQQsCZAAGLNjVQIIIAAAggggAACCCBgQYAAxAIaSRBAAAEEEEAAAQQQQMCaAAGINTdSIYAAAggggAACCCCAgAUBAhALaCRBAAEEEEAAAQQQQAABawIEINbcSIUAAggggAACCCCAAAIWBAhALKCRBAEEEEAAAQQQQAABBKwJEIBYcyMVAggggAACCCCAAAIIWBAgALGARhIEEEAAAQQQQAABBBCwJkAAYs2NVAgggAACCCCAAAIIIGBBgADEAhpJEEAAAQQQQAABBBBAwJoAAYg1N1IhgAACCCCAAAIIIICABQECEAtoJEEAAQQQQAABBBBAAAFrAgQg1txIhQACCCCAAAIIIIAAAhYECEAsoJEEAQQQQAABBBBAAAEErAkQgFhzIxUCCCCAAAIIIIAAAghYECAAsYBGEgQQQAABBBBAAAEEELAmQABizY1UCCCAAAIIIIAAAgggYEGAAMQCGkkQQAABBBBAAAEEEEDAmgABiDU3UiGAAAIIIIAAAggggIAFAQIQC2gkQQABBBBAAAEEEEAAAWsCBCDW3EiFAAIIIIAAAggggAACFgQIQCygkQQBBBBAAAEEEEAAAQSsCRCAWHMjFQIIIIAAAggggAACCFgQIACxgEYSBBBAAAEEEEAAAQQQsCbw/wDyxzeQMuXIAAAAAABJRU5ErkJggg==)
###Code
###Output
_____no_output_____
###Markdown
(d) Make a plot of sin(x), cos(x), tan(x), and -arctan(x) with 4 subplots (2 rows, 2 columns). Hint: you'll now have to index into axes like axes[0, 0]. 1. Create a loop over all the axes objects (hint: use `axes.flatten()`) so that the legend and set_ylabel functions are called for all subplots. 2. Use the loop from step 1 to add a title to only the top row of plots using the `set_title` function. 3. Look at the documentation for the `fig.tight_layout()` command to optimize figure layout. * Note that this doesn't play nicely with the figure suptitle; try using the `plt.subplots_adjust(top=0.85)` command to control the whitespace at the top of the plot. One possible solution is sketched in the code cell below.
###Code
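# One possible solution sketch for (d) -- not an official answer key.
# Assumes numpy and matplotlib.pyplot were imported earlier as np and plt.
x = np.linspace(-2 * np.pi, 2 * np.pi, 500)
funcs = [(np.sin, 'sin(x)'), (np.cos, 'cos(x)'),
         (np.tan, 'tan(x)'), (lambda v: -np.arctan(v), '-arctan(x)')]
fig, axes = plt.subplots(2, 2, figsize=(10, 6))
fig.suptitle('Trigonometric functions')
for i, (ax, (f, label)) in enumerate(zip(axes.flatten(), funcs)):
    ax.plot(x, f(x), label=label)
    ax.legend()
    ax.set_ylabel('y')
    if i < 2:  # top row only (row-major order after flatten)
        ax.set_title(label)
fig.tight_layout()
plt.subplots_adjust(top=0.85)  # leave room for the suptitle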
###Output
_____no_output_____
###Markdown
(bonus) Plot, as points, the periods vs. distances for each planet on a log-log plot. Write the name of the planet next to the point for that planet on the plot. The distances of the planets from the Sun (in AU): [0.39, 0.72, 1.00, 1.52, 5.20, 9.54, 19.22, 30.06, 39.48]. The corresponding periods of their orbits (in years): [0.24, 0.62, 1.00, 1.88, 11.86, 29.46, 84.01, 164.8, 248.09]. Names of the planets: ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune", "Pluto"]. One possible solution is sketched in the code cell below.
###Code
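# One possible bonus solution sketch -- not an official answer key.
# Assumes matplotlib.pyplot was imported earlier as plt.
distances = [0.39, 0.72, 1.00, 1.52, 5.20, 9.54, 19.22, 30.06, 39.48]  # AU
periods = [0.24, 0.62, 1.00, 1.88, 11.86, 29.46, 84.01, 164.8, 248.09]  # years
names = ["Mercury", "Venus", "Earth", "Mars", "Jupiter",
         "Saturn", "Uranus", "Neptune", "Pluto"]
fig, ax = plt.subplots()
ax.plot(distances, periods, 'o')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Distance from the Sun (AU)')
ax.set_ylabel('Orbital period (years)')
for d, p, name in zip(distances, periods, names):
    ax.annotate(name, (d, p), textcoords='offset points', xytext=(5, 5))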
###Output
_____no_output_____ |
binary_startup_classification_model.ipynb | ###Markdown
Binary Startup Classification Model
###Code
# Import Libraries and Dependancies
import pandas as pd
from pathlib import Path
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
###Output
_____no_output_____
###Markdown
Import/Prepare Data (Neural Network)
###Code
#Read in data and review df
df = pd.read_csv(Path('Resources/applicants_data.csv'))
df.head()
#Drop EIN and NAME from df (not relevant to the model) and review df
df = df.drop(columns=['EIN','NAME'])
df.head()
#Create a list of all categorical variables in the df and then review
categorical_var = []
for c in df.columns:
if df[c].dtypes == 'O':
categorical_var.append(c)
display(categorical_var)
display(df[categorical_var].dtypes)
#Encode categorical variables (OneHotEncoder), and then create new df to store encoded variables
enc = OneHotEncoder()
enc_data = enc.fit_transform(df[categorical_var]).toarray()
enc_df = pd.DataFrame(enc_data, columns=enc.get_feature_names(categorical_var))
enc_df.head()
#add numerical columns from original df to enc_df
enc_df = pd.concat([df.drop(columns=categorical_var), enc_df], axis=1)
enc_df.head()
#define the features(X) and target(y = "IS_SUCCESSFUL")
X = enc_df.drop(columns=['IS_SUCCESSFUL'])
y = enc_df['IS_SUCCESSFUL']
display(X.head())
display(y[:5])
#split datasets into training and testing with random_state=1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
#Scale training and testing dataset(X)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile and Evaluate a Binary Classification Model using a Neural Network
###Code
# Define input features and output/hidden layer nodes
num_input_features = len(X_train.iloc[0])
num_output_neurons = 1
hid_nodes_l1 = int(((num_input_features + num_output_neurons)/2) +1)
hid_nodes_l2 = int((hid_nodes_l1/2) +1)
display(num_input_features, num_output_neurons, hid_nodes_l1, hid_nodes_l2)
# Create 2-layer neural network using 'relu' and 'sigmoid'
nn = Sequential()
nn.add(Dense(units=hid_nodes_l1, input_dim=num_input_features, activation='relu'))
nn.add(Dense(units=hid_nodes_l2, activation='relu'))
nn.add(Dense(units=num_output_neurons, activation='sigmoid'))
nn.summary()
# Compile and fit using 'binary_crossentropy', 'adam', and 'accuracy' as metric. Fit for 50 epochs using X_train_scaled
nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
nn.fit(x=X_train_scaled, y=y_train, epochs=50, verbose=1)
# Evaluate the model original loss and accuracy metrics for test data
model_loss, model_accuracy = nn.evaluate(x=X_test_scaled, y=y_test, verbose=0)
print(f"Loss: {model_loss:.4f}, Accuracy: {model_accuracy:.4f}")
# Save and export nn to an HDF5 file named AlphabetSoup.h5
nn.save(Path('Resources/AlphabetSoup.h5'), save_format='h5')
###Output
_____no_output_____
###Markdown
Optimize Neural Network Model
###Code
# Define input features and output/hidden layer nodes adding one more layer
num_input_features = len(X_train.iloc[0])
num_output_neurons = 1
hid_nodes_l1 = int(((num_input_features + num_output_neurons+1)/2))
hid_nodes_l2 = int((hid_nodes_l1+1)/2)
hid_nodes_l3 = int((hid_nodes_l2+1)/2)
display(num_input_features, num_output_neurons, hid_nodes_l1, hid_nodes_l2, hid_nodes_l3)
# Create 3-layer neural network using 'relu' and 'sigmoid'
nn1 = Sequential()
nn1.add(Dense(units=hid_nodes_l1, input_dim=num_input_features, activation='relu'))
nn1.add(Dense(units=hid_nodes_l2, activation='relu'))
nn1.add(Dense(units=hid_nodes_l3, activation='relu'))
nn1.add(Dense(units=num_output_neurons, activation='sigmoid'))
nn1.summary()
# Compile and fit using 'binary_crossentropy', 'adam', and 'accuracy' as metric. Fit for 50 epochs using X_train_scaled
nn1.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
nn1.fit(x=X_train_scaled, y=y_train, epochs=50, verbose=1)
# Evaluate the model alt 1 loss and accuracy metrics for test data
model_loss, model_accuracy = nn1.evaluate(x=X_test_scaled, y=y_test, verbose=0)
print(f"Loss: {model_loss:.4f}, Accuracy: {model_accuracy:.4f}")
# Save and export nn1 to an HDF5 file named AlphabetSoupAlt1.h5
nn1.save(Path('Resources/AlphabetSoupAlt1.h5'), save_format='h5')
###Output
_____no_output_____
###Markdown
Alternate 1: Adding a layer did not change the accuracy of the neural network by much.
###Code
# Define input features and output/hidden layer nodes adding one more layer (doubling total nodes)
num_input_features = len(X_train.iloc[0])
num_output_neurons = 1
hid_nodes_l1 = int((num_input_features+num_output_neurons)*(4/3))
hid_nodes_l2 = int((hid_nodes_l1+1)*(1/3))
print(num_input_features, num_output_neurons, hid_nodes_l1, hid_nodes_l2)
# Create 2-layer neural network using 'relu' and 'sigmoid'
nn2 = Sequential()
nn2.add(Dense(units=hid_nodes_l1, input_dim=num_input_features, activation='relu'))
nn2.add(Dense(units=hid_nodes_l2, activation='relu'))
nn2.add(Dense(units=num_output_neurons, activation='sigmoid'))
nn2.summary()
# Compile and fit using 'binary_crossentropy', 'adam', and 'accuracy' as metric. Fit for 50 epochs using X_train_scaled
nn2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
nn2.fit(x=X_train_scaled, y=y_train, epochs=50, verbose=1)
# Evaluate the model alt 2 loss and accuracy metrics for test data
model_loss, model_accuracy = nn2.evaluate(x=X_test_scaled, y=y_test, verbose=0)
print(f"Loss: {model_loss:.4f}, Accuracy: {model_accuracy:.4f}")
# Save and export nn2 to an HDF5 file named AlphabetSoupAlt2.h5
nn2.save(Path('Resources/AlphabetSoupAlt2.h5'), save_format='h5')
###Output
_____no_output_____
###Markdown
Alternate 2: Doubling the total nodes did not increase the accuracy. Increasing the epochs (from 50 to 100) only improved accuracy slightly (about 1%).
###Code
print("Original Model Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(x=X_test_scaled, y=y_test, verbose=0)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 1 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn1.evaluate(x=X_test_scaled, y=y_test, verbose=0)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 2 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn2.evaluate(x=X_test_scaled, y=y_test, verbose=0)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
###Output
Alternative Model 2 Results
Loss: 0.5641979575157166, Accuracy: 0.7292128205299377
|
assignments/assignment4_solutions.ipynb | ###Markdown
Assignment 4 Solutions: Welcome to the fourth and final programming assignment for the course. This assignment will help familiarise you with the B92 QKD protocol while revisiting the topics discussed in this week's lectures. Submission Guidelines: For the final submission, and to ensure that you have no errors in your solution, please use the 'Restart and Run All' option available in the Kernel menu at the top of the page. To submit your solution, run the completed notebook and attach the solved notebook (with results visible) as a .ipynb file using the 'Add or Create' option under the 'Your Work' heading on the assignment page in Google Classroom. This assignment is sensitive to the versions of certain libraries. The cell below checks whether those libraries are available. The recommended way to check this assignment is to use IBM Quantum Experience. If that option is not available to you, please make sure you are using a version of `numpy` newer than `1.19` and a version of `qiskit` newer than `0.20`. Earlier versions will not behave correctly. As before, the notebooks contain some checks to test your solutions. Please know that these are very basic and do not guarantee that your solution is correct, as there may be some edge cases that the checks miss. If you are confident in your solution, please submit it and it will be evaluated and graded after the deadline. Please contact us via Google Classroom for any queries/concerns regarding this.
###Code
# %pip install numpy==1.19 qiskit==0.20 pylatexenc # Please uncomment this line if you are running on Google Colab
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import *
import qiskit
from packaging.version import parse as parse_version
assert parse_version(np.__version__) >= parse_version('1.19'), "Please install the correct version of numpy using the command 'pip install --upgrade numpy==1.19'"
assert parse_version(qiskit.__qiskit_version__['qiskit-terra']) >= parse_version('0.15'), "Please make sure you have the correct version of Qiskit installed or run this on IBM Quantum Experience"
assert parse_version(qiskit.__qiskit_version__['qiskit-aer']) >= parse_version('0.6'), "Please make sure you have the correct version of Qiskit installed or run this on IBM Quantum Experience"
assert parse_version(qiskit.__qiskit_version__['qiskit']) >= parse_version('0.20'),"Please make sure you have the correct version of Qiskit installed or run this on IBM Quantum Experience"
from cryptography.fernet import Fernet
import base64
basis_gates = ['id', 'x', 'y', 'z', 's', 't', 'sdg', 'tdg', 'h', 'p', 'sx' ,'r', 'rx', 'ry', 'rz', 'u', 'u1', 'u2', 'u3', 'cx', 'barrier', 'measure']
secret_message = b'gAAAAABfevgMDRKfpM75bCBMUfAvaUW_Fjs2PxFYkYOSCldJTUnl8oLKVZRaiPitXqwQwbMTx4YwSCf_n0HQ-RIBvLa58AN4Pi7Fp9hFxGtjwzIpWUXIUr-BGE_9SLvjUGgsQCyrhK9ZJ5Yy9R5F6w4Me0Csr19UU3IqQQIP3ffhInE5o68_CI_URCjHXpBUnztJoDmlBnZz3Ka5NykfUN22iulaFvXOyw=='
print(f"The secret message is {secret_message.decode()}")
###Output
The secret message is gAAAAABfevgMDRKfpM75bCBMUfAvaUW_Fjs2PxFYkYOSCldJTUnl8oLKVZRaiPitXqwQwbMTx4YwSCf_n0HQ-RIBvLa58AN4Pi7Fp9hFxGtjwzIpWUXIUr-BGE_9SLvjUGgsQCyrhK9ZJ5Yy9R5F6w4Me0Csr19UU3IqQQIP3ffhInE5o68_CI_URCjHXpBUnztJoDmlBnZz3Ka5NykfUN22iulaFvXOyw==
###Markdown
The B92 Quantum Key Distribution Protocol: For the purposes of this assignment, we will follow the convention defined in **Exercise 2.11** of _Quantum Computing: A Gentle Introduction_ by Eleanor Rieffel and Wolfgang Polak. This protocol is different from BB84 and was proposed by Charles Bennett in 1992. We will consider the version of the protocol without eavesdropping. As before, there are two parties, Alice and Bob. They communicate via a unidirectional quantum channel from Alice to Bob, and an authenticated bidirectional classical communication channel. The setup is shown in the figure below: ![QKD Setup](https://raw.githubusercontent.com/deadbeatfour/quantum-computing-course/master/img/qkd.png) In this protocol, Alice and Bob generate one random binary string each. Alice encodes qubits according to the values of her random binary string. For each bit in her binary string, she encodes $0$ as the $|0\rangle$ state and $1$ as the $|+\rangle$ state, and then sends all the qubits to Bob. Bob measures the qubits by choosing bases according to his random binary string. If the $i^{th}$ bit of his string is $0$, he measures the $i^{th}$ qubit in the Hadamard basis. If the $i^{th}$ bit is $1$, he measures the $i^{th}$ qubit in the computational basis. Finally, Bob announces the results of his measurements over the classical channel. Alice and Bob keep only those bits from their binary strings corresponding to the qubits for which Bob measured an outcome of 1 to obtain their keys. The steps are described in detail in the sections below. Choosing bases and encoding states: Alice generates one binary string and encodes her qubits using the following scheme: $0 \rightarrow |0\rangle$, $1 \rightarrow |+\rangle$. Bob also generates a binary string and uses the following convention to choose a basis for measurement: $0 \rightarrow$ Hadamard basis, $1 \rightarrow$ Computational basis. In the cell below, we generate two random binary strings for Alice and Bob respectively. These will be used by Alice to encode her state, and by Bob to decide his measurement bases. Since this is a standardised assignment, we have seeded the random number generator to produce the same output every time you run the cell below. We have used this setup and a symmetric key cipher to encrypt a secret message (the ciphertext was printed after the cell above). Your goal in this exercise is to complete the B92 protocol correctly and discover the secret message.
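As a quick check of why keeping only the outcome-`1` positions yields matching keys: in the computational basis, the probability that Bob measures `1` is $|\langle 1|0\rangle|^2 = 0$ if Alice sent $|0\rangle$ and $|\langle 1|+\rangle|^2 = \tfrac{1}{2}$ if she sent $|+\rangle$; in the Hadamard basis, the outcome `1` corresponds to $|-\rangle$, with probabilities $|\langle -|0\rangle|^2 = \tfrac{1}{2}$ and $|\langle -|+\rangle|^2 = 0$ respectively. So Bob can only ever observe `1` at positions where his basis bit equals Alice's state bit, which is why the sifted keys agree.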
###Code
num_qubits = 64
rng = np.random.default_rng(seed=10)
alice_state = rng.integers(0, 2, size=num_qubits)
bob_basis = rng.integers(0, 2, size=num_qubits)
print(f"Alice's State:\t {np.array2string(alice_state, separator='')}")
print(f"Bob's Bases:\t {np.array2string(bob_basis, separator='')}")
###Output
Alice's State: [1100111011000101001101001111110110110001100110010110101100110000]
Bob's Bases: [1101001000000011110100000000011011001001111111110011010000111001]
###Markdown
Creating the circuit: Based on the result $H|0\rangle = |+\rangle$, our algorithm to construct the circuit is as follows: 1. Whenever Alice wants to encode `1` in a qubit, she applies an $H$ gate to the qubit. To encode `0`, no action is needed. 2. She then _sends_ the qubits to Bob (symbolically represented in this circuit using wires). 3. Bob measures the qubits according to his binary string. To measure a qubit in the Hadamard basis, he applies an $H$ gate to the corresponding qubit and then performs a standard basis measurement. **Problem 1** (5 points): Given below is the structure for a function `make_b92_circ(enc_state, meas_basis)` which returns a `QuantumCircuit()` to simulate the B92 QKD protocol. Your task is to implement steps 1 through 3 above and populate the function below. For step 3, you only need to apply the gate that changes the basis; you don't need to perform a measurement, since a measurement operation has already been added at the end. The method is the same as was used for BB84. Warning: Please note that the measurement convention is the opposite of the BB84 case, i.e., 0 means Hadamard basis measurement and 1 means computational basis measurement.
###Code
def make_b92_circ(enc_state, meas_basis):
'''
A function that makes a B92 QKD protocol simulation circuit
enc_state: array of 0s and 1s denoting the state to be encoded using the following scheme:
0 -> |0>
1 -> |+>
meas_basis: array of 0s and 1s denoting the basis to be used for measurement
0 -> Hadamard Basis
1 -> Computational Basis
Note that both enc_state and meas_basis are arrays of integers, so if you are using them in
if statements, compare them to integer values like 0 and 1 (without quotes).
Since this is a function, you only have access to the variables enc_state and meas_basis.
You may define other local variables. One such variable, num_qubits has been defined for you.
This is the number of qubits in the B92 simulation QuantumCircuit()
'''
num_qubits = len(enc_state)
b92 = QuantumCircuit(num_qubits)
# Sender prepares qubits
# Add code below to encode the state in qubits
for index in range(len(enc_state)):
if enc_state[index] == 1:
b92.h(index)
b92.barrier()
# Receiver measures the received qubits
# Add code below to change basis for measurements. DO NOT add a measure() or measure_all()
for index in range(len(meas_basis)):
if meas_basis[index] == 0:
b92.h(index)
# Do not change below this line
b92.measure_all()
return b92
###Output
_____no_output_____
###Markdown
Simulating B92: Once you have populated the function above, run the cell below to check whether the function works correctly. We have added some basic checks, and we are comparing your measurement results against the solution. If you feel that your solution is correct but does not pass the check, consult the library version instructions at the top of this notebook, and then contact us via a Google Classroom private comment for clarification. The results of Bob's measurements are also printed below.
###Code
try:
b92_circ = make_b92_circ(alice_state, bob_basis)
assert list(b92_circ.count_ops()) != [], "Circuit cannot be empty"
assert set(b92_circ.count_ops().keys()).difference(basis_gates) == set(), f"Only the following basic gates are allowed: {basis_gates}"
assert all([type(gate[0]) == qiskit.circuit.measure.Measure for gate in b92_circ.data[-b92_circ.num_qubits:len(b92_circ.data)]]), "Measurement must be the last operation in a circuit."
assert b92_circ.count_ops()['measure'] == b92_circ.num_qubits, "Please do not add or remove measurements."
temp_key = execute(
b92_circ.reverse_bits(),
backend=QasmSimulator(),
shots=1,
seed_simulator=10
).result().get_counts().most_frequent()
assert temp_key == bin(16228741048440553634)[2:], "Your circuit did not perform as expected. Please check the gates again."
print(f"Bob's results:\t{temp_key}\nYour answer is correct.")
except AssertionError as e:
print(f'Your code has an error: {e.args[0]}')
except Exception as e:
print(f'This error occured: {e.args[0]}')
###Output
Bob's results: 1110000100111000000100100000000000000000000110011000000010100010
Your answer is correct.
###Markdown
Creating the key: Now we need to generate the key via sifting. The sifting process for B92 is different from that of BB84. After Bob has measured the qubits, he announces his measured result (the binary string printed after the previous cell). Then Alice and Bob keep the bits in their randomly generated binary strings at the positions where Bob measured an outcome `1` in his result, and both of them discard all other bits from their respective strings. **Problem 2** (5 points): Given below is the structure for a function `b92_sifting(enc_state, meas_basis, meas_result)`. This function will perform key sifting based on Bob's measurement results. Inside the function there are two variables, `sender_key` and `receiver_key`; the names are self-explanatory. The sifting process is given below. Loop through each character in the `meas_result` argument. For the $i^{th}$ character: 1. If the measured outcome is `'1'`, append the $i^{th}$ bit from the `enc_state` argument to `sender_key` and append the $i^{th}$ bit from the `meas_basis` argument to `receiver_key`. 2. If the measured outcome is `'0'`, do nothing.
###Code
def b92_sifting(enc_state, meas_basis, meas_result):
'''
The function that implements key sifting for the B92 QKD protocol.
enc_state: array of 0s and 1s denoting the state to be encoded.
(Array of integers)
meas_basis: array of 0s and 1s denoting the basis to be used for measurement.
(Array of integers)
meas_result: A string of characters representing the results of measurement after the
B92 QKD protocol. Note that this is a string and its elements are characters,
so while using any if statements, compare the elements to '0' and '1' (with quotes)
Since this is a function, you only have access to the variables enc_state, meas_basis and meas_result.
You may define other local variables. num_qubits has been defined for you.
This is the number of qubits in the B92 simulation QuantumCircuit.
sender_key and receiver_key are initialised as two empty strings. You may append bits using the +=
operation as shown in the BB84 notebook. Note that you can only add characters. To change from other
data types to characters, you may use str(). Check the BB84 notebook for examples.
'''
num_qubits = len(enc_state)
sender_key = ''
receiver_key = ''
# Loop over all bits in the meas_result string and add the necessary bits to both sender_key and receiver_key
# Add your code below
for i in range(len(meas_result)):
if meas_result[i] == '1': # Only choose bits where Bob measured a 1
sender_key += str(enc_state[i])
receiver_key += str(meas_basis[i])
# Do not change bolow this line.
return (sender_key, receiver_key)
###Output
_____no_output_____
###Markdown
Obtaining the final key and decrypting the message: Once you have filled in the function above, run the following cell. We use the function you completed to obtain the final sifted key from Alice's and Bob's binary strings and Bob's measurement results. Those keys are printed. If all goes well, the secret message will also be decrypted for you.
###Code
try:
alice_key, bob_key = b92_sifting(alice_state, bob_basis, temp_key)
assert ''.join([str(x ^ y) for x, y in zip(alice_key.encode(), bob_key.encode())]) != '1'*len(alice_key), "Please check your measurement convention"
assert alice_key == bob_key, "They keys are different for Alice and Bob."
assert alice_key == bob_key == bin(49522)[2:], "They keys is incorrect. Please check your solutions."
print(f"Alice's Key: \t{alice_key}\nBob's Key: \t{bob_key}\nYour answer is correct.")
g = Fernet(base64.b64encode(bob_key.encode()*2))
print(f"The secret message is: {g.decrypt(secret_message).decode()}")
except AssertionError as e:
print(f'Your code has an error: {e.args[0]}')
except Exception as e:
print(f'This error occured: {e.args[0]}')
###Output
Alice's Key: 1100000101110010
Bob's Key: 1100000101110010
Your answer is correct.
The secret message is:
Thank you for participating in the course. We hope you had fun.
-With ❤️ from IIT Roorkee
|
week-4/jupyter_build/01_python_scraping_wikipedia_and_reddit_apis.ipynb | ###Markdown
Web Data Scraping Acknowledgements: These notebooks are adaptations of a 5-session mini-course at the University of Colorado. The GitHub repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019) [Spring 2019 ITSS Mini-Course]. The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/), [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). The notebooks have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a recent version of Python (3.7), which is set in the `requirements.txt` file. The Spring ITSS Mini-Course was adapted from a number of sources including [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Python Libraries: We'll need a few common libraries for all these examples. (If one of these doesn't exist, add it to your `requirements.txt` file and then rebuild your Docker image.)
###Code
# Lets us talk to other servers on the web
import requests
# APIs spit out data in JSON
import json
# Use BeautifulSoup to parse some HTML
from bs4 import BeautifulSoup
# Handling dates and times
from datetime import datetime
# DataFrames!
import pandas as pd
# Data visualization
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
# operating system commands
import os
###Output
_____no_output_____
###Markdown
Scraping Wikipedia: Consider the Wikipedia page for [George H.W. Bush](https://en.wikipedia.org/wiki/George_H._W._Bush). This seems like a relatively straightforward webpage to scrape out the hyperlinks to other articles or to compare the content to other presidential biographies. However, Wikipedia also preserves the [history of every revision made to this article](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&action=history) going back to the first (available) revisions in 2001, like [this](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&oldid=345784898). Thinking back to the Oscars example, it seems promising to find the "oldid" values and visit each revision's webpage to parse the content out. However, Wikipedia will give you much of this revision history data for free through its [application programming interface](http://en.wikipedia.org/w/api.php) (API). Current content: We can use `requests` to get the current HTML markup of an article from the API, for example.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to parse the content of a page
query_params['action'] = 'parse'
# Which page?
query_params['page'] = 'George H. W. Bush'
# We want the text
query_params['prop'] = 'text'
# Ignore the edit buttons and table of contents
query_params['disableeditsection'] = 1
query_params['disabletoc'] = 1
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
We have only set up our request to the API, but not sent it or received the data back.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
What's waiting inside? A dictionary of dictionaries. The inner dictionary has keys for the title of the page we requested ("George H. W. Bush"), the pageid (a numeric identifier), and the text of the article.
###Code
json_response['parse'].keys()
###Output
_____no_output_____
###Markdown
We could pull out the links in the article (here, just the first five).
###Code
ghwb_soup = BeautifulSoup(json_response['parse']['text'])
ghwb_soup.find_all('a')[:5]
###Output
_____no_output_____
###Markdown
Or the content of the article.
###Code
ghwb_soup.find_all('p')[:5]
###Output
_____no_output_____
###Markdown
Revision history: There is also an API endpoint for the revision history of this article that contains metadata about the who and when of previous changes.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to query properties of a page
query_params['action'] = 'query'
# Which page?
query_params['titles'] = 'George H. W. Bush'
# We want the revisions
query_params['prop'] = 'revisions'
# In particular, we want the revision ids, users, comments, timestamps
query_params['rvprop'] = 'ids|userid|comment|timestamp|user|size|sha1'
# Get 500 revisions
query_params['rvlimit'] = 500
# Start old and go newer
query_params['rvdir'] = 'newer'
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
Make the request.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
Inspect this `json_response`. This returns a dictionary with both "continue" and "query" keys. The continue indicates there are more than 500 revisions present in the article's history and provides an index for the next query to pick up from. The query contains the revision history we care about—buried a bit in a nested data structure of lists and dictionaries, but we eventually get to the "revisions" list of dictionaries with the revision histories.
###Code
revisions = json_response['query']['pages'][0]['revisions']
revisions[:3]
###Output
_____no_output_____
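###Markdown
The "continue" block returned by the API can be merged back into the query parameters to page through the rest of the revision history. Below is a minimal, hedged sketch of that continuation pattern (it reuses the `query_url` and `query_params` defined above and caps the number of extra batches to stay polite to the servers):
###Code
# Hedged sketch: keep requesting batches of revisions until the API stops
# returning a 'continue' block (or until we hit our batch cap)
all_revisions = list(revisions)
response = json_response
batches = 0
while 'continue' in response and batches < 3:
    continue_params = dict(query_params, **response['continue'])
    response = requests.get(url=query_url, params=continue_params).json()
    all_revisions += response['query']['pages'][0]['revisions']
    batches += 1
len(all_revisions)
###Output
_____no_output_____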
###Markdown
Convert to a DataFrame.
###Code
rev_df = pd.DataFrame(revisions)
rev_df.head()
###Output
_____no_output_____
###Markdown
Plot out how the size of the article changed over the first 500 revisions.
###Code
ax = rev_df.plot(y='size',legend=False)
ax.set_ylabel('Size (bytes)')
ax.set_xlabel('Revision')
ax.set_xlim((0,500))
###Output
_____no_output_____
###Markdown
Or count how many times an editor made a contribution.
###Code
rev_df['user'].value_counts().head()
###Output
_____no_output_____
###Markdown
There are many other parts of the very powerful Wikipedia API, and scraping these APIs exposes much more metadata than parsing the HTML of these webpages, while also being easier on the servers hosting it. I will share a notebook that has functions for retrieving and parsing content, revisions, pageviews, and other information. Scraping Reddit: Reddit also hosts a lot of detailed behavioral data that could be of interest to social scientists. As was the case with Wikipedia, our naïve inclination may be to develop scrapers and parsers to extract this information, but Reddit will give much of it to you for free through their API! You can retrieve a few different types of entities from Reddit's API: sub-reddits, submissions, comments, and redditors. Many of these are interoperable: a sub-reddit contains submissions contributed by redditors with comments from other redditors. We will use a wrapper library called the [Python Reddit API Wrapper](https://praw.readthedocs.io/en/latest/), or `praw`, to communicate with the Reddit API. Afterwards, we can import `praw`.
###Code
import praw
###Output
_____no_output_____
###Markdown
We then need to authenticate with Reddit to get access to the API. Typically you can just enter the client ID, client secret, password, username, *etc*. as strings. 1. You will need to create an account on Reddit. After you have created an account and logged in, go to https://www.reddit.com/prefs/apps/. 2. Scroll down and click the "create app" button at the bottom. Provide a basic name and description, and enter a URL for your homepage (or just use http://www.ucla.edu). 3. You will need the client ID (the string of characters beneath the name of your app) as well as the secret (the other string of characters), plus your username and password. 4. I had to change to a script app to get this to work. 5. You can make up a user-agent string, but include your username as good practice so the sysadmins can track you down if you break things. ![Image from Cody Buntain](http://www.cs.umd.edu/~cbuntain/inst728e/reddit_screens/1-003a.png) You'll create an API connector object (`r`) below that will authenticate with the API and handle making the requests.
###Code
#r = praw.Reddit(client_id='your application id',
#client_secret='your application secret',
#password='your account password',
#user_agent='scraping script by /u/youraccountname',
#username='your account name')
###Output
_____no_output_____
###Markdown
You can confirm that this authentication process worked by making a simple request like printing your username.
###Code
#print(r.user.me())
###Output
langholz-stat
###Markdown
I'm going to read them in from a local file ("reddit_login.json") so that I can post this notebook on the internet in the future without compromising my account security. This won't work for you, so just skip this step.
###Code
# Load my credentials from a local disk so I don't show the world
with open('reddit_login.json','r') as f:
r_creds = json.load(f)
# Create an authenticated reddit instance using the creds
r = praw.Reddit(client_id = r_creds['client_id'],
client_secret = r_creds['client_secret'],
password = r_creds['password'],
user_agent = r_creds['user_agent'],
username = r_creds['username'])
# Make sure your reddit instance works
print(r.user.me())
###Output
langholz-stat
###Markdown
Sub-reddits: Now print the top 25 stories in /r/news. [Documentation for the Subreddit model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html). Create a `news_subreddit` object to store the various attributes about this sub-reddit.
###Code
news_subreddit = r.subreddit('news')
###Output
_____no_output_____
###Markdown
The `news_subreddit` has a number of attributes and methods you can call on it. The time the sub-reddit was founded.
###Code
news_subreddit.created_utc
###Output
_____no_output_____
###Markdown
That's formatted as a UNIX timestamp (seconds since 1 January 1970), but we can convert it into a more readable form with `datetime`'s `utcfromtimestamp`.
###Code
print(datetime.utcfromtimestamp(news_subreddit.created_utc))
###Output
2008-01-25 06:49:25
###Markdown
There are other attributes such as the number of subscribers, current active users, as well as the description of the sub-reddit.
###Code
'{0:,}'.format(news_subreddit.subscribers)
news_subreddit.over18
news_subreddit.active_user_count
print(news_subreddit.description)
###Output
>* **[/r/inthenews](/r/inthenews?hl)**
>#
>* **[/r/worldnews](/r/worldnews?hl)**
>#
>* **[/r/politics](/r/politics?hl)**
>#
>* **[new comments](/r/news/comments?hl)**
1. **[Post all analysis/opinion/politics articles to /r/InTheNews](/r/InTheNews)**
> [](http://goo.gl/R6as4?ri)
> [](http://goo.gl/gBldE?ri)
> [](http://goo.gl/u5EZN?ri)
> [](http://goo.gl/exK8j?ri)
> [](http://www.reddit.com/r/news?ri)
> [](http://www.reddit.com/r/restorethefourth?ri)
Want to talk?
Follow [@rslashnews on Twitter](https://twitter.com/rslashnews)
See a post that violates the rules below? Had your post stuck in the spam filter? Have a question about policy? Just want to give feedback? [Send the mod team a message](http://www.reddit.com/message/compose?to=%2Fr%2Fnews).
---
Submit all self- & meta-posts to /r/inthenews
Your post will likely be removed if it:
- is not news
- is an opinion/analysis or advocacy piece.
- primarily concerns politics.
- has a title not taken from the article.
- has a pay wall or steals content.
- covers an already-submitted story.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/), especially regarding personal info.
Your comment will likely be removed if it:
- advocates or celebrates the death of another person
- is racist, sexist, vitriolic, or overly crude.
- is unnecessarily rude or provocative.
- is a cheap and distracting joke or meme.
- is responding to spam.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/).
Extreme or repeat offenders will be banned.
**\>\>\>[Expanded Rules](https://www.reddit.com/r/news/about/rules/)<<<**
---
If your post doesn't fit, consider [finding an appropriate news article on that story](http://www.reddit.com/r/news/wiki/recommendedsources) to submit instead, or submitting yours to lower moderation subreddits:
[/r/inthenews](/r/inthenews) - all news-related content
[/r/AnythingGoesNews](/r/AnythingGoesNews) - unrestricted news
[/r/truereddit](/r/truereddit) - insightful articles
/r/self - any self-post
/r/misc, /r/redditdotcom - anything
or other news subreddits:
[/r/worldnews](/r/worldnews) - from outside the USA only
[/r/SyrianCivilWar](/r/syriancivilwar) - about the conflict in Syria
[/r/MidEastRegionalWar](/r/mideastregionalwar) - on MidEast conflict
[/r/UpliftingNews](/r/upliftingnews) - uplifting
[/r/SavedYouAClick](/r/savedyouaclick) - making media more straightforward
or subreddits for other topics:
[/r/FoodForThought](/r/FoodForThought) - discussion-worthy long form articles about interesting subjects
[/r/politics](/r/politics) - for shouting about politics
[/r/moderatepolitics](/r/ModeratePolitics) - less shouting
[/r/politicaldiscussion](/r/PoliticalDiscussion) - even less shouting
[/r/geopolitics](/r/geopolitics) - intl. politics and geography
[/r/entertainment](/r/entertainment) - Justin Bieber updates, etc.
or check out the [200 most active subreddits, categorized by content](http://redd.it/1f7hqc) and the [full list of subreddits by subscribers](http://redditmetrics.com/top).
---
Recommendations:
/r/full_news
/r/qualitynews
/r/neutralnews
/r/worldevents
---
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
[submit news article](http://www.reddit.com/r/news/submit)
[submit something else](http://www.reddit.com/r/misc/submit)
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
###Markdown
The rules of the sub-reddit are available via the `.rules()` method, which returns a dictionary whose 'rules' key holds a list of rule dictionaries.
###Code
news_subreddit.rules()['rules']
###Output
_____no_output_____
###Markdown
When were each of these rules created? Loop through each of the rules and print the "short_name" of the rule and the rule timestamp.
###Code
for rule in news_subreddit.rules()['rules']:
created = rule['created_utc']
print(rule['short_name'], datetime.utcfromtimestamp(created))
###Output
Not news 2016-01-26 06:24:11
Opinion/analysis or advocacy piece 2016-01-26 06:27:59
Politics 2016-01-26 06:31:33
Title not from article/editorialized title 2016-01-26 06:35:51
Paywall or is blogspam/steals content 2016-01-26 06:40:33
Covers an already-submitted story 2016-01-26 06:44:40
Racist, sexist, vitriolic, or overly crude 2016-01-26 06:47:09
Unnecessarily rude or provocative 2016-01-26 06:49:35
Cheap or distracting joke or meme 2016-01-26 06:51:12
Breaks sitewide rules, witchhunting 2016-01-26 06:56:47
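###Markdown
Since each rule comes back as a plain dictionary, the same information can also be dropped into a DataFrame for easier scanning. A quick sketch, assuming pandas is already imported as `pd` (as in the setup cell):
###Code
# The same rule metadata as a DataFrame, selecting just the columns used above
pd.DataFrame(news_subreddit.rules()['rules'])[['short_name', 'created_utc']]
###Output
_____no_output_____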
###Markdown
We can also get a list of the moderators for this subreddit.
###Code
mod_list = []
for mod in news_subreddit.moderator():
mod_list.append(mod.name)
mod_list
###Output
_____no_output_____
###Markdown
SubmissionsWe can get a list of submissions to a sub-reddit using [a few different methods](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html).* `.controversial()`* `.hot()`* `.new()`* `.rising()`* `.search()`* `.top()`Here we will use the `.top()` method to get the top 25 submissions on the /r/news subreddit from the past 12 months.[Documentation for the Submission model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html).
###Code
top25_news = r.subreddit('news').top('year',limit=25)
###Output
_____no_output_____
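###Markdown
The other listing methods follow the same pattern. Below is a minimal sketch; the search query (`'wildfire'`) and the limits are purely illustrative and not part of the original analysis.
###Code
# A minimal sketch of the other listing generators (query string and limits are illustrative)
hot_titles = [s.title for s in r.subreddit('news').hot(limit=5)]
new_titles = [s.title for s in r.subreddit('news').new(limit=5)]
search_titles = [s.title for s in r.subreddit('news').search('wildfire', time_filter='year', limit=5)]
hot_titles, new_titles, search_titles
###Output
_____no_output_____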
###Markdown
`top25_news` is a `ListingGenerator` object, which is a special [generator](https://www.dataquest.io/blog/python-generators-tutorial/) class defined by PRAW. It does not actually go out and get the data at this stage. There's not much you can do to look inside a `ListingGenerator` other than loop through it and perform operations. In this case, let's add each submission to a list called `top25_submissions`.
###Code
top25_submissions = []
for submission in r.subreddit('news').top('year',limit=25):
top25_submissions.append(submission)
###Output
_____no_output_____
###Markdown
We can inspect the first (top) `Submission` object.
###Code
first_submission = top25_submissions[0]
first_submission
###Output
_____no_output_____
###Markdown
Use the `dir` function to see the other methods and attributes inside this first top `Submission` object. (There are a lot of "hidden" attributes and methods containing "\_", which we can filter out with this list comprehension.)
###Code
[i for i in dir(first_submission) if '_' not in i]
###Output
_____no_output_____
###Markdown
`vars` may be even more helpful.
###Code
vars(first_submission)
###Output
_____no_output_____
###Markdown
We can extract the features of each submission, store them in a dictionary, and append each dictionary to a list. This step will take a while (roughly a second per submission) because an API call is made for each submission in the `ListingGenerator` returned by the `r.subreddit('news').top('year',limit=25)` we're looping through.
###Code
submission_stats = []
for submission in r.subreddit('news').top('year',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
submission_stats.append(d)
###Output
_____no_output_____
###Markdown
We can turn `submission_stats` into a pandas DataFrame.
###Code
top25_df = pd.DataFrame(submission_stats)
top25_df.head()
###Output
_____no_output_____
###Markdown
Plot out the relationship between score and number of comments.
###Code
ax = top25_df.plot.scatter(x='score',y='num_comments',s=50,c='k',alpha=.5)
ax.set_xlim((0,200000))
ax.set_ylim((0,16000))
###Output
_____no_output_____
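###Markdown
To put a rough number on the relationship shown in the scatter plot, we can compute the correlation between the two columns. A quick sketch:
###Code
# Quantify the relationship between score and number of comments
top25_df['score'].corr(top25_df['num_comments'])
###Output
_____no_output_____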
###Markdown
CommentsThis is a relatively small Reddit submission: [What is a dataset that you can't believe is available to the public?](https://www.reddit.com/r/datasets/comments/akb4mr/what_is_a_dataset_that_you_cant_believe_is/). We can inspect the comments on this submission.[Documentation for the Comment model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/comment.html).
###Code
cant_believe = r.submission(id='akb4mr')
print("This submission was made on {0}.".format(datetime.utcfromtimestamp(cant_believe.created_utc)))
print("There are {0:,} comments.".format(cant_believe.num_comments))
###Output
This submission was made on 2019-01-27 10:59:04.
There are 37 comments.
###Markdown
We can inspect these comments, working from the [Comment Extraction and Parsing](https://praw.readthedocs.io/en/latest/tutorials/comments.html) tutorial in PRAW.
###Code
cant_believe.comments.replace_more(limit=None)
for comment in cant_believe.comments.list():
print(comment.body)
###Output
State voter files. You can see whether every single registered voter voted in a given election. There’s actually a political science [paper](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=gerber+turnout+pressure&btnG=#d=gs_qabs&u=%23p%3D0F49X22wGIcJ) where the researchers threatened to send letters to all your neighbors with information about which people in the neighborhood had voted to see if it would increase turnout. It did.
First and last name of every US person who has renounced their citizenship each quarter: [https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate](https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate)
The SEC published "Apache log files that record and store user access statistics for the SEC.gov website" for 2003 through 2017: https://www.sec.gov/dera/data/edgar-log-file-data-set.html
[Enron email dataset](https://www.cs.cmu.edu/~./enron/)
ICIJ Offshore Leaks Database
[https://offshoreleaks.icij.org/pages/database](https://offshoreleaks.icij.org/pages/database)
If open directories count, 'Pemiblanc' seems pretty alarming.
[https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/](https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/)
You can scrape searches from the Baidu chinese search engine. You can forexample count the number of times trade war was searched.
The NPI database lists personal information about every single physician in the United States. Not only is it public, doctors are required to keep their information updated in the database.
Although not a true dataset, the Department of Energy has pretty detailed data (mostly in PDFs) showing exactly where and how uranium moves from mine sites, to processing facilities, to reactors. Down to the ships and trucks used, the routes taken, etc. it’s really cool info. You can get a detailed image of how the global nuclear industry operates.
Social Security Death Master File
There is a website that pretty much lists everyone's former addresses, phone numbers, and even leaves vague trashy (libel) warnings like 'criminal record, sex offender registered or past debts on record, pay 30 dollars to see it'. So if you were late on a single payment, like pretty much anyone, you're a pedo, and here are your addresses and phone numbers, and every one of your family members as well.
States that have government transparency websites. You can literally find out the salaries of K-12 teachers, high school teachers, government workers, and even professors.
Vaccine Adverse Event Reporting data - unverified, self-reported data available to the public from the CDC. What could go wrong?
[https://wonder.cdc.gov/VAERS.html](https://wonder.cdc.gov/VAERS.html)
[removed]
Pipl
that is an excellent response i'm surprised that's legal. I guess the purpose is so that people can check that their vote was counted and/or see if someone used their name to vote for them if they didn't vote?
How data is being applied in politics on an international basis today.
[The Rise of the Weaponized AI Propaganda Machine](https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b)
According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.”
But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging.
Do you have an example of such dataset?
Are there docs on how to do this?
Sounds like google trends right?
Like someone else pointed out, this is like Google trends. What would set this way apart was if you could somehow find out who searched what and when like by their IP or something
Do you have a source that does not require signing up for an account such as ancestry?
What is link?
For my own playground I'm scraping RateMyProfessor to determine very loosely whether my professors salaries are lining up with their rate my professor scores (quality) and for fun, their address using county GIS data to create a heatmap of Rate My Professors scores. Won't ever see the light of day, more of a "welcome to a real world Tableau project".
Mostly it’s legal because political campaigns want the data. They use it to target voters they think are likely to vote for them. Sort of a conflict of interest, right?
It’s even creepier when you realize there are companies like Catalist that link voter files with consumer datasets. High level political campaigns know what sorts of cars you drive, etc.
Fortunately most of this stuff is completely useless for political campaigns. Although they have access to lots of data, the actual models they end up using don’t include any of it. Cambridge Analytica is mostly [bullshit](https://www.washingtonpost.com/news/monkey-cage/wp/2018/03/23/four-and-a-half-reasons-not-to-worry-that-cambridge-analytica-skewed-the-2016-election/?noredirect=on&utm_term=.ca1fa0ce6a83) marketing.
You have to either request them from the state or purchase access to a national voter file. Requirements are different for each state. The CA voter file is free, you can request it with [this](https://www.co.siskiyou.ca.us/sites/default/files/CLK-20180223_CaliforniaVoterRegistrationFileRequest.pdf) form. Other states charge anywhere from $50 to several hundred dollars.
National voter files maintained by companies like Catalist can cost as much as 30k per year 😳
I never talked about who searched what, just counting the searches for a topic. Which in my mind is suprising given the propensity for the Chinese government to want to put away certain types of information. Hence my comment that the information could be modified and therefore to take with a grain of salt. So yea, its like someone else said, similar to google trends (but also good luck finding the link if you dont read chinese or know very precisely what it is called, so your welcome :)
[https://www.reddit.com/r/bigquery/comments/76e3o3/public\_dataset\_social\_security\_death\_master\_file/](https://www.reddit.com/r/bigquery/comments/76e3o3/public_dataset_social_security_death_master_file/)
mylife.com and/or peekyou
It's so many things wrong with it. It's like the ultimate stalking tool. It even scrapes photos of people as well, it's got about everything on the US population.
Which I think is an interesting side issue -- publicly available datasets that are really only useful for particular interested parties.
Maybe, seems like many elections around the world are going toward hardline conservatives.
Ah, this is on me for thinking public == free. Thanks for the reply, and the link.
Thanks, got message "unable to find dataset" when I clicked on Big Query link.
Yeah, I’m a political scientist that uses this data occasionally so I won’t complain about it too much!
Some are definitely free. CA, UT and NC are free off the top of my head, but there are others.
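###Markdown
A note on traversal: `.list()` flattens the entire comment tree, while iterating the `CommentForest` directly yields only the top-level comments, each of which carries its own `.replies` forest. A small sketch:
###Code
# Iterating the CommentForest directly gives only top-level comments
for top_level_comment in cant_believe.comments:
    print(len(top_level_comment.replies), 'direct replies:', top_level_comment.body[:60])
###Output
_____no_output_____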
###Markdown
Each comment has a lot of metadata we can preserve.
###Code
cant_believe_comment_metadata = []
for comment in cant_believe.comments.list():
print(comment)
if not comment.collapsed: # Skip collapsed/deleted comments
d = {}
d['id'] = comment.id
d['parent_id'] = comment.parent_id
d['body'] = comment.body
d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
#d['author'] = comment.author.name
cant_believe_comment_metadata.append(d)
###Output
ef3kluj
ef3cp0s
ef4iy3r
ef4fe4q
ef4ykjj
ef3glgq
ef50l6m
ef5h6gw
ef4y2h3
ef3k0s5
ef56bbg
ef7k3iz
ef4xb43
ei8dcty
ef414ov
ef57qap
ef54a1l
ef5g6lv
ef5satl
efbc7zt
ef57xrj
ef3o2n4
ej3basg
ef496rc
ef5c2qk
ef5bu9y
efbhq6o
egbxvft
ef595hr
ef4a3xl
ef5dnjk
ef5c2gp
egbzydh
ef4a7qm
ef5cbig
###Markdown
Convert to a DataFrame.
###Code
cant_believe_df = pd.DataFrame(cant_believe_comment_metadata)
# How long is the comment
cant_believe_df['comment_length'] = cant_believe_df['body'].str.len()
cant_believe_df.head()
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have lower scores?
###Code
sb.catplot(x='depth',y='score',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have shorter lengths?
###Code
sb.catplot(x='depth',y='comment_length',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
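###Markdown
The same two questions can also be checked numerically rather than visually. A quick sketch aggregating by depth with pandas:
###Code
# Mean score and comment length at each depth, with the number of comments per depth
cant_believe_df.groupby('depth')[['score', 'comment_length']].agg(['mean', 'count'])
###Output
_____no_output_____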
###Markdown
RedditorsA Redditor is a user account, and we can get metadata about the account as well as the history of the user's comments and submissions from the API.[Documentation for the Redditor model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/redditor.html).How much link and comment karma does this user have?
###Code
spez = r.redditor('spez')
print("Link karma: {0:,}".format(spez.link_karma))
print("Comment karma: {0:,}".format(spez.comment_karma))
###Output
Link karma: 114,865
Comment karma: 691,729
###Markdown
Interestingly, Reddit also exposes whether a user is a Reddit employee and whether the account has a verified email address.
###Code
spez.is_employee
spez.has_verified_email
###Output
_____no_output_____
###Markdown
We can also get the time this user's account was created.
###Code
datetime.utcfromtimestamp(spez.created_utc)
###Output
_____no_output_____
###Markdown
We can also get information about individual redditors' submissions and comment histories. Here we will use u/spez (the CEO of Reddit), get his top-voted submissions, and loop through them to get the data for each submission.
###Code
spez_submissions = []
for submission in r.redditor('spez').submissions.top('all',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
spez_submissions.append(d)
###Output
_____no_output_____
###Markdown
Again we can turn this list of dictionaries into a DataFrame to do substantive data analysis.
###Code
pd.DataFrame(spez_submissions).head()
###Output
_____no_output_____
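###Markdown
As one example of such analysis, we can count how these top submissions are spread across years. A small sketch (`spez_df` is just a scratch name):
###Code
# How many of u/spez's top submissions were made in each year?
spez_df = pd.DataFrame(spez_submissions)
spez_df['date'].dt.year.value_counts().sort_index()
###Output
_____no_output_____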
###Markdown
We can also pull the comments made by a redditor; here we grab u/spez's top 25 comments of all time.
###Code
spez_comments = []
for comment in r.redditor('spez').comments.top('all',limit=25):
d = {}
d['id'] = comment.id
d['body'] = comment.body
#d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
d['author'] = comment.author.name
spez_comments.append(d)
pd.DataFrame(spez_comments).head()
###Output
_____no_output_____
###Markdown
This user's top comments are concentrated in the /r/announcements subreddit.
###Code
pd.DataFrame(spez_comments)['subreddit'].value_counts()
###Output
_____no_output_____
###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a five-session mini-course at the University of Colorado (Spring 2019 ITSS Mini-Course); the github repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019). The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/), [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). They have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a recent version of Python (3.7), which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources, including [Allison Morgan](https://allisonmorgan.github.io/)'s materials for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Python LibrariesWe'll need a few common libraries for all these examples. (If one of these isn't installed, add it to your `requirements.txt` file and then rebuild your docker image.)
###Code
# Lets us talk to other servers on the web
import requests
# APIs spit out data in JSON
import json
# Use BeautifulSoup to parse some HTML
from bs4 import BeautifulSoup
# Handling dates and times
from datetime import datetime
# DataFrames!
import pandas as pd
# Data visualization
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
# operating system commands
import os
###Output
_____no_output_____
###Markdown
Scraping WikipediaConsider the Wikipedia page for [George H.W. Bush](https://en.wikipedia.org/wiki/George_H._W._Bush). This seems like a relatively straightforward webpage to scrape out the hyperlinks to other articles or to compare the content to other presidential biographies. However, Wikipedia also preserves the [history of every revision made to this article](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&action=history) going back to the first (available) revisions in 2001, like [this](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&oldid=345784898). Thinking back to the Oscars example, it seems promising to find the "oldid" values and visit each revision's webpage to parse the content out. However, Wikipedia will give you much of this revision history data for free through its [application programming interface](http://en.wikipedia.org/w/api.php) (API). Current contentWe can use `requests` to get the current HTML markup of an article from the API, for example.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to parse the content of a page
query_params['action'] = 'parse'
# Which page?
query_params['page'] = 'George H. W. Bush'
# We want the text
query_params['prop'] = 'text'
# Ignore the edit buttons and table of contents
query_params['disableeditsection'] = 1
query_params['disabletoc'] = 1
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
We have only set up our request to the API, but not sent it or received the data back.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
What's waiting inside? A dictionary of dictionaries. The inner dictionary has keys for the title of the page we requested ("George H. W. Bush"), the pageid (a numeric identifier), and the text of the article.
###Code
json_response['parse'].keys()
###Output
_____no_output_____
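###Markdown
For example, the title and numeric page id are available directly. A quick sketch:
###Code
# Pull out the title and page id alongside the parsed text
json_response['parse']['title'], json_response['parse']['pageid']
###Output
_____no_output_____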
###Markdown
We could count the number of links in the article.
###Code
ghwb_soup = BeautifulSoup(json_response['parse']['text'])
ghwb_soup.find_all('a')[:5]
###Output
_____no_output_____
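###Markdown
The cell above only previews the first few anchor tags; to actually count the links, take the length of the full result list. A quick sketch:
###Code
# Count all hyperlinks in the parsed article
len(ghwb_soup.find_all('a'))
###Output
_____no_output_____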
###Markdown
Or the content of the article.
###Code
ghwb_soup.find_all('p')[:5]
###Output
_____no_output_____
###Markdown
Revision historyThere is also an API endpoint for the revision history of this article that contains metadata about the who and when of previous changes.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to query properties of a page
query_params['action'] = 'query'
# Which page?
query_params['titles'] = 'George H. W. Bush'
# We want the revisions
query_params['prop'] = 'revisions'
# In particular, we want the revision ids, users, comments, timestamps
query_params['rvprop'] = 'ids|userid|comment|timestamp|user|size|sha1'
# Get 500 revisions
query_params['rvlimit'] = 500
# Start old and go newer
query_params['rvdir'] = 'newer'
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
Make the request.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
Inspect this `json_response`. It is a dictionary with both "continue" and "query" keys. The "continue" key indicates there are more than 500 revisions in the article's history and provides an index for the next query to pick up from. The "query" key contains the revision history we care about, buried a bit in a nested structure of lists and dictionaries; we eventually reach the "revisions" list of dictionaries with the revision histories.
###Code
revisions = json_response['query']['pages'][0]['revisions']
revisions[:3]
###Output
_____no_output_____
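###Markdown
The "continue" token mentioned above can be fed back into the next request to page through more of the history. A minimal sketch, capped at a few extra pages to stay polite to the API (the cap and the `all_revisions` name are arbitrary):
###Code
# Follow MediaWiki continuation: merge the returned "continue" values into the next query
all_revisions = list(revisions)
continuation = json_response.get('continue')
pages_fetched = 0
while continuation is not None and pages_fetched < 3:   # arbitrary cap on extra pages
    next_params = dict(query_params, **continuation)    # adds rvcontinue (and friends) to the query
    next_response = requests.get(url=query_url, params=next_params).json()
    all_revisions += next_response['query']['pages'][0]['revisions']
    continuation = next_response.get('continue')
    pages_fetched += 1
len(all_revisions)
###Output
_____no_output_____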
###Markdown
Convert to a DataFrame.
###Code
rev_df = pd.DataFrame(revisions)
rev_df.head()
###Output
_____no_output_____
###Markdown
Plot out how the size of the article changed over the first 500 revisions.
###Code
ax = rev_df.plot(y='size',legend=False)
ax.set_ylabel('Size (bytes)')
ax.set_xlabel('Revision')
ax.set_xlim((0,500))
###Output
_____no_output_____
###Markdown
Or count how many times an editor made a contribution.
###Code
rev_df['user'].value_counts().head()
###Output
_____no_output_____
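###Markdown
Since we also requested timestamps, we can check how much calendar time those first 500 revisions span. A small sketch:
###Code
# How much time do these first 500 revisions cover?
timestamps = pd.to_datetime(rev_df['timestamp'])
timestamps.max() - timestamps.min()
###Output
_____no_output_____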
###Markdown
There are many other parts of the very powerful Wikipedia API, and querying these endpoints exposes much more metadata than parsing the HTML of the webpages, while also being easier on the servers hosting them. I will share a notebook that has functions for retrieving and parsing content, revisions, pageviews, and other information. Scraping RedditReddit also hosts a lot of detailed behavioral data that could be of interest to social scientists. As was the case with Wikipedia, our naïve inclination may be to develop scrapers and parsers to extract this information, but Reddit will give much of it to you for free through their API!You can retrieve a few different types of entities from Reddit's API: sub-reddits, submissions, comments, and redditors. Many of these are interoperable: a sub-reddit contains submissions contributed by redditors, with comments from other redditors.We will use a wrapper library called the [Python Reddit API Wrapper](https://praw.readthedocs.io/en/latest/), or `praw`, to communicate with the Reddit API. Once it is installed, we can import `praw`.
###Code
import praw
###Output
_____no_output_____
###Markdown
We then need to authenticate with Reddit to get access to the API. Typically you can just enter the client ID, client secret, password, username, *etc*. as strings. 1. You will need to create an account on Reddit. After you have created an account and logged in, go to https://www.reddit.com/prefs/apps/. 2. Scroll down and click the "create app" button at the bottom. Provide a basic name, description, and enter a URL for your homepage (or just use http://www.ucla.edu). 3. You will need the client ID (the string of characters beneath the name of your app), the secret (the other string of characters), and your username and password. 4. I had to change to a script app to get this to work. 5. You can make up a user-agent string, but include your username as good practice so the sysadmins can track you down if you break things.![Image from Cody Buntain](http://www.cs.umd.edu/~cbuntain/inst728e/reddit_screens/1-003a.png)You'll create an API connector object (`r`) below that will authenticate with the API and handle making the requests.
###Code
# r = praw.Reddit(client_id='your application id',
# client_secret='your application secret',
# password='your account password',
# user_agent='scraping script by /u/youraccountname',
# username='your account name')
###Output
_____no_output_____
###Markdown
You can confirm that this authentication process worked by making a simple request like printing your username.
###Code
#print(r.user.me())
###Output
_____no_output_____
###Markdown
I'm going to read my credentials in from a local file ("reddit_login.json") so that I can post this notebook on the internet in the future without compromising my account security. This won't work for you, so just skip this step.
###Code
# Load my credentials from a local disk so I don't show the world
with open('reddit_login.json','r') as f:
r_creds = json.load(f)
# Create an authenticated reddit instance using the creds
r = praw.Reddit(client_id = r_creds['client_id'],
client_secret = r_creds['client_secret'],
password = r_creds['password'],
user_agent = r_creds['user_agent'],
username = r_creds['username'])
# Make sure your reddit instance works
print(r.user.me())
###Output
_____no_output_____
###Markdown
Sub-redditsNow print the top 25 stories in /r/news.[Documentation for the Subreddit model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html). Create a `news_subreddit` object to store the various attributes about this sub-reddit.
###Code
news_subreddit = r.subreddit('news')
###Output
_____no_output_____
###Markdown
The `news_subreddit` has a number of attributes and methods you can call on it. The time the sub-reddit was founded.
###Code
news_subreddit.created_utc
###Output
_____no_output_____
###Markdown
That's formatted as a UNIX timestamp (seconds since 1 January 1970), but we can convert it into a more readable datetime with `datetime`'s `utcfromtimestamp`.
###Code
print(datetime.utcfromtimestamp(news_subreddit.created_utc))
###Output
2008-01-25 06:49:25
###Markdown
There are other attributes such as the number of subscribers, current active users, as well as the description of the sub-reddit.
###Code
'{0:,}'.format(news_subreddit.subscribers)
news_subreddit.over18
news_subreddit.active_user_count
print(news_subreddit.description)
###Output
>* **[/r/inthenews](/r/inthenews?hl)**
>#
>* **[/r/worldnews](/r/worldnews?hl)**
>#
>* **[/r/politics](/r/politics?hl)**
>#
>* **[new comments](/r/news/comments?hl)**
1. **[Post all analysis/opinion/politics articles to /r/InTheNews](/r/InTheNews)**
> [](http://goo.gl/R6as4?ri)
> [](http://goo.gl/gBldE?ri)
> [](http://goo.gl/u5EZN?ri)
> [](http://goo.gl/exK8j?ri)
> [](http://www.reddit.com/r/news?ri)
> [](http://www.reddit.com/r/restorethefourth?ri)
Want to talk?
Follow [@rslashnews on Twitter](https://twitter.com/rslashnews)
See a post that violates the rules below? Had your post stuck in the spam filter? Have a question about policy? Just want to give feedback? [Send the mod team a message](http://www.reddit.com/message/compose?to=%2Fr%2Fnews).
---
Submit all self- & meta-posts to /r/inthenews
Your post will likely be removed if it:
- is not news
- is an opinion/analysis or advocacy piece.
- primarily concerns politics.
- has a title not taken from the article.
- has a pay wall or steals content.
- covers an already-submitted story.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/), especially regarding personal info.
Your comment will likely be removed if it:
- advocates or celebrates the death of another person
- is racist, sexist, vitriolic, or overly crude.
- is unnecessarily rude or provocative.
- is a cheap and distracting joke or meme.
- is responding to spam.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/).
Extreme or repeat offenders will be banned.
**\>\>\>[Expanded Rules](https://www.reddit.com/r/news/about/rules/)<<<**
---
If your post doesn't fit, consider [finding an appropriate news article on that story](http://www.reddit.com/r/news/wiki/recommendedsources) to submit instead, or submitting yours to lower moderation subreddits:
[/r/inthenews](/r/inthenews) - all news-related content
[/r/AnythingGoesNews](/r/AnythingGoesNews) - unrestricted news
[/r/truereddit](/r/truereddit) - insightful articles
/r/self - any self-post
/r/misc, /r/redditdotcom - anything
or other news subreddits:
[/r/worldnews](/r/worldnews) - from outside the USA only
[/r/SyrianCivilWar](/r/syriancivilwar) - about the conflict in Syria
[/r/MidEastRegionalWar](/r/mideastregionalwar) - on MidEast conflict
[/r/UpliftingNews](/r/upliftingnews) - uplifting
[/r/SavedYouAClick](/r/savedyouaclick) - making media more straightforward
or subreddits for other topics:
[/r/FoodForThought](/r/FoodForThought) - discussion-worthy long form articles about interesting subjects
[/r/politics](/r/politics) - for shouting about politics
[/r/moderatepolitics](/r/ModeratePolitics) - less shouting
[/r/politicaldiscussion](/r/PoliticalDiscussion) - even less shouting
[/r/geopolitics](/r/geopolitics) - intl. politics and geography
[/r/entertainment](/r/entertainment) - Justin Bieber updates, etc.
or check out the [200 most active subreddits, categorized by content](http://redd.it/1f7hqc) and the [full list of subreddits by subscribers](http://redditmetrics.com/top).
---
Recommendations:
/r/full_news
/r/qualitynews
/r/neutralnews
/r/worldevents
---
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
[submit news article](http://www.reddit.com/r/news/submit)
[submit something else](http://www.reddit.com/r/misc/submit)
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
###Markdown
The rules of the sub-reddit are available through the `.rules()` method, which returns a dictionary whose "rules" key holds a list of dictionaries, one per rule.
###Code
news_subreddit.rules()['rules']
###Output
_____no_output_____
###Markdown
When were each of these rules created? Loop through each of the rules and print the "short_name" of the rule and the rule timestamp.
###Code
for rule in news_subreddit.rules()['rules']:
created = rule['created_utc']
print(rule['short_name'], datetime.utcfromtimestamp(created))
###Output
Not news 2016-01-26 06:24:11
Opinion/analysis or advocacy piece 2016-01-26 06:27:59
Politics 2016-01-26 06:31:33
Title not from article/editorialized title 2016-01-26 06:35:51
Paywall or is blogspam/steals content 2016-01-26 06:40:33
Covers an already-submitted story 2016-01-26 06:44:40
Racist, sexist, vitriolic, or overly crude 2016-01-26 06:47:09
Unnecessarily rude or provocative 2016-01-26 06:49:35
Cheap or distracting joke or meme 2016-01-26 06:51:12
Breaks sitewide rules, witchhunting 2016-01-26 06:56:47
###Markdown
We can also get a list of the moderators for this subreddit.
###Code
mod_list = []
for mod in news_subreddit.moderator():
mod_list.append(mod.name)
mod_list
###Output
_____no_output_____
###Markdown
SubmissionsWe can get a list of submissions to a sub-reddit using [a few different methods](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html).* `.controversial()`* `.hot()`* `.new()`* `.rising()`* `.search()`* `.top()`Here we will use the `.top()` method to get the top 25 submissions on the /r/news subreddit from the past 12 months.[Documentation for the Submission model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html).
###Code
top25_news = r.subreddit('news').top('year',limit=25)
###Output
_____no_output_____
###Markdown
`top25_news` is a `ListingGenerator` object, which is a special [generator](https://www.dataquest.io/blog/python-generators-tutorial/) class defined by PRAW. It does not actually go out and get the data at this stage. There's not much you can do to look inside this `ListingGenerator` other than loop through it and perform operations. In this case, let's add each submission to a list called `top25_submissions`.
###Code
top25_submissions = []
for submission in r.subreddit('news').top('year',limit=25):
top25_submissions.append(submission)
###Output
_____no_output_____
###Markdown
We can inspect the first (top) `Submission` object.
###Code
first_submission = top25_submissions[0]
first_submission
###Output
_____no_output_____
###Markdown
Use the `dir` function to see the other methods and attributes inside this first top `Submission` object. (There are a lot of other "hidden" attributes and methods that use the "\_" which we can ignore with this list comprehension.)
###Code
[i for i in dir(first_submission) if '_' not in i]
###Output
_____no_output_____
###Markdown
`vars` may be even more helpful.
###Code
vars(first_submission)
###Output
_____no_output_____
###Markdown
We can extract the features of each submission, store them in a dictionary, and save to an external list. This step will take a while (approximately one second per submission) because we make an API call for each submission in the `ListingGenerator` returned by the `r.subreddit('news').top('year',limit=25)` we're looping through.
###Code
submission_stats = []
for submission in r.subreddit('news').top('year',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
submission_stats.append(d)
###Output
_____no_output_____
###Markdown
We can turn `submission_stats` into a pandas DataFrame.
###Code
top25_df = pd.DataFrame(submission_stats)
top25_df.head()
###Output
_____no_output_____
###Markdown
Plot out the relationship between score and number of comments.
###Code
ax = top25_df.plot.scatter(x='score',y='num_comments',s=50,c='k',alpha=.5)
ax.set_xlim((0,200000))
ax.set_ylim((0,16000))
###Output
_____no_output_____
###Markdown
CommentsThis is a simple Reddit submission: [What is a dataset that you can't believe is available to the public?](https://www.reddit.com/r/datasets/comments/akb4mr/what_is_a_dataset_that_you_cant_believe_is/). We can inspect the comments in this simple submission.[Documentation for Comment model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/comment.html).
###Code
cant_believe = r.submission(id='akb4mr')
print("This submission was made on {0}.".format(datetime.utcfromtimestamp(cant_believe.created_utc)))
print("There are {0:,} comments.".format(cant_believe.num_comments))
###Output
This submission was made on 2019-01-27 10:59:04.
There are 37 comments.
###Markdown
We can inspect these comments, working from the [Comment Extraction and Parsing](https://praw.readthedocs.io/en/latest/tutorials/comments.html) tutorial in PRAW.
###Code
cant_believe.comments.replace_more(limit=None)
for comment in cant_believe.comments.list():
print(comment.body)
###Output
State voter files. You can see whether every single registered voter voted in a given election. There’s actually a political science [paper](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=gerber+turnout+pressure&btnG=#d=gs_qabs&u=%23p%3D0F49X22wGIcJ) where the researchers threatened to send letters to all your neighbors with information about which people in the neighborhood had voted to see if it would increase turnout. It did.
First and last name of every US person who has renounced their citizenship each quarter: [https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate](https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate)
The SEC published "Apache log files that record and store user access statistics for the SEC.gov website" for 2003 through 2017: https://www.sec.gov/dera/data/edgar-log-file-data-set.html
[Enron email dataset](https://www.cs.cmu.edu/~./enron/)
ICIJ Offshore Leaks Database
[https://offshoreleaks.icij.org/pages/database](https://offshoreleaks.icij.org/pages/database)
If open directories count, 'Pemiblanc' seems pretty alarming.
[https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/](https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/)
You can scrape searches from the Baidu chinese search engine. You can forexample count the number of times trade war was searched.
The NPI database lists personal information about every single physician in the United States. Not only is it public, doctors are required to keep their information updated in the database.
Although not a true dataset, the Department of Energy has pretty detailed data (mostly in PDFs) showing exactly where and how uranium moves from mine sites, to processing facilities, to reactors. Down to the ships and trucks used, the routes taken, etc. it’s really cool info. You can get a detailed image of how the global nuclear industry operates.
Social Security Death Master File
There is a website that pretty much lists everyone's former addresses, phone numbers, and even leaves vague trashy (libel) warnings like 'criminal record, sex offender registered or past debts on record, pay 30 dollars to see it'. So if you were late on a single payment, like pretty much anyone, you're a pedo, and here are your addresses and phone numbers, and every one of your family members as well.
States that have government transparency websites. You can literally find out the salaries of K-12 teachers, high school teachers, government workers, and even professors.
Vaccine Adverse Event Reporting data - unverified, self-reported data available to the public from the CDC. What could go wrong?
[https://wonder.cdc.gov/VAERS.html](https://wonder.cdc.gov/VAERS.html)
[removed]
Pipl
that is an excellent response i'm surprised that's legal. I guess the purpose is so that people can check that their vote was counted and/or see if someone used their name to vote for them if they didn't vote?
How data is being applied in politics on an international basis today.
[The Rise of the Weaponized AI Propaganda Machine](https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b)
According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.”
But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging.
Do you have an example of such dataset?
Are there docs on how to do this?
Sounds like google trends right?
Like someone else pointed out, this is like Google trends. What would set this way apart was if you could somehow find out who searched what and when like by their IP or something
Do you have a source that does not require signing up for an account such as ancestry?
What is link?
For my own playground I'm scraping RateMyProfessor to determine very loosely whether my professors salaries are lining up with their rate my professor scores (quality) and for fun, their address using county GIS data to create a heatmap of Rate My Professors scores. Won't ever see the light of day, more of a "welcome to a real world Tableau project".
Mostly it’s legal because political campaigns want the data. They use it to target voters they think are likely to vote for them. Sort of a conflict of interest, right?
It’s even creepier when you realize there are companies like Catalist that link voter files with consumer datasets. High level political campaigns know what sorts of cars you drive, etc.
Fortunately most of this stuff is completely useless for political campaigns. Although they have access to lots of data, the actual models they end up using don’t include any of it. Cambridge Analytica is mostly [bullshit](https://www.washingtonpost.com/news/monkey-cage/wp/2018/03/23/four-and-a-half-reasons-not-to-worry-that-cambridge-analytica-skewed-the-2016-election/?noredirect=on&utm_term=.ca1fa0ce6a83) marketing.
You have to either request them from the state or purchase access to a national voter file. Requirements are different for each state. The CA voter file is free, you can request it with [this](https://www.co.siskiyou.ca.us/sites/default/files/CLK-20180223_CaliforniaVoterRegistrationFileRequest.pdf) form. Other states charge anywhere from $50 to several hundred dollars.
National voter files maintained by companies like Catalist can cost as much as 30k per year 😳
I never talked about who searched what, just counting the searches for a topic. Which in my mind is suprising given the propensity for the Chinese government to want to put away certain types of information. Hence my comment that the information could be modified and therefore to take with a grain of salt. So yea, its like someone else said, similar to google trends (but also good luck finding the link if you dont read chinese or know very precisely what it is called, so your welcome :)
[https://www.reddit.com/r/bigquery/comments/76e3o3/public\_dataset\_social\_security\_death\_master\_file/](https://www.reddit.com/r/bigquery/comments/76e3o3/public_dataset_social_security_death_master_file/)
mylife.com and/or peekyou
It's so many things wrong with it. It's like the ultimate stalking tool. It even scrapes photos of people as well, it's got about everything on the US population.
Which I think is an interesting side issue -- publicly available datasets that are really only useful for particular interested parties.
Maybe, seems like many elections around the world are going toward hardline conservatives.
Ah, this is on me for thinking public == free. Thanks for the reply, and the link.
Thanks, got message "unable to find dataset" when I clicked on Big Query link.
Yeah, I’m a political scientist that uses this data occasionally so I won’t complain about it too much!
Some are definitely free. CA, UT and NC are free off the top of my head, but there are others.
###Markdown
Each comment has a lot of metadata we can preserve.
###Code
cant_believe_comment_metadata = []
for comment in cant_believe.comments.list():
print(comment)
if not comment.collapsed: # Skip collapsed/deleted comments
d = {}
d['id'] = comment.id
d['parent_id'] = comment.parent_id
d['body'] = comment.body
d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
#d['author'] = comment.author.name
cant_believe_comment_metadata.append(d)
###Output
ef3kluj
ef3cp0s
ef4iy3r
ef4fe4q
ef4ykjj
ef3glgq
ef50l6m
ef5h6gw
ef4y2h3
ef3k0s5
ef56bbg
ef7k3iz
ef4xb43
ei8dcty
ef414ov
ef57qap
ef54a1l
ef5g6lv
ef5satl
efbc7zt
ef57xrj
ef3o2n4
ej3basg
ef496rc
ef5c2qk
ef5bu9y
efbhq6o
egbxvft
ef595hr
ef4a3xl
ef5dnjk
ef5c2gp
egbzydh
ef4a7qm
ef5cbig
###Markdown
Convert to a DataFrame.
###Code
cant_believe_df = pd.DataFrame(cant_believe_comment_metadata)
# How long is the comment
cant_believe_df['comment_length'] = cant_believe_df['body'].str.len()
cant_believe_df.head()
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have lower scores?
###Code
sb.catplot(x='depth',y='score',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have shorter lengths?
###Code
sb.catplot(x='depth',y='comment_length',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
###Markdown
RedditorsA Redditor is a user and we can get meta-data about the account as well as the history of the user's comments and submissions from the API.[Documentation for the Redditor model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/redditor.html).How much link and comment karma does this user have?
###Code
spez = r.redditor('spez')
print("Link karma: {0:,}".format(spez.link_karma))
print("Comment karma: {0:,}".format(spez.comment_karma))
###Output
Link karma: 114,865
Comment karma: 691,729
###Markdown
Interestingly, Reddit flags whether a user is a Reddit employee and whether the account has a verified email address.
###Code
spez.is_employee
spez.has_verified_email
###Output
_____no_output_____
###Markdown
We can also get the time this user's account was created.
###Code
datetime.utcfromtimestamp(spez.created_utc)
###Output
_____no_output_____
###Markdown
We can also get information about individual redditors' submissions and comment histories. Here we will use u/spez (the CEO of Reddit), get his top-voted submissions, and loop through them to get the data for each submission.
###Code
spez_submissions = []
for submission in r.redditor('spez').submissions.top('all',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
spez_submissions.append(d)
###Output
_____no_output_____
###Markdown
Again we can turn this list of dictionaries into a DataFrame to do substantive data analysis.
###Code
pd.DataFrame(spez_submissions).head()
###Output
_____no_output_____
###Markdown
We can also get the comments made by a redditor.
###Code
spez_comments = []
for comment in r.redditor('spez').comments.top('all',limit=25):
d = {}
d['id'] = comment.id
d['body'] = comment.body
#d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
d['author'] = comment.author.name
spez_comments.append(d)
pd.DataFrame(spez_comments).head()
###Output
_____no_output_____
###Markdown
This user's top comments are mostly focused in the /r/announcements subreddit.
###Code
pd.DataFrame(spez_comments)['subreddit'].value_counts()
###Output
_____no_output_____
###Markdown
Web Data Scraping AcknowledgementsThese notebooks are adaptations from a five-session mini-course at the University of Colorado [Spring 2019 ITSS Mini-Course]; the GitHub repo can be found [here](https://github.com/CU-ITSS/Web-Data-Scraping-S2019). The course is taught by [Brian C. Keegan, Ph.D.](http://brianckeegan.com/), [Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan). The notebooks have been adapted for relevant content and integration with Docker so that we all have the same environment. Professor Keegan suggests using a recent version of Python (3.7), which is set in the `requirements.txt` file.The Spring ITSS Mini-Course was adapted from a number of sources, including materials by [Allison Morgan](https://allisonmorgan.github.io/) for the [2018 Summer Institute for Computational Social Science](https://github.com/allisonmorgan/sicss_boulder), which were in turn derived from [other resources](https://github.com/simonmunzert/web-scraping-with-r-extended-edition) developed by [Simon Munzert](http://simonmunzert.github.io/) and [Chris Bail](http://www.chrisbail.net/). This notebook is adapted from excellent notebooks in Dr. [Cody Buntain](http://cody.bunta.in/)'s seminar on [Social Media and Crisis Informatics](http://cody.bunta.in/teaching/2018_winter_umd_inst728e/) as well as the [PRAW documentation](https://praw.readthedocs.io/en/latest/). Python LibrariesWe'll need a few common libraries for all these examples. (If one of these doesn't exist, put it in your `requirements.txt` file and then rebuild your Docker image.)
###Code
# Lets us talk to other servers on the web
import requests
# APIs spit out data in JSON
import json
# Use BeautifulSoup to parse some HTML
from bs4 import BeautifulSoup
# Handling dates and times
from datetime import datetime
# DataFrames!
import pandas as pd
# Data visualization
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
# operating system commands
import os
###Output
_____no_output_____
###Markdown
Scraping WikipediaConsider the Wikipedia page for [George H.W. Bush](https://en.wikipedia.org/wiki/George_H._W._Bush). This seems like a relatively straightforward webpage to scrape out the hyperlinks to other articles or to compare the content to other presidential biographies. However, Wikipedia also preserves the [history of every revision made to this article](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&action=history) going back to the first (available) revisions in 2001, like [this](https://en.wikipedia.org/w/index.php?title=George_H._W._Bush&oldid=345784898). Thinking back to the Oscars example, it seems promising to find the "oldid" values and visit each revision's webpage to parse the content out. However, Wikipedia will give you much of this revision history data for free through its [application programming interface](http://en.wikipedia.org/w/api.php) (API). Current contentWe can use `requests` to get the current HTML markup of an article from the API, for example.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to parse the content of a page
query_params['action'] = 'parse'
# Which page?
query_params['page'] = 'George H. W. Bush'
# We want the text
query_params['prop'] = 'text'
# Ignore the edit buttons and table of contents
query_params['disableeditsection'] = 1
query_params['disabletoc'] = 1
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
We have only set up our request to the API, but not sent it or received the data back.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
What's waiting inside? A dictionary of dictionaries. The inner dictionary has keys for the title of the page we requested ("George H. W. Bush"), the pageid (a numeric identifier), and the text of the article.
###Code
json_response['parse'].keys()
###Output
_____no_output_____
###Markdown
We could count the number of links in the article.
###Code
ghwb_soup = BeautifulSoup(json_response['parse']['text'])
ghwb_soup.find_all('a')[:5]
###Output
_____no_output_____
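###Markdown
The cell above only previews the first five anchor tags; to actually count the links, we can take the length of the full result from the `ghwb_soup` object parsed above.
###Code
# Count every anchor tag in the parsed article
len(ghwb_soup.find_all('a'))
###Output
_____no_output_____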
###Markdown
Or the content of the article.
###Code
ghwb_soup.find_all('p')[:5]
###Output
_____no_output_____
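###Markdown
If we want the readable text rather than the markup, we can strip the tags from those paragraphs (a small sketch using the same `ghwb_soup` object).
###Code
# Join the visible text of the first few paragraphs into a single string
' '.join(p.get_text() for p in ghwb_soup.find_all('p')[:5])
###Output
_____no_output_____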
###Markdown
Revision historyThere is also an API endpoint for the revision history of this article that contains metadata about the who and when of previous changes.
###Code
# Where the API server lives
query_url = "https://en.wikipedia.org/w/api.php"
# An empty dictionary to store our query parameters
query_params = {}
# We want to query properties of a page
query_params['action'] = 'query'
# Which page?
query_params['titles'] = 'George H. W. Bush'
# We want the revisions
query_params['prop'] = 'revisions'
# In particular, we want the revision ids, users, comments, timestamps
query_params['rvprop'] = 'ids|userid|comment|timestamp|user|size|sha1'
# Get 500 revisions
query_params['rvlimit'] = 500
# Start old and go newer
query_params['rvdir'] = 'newer'
# Get the results back as JSON
query_params['format'] = 'json'
# Format the data in an easier-to-parse option
query_params['formatversion'] = 2
###Output
_____no_output_____
###Markdown
Make the request.
###Code
json_response = requests.get(url = query_url, params = query_params).json()
###Output
_____no_output_____
###Markdown
Inspect this `json_response`. This returns a dictionary with both "continue" and "query" keys. The continue indicates there are more than 500 revisions present in the article's history and provides an index for the next query to pick up from. The query contains the revision history we care about—buried a bit in a nested data structure of lists and dictionaries, but we eventually get to the "revisions" list of dictionaries with the revision histories.
###Code
revisions = json_response['query']['pages'][0]['revisions']
revisions[:3]
###Output
_____no_output_____
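###Markdown
Because the response carries a "continue" key, we can page through the rest of the history by merging those continuation tokens into a follow-up request. Below is a minimal sketch of that pattern; it assumes the `query_url` and `query_params` defined earlier are still in scope.
###Code
# Merge the continuation tokens (e.g. 'rvcontinue') into the next request's parameters
next_params = dict(query_params)
next_params.update(json_response['continue'])
next_response = requests.get(url = query_url, params = next_params).json()
next_revisions = next_response['query']['pages'][0]['revisions']
###Output
_____no_output_____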
###Markdown
Convert to a DataFrame.
###Code
rev_df = pd.DataFrame(revisions)
rev_df.head()
###Output
_____no_output_____
###Markdown
Plot out how the size of the article changed over the first 500 revisions.
###Code
ax = rev_df.plot(y='size',legend=False)
ax.set_ylabel('Size (bytes)')
ax.set_xlabel('Revision')
ax.set_xlim((0,500))
###Output
_____no_output_____
###Markdown
Or count how many times an editor made a contribution.
###Code
rev_df['user'].value_counts().head()
###Output
_____no_output_____
###Markdown
There are many other parts of the very powerful Wikipedia API, and querying these endpoints exposes much more metadata than parsing the HTML of the webpages, while also being easier on the servers hosting them. I will share a notebook that has functions for retrieving and parsing content, revisions, pageviews, and other information. Scraping RedditReddit also hosts a lot of detailed behavioral data that could be of interest to social scientists. As was the case with Wikipedia, our naïve inclination may be to develop scrapers and parsers to extract this information, but Reddit will give much of it to you for free through its API!You can retrieve a few different types of entities from Reddit's API: sub-reddits, submissions, comments, and redditors. Many of these are interoperable: a sub-reddit contains submissions contributed by redditors with comments from other redditors.We will use a wrapper library to communicate with the Reddit API called the [Python Reddit API Wrapper](https://praw.readthedocs.io/en/latest/), or `praw`, which we import below.
###Code
import praw
###Output
_____no_output_____
###Markdown
We then need to authenticate with Reddit to get access to the API. Typically you can just enter the client ID, client secret, password, username, *etc*. as strings. 1. You will need to create an account on Reddit. After you have created an account and logged in, go to https://www.reddit.com/prefs/apps/. 2. Scroll down and click the "create app" button at the bottom. Provide a basic name, description, and enter a URL for your homepage (or just use http://www.ucla.edu).3. You will need the client ID (the string of characters beneath the name of your app) as well as the secret (the other string of characters) as well as your username and password.4. I had to change to a script app to get this to work. 5. You can make up a user-agent string, but include your username as good practice for the sysadmins to track you down if you break things.![Image from Cody Buntain](http://www.cs.umd.edu/~cbuntain/inst728e/reddit_screens/1-003a.png)You'll create an API connector object (`r`) below that will authenticate with the API and handle making the requests.
###Code
# r = praw.Reddit(client_id='your application id',
#                 client_secret='your application secret',
#                 password='your account password',
#                 user_agent='scraping script by /u/youraccountname',
#                 username='your account name')
###Output
_____no_output_____
###Markdown
You can confirm that this authentication process worked by making a simple request like printing your username.
###Code
#print(r.user.me())
###Output
langholz-stat
###Markdown
I'm going to read my credentials in from a local file ("reddit_login.json") so that I can post this notebook on the internet in the future without compromising my account security. This won't work for you, so just skip this step.
###Code
# Load my credentials from a local disk so I don't show the world
with open('reddit_login.json','r') as f:
r_creds = json.load(f)
# Create an authenticated reddit instance using the creds
r = praw.Reddit(client_id = r_creds['client_id'],
client_secret = r_creds['client_secret'],
password = r_creds['password'],
user_agent = r_creds['user_agent'],
username = r_creds['username'])
# Make sure your reddit instance works
print(r.user.me())
###Output
langholz-stat
###Markdown
Sub-redditsNow print the top 25 stories in /r/news.[Documentation for the Subreddit model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html). Create a `news_subreddit` object to store the various attributes about this sub-reddit.
###Code
news_subreddit = r.subreddit('news')
###Output
_____no_output_____
###Markdown
The `news_subreddit` has a number of attributes and methods you can call on it. The time the sub-reddit was founded.
###Code
news_subreddit.created_utc
###Output
_____no_output_____
###Markdown
That's formatted as a UNIX timestamp (seconds since 1 January 1970), but we can convert it into a more readable datetime with `datetime`'s `utcfromtimestamp`.
###Code
print(datetime.utcfromtimestamp(news_subreddit.created_utc))
###Output
2008-01-25 06:49:25
###Markdown
There are other attributes such as the number of subscribers, current active users, as well as the description of the sub-reddit.
###Code
'{0:,}'.format(news_subreddit.subscribers)
news_subreddit.over18
news_subreddit.active_user_count
print(news_subreddit.description)
###Output
>* **[/r/inthenews](/r/inthenews?hl)**
>#
>* **[/r/worldnews](/r/worldnews?hl)**
>#
>* **[/r/politics](/r/politics?hl)**
>#
>* **[new comments](/r/news/comments?hl)**
1. **[Post all analysis/opinion/politics articles to /r/InTheNews](/r/InTheNews)**
> [](http://goo.gl/R6as4?ri)
> [](http://goo.gl/gBldE?ri)
> [](http://goo.gl/u5EZN?ri)
> [](http://goo.gl/exK8j?ri)
> [](http://www.reddit.com/r/news?ri)
> [](http://www.reddit.com/r/restorethefourth?ri)
Want to talk?
Follow [@rslashnews on Twitter](https://twitter.com/rslashnews)
See a post that violates the rules below? Had your post stuck in the spam filter? Have a question about policy? Just want to give feedback? [Send the mod team a message](http://www.reddit.com/message/compose?to=%2Fr%2Fnews).
---
Submit all self- & meta-posts to /r/inthenews
Your post will likely be removed if it:
- is not news
- is an opinion/analysis or advocacy piece.
- primarily concerns politics.
- has a title not taken from the article.
- has a pay wall or steals content.
- covers an already-submitted story.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/), especially regarding personal info.
Your comment will likely be removed if it:
- advocates or celebrates the death of another person
- is racist, sexist, vitriolic, or overly crude.
- is unnecessarily rude or provocative.
- is a cheap and distracting joke or meme.
- is responding to spam.
- violates [reddit's site-wide rules](http://www.reddit.com/rules/).
Extreme or repeat offenders will be banned.
**\>\>\>[Expanded Rules](https://www.reddit.com/r/news/about/rules/)<<<**
---
If your post doesn't fit, consider [finding an appropriate news article on that story](http://www.reddit.com/r/news/wiki/recommendedsources) to submit instead, or submitting yours to lower moderation subreddits:
[/r/inthenews](/r/inthenews) - all news-related content
[/r/AnythingGoesNews](/r/AnythingGoesNews) - unrestricted news
[/r/truereddit](/r/truereddit) - insightful articles
/r/self - any self-post
/r/misc, /r/redditdotcom - anything
or other news subreddits:
[/r/worldnews](/r/worldnews) - from outside the USA only
[/r/SyrianCivilWar](/r/syriancivilwar) - about the conflict in Syria
[/r/MidEastRegionalWar](/r/mideastregionalwar) - on MidEast conflict
[/r/UpliftingNews](/r/upliftingnews) - uplifting
[/r/SavedYouAClick](/r/savedyouaclick) - making media more straightforward
or subreddits for other topics:
[/r/FoodForThought](/r/FoodForThought) - discussion-worthy long form articles about interesting subjects
[/r/politics](/r/politics) - for shouting about politics
[/r/moderatepolitics](/r/ModeratePolitics) - less shouting
[/r/politicaldiscussion](/r/PoliticalDiscussion) - even less shouting
[/r/geopolitics](/r/geopolitics) - intl. politics and geography
[/r/entertainment](/r/entertainment) - Justin Bieber updates, etc.
or check out the [200 most active subreddits, categorized by content](http://redd.it/1f7hqc) and the [full list of subreddits by subscribers](http://redditmetrics.com/top).
---
Recommendations:
/r/full_news
/r/qualitynews
/r/neutralnews
/r/worldevents
---
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
[submit news article](http://www.reddit.com/r/news/submit)
[submit something else](http://www.reddit.com/r/misc/submit)
[submit analysis/opinion article](http://www.reddit.com/r/inthenews/submit)
###Markdown
The rules of the sub-reddit are available through the `.rules()` method, which returns a dictionary whose "rules" key holds a list of dictionaries, one per rule.
###Code
news_subreddit.rules()['rules']
###Output
_____no_output_____
###Markdown
When were each of these rules created? Loop through each of the rules and print the "short_name" of the rule and the rule timestamp.
###Code
for rule in news_subreddit.rules()['rules']:
created = rule['created_utc']
print(rule['short_name'], datetime.utcfromtimestamp(created))
###Output
Not news 2016-01-26 06:24:11
Opinion/analysis or advocacy piece 2016-01-26 06:27:59
Politics 2016-01-26 06:31:33
Title not from article/editorialized title 2016-01-26 06:35:51
Paywall or is blogspam/steals content 2016-01-26 06:40:33
Covers an already-submitted story 2016-01-26 06:44:40
Racist, sexist, vitriolic, or overly crude 2016-01-26 06:47:09
Unnecessarily rude or provocative 2016-01-26 06:49:35
Cheap or distracting joke or meme 2016-01-26 06:51:12
Breaks sitewide rules, witchhunting 2016-01-26 06:56:47
###Markdown
We can also get a list of the moderators for this subreddit.
###Code
mod_list = []
for mod in news_subreddit.moderator():
mod_list.append(mod.name)
mod_list
###Output
_____no_output_____
###Markdown
SubmissionsWe can get a list of submissions to a sub-reddit using [a few different methods](https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html).* `.controversial()`* `.hot()`* `.new()`* `.rising()`* `.search()`* `.top()`Here we will use the `.top()` method to get the top 25 submissions on the /r/news subreddit from the past 12 months.[Documentation for the Submission model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html).
###Code
top25_news = r.subreddit('news').top('year',limit=25)
###Output
_____no_output_____
###Markdown
`top25_news` is a `ListingGenerator` object, which is a special [generator](https://www.dataquest.io/blog/python-generators-tutorial/) class defined by PRAW. It does not actually go out and get the data at this stage. There's not much you can do to look inside this `ListingGenerator` other than loop through it and perform operations. In this case, let's add each submission to a list called `top25_submissions`.
###Code
top25_submissions = []
for submission in r.subreddit('news').top('year',limit=25):
top25_submissions.append(submission)
###Output
_____no_output_____
###Markdown
We can inspect the first (top) `Submission` object.
###Code
first_submission = top25_submissions[0]
first_submission
###Output
_____no_output_____
###Markdown
Use the `dir` function to see the other methods and attributes inside this first top `Submission` object. (There are a lot of other "hidden" attributes and methods that use the "\_" which we can ignore with this list comprehension.)
###Code
[i for i in dir(first_submission) if '_' not in i]
###Output
_____no_output_____
###Markdown
`vars` may be even more helpful.
###Code
vars(first_submission)
###Output
_____no_output_____
###Markdown
We can extract the features of each submission, store them in a dictionary, and save to an external list. This step will take a while (approximately one second per submission) because we make an API call for each submission in the `ListingGenerator` returned by the `r.subreddit('news').top('year',limit=25)` we're looping through.
###Code
submission_stats = []
for submission in r.subreddit('news').top('year',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
submission_stats.append(d)
###Output
_____no_output_____
###Markdown
We can turn `submission_stats` into a pandas DataFrame.
###Code
top25_df = pd.DataFrame(submission_stats)
top25_df.head()
###Output
_____no_output_____
###Markdown
Plot out the relationship between score and number of comments.
###Code
ax = top25_df.plot.scatter(x='score',y='num_comments',s=50,c='k',alpha=.5)
ax.set_xlim((0,200000))
ax.set_ylim((0,16000))
###Output
_____no_output_____
###Markdown
CommentsThis is a simple Reddit submission: [What is a dataset that you can't believe is available to the public?](https://www.reddit.com/r/datasets/comments/akb4mr/what_is_a_dataset_that_you_cant_believe_is/). We can inspect the comments in this simple submission.[Documentation for Comment model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/comment.html).
###Code
cant_believe = r.submission(id='akb4mr')
print("This submission was made on {0}.".format(datetime.utcfromtimestamp(cant_believe.created_utc)))
print("There are {0:,} comments.".format(cant_believe.num_comments))
###Output
This submission was made on 2019-01-27 10:59:04.
There are 37 comments.
###Markdown
We can inspect these comments, working from the [Comment Extraction and Parsing](https://praw.readthedocs.io/en/latest/tutorials/comments.html) tutorial in PRAW.
###Code
cant_believe.comments.replace_more(limit=None)
for comment in cant_believe.comments.list():
print(comment.body)
###Output
State voter files. You can see whether every single registered voter voted in a given election. There’s actually a political science [paper](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=gerber+turnout+pressure&btnG=#d=gs_qabs&u=%23p%3D0F49X22wGIcJ) where the researchers threatened to send letters to all your neighbors with information about which people in the neighborhood had voted to see if it would increase turnout. It did.
First and last name of every US person who has renounced their citizenship each quarter: [https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate](https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate)
The SEC published "Apache log files that record and store user access statistics for the SEC.gov website" for 2003 through 2017: https://www.sec.gov/dera/data/edgar-log-file-data-set.html
[Enron email dataset](https://www.cs.cmu.edu/~./enron/)
ICIJ Offshore Leaks Database
[https://offshoreleaks.icij.org/pages/database](https://offshoreleaks.icij.org/pages/database)
If open directories count, 'Pemiblanc' seems pretty alarming.
[https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/](https://www.troyhunt.com/the-111-million-pemiblanc-credential-stuffing-list/)
You can scrape searches from the Baidu chinese search engine. You can forexample count the number of times trade war was searched.
The NPI database lists personal information about every single physician in the United States. Not only is it public, doctors are required to keep their information updated in the database.
Although not a true dataset, the Department of Energy has pretty detailed data (mostly in PDFs) showing exactly where and how uranium moves from mine sites, to processing facilities, to reactors. Down to the ships and trucks used, the routes taken, etc. it’s really cool info. You can get a detailed image of how the global nuclear industry operates.
Social Security Death Master File
There is a website that pretty much lists everyone's former addresses, phone numbers, and even leaves vague trashy (libel) warnings like 'criminal record, sex offender registered or past debts on record, pay 30 dollars to see it'. So if you were late on a single payment, like pretty much anyone, you're a pedo, and here are your addresses and phone numbers, and every one of your family members as well.
States that have government transparency websites. You can literally find out the salaries of K-12 teachers, high school teachers, government workers, and even professors.
Vaccine Adverse Event Reporting data - unverified, self-reported data available to the public from the CDC. What could go wrong?
[https://wonder.cdc.gov/VAERS.html](https://wonder.cdc.gov/VAERS.html)
[removed]
Pipl
that is an excellent response i'm surprised that's legal. I guess the purpose is so that people can check that their vote was counted and/or see if someone used their name to vote for them if they didn't vote?
How data is being applied in politics on an international basis today.
[The Rise of the Weaponized AI Propaganda Machine](https://medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine-86dac61668b)
According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.”
But researchers across the technology and media ecosystem who have been following Cambridge Analytica’s political messaging activities have unearthed an expansive, adaptive online network that automates the manipulation of voters at a scale never before seen in political messaging.
Do you have an example of such dataset?
Are there docs on how to do this?
Sounds like google trends right?
Like someone else pointed out, this is like Google trends. What would set this way apart was if you could somehow find out who searched what and when like by their IP or something
Do you have a source that does not require signing up for an account such as ancestry?
What is link?
For my own playground I'm scraping RateMyProfessor to determine very loosely whether my professors salaries are lining up with their rate my professor scores (quality) and for fun, their address using county GIS data to create a heatmap of Rate My Professors scores. Won't ever see the light of day, more of a "welcome to a real world Tableau project".
Mostly it’s legal because political campaigns want the data. They use it to target voters they think are likely to vote for them. Sort of a conflict of interest, right?
It’s even creepier when you realize there are companies like Catalist that link voter files with consumer datasets. High level political campaigns know what sorts of cars you drive, etc.
Fortunately most of this stuff is completely useless for political campaigns. Although they have access to lots of data, the actual models they end up using don’t include any of it. Cambridge Analytica is mostly [bullshit](https://www.washingtonpost.com/news/monkey-cage/wp/2018/03/23/four-and-a-half-reasons-not-to-worry-that-cambridge-analytica-skewed-the-2016-election/?noredirect=on&utm_term=.ca1fa0ce6a83) marketing.
You have to either request them from the state or purchase access to a national voter file. Requirements are different for each state. The CA voter file is free, you can request it with [this](https://www.co.siskiyou.ca.us/sites/default/files/CLK-20180223_CaliforniaVoterRegistrationFileRequest.pdf) form. Other states charge anywhere from $50 to several hundred dollars.
National voter files maintained by companies like Catalist can cost as much as 30k per year 😳
I never talked about who searched what, just counting the searches for a topic. Which in my mind is suprising given the propensity for the Chinese government to want to put away certain types of information. Hence my comment that the information could be modified and therefore to take with a grain of salt. So yea, its like someone else said, similar to google trends (but also good luck finding the link if you dont read chinese or know very precisely what it is called, so your welcome :)
[https://www.reddit.com/r/bigquery/comments/76e3o3/public\_dataset\_social\_security\_death\_master\_file/](https://www.reddit.com/r/bigquery/comments/76e3o3/public_dataset_social_security_death_master_file/)
mylife.com and/or peekyou
It's so many things wrong with it. It's like the ultimate stalking tool. It even scrapes photos of people as well, it's got about everything on the US population.
Which I think is an interesting side issue -- publicly available datasets that are really only useful for particular interested parties.
Maybe, seems like many elections around the world are going toward hardline conservatives.
Ah, this is on me for thinking public == free. Thanks for the reply, and the link.
Thanks, got message "unable to find dataset" when I clicked on Big Query link.
Yeah, I’m a political scientist that uses this data occasionally so I won’t complain about it too much!
Some are definitely free. CA, UT and NC are free off the top of my head, but there are others.
###Markdown
Each comment has a lot of metadata we can preserve.
###Code
cant_believe_comment_metadata = []
for comment in cant_believe.comments.list():
print(comment)
if not comment.collapsed: # Skip collapsed/deleted comments
d = {}
d['id'] = comment.id
d['parent_id'] = comment.parent_id
d['body'] = comment.body
d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
#d['author'] = comment.author.name
cant_believe_comment_metadata.append(d)
###Output
ef3kluj
ef3cp0s
ef4iy3r
ef4fe4q
ef4ykjj
ef3glgq
ef50l6m
ef5h6gw
ef4y2h3
ef3k0s5
ef56bbg
ef7k3iz
ef4xb43
ei8dcty
ef414ov
ef57qap
ef54a1l
ef5g6lv
ef5satl
efbc7zt
ef57xrj
ef3o2n4
ej3basg
ef496rc
ef5c2qk
ef5bu9y
efbhq6o
egbxvft
ef595hr
ef4a3xl
ef5dnjk
ef5c2gp
egbzydh
ef4a7qm
ef5cbig
###Markdown
Convert to a DataFrame.
###Code
cant_believe_df = pd.DataFrame(cant_believe_comment_metadata)
# How long is the comment
cant_believe_df['comment_length'] = cant_believe_df['body'].str.len()
cant_believe_df.head()
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have lower scores?
###Code
sb.catplot(x='depth',y='score',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
###Markdown
Do comments deeper in this comment tree have shorter lengths?
###Code
sb.catplot(x='depth',y='comment_length',data=cant_believe_df,kind='bar',color='lightblue')
###Output
_____no_output_____
###Markdown
RedditorsA Redditor is a user and we can get meta-data about the account as well as the history of the user's comments and submissions from the API.[Documentation for the Redditor model in PRAW](https://praw.readthedocs.io/en/latest/code_overview/models/redditor.html).How much link and comment karma does this user have?
###Code
spez = r.redditor('spez')
print("Link karma: {0:,}".format(spez.link_karma))
print("Comment karma: {0:,}".format(spez.comment_karma))
###Output
Link karma: 114,865
Comment karma: 691,729
###Markdown
Interestingly, Reddit flags whether a user is a Reddit employee and whether the account has a verified email address.
###Code
spez.is_employee
spez.has_verified_email
###Output
_____no_output_____
###Markdown
We can also get the time this user's account was created.
###Code
datetime.utcfromtimestamp(spez.created_utc)
###Output
_____no_output_____
###Markdown
We can also get information about individual redditors' submissions and comment histories. Here we will use u/spez (the CEO of Reddit), get his top-voted submissions, and loop through them to get the data for each submission.
###Code
spez_submissions = []
for submission in r.redditor('spez').submissions.top('all',limit=25):
d = {}
d['id'] = submission.id
d['title'] = submission.title
d['num_comments'] = submission.num_comments
d['score'] = submission.score
d['upvote_ratio'] = submission.upvote_ratio
d['date'] = datetime.utcfromtimestamp(submission.created_utc)
d['domain'] = submission.domain
d['gilded'] = submission.gilded
d['num_crossposts'] = submission.num_crossposts
d['nsfw'] = submission.over_18
d['author'] = submission.author.name
spez_submissions.append(d)
###Output
_____no_output_____
###Markdown
Again we can turn this list of dictionaries into a DataFrame to do substantive data analysis.
###Code
pd.DataFrame(spez_submissions).head()
###Output
_____no_output_____
###Markdown
We can also get the comments made by a redditor.
###Code
spez_comments = []
for comment in r.redditor('spez').comments.top('all',limit=25):
d = {}
d['id'] = comment.id
d['body'] = comment.body
#d['depth'] = comment.depth
d['edited'] = comment.edited
d['score'] = comment.score
d['date'] = datetime.utcfromtimestamp(comment.created_utc)
d['submission_id'] = comment.submission.id
d['submission_title'] = comment.submission.title
d['subreddit'] = comment.subreddit.display_name
d['author'] = comment.author.name
spez_comments.append(d)
pd.DataFrame(spez_comments).head()
###Output
_____no_output_____
###Markdown
This user's top comments are mostly focused in the /r/announcements subreddit.
###Code
pd.DataFrame(spez_comments)['subreddit'].value_counts()
###Output
_____no_output_____ |
VGG/vgg16.ipynb | ###Markdown
###Code
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import SGD
###Output
_____no_output_____
###Markdown
Defining the Model as per the Original Paper
###Code
model = Sequential()
# 1st Convolutional Block
model.add(Conv2D(input_shape=(224, 224, 3), filters=64, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# 2nd Convolutional Block
model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# 3rd Convolutional Block
model.add(Conv2D(filters=256, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=256, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=256, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# 4th Convolutional Block
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# 5th Convolutional Block
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(filters=512, kernel_size=(3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# 1st Dense Layer
model.add(Flatten())
model.add(Dense(4096))
model.add(Activation('relu'))
model.add(Dropout(0.5))
# 2nd Dense Layer
model.add(Dense(4096))
model.add(Activation('relu'))
model.add(Dropout(0.5))
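# Output Layer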
model.add(Dense(1000))
model.add(Activation('softmax'))
model.summary()
model.compile(loss=categorical_crossentropy,
optimizer=SGD(learning_rate=0.01),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
We assume the data are present in the TRAIN_DATA_LOCATION and VALIDATION_DATA_LOCATION directories and run them through data generators to perform live data augmentation during the training process.
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_dir = 'TRAIN_DATA_LOCATION'
valid_dir = 'VALIDATION_DATA_LOCATION'
BATCH_SIZE = 32
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=10,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1)
train_generator = train_datagen.flow_from_directory(train_dir,
target_size=(224, 224),
color_mode='rgb',
batch_size=BATCH_SIZE,
seed=1,
shuffle=True,
class_mode='categorical')
valid_datagen = ImageDataGenerator(rescale=1.0/255.0)
valid_generator = valid_datagen.flow_from_directory(valid_dir,
target_size=(224, 224),
color_mode='rgb',
batch_size=BATCH_SIZE,
seed=7,
shuffle=True,
class_mode='categorical')
train_num = train_generator.samples
valid_num = valid_generator.samples  # needed below for validation_steps
###Output
_____no_output_____
###Markdown
Training the Model
###Code
import datetime
log_dir = 'logs/fit/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
callback_list = [tensorboard_callback]
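# Optional sketch: also keep the best weights seen on the validation set during training.
# ModelCheckpoint is a standard Keras callback; 'vgg16_best.h5' is an assumed filename.
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint('vgg16_best.h5',
                                                         monitor='val_loss',
                                                         save_best_only=True)
callback_list.append(checkpoint_callback)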
model.fit(train_generator,
epochs=1,
steps_per_epoch=train_num // BATCH_SIZE,
validation_data=valid_generator,
validation_steps=valid_num // BATCH_SIZE,
callbacks=callback_list,
verbose=1)
model.save('vgg16.h5')
###Output
_____no_output_____
###Markdown
 Visualizing the performance using TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir logs/fit
###Output
_____no_output_____
###Markdown
Prediction
###Code
x_valid, label_batch = next(iter(valid_generator))
# model.predict_classes was removed in newer versions of tf.keras; taking the argmax
# over the softmax output is the equivalent operation
prediction_values = model.predict(x_valid).argmax(axis=-1)
print(prediction_values)
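# Sketch, assuming the directory iterator's class_indices mapping: show the predicted
# class names instead of the raw integer indices.
index_to_class = {v: k for k, v in valid_generator.class_indices.items()}
print([index_to_class[int(i)] for i in prediction_values])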
###Output
_____no_output_____ |
Modulo1/Clase4_AjusteCurvas.ipynb | ###Markdown
Ajuste de curvas> El **ajuste de curvas** es el proceso de construir una curva (función), que sea el mejor ajuste a una serie de puntos. Las curvas ajustadas pueden ser usadas como asistencia en la visualización de datos, para inferir valores de una función donde no hay datos disponibles, y para resumir la relación entre variables.**Referencia**:- https://en.wikipedia.org/wiki/Curve_fitting___ 0. IntroducciónConsideremos un polinomio de grado uno:$$y = \beta_1 x + \beta_0.$$Esta es una **línea recta** que tiene pendiente $\beta_1$. Sabemos que habrá una línea conectando dos puntos cualesquiera. Por tanto, *una ecuación polinómica de primer grado es un ajuste perfecto entre dos puntos*.Si consideramos ahora un polinomio de segundo grado,$$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$este se ajustará exactamente a tres puntos. Si aumentamos el grado de la función a la de un polinomio de tercer grado, obtenemos:$$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$que se ajustará a cuatro puntos.**Ejemplos**1. Encontrar la línea recta que pasa exactamente por los puntos $(0,1)$ y $(1,0)$.2. Encontrar la parábola que pasa exactamente por los puntos $(-1,1)$, $(0,0)$ y $(1,1)$.**Solución**1. Consideramos $y=\beta_1 x + \beta_0$. Evaluando en el punto $(0,1)$, obtenemos $\beta_1(0) + \beta_0 = 1$. Ahora, evaluando en el punto $(1,0)$, obtenemos $\beta_1(1) + \beta_0 = 0$. De esta manera,$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$Resolviendo, $\beta_0=-\beta_1=1$.
###Code
# Importar numpy y el matplotlib.pyplot
# Encontrar beta_0 y beta_1 resolviendo el sistema
# Graficar la recta encontrada junto con los puntos
###Output
_____no_output_____
###Markdown
2. Consideramos $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluando en el punto $(-1,1)$, obtenemos $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Ahora, evaluando en el punto $(0,0)$, obtenemos $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finalmente, evaluando en el punto $(1,1)$, obtenemos $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. De esta manera,$$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$Resolviendo, $\beta_0=\beta_1=0$ y $\beta_2=1$.
###Code
# Encontrar beta_0, beta_1 y beta_2
# Graficar la parabola junto con los puntos
###Output
_____no_output_____
###Markdown
¿Qué tienen en común los anteriores problemas?Las curvas están completamente determinadas por los puntos (datos limpios, suficientes y necesarios).Esto se traduce en que, al llevar el problema a un sistema de ecuaciones lineales, existe una única solución: **no hay necesidad, ni se puede optimizar nada**.¿Tendremos datos así de '*bonitos*' en la vida real?La realidad es que los datos que encontraremos en nuestra vida profesional se parecen más a esto...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
# Graficar
###Output
_____no_output_____
###Markdown
¿Cómo ajustamos una curva a esto? 1. Problema básicoConsideramos que tenemos un conjunto de n pares ordenados de datos $(x_i,y_i)$, para $i=1,2,3,\dots,n$. ¿Cuál es la recta que mejor se ajusta a estos datos?Consideramos entonces ajustes de la forma $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (lineas rectas).Para decir '*mejor*', tenemos que definir algún sentido en que una recta se ajuste *mejor* que otra.**Mínimos cuadrados**: el objetivo es seleccionar los coeficientes $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$, de forma que la función evaluada en los puntos $x_i$ ($\hat{f}(x_i)$) aproxime los valores correspondientes $y_i$.La formulación por mínimos cuadrados, encuentra los $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ que minimiza$$\frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-(\beta_0+ \beta_1x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\frac{1}{2n}\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$donde $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, y $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ Esto es,$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ Notar que el problema anterior no es de programación lineal, ¿porqué?Para llevar a cabo la anterior minimización, la librería `SciPy` en su módulo `optimize` contiene la función `minimize`.
###Code
# Importar el módulo optimize de la librería scipy
# Función minimize
###Output
_____no_output_____
###Markdown
Parámetros importantes:- fun: función $f(x)$, se debe definir antes de llamar minimize, como `def f(x): ... return ...`- x0: valor inicial. En una función no lineal, en general, hay múltiples mínimos. Dependiendo de la semilla caerá en uno de esos mínimos. Se ingresa como $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$.- bounds: como en linprog.- constraints: funciones que definen las restricciones $g_i(x)$ y $h_j(x)$. Se definen igual que $f(x)$ y se ingresan como {'ineq': g_i, 'eq': h_j}. Primero debemos construir la función objetivo y la semilla inicial:
###Code
# Definir funcion objetivo y punto inicial
# Mostrar
###Output
_____no_output_____
###Markdown
¿Qué tan bien luce el ajuste?
###Code
# Coeficientes \beta_0 y \beta_1
# Grafica de los puntos y la recta ajustada
###Output
_____no_output_____
###Markdown
Note que la pendiente es aproximadamente $2$ y el intercepto es aproximadamente $10$.La anterior idea se puede extender a ajuste polinomial... 2. Ajuste polinomialAhora, considere el siguiente conjunto de datos...
###Code
# Generamos 100 puntos ruidosos a partir de una senoidal
###Output
_____no_output_____
###Markdown
2.1. ¿Se ajustará bien una recta?
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con recta**
###Code
# Mostrar coeficientes
# Graficar
###Output
_____no_output_____
###Markdown
2.2. La recta no es buen ajuste... ¿Se ajustará bien una parabola?
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con parábola**
###Code
# Mostrar coeficientes
# Graficar recta y parabola ajustadas
###Output
_____no_output_____
###Markdown
2.3. Tampoco. Quizá un polinomio cúbico...
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con cúbica**
###Code
# Mostrar coeficientes
# Graficar recta, parabola y cubica
###Output
_____no_output_____
###Markdown
Mucho mejor. Entonces, ¿mientras más se suba el orden mejor la aproximación? 2.4. Ajustemos un polinomio de grado 7...
###Code
# Definimos funcion objetivo y semilla
# Resolvemos
###Output
_____no_output_____
###Markdown
**De nuevo, veamos $\beta$**
###Code
# Mostrar coeficientes
###Output
_____no_output_____
###Markdown
**¡Cuidado! OVERFITTING...**Observar el tamaño de algunos coeficientes. Cuando los coeficientes son grandes, ¿qué pasa?
###Code
# Grafica de ajustes
###Output
_____no_output_____
###Markdown
Es conveniente ver el error como función del orden del polinomio... **selección de modelos**
###Code
# Función objetivo ajuste polinomio grado N
# Error cuadratico
###Output
_____no_output_____
###Markdown
En efecto, parece que con $3$ es suficiente. ¿Cómo prevenir el *overfitting* sin importar el orden del modelo? 3. RegularizaciónVimos que la solución de mínimos cuadrados es:$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$Sin embargo, si crecemos el orden del modelo hay overfitting y algunos coeficientes óptimos $\boldsymbol{\beta}$ crecen muchísimo. Que un coeficiente sea muy grande, significa que se le da mucha importancia a alguna característica (que quizá sea ruido... no sirve para predecir).La regularización consiste en penalizar la magnitud de los coeficientes $\boldsymbol{\beta}$ en el problema de optimización, para que no crezcan tanto. 3.1. Ridge$$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$ 3.2. Lasso$$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$La norma 1 no es más que la suma de los valores absolutos de las componentes $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$. 4. Ajuste robustoAhora, consideremos de nuevo el caso de la línea recta con un par de puntos atípicos al inicio y al final...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
# Graficar
###Output
_____no_output_____
###Markdown
Solucionamos el problema normalmente... Si estos puntos que parecen ser atípicos, hacen parte de una 'mala medición', vemos que el ajuste que obtenemos a los otros puntos es muy pobre...**¿Cómo podemos evitar esto?** La respuesta es [*ajuste robusto*](https://en.wikipedia.org/wiki/Huber_loss).
###Code
def huber(a, d):
if np.abs(a) <= d:
return a**2
else:
return d * (2 * np.abs(a) - d)
###Output
_____no_output_____
###Markdown
Mejor... 5. TareaLa siguiente celda lee datos correspondientes a tamaños $x$ ($ft^2$) y precios $y$ (USD) de casas en Portland, Oregon.1. Graficar estos datos poniendo los precios en el eje $y$ y los tamaños en el eje $x$.2. Ajustar polinomios de grado 1 hasta grado 5.3. Graficar el error cuadrático acumulado contra el número de términos, y elegir un polinomio que ajuste bien y su grado sea el menor posible.4. Supongamos que un amigo tuyo tiene una casa de $1250 ft^2$. Según tu modelo, ¿en cuánto podría vender dicha casa?Abrir un nuevo notebook, llamado `Tarea3_ApellidoNombre` y subirlo a canvas en el espacio habilitado.
###Code
import pandas as pd
data = pd.read_csv("housing_prices.csv")
x = data['size'].values
y = data['price'].values
###Output
_____no_output_____
###Markdown
Ajuste de curvas> El **ajuste de curvas** es el proceso de construir una curva (función), que sea el mejor ajuste a una serie de puntos. Las curvas ajustadas pueden ser usadas como asistencia en la visualización de datos, para inferir valores de una función donde no hay datos disponibles, y para resumir la relación entre variables.**Referencia**:- https://en.wikipedia.org/wiki/Curve_fitting___ 0. IntroducciónConsideremos un polinomio de grado uno:$$y = \beta_1 x + \beta_0.$$Esta es una **línea recta** que tiene pendiente $\beta_1$. Sabemos que habrá una línea conectando dos puntos cualesquiera. Por tanto, *una ecuación polinómica de primer grado es un ajuste perfecto entre dos puntos*.Si consideramos ahora un polinomio de segundo grado,$$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$este se ajustará exactamente a tres puntos. Si aumentamos el grado de la función a la de un polinomio de tercer grado, obtenemos:$$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$que se ajustará a cuatro puntos.**Ejemplos**1. Encontrar la línea recta que pasa exactamente por los puntos $(0,1)$ y $(1,0)$.2. Encontrar la parábola que pasa exactamente por los puntos $(-1,1)$, $(0,0)$ y $(1,1)$.**Solución**1. Consideramos $y=\beta_1 x + \beta_0$. Evaluando en el punto $(0,1)$, obtenemos $\beta_1(0) + \beta_0 = 1$. Ahora, evaluando en el punto $(1,0)$, obtenemos $\beta_1(1) + \beta_0 = 0$. De esta manera,$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$Resolviendo, $\beta_0=-\beta_1=1$.
###Code
# Importar numpy y el matplotlib.pyplot
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
help(np.linalg.solve)
# Encontrar beta_0 y beta_1 resolviendo el sistema
A = np.array([[1, 0],
[1, 1]])
h = np.array([1, 0])
# h = A^{-1} * h
#beta = np.linalg.solve(A, h)
beta = np.linalg.inv(A).dot(h)
beta
# Graficar la recta encontrada junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(0, 1, 'ro', ms=10, label='$(0, 1)$')
plt.plot(1, 0, 'ro', ms=10, label='$(1, 0)$')
x_num = np.linspace(-1, 2)
y_num = beta[0] + beta[1] * x_num
plt.plot(x_num, y_num, 'b', lw=3,
label=f'$y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2. Consideramos $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluando en el punto $(-1,1)$, obtenemos $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Ahora, evaluando en el punto $(0,0)$, obtenemos $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finalmente, evaluando en el punto $(1,1)$, obtenemos $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. De esta manera,$$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$Resolviendo, $\beta_0=\beta_1=0$ y $\beta_2=1$.
###Code
# Encontrar beta_0, beta_1 y beta_2
A = np.array([[1, -1, 1],
[1, 0, 0],
[1, 1, 1]])
h = np.array([1, 0, 1])
beta = np.linalg.solve(A, h)
beta
# Graficar la parabola junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(-1, 1, 'ro', ms=10, label='$(-1, 1)$')
plt.plot(0, 0, 'ro', ms=10, label='$(0, 0)$')
plt.plot(1, 1, 'ro', ms=10, label='$(1, 1)$')
x_num = np.linspace(-2, 2)
y_num = beta[0] + beta[1] * x_num + beta[2] * x_num**2
plt.plot(x_num, y_num, 'b', lw=3,
label=f'$y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$ + {np.round(beta[2], 2)}$x^2$')
plt.axvline(x=0, c='k', ls='--')
plt.axhline(y=0, c='k', ls='--')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
¿Qué tienen en común los anteriores problemas?Las curvas están completamente determinadas por los puntos (datos limpios, suficientes y necesarios).Esto se traduce en que, al llevar el problema a un sistema de ecuaciones lineales, existe una única solución: **no hay necesidad, ni se puede optimizar nada**.¿Tendremos datos así de '*bonitos*' en la vida real?La realidad es que los datos que encontraremos en nuestra vida profesional se parecen más a esto...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
N = 100
x = np.linspace(0, 10, N)
# y = ecn. recta + ruido
y = 10 + 2 * x + np.random.normal(loc=0, scale=2, size=(N,))
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
¿Cómo ajustamos una curva a esto? 1. Problema básicoConsideramos que tenemos un conjunto de n pares ordenados de datos $(x_i,y_i)$, para $i=1,2,3,\dots,n$. ¿Cuál es la recta que mejor se ajusta a estos datos?Consideramos entonces ajustes de la forma $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (lineas rectas).Para decir '*mejor*', tenemos que definir algún sentido en que una recta se ajuste *mejor* que otra.**Mínimos cuadrados**: el objetivo es seleccionar los coeficientes $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$, de forma que la función evaluada en los puntos $x_i$ ($\hat{f}(x_i)$) aproxime los valores correspondientes $y_i$.La formulación por mínimos cuadrados, encuentra los $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ que minimiza$$\frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-(\beta_0+ \beta_1x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\frac{1}{2n}\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$donde $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, y $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ Esto es,$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ Notar que el problema anterior no es de programación lineal, ¿porqué?Para llevar a cabo la anterior minimización, la librería `SciPy` en su módulo `optimize` contiene la función `minimize`.
###Code
# Importar el módulo optimize de la librería scipy
from scipy import optimize as opt
# Función minimize
help(opt.minimize)
###Output
Help on function minimize in module scipy.optimize._minimize:
minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
Minimization of scalar function of one or more variables.
Parameters
----------
fun : callable
The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and `args`
is a tuple of the fixed parameters needed to completely
specify the function.
x0 : ndarray, shape (n,)
Initial guess. Array of real elements of size (n,),
where 'n' is the number of independent variables.
args : tuple, optional
Extra arguments passed to the objective function and its
derivatives (`fun`, `jac` and `hess` functions).
method : str or callable, optional
Type of solver. Should be one of
- 'Nelder-Mead' :ref:`(see here) <optimize.minimize-neldermead>`
- 'Powell' :ref:`(see here) <optimize.minimize-powell>`
- 'CG' :ref:`(see here) <optimize.minimize-cg>`
- 'BFGS' :ref:`(see here) <optimize.minimize-bfgs>`
- 'Newton-CG' :ref:`(see here) <optimize.minimize-newtoncg>`
- 'L-BFGS-B' :ref:`(see here) <optimize.minimize-lbfgsb>`
- 'TNC' :ref:`(see here) <optimize.minimize-tnc>`
- 'COBYLA' :ref:`(see here) <optimize.minimize-cobyla>`
- 'SLSQP' :ref:`(see here) <optimize.minimize-slsqp>`
- 'trust-constr':ref:`(see here) <optimize.minimize-trustconstr>`
- 'dogleg' :ref:`(see here) <optimize.minimize-dogleg>`
- 'trust-ncg' :ref:`(see here) <optimize.minimize-trustncg>`
- 'trust-exact' :ref:`(see here) <optimize.minimize-trustexact>`
- 'trust-krylov' :ref:`(see here) <optimize.minimize-trustkrylov>`
- custom - a callable object (added in version 0.14.0),
see below for description.
If not given, chosen to be one of ``BFGS``, ``L-BFGS-B``, ``SLSQP``,
depending if the problem has constraints or bounds.
jac : {callable, '2-point', '3-point', 'cs', bool}, optional
Method for computing the gradient vector. Only for CG, BFGS,
Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov,
trust-exact and trust-constr. If it is a callable, it should be a
function that returns the gradient vector:
``jac(x, *args) -> array_like, shape (n,)``
where x is an array with shape (n,) and `args` is a tuple with
the fixed parameters. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite
difference scheme for numerical estimation of the gradient. Options
'3-point' and 'cs' are available only to 'trust-constr'.
If `jac` is a Boolean and is True, `fun` is assumed to return the
gradient along with the objective function. If False, the gradient
will be estimated using '2-point' finite difference estimation.
hess : {callable, '2-point', '3-point', 'cs', HessianUpdateStrategy}, optional
Method for computing the Hessian matrix. Only for Newton-CG, dogleg,
trust-ncg, trust-krylov, trust-exact and trust-constr. If it is
callable, it should return the Hessian matrix:
``hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)``
where x is a (n,) ndarray and `args` is a tuple with the fixed
parameters. LinearOperator and sparse matrix returns are
allowed only for 'trust-constr' method. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite difference scheme
for numerical estimation. Or, objects implementing
`HessianUpdateStrategy` interface can be used to approximate
the Hessian. Available quasi-Newton methods implementing
this interface are:
- `BFGS`;
- `SR1`.
Whenever the gradient is estimated via finite-differences,
the Hessian cannot be estimated with options
{'2-point', '3-point', 'cs'} and needs to be
estimated using one of the quasi-Newton strategies.
Finite-difference options {'2-point', '3-point', 'cs'} and
`HessianUpdateStrategy` are available only for 'trust-constr' method.
hessp : callable, optional
Hessian of objective function times an arbitrary vector p. Only for
Newton-CG, trust-ncg, trust-krylov, trust-constr.
Only one of `hessp` or `hess` needs to be given. If `hess` is
provided, then `hessp` will be ignored. `hessp` must compute the
Hessian times an arbitrary vector:
``hessp(x, p, *args) -> ndarray shape (n,)``
where x is a (n,) ndarray, p is an arbitrary vector with
dimension (n,) and `args` is a tuple with the fixed
parameters.
bounds : sequence or `Bounds`, optional
Bounds on variables for L-BFGS-B, TNC, SLSQP and
trust-constr methods. There are two ways to specify the bounds:
1. Instance of `Bounds` class.
2. Sequence of ``(min, max)`` pairs for each element in `x`. None
is used to specify no bound.
constraints : {Constraint, dict} or List of {Constraint, dict}, optional
Constraints definition (only for COBYLA, SLSQP and trust-constr).
Constraints for 'trust-constr' are defined as a single object or a
list of objects specifying constraints to the optimization problem.
Available constraints are:
- `LinearConstraint`
- `NonlinearConstraint`
Constraints for COBYLA, SLSQP are defined as a list of dictionaries.
Each dictionary with fields:
type : str
Constraint type: 'eq' for equality, 'ineq' for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of `fun` (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to
be zero whereas inequality means that it is to be non-negative.
Note that COBYLA only supports inequality constraints.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific
options.
options : dict, optional
A dictionary of solver options. All methods accept the following
generic options:
maxiter : int
Maximum number of iterations to perform. Depending on the
method each iteration may use several function evaluations.
disp : bool
Set to True to print convergence messages.
For method-specific options, see :func:`show_options()`.
callback : callable, optional
Called after each iteration. For 'trust-constr' it is a callable with
the signature:
``callback(xk, OptimizeResult state) -> bool``
where ``xk`` is the current parameter vector. and ``state``
is an `OptimizeResult` object, with the same fields
as the ones from the return. If callback returns True
the algorithm execution is terminated.
For all the other methods, the signature is:
``callback(xk)``
where ``xk`` is the current parameter vector.
Returns
-------
res : OptimizeResult
The optimization result represented as a ``OptimizeResult`` object.
Important attributes are: ``x`` the solution array, ``success`` a
Boolean flag indicating if the optimizer exited successfully and
``message`` which describes the cause of the termination. See
`OptimizeResult` for a description of other attributes.
See also
--------
minimize_scalar : Interface to minimization algorithms for scalar
univariate functions
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *BFGS*.
**Unconstrained minimization**
Method :ref:`Nelder-Mead <optimize.minimize-neldermead>` uses the
Simplex algorithm [1]_, [2]_. This algorithm is robust in many
applications. However, if numerical computation of derivative can be
trusted, other algorithms using the first and/or second derivatives
information might be preferred for their better performance in
general.
Method :ref:`Powell <optimize.minimize-powell>` is a modification
of Powell's method [3]_, [4]_ which is a conjugate direction
method. It performs sequential one-dimensional minimizations along
each vector of the directions set (`direc` field in `options` and
`info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method :ref:`CG <optimize.minimize-cg>` uses a nonlinear conjugate
gradient algorithm by Polak and Ribiere, a variant of the
Fletcher-Reeves method described in [5]_ pp. 120-122. Only the
first derivatives are used.
Method :ref:`BFGS <optimize.minimize-bfgs>` uses the quasi-Newton
method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [5]_
pp. 136. It uses the first derivatives only. BFGS has proven good
performance even for non-smooth optimizations. This method also
returns an approximation of the Hessian inverse, stored as
`hess_inv` in the OptimizeResult object.
Method :ref:`Newton-CG <optimize.minimize-newtoncg>` uses a
Newton-CG algorithm [5]_ pp. 168 (also known as the truncated
Newton method). It uses a CG method to the compute the search
direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm. Suitable for large-scale
problems.
Method :ref:`dogleg <optimize.minimize-dogleg>` uses the dog-leg
trust-region algorithm [5]_ for unconstrained minimization. This
algorithm requires the gradient and Hessian; furthermore the
Hessian is required to be positive definite.
Method :ref:`trust-ncg <optimize.minimize-trustncg>` uses the
Newton conjugate gradient trust-region algorithm [5]_ for
unconstrained minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
Method :ref:`trust-krylov <optimize.minimize-trustkrylov>` uses
the Newton GLTR trust-region algorithm [14]_, [15]_ for unconstrained
minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
On indefinite problems it requires usually less iterations than the
`trust-ncg` method and is recommended for medium and large-scale problems.
Method :ref:`trust-exact <optimize.minimize-trustexact>`
is a trust-region method for unconstrained minimization in which
quadratic subproblems are solved almost exactly [13]_. This
algorithm requires the gradient and the Hessian (which is
*not* required to be positive definite). It is, in many
situations, the Newton method to converge in fewer iteraction
and the most recommended for small and medium-size problems.
**Bound-Constrained minimization**
Method :ref:`L-BFGS-B <optimize.minimize-lbfgsb>` uses the L-BFGS-B
algorithm [6]_, [7]_ for bound constrained minimization.
Method :ref:`TNC <optimize.minimize-tnc>` uses a truncated Newton
algorithm [5]_, [8]_ to minimize a function with variables subject
to bounds. This algorithm uses gradient information; it is also
called Newton Conjugate-Gradient. It differs from the *Newton-CG*
method described above as it wraps a C implementation and allows
each variable to be given upper and lower bounds.
**Constrained Minimization**
Method :ref:`COBYLA <optimize.minimize-cobyla>` uses the
Constrained Optimization BY Linear Approximation (COBYLA) method
[9]_, [10]_, [11]_. The algorithm is based on linear
approximations to the objective function and each constraint. The
method wraps a FORTRAN implementation of the algorithm. The
constraints functions 'fun' may return either a single number
or an array or list of numbers.
Method :ref:`SLSQP <optimize.minimize-slsqp>` uses Sequential
Least SQuares Programming to minimize a function of several
variables with any combination of bounds, equality and inequality
constraints. The method wraps the SLSQP Optimization subroutine
originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into
large floating values.
Method :ref:`trust-constr <optimize.minimize-trustconstr>` is a
trust-region algorithm for constrained optimization. It swiches
between two implementations depending on the problem definition.
It is the most versatile constrained minimization algorithm
implemented in SciPy and the most appropriate for large-scale problems.
For equality constrained problems it is an implementation of Byrd-Omojokun
Trust-Region SQP method described in [17]_ and in [5]_, p. 549. When
inequality constraints are imposed as well, it swiches to the trust-region
interior point method described in [16]_. This interior point algorithm,
in turn, solves inequality constraints by introducing slack variables
and solving a sequence of equality-constrained barrier problems
for progressively smaller values of the barrier parameter.
The previously described equality constrained SQP method is
used to solve the subproblems with increasing levels of accuracy
as the iterate gets closer to a solution.
**Finite-Difference Options**
For Method :ref:`trust-constr <optimize.minimize-trustconstr>`
the gradient and the Hessian may be approximated using
three finite-difference schemes: {'2-point', '3-point', 'cs'}.
The scheme 'cs' is, potentially, the most accurate but it
requires the function to correctly handles complex inputs and to
be differentiable in the complex plane. The scheme '3-point' is more
accurate than '2-point' but requires twice as much operations.
**Custom minimizers**
It may be useful to pass a custom minimization method, for example
when using a frontend to this method such as `scipy.optimize.basinhopping`
or a different library. You can simply pass a callable as the ``method``
parameter.
The callable is called as ``method(fun, x0, args, **kwargs, **options)``
where ``kwargs`` corresponds to any other parameters passed to `minimize`
(such as `callback`, `hess`, etc.), except the `options` dict, which has
its contents also passed as `method` parameters pair by pair. Also, if
`jac` has been passed as a bool type, `jac` and `fun` are mangled so that
`fun` returns just the function values and `jac` is converted to a function
returning the Jacobian. The method shall return an `OptimizeResult`
object.
The provided `method` callable must be able to accept (and possibly ignore)
arbitrary parameters; the set of parameters accepted by `minimize` may
expand in future versions and then these parameters will be passed to
the method. You can find an example in the scipy.optimize tutorial.
.. versionadded:: 0.11.0
References
----------
.. [1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function
Minimization. The Computer Journal 7: 308-13.
.. [2] Wright M H. 1996. Direct search methods: Once scorned, now
respectable, in Numerical Analysis 1995: Proceedings of the 1995
Dundee Biennial Conference in Numerical Analysis (Eds. D F
Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK.
191-208.
.. [3] Powell, M J D. 1964. An efficient method for finding the minimum of
a function of several variables without calculating derivatives. The
Computer Journal 7: 155-162.
.. [4] Press W, S A Teukolsky, W T Vetterling and B P Flannery.
Numerical Recipes (any edition), Cambridge University Press.
.. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization.
Springer New York.
.. [6] Byrd, R H and P Lu and J. Nocedal. 1995. A Limited Memory
Algorithm for Bound Constrained Optimization. SIAM Journal on
Scientific and Statistical Computing 16 (5): 1190-1208.
.. [7] Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm
778: L-BFGS-B, FORTRAN routines for large scale bound constrained
optimization. ACM Transactions on Mathematical Software 23 (4):
550-560.
.. [8] Nash, S G. Newton-Type Minimization Via the Lanczos Method.
1984. SIAM Journal of Numerical Analysis 21: 770-778.
.. [9] Powell, M J D. A direct search optimization method that models
the objective and constraint functions by linear interpolation.
1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez
and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.
.. [10] Powell M J D. Direct search algorithms for optimization
calculations. 1998. Acta Numerica 7: 287-336.
.. [11] Powell M J D. A view of algorithms for optimization without
derivatives. 2007.Cambridge University Technical Report DAMTP
2007/NA03
.. [12] Kraft, D. A software package for sequential quadratic
programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace
Center -- Institute for Flight Mechanics, Koln, Germany.
.. [13] Conn, A. R., Gould, N. I., and Toint, P. L.
Trust region methods. 2000. Siam. pp. 169-200.
.. [14] F. Lenders, C. Kirches, A. Potschka: "trlib: A vector-free
implementation of the GLTR method for iterative solution of
the trust region problem", https://arxiv.org/abs/1611.04718
.. [15] N. Gould, S. Lucidi, M. Roma, P. Toint: "Solving the
Trust-Region Subproblem using the Lanczos Method",
SIAM J. Optim., 9(2), 504--525, (1999).
.. [16] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999.
An interior point algorithm for large-scale nonlinear programming.
SIAM Journal on Optimization 9.4: 877-900.
.. [17] Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the
implementation of an algorithm for large-scale equality constrained
optimization. SIAM Journal on Optimization 8.3: 682-706.
Examples
--------
Let us consider the problem of minimizing the Rosenbrock function. This
function (and its respective derivatives) is implemented in `rosen`
(resp. `rosen_der`, `rosen_hess`) in the `scipy.optimize`.
>>> from scipy.optimize import minimize, rosen, rosen_der
A simple application of the *Nelder-Mead* method is:
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1., 1., 1., 1., 1.])
Now using the *BFGS* algorithm, using the first derivative and a few
options:
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
... options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 26
Function evaluations: 31
Gradient evaluations: 31
>>> res.x
array([ 1., 1., 1., 1., 1.])
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess_inv
array([[ 0.00749589, 0.01255155, 0.02396251, 0.04750988, 0.09495377], # may vary
[ 0.01255155, 0.02510441, 0.04794055, 0.09502834, 0.18996269],
[ 0.02396251, 0.04794055, 0.09631614, 0.19092151, 0.38165151],
[ 0.04750988, 0.09502834, 0.19092151, 0.38341252, 0.7664427 ],
[ 0.09495377, 0.18996269, 0.38165151, 0.7664427, 1.53713523]])
Next, consider a minimization problem with several constraints (namely
Example 16.4 from [5]_). The objective function is:
>>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
There are three constraints defined as:
>>> cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},
... {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
... {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})
And variables must be positive, hence the following bounds:
>>> bnds = ((0, None), (0, None))
The optimization problem is solved using the SLSQP method as:
>>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
... constraints=cons)
It should converge to the theoretical solution (1.4 ,1.7).
###Markdown
Parámetros importantes:- fun: función $f(x)$, se debe definir antes de llamar minimize, como `def f(x): ... return ...`- x0: valor inicial. En una función no lineal, en general, hay múltiples mínimos. Dependiendo de la semilla caerá en uno de esos mínimos. Se ingresa como $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$.- bounds: como en linprog.- constraints: funciones que definen las restricciones $g_i(x)$ y $h_j(x)$. Se definen igual que $f(x)$ y se ingresan como {'ineq': g_i, 'eq': h_j}. Primero debemos construir la función objetivo y la semilla inicial:
###Code
# Definir funcion objetivo y punto inicial
def min_sq(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * ((y_points - recta)**2).sum()
beta_ini = [0, 0]
solucion = opt.minimize(fun=min_sq,
x0=beta_ini,
args=(x, y))
# Mostrar
solucion
###Output
_____no_output_____
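###Markdown
To illustrate the `bounds` and `constraints` arguments described above, a minimal sketch with a toy problem (unrelated to the data fit): minimize $(x_0-1)^2+(x_1-2)^2$ subject to $x_0+x_1\leq 2$ and $x_0, x_1 \geq 0$.
###Code
# Toy constrained problem; the inequality must be written as g(x) >= 0, i.e. 2 - x0 - x1 >= 0
f_toy = lambda v: (v[0] - 1)**2 + (v[1] - 2)**2
cons_toy = ({'type': 'ineq', 'fun': lambda v: 2 - v[0] - v[1]},)
bnds_toy = ((0, None), (0, None))
opt.minimize(fun=f_toy, x0=[0, 0], method='SLSQP', bounds=bnds_toy, constraints=cons_toy).x
###Output
_____no_output_____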
###Markdown
¿Qué tan bien luce el ajuste?
###Code
# Coeficientes \beta_0 y \beta_1
beta = solucion.x
beta
# Grafica de los puntos y la recta ajustada
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
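###Markdown
As a quick sanity check (a minimal sketch), the same coefficients can be obtained in closed form: the minimizer of $\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$ solves the normal equations $\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}=\boldsymbol{X}^T\boldsymbol{y}$, which `numpy` can solve directly.
###Code
# Closed-form least squares: build X = [1  x] and solve the normal equations with lstsq
X_ls = np.vstack([np.ones_like(x), x]).T
beta_closed, *_ = np.linalg.lstsq(X_ls, y, rcond=None)
beta_closed
###Output
_____no_output_____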
###Markdown
Note que la pendiente es aproximadamente $2$ y el intercepto es aproximadamente $10$.La anterior idea se puede extender a ajuste polinomial... 2. Ajuste polinomialAhora, considere el siguiente conjunto de datos...
###Code
# Generamos 100 puntos ruidosos a partir de una senoidal
N = 100
x = np.linspace(0, 1, N)
y = np.sin(2 * np.pi * x) + np.random.normal(loc=0, scale=0.3, size=(N,))
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.1. ¿Se ajustará bien una recta?
###Code
# Definir funcion objetivo y semilla
def min_sq_1(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * ((y_points - recta)**2).sum()
beta_ini_1 = [0, 0]
# Resolver
solucion_1 = opt.minimize(fun=min_sq_1,
x0=beta_ini_1,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con recta**
###Code
# Mostrar coeficientes
beta_1 = solucion_1.x
beta_1
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit, 'b', lw=3,
         label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.2. La recta no es buen ajuste... ¿Se ajustará bien una parabola?
###Code
# Definir funcion objetivo y semilla
def min_sq_2(beta, x_points, y_points):
n = len(x_points)
parabola = beta[0] + beta[1] * x_points + beta[2] * x_points**2
return (1 / (2 * n)) * ((y_points - parabola)**2).sum()
beta_ini_2 = [0, 0, 0]
# Resolver
solucion_2 = opt.minimize(fun=min_sq_2,
x0=beta_ini_2,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con parábola**
###Code
# Mostrar coeficientes
beta_2 = solucion_2.x
beta_2
# Graficar recta y parabola ajustadas
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.3. Tampoco. Quizá un polinomio cúbico...
###Code
# Definir funcion objetivo y semilla
def min_sq_3(beta, x_points, y_points):
n = len(x_points)
cubico = beta[0] + beta[1] * x_points + beta[2] * x_points**2 + beta[3] * x_points**3
return (1 / (2 * n)) * ((y_points - cubico)**2).sum()
beta_ini_3 = [0, 0, 0, 0]
# Resolver
solucion_3 = opt.minimize(fun=min_sq_3,
x0=beta_ini_3,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Veamos $\beta$ para el ajuste con cúbica**
###Code
# Mostrar coeficientes
beta_3 = solucion_3.x
beta_3
# Graficar recta, parabola y cubica
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x + beta_3[2] * x**2 + beta_3[3] * x**3
plt.plot(x, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
###Output
_____no_output_____
###Markdown
Mucho mejor. Entonces, ¿mientras más se suba el orden mejor la aproximación? 2.4. Ajustemos un polinomio de grado 7...
###Code
# Definimos funcion objetivo y semilla
def min_sq_7(beta, x_points, y_points):
n = len(x_points)
poli_7 = np.array([beta[i] * x_points**i for i in range(8)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_7)**2).sum()
beta_ini_7 = np.zeros(8)
# Resolvemos
solucion_7 = opt.minimize(fun=min_sq_7,
x0=beta_ini_7,
args=(x, y))
###Output
_____no_output_____
###Markdown
**De nuevo, veamos $\beta$**
###Code
beta_1
beta_2
beta_3
# Mostrar coeficientes
beta_7 = solucion_7.x
beta_7
###Output
_____no_output_____
###Markdown
**¡Cuidado! OVERFITTING...**Observar el tamaño de algunos coeficientes. Cuando los coeficientes son grandes, ¿qué pasa?
###Code
# Grafica de ajustes
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x + beta_3[2] * x**2 + beta_3[3] * x**3
plt.plot(x, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
y_fit7 = np.array([beta_7[i] * x**i for i in range(8)]).sum(axis=0)
plt.plot(x, y_fit7, lw=3, label=f'Polinomio de grado 7 ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
solucion_1
solucion_2
solucion_7
###Output
_____no_output_____
###Markdown
Es conveniente ver el error como función del orden del polinomio... **selección de modelos**
###Code
# Función objetivo ajuste polinomio grado N
def min_sq_N(beta, x_points, y_points, N):
n = len(x_points)
poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_N)**2).sum()
error = []
for i in range(1, 10):
beta_ini = np.zeros(i + 1)
solucion = opt.minimize(fun=min_sq_N, x0=beta_ini, args=(x, y, i))
error.append(solucion.fun)
# Error cuadratico
plt.figure(figsize=(6, 4))
plt.plot(range(1, 10), error)
plt.xlabel('Orden del polinomio ajustado')
plt.ylabel('Error de mínimos cuadrados')
plt.grid()
###Output
_____no_output_____
###Markdown
En efecto, parece que con $3$ es suficiente.
###Code
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
x_num = np.linspace(-0.1, 1.2)
y_fit1 = beta_1[0] + beta_1[1] * x_num
plt.plot(x_num, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x_num + beta_2[2] * x_num**2
plt.plot(x_num, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x_num + beta_3[2] * x_num**2 + beta_3[3] * x_num**3
plt.plot(x_num, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
y_fit7 = np.array([beta_7[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7, '--', lw=3, label=f'Polinomio de grado 7 ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
beta_3
beta_7
###Output
_____no_output_____
###Markdown
¿Cómo prevenir el *overfitting* sin importar el orden del modelo? 3. RegularizaciónVimos que la solución de mínimos cuadrados es:$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$Sin embargo, si crecemos el orden del modelo hay overfitting y algunos coeficientes óptimos $\boldsymbol{\beta}$ crecen muchísimo. Que un coeficiente sea muy grande, significa que se le da mucha importancia a alguna característica (que quizá sea ruido... no sirve para predecir).La regularización consiste en penalizar la magnitud de los coeficientes $\boldsymbol{\beta}$ en el problema de optimización, para que no crezcan tanto. 3.1. Ridge$$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$
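For reference, the Ridge objective written above has a closed-form minimizer, $\boldsymbol{\beta}^{ridge}=(\boldsymbol{X}^T\boldsymbol{X}+\lambda \boldsymbol{I})^{-1}\boldsymbol{X}^T\boldsymbol{y}$; below it is instead minimized numerically with `opt.minimize`, which works equally for penalties without a closed form.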
###Code
def min_sq_N_ridge(beta, x_points, y_points, N, l):
n = len(x_points)
poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_N)**2).sum() + l * np.linalg.norm(beta)**2
solucion = opt.minimize(fun=min_sq_N_ridge,
x0=np.zeros(8),
args=(x, y, 7, 0.0003))
beta_7_ridge = solucion.x
solucion = opt.minimize(fun=min_sq_N_ridge,
x0=np.zeros(4),
args=(x, y, 3, 0.00003))
beta_3_ridge = solucion.x
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
x_num = np.linspace(-0.1, 1.1)
y_fit7 = np.array([beta_7[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7, '--', lw=3, label=f'Polinomio de grado 7 ajustado')
y_fit7_ridge = np.array([beta_7_ridge[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7_ridge, '--', lw=3, label=f'Polinomio de grado 7 regularizado ajustado')
y_fit3_ridge = np.array([beta_3_ridge[i] * x_num**i for i in range(4)]).sum(axis=0)
plt.plot(x_num, y_fit3_ridge, '--', lw=3, label=f'Polinomio de grado 3 regularizado ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
beta_7
beta_7_ridge
beta_3
beta_3_ridge
###Output
_____no_output_____
###Markdown
3.2. Lasso$$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$La norma 1 no es más que la suma de los valores absolutos de las componentes $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$. 4. Ajuste robustoAhora, consideremos de nuevo el caso de la línea recta con un par de puntos atípicos al inicio y al final...
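Before moving on to the robust fit, a minimal sketch of the Lasso penalty, assuming the same polynomial model used in `min_sq_N_ridge`; only the penalty term changes from the squared norm to the sum of absolute values.
###Code
# Lasso-penalized objective: same polynomial model as min_sq_N_ridge, with an L1 penalty
def min_sq_N_lasso(beta, x_points, y_points, N, l):
    n = len(x_points)
    poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
    return (1 / (2 * n)) * ((y_points - poli_N)**2).sum() + l * np.abs(beta).sum()
# It would be minimized exactly like the Ridge version, e.g.
# beta_7_lasso = opt.minimize(fun=min_sq_N_lasso, x0=np.zeros(8), args=(x, y, 7, 0.0003)).x
###Output
_____no_output_____
###Markdown
Now, the noisy straight-line data with the two outliers: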
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
N = 20
x = np.linspace(0, 10, N)
# y = ecn. recta + ruido
y = 10 + 2 * x + np.random.normal(loc=0, scale=2, size=(N,))
y[0] = 30
y[-1] = 10
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
Solucionamos el problema normalmente...
###Code
solucion = opt.minimize(fun=min_sq_1,
x0=np.zeros(2),
args=(x, y))
beta = solucion.x
beta
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
###Output
_____no_output_____
###Markdown
Si estos puntos que parecen ser atípicos, hacen parte de una 'mala medición', vemos que el ajuste que obtenemos a los otros puntos es muy pobre...**¿Cómo podemos evitar esto?** La respuesta es [*ajuste robusto*](https://en.wikipedia.org/wiki/Huber_loss).
###Code
def huber(a, d):
if np.abs(a) <= d:
return a**2
else:
return d * (2 * np.abs(a) - d)
def min_sq_rob(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * (np.vectorize(huber)(y_points - recta, 5)).sum()
solucion = opt.minimize(fun=min_sq_rob,
x0=np.zeros(2),
args=(x, y))
beta_rob = solucion.x
beta_rob
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
y_fit_rob = beta_rob[0] + beta_rob[1] * x
plt.plot(x, y_fit_rob, 'g', lw=3,
label=f'Recta ajustada robusta: $y=${np.round(beta_rob[0], 2)} + {np.round(beta_rob[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
###Output
_____no_output_____
###Markdown
Mejor... 5. TareaLa siguiente celda lee datos correspondientes a tamaños $x$ ($ft^2$) y precios $y$ (USD) de casas en Portland, Oregon.1. Graficar estos datos poniendo los precios en el eje $y$ y los tamaños en el eje $x$.2. Ajustar polinomios de grado 1 hasta grado 5.3. Graficar el error cuadrático acumulado contra el número de términos, y elegir un polinomio que ajuste bien y su grado sea el menor posible.4. Supongamos que un amigo tuyo tiene una casa de $1250 ft^2$. Según tu modelo, ¿en cuánto podría vender dicha casa?Abrir un nuevo notebook, llamado `Tarea3_ApellidoNombre` y subirlo a canvas en el espacio habilitado.
###Code
import pandas as pd
data = pd.read_csv("housing_prices.csv")
x = data['size'].values
y = data['price'].values
x
import numpy as np
x_grafica = np.sort(x)
y
###Output
_____no_output_____
###Markdown
Ajuste de curvas> El **ajuste de curvas** es el proceso de construir una curva (función), que sea el mejor ajuste a una serie de puntos. Las curvas ajustadas pueden ser usadas como asistencia en la visualización de datos, para inferir valores de una función donde no hay datos disponibles, y para resumir la relación entre variables.**Referencia**:- https://en.wikipedia.org/wiki/Curve_fitting___ 0. IntroducciónConsideremos un polinomio de grado uno:$$y = \beta_1 x + \beta_0.$$Esta es una **línea recta** que tiene pendiente $\beta_1$. Sabemos que habrá una línea conectando dos puntos cualesquiera. Por tanto, *una ecuación polinómica de primer grado es un ajuste perfecto entre dos puntos*.Si consideramos ahora un polinomio de segundo grado,$$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$este se ajustará exactamente a tres puntos. Si aumentamos el grado de la función a la de un polinomio de tercer grado, obtenemos:$$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$que se ajustará a cuatro puntos.**Ejemplos**1. Encontrar la línea recta que pasa exactamente por los puntos $(0,1)$ y $(1,0)$.2. Encontrar la parábola que pasa exactamente por los puntos $(-1,1)$, $(0,0)$ y $(1,1)$.**Solución**1. Consideramos $y=\beta_1 x + \beta_0$. Evaluando en el punto $(0,1)$, obtenemos $\beta_1(0) + \beta_0 = 1$. Ahora, evaluando en el punto $(1,0)$, obtenemos $\beta_1(1) + \beta_0 = 0$. De esta manera,$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$Resolviendo, $\beta_0=-\beta_1=1$.
###Code
# Importar numpy y el matplotlib.pyplot
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
# Encontrar beta_0 y beta_1 resolviendo el sistema
A = np.array([[1, 0],
[1, 1]])
b = np.array([1, 0])
# beta = np.linalg.inv(A).dot(b)
beta = np.linalg.solve(A, b)
beta
# Graficar la recta encontrada junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(1, 0, 'ro', label="(1, 0)")
plt.plot(0, 1, 'bo', label="(0, 1)")
x = np.linspace(-0.1, 1.1)
y = beta[0] + beta[1] * x
plt.plot(x,
y,
'k',
lw=2, # lw: Grosor de la línea: Line width
label="Recta ajustada")
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
2. Consideramos $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluando en el punto $(-1,1)$, obtenemos $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Ahora, evaluando en el punto $(0,0)$, obtenemos $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finalmente, evaluando en el punto $(1,1)$, obtenemos $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. De esta manera,$$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$Resolviendo, $\beta_0=\beta_1=0$ y $\beta_2=1$.
###Code
# Encontrar beta_0, beta_1 y beta_2
A = np.array([[1, -1, 1],
[1, 0, 0],
[1, 1, 1]])
b = np.array([1, 0, 1])
beta = np.dot(np.linalg.inv(A), b)
beta
# Graficar la parabola junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(-1, 1, 'ro', label="(-1, 1)")
plt.plot(0, 0, 'bo', label="(0, 0)")
plt.plot(1, 1, 'go', label="(1, 1)")
x = np.linspace(-1.1, 1.1)
y = beta[0] + beta[1] * x + beta[2] * x**2
plt.plot(x,
y,
'k',
lw=2, # lw: Grosor de la línea: Line width
label="Parábola ajustada")
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
¿Qué tienen en común los anteriores problemas?Las curvas están completamente determinadas por los puntos (datos limpios, suficientes y necesarios).Esto se traduce en que, al llevar el problema a un sistema de ecuaciones lineales, existe una única solución: **no hay necesidad, ni se puede optimizar nada**.¿Tendremos datos así de '*bonitos*' en la vida real?La realidad es que los datos que encontraremos en nuestra vida profesional se parecen más a esto...
###Code
np.random.normal(0, 0.3, (100,))
np.random.normal(0, 0.3, (5, 5))
# Crear un conjunto de puntos ruidosos a partir de una recta
x = np.linspace(0, 10, 100)
y = 10 + 2 * x + np.random.normal(0, 1.5, (100,))
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4) # ms: Tamaño de puntos: Marker size
plt.grid()
###Output
_____no_output_____
###Markdown
¿Cómo ajustamos una curva a esto? 1. Problema básicoConsideramos que tenemos un conjunto de n pares ordenados de datos $(x_i,y_i)$, para $i=1,2,3,\dots,n$. ¿Cuál es la recta que mejor se ajusta a estos datos?Consideramos entonces ajustes de la forma $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (lineas rectas).Para decir '*mejor*', tenemos que definir algún sentido en que una recta se ajuste *mejor* que otra.**Mínimos cuadrados**: el objetivo es seleccionar los coeficientes $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$, de forma que la función evaluada en los puntos $x_i$ ($\hat{f}(x_i)$) aproxime los valores correspondientes $y_i$.La formulación por mínimos cuadrados, encuentra los $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ que minimiza$$\frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-(\beta_0+ \beta_1x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\frac{1}{2n}\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$donde $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, y $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ Esto es,$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ Notar que el problema anterior no es de programación lineal, ¿porqué?Para llevar a cabo la anterior minimización, la librería `SciPy` en su módulo `optimize` contiene la función `minimize`.
###Code
# Importar el módulo optimize de la librería scipy
from scipy.optimize import minimize
# Función minimize
help(minimize)
###Output
Help on function minimize in module scipy.optimize._minimize:
minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
Minimization of scalar function of one or more variables.
Parameters
----------
fun : callable
The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and `args`
is a tuple of the fixed parameters needed to completely
specify the function.
x0 : ndarray, shape (n,)
Initial guess. Array of real elements of size (n,),
where 'n' is the number of independent variables.
args : tuple, optional
Extra arguments passed to the objective function and its
derivatives (`fun`, `jac` and `hess` functions).
method : str or callable, optional
Type of solver. Should be one of
- 'Nelder-Mead' :ref:`(see here) <optimize.minimize-neldermead>`
- 'Powell' :ref:`(see here) <optimize.minimize-powell>`
- 'CG' :ref:`(see here) <optimize.minimize-cg>`
- 'BFGS' :ref:`(see here) <optimize.minimize-bfgs>`
- 'Newton-CG' :ref:`(see here) <optimize.minimize-newtoncg>`
- 'L-BFGS-B' :ref:`(see here) <optimize.minimize-lbfgsb>`
- 'TNC' :ref:`(see here) <optimize.minimize-tnc>`
- 'COBYLA' :ref:`(see here) <optimize.minimize-cobyla>`
- 'SLSQP' :ref:`(see here) <optimize.minimize-slsqp>`
- 'trust-constr':ref:`(see here) <optimize.minimize-trustconstr>`
- 'dogleg' :ref:`(see here) <optimize.minimize-dogleg>`
- 'trust-ncg' :ref:`(see here) <optimize.minimize-trustncg>`
- 'trust-exact' :ref:`(see here) <optimize.minimize-trustexact>`
- 'trust-krylov' :ref:`(see here) <optimize.minimize-trustkrylov>`
- custom - a callable object (added in version 0.14.0),
see below for description.
If not given, chosen to be one of ``BFGS``, ``L-BFGS-B``, ``SLSQP``,
depending if the problem has constraints or bounds.
jac : {callable, '2-point', '3-point', 'cs', bool}, optional
Method for computing the gradient vector. Only for CG, BFGS,
Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov,
trust-exact and trust-constr. If it is a callable, it should be a
function that returns the gradient vector:
``jac(x, *args) -> array_like, shape (n,)``
where x is an array with shape (n,) and `args` is a tuple with
the fixed parameters. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite
difference scheme for numerical estimation of the gradient. Options
'3-point' and 'cs' are available only to 'trust-constr'.
If `jac` is a Boolean and is True, `fun` is assumed to return the
gradient along with the objective function. If False, the gradient
will be estimated using '2-point' finite difference estimation.
hess : {callable, '2-point', '3-point', 'cs', HessianUpdateStrategy}, optional
Method for computing the Hessian matrix. Only for Newton-CG, dogleg,
trust-ncg, trust-krylov, trust-exact and trust-constr. If it is
callable, it should return the Hessian matrix:
``hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)``
where x is a (n,) ndarray and `args` is a tuple with the fixed
parameters. LinearOperator and sparse matrix returns are
allowed only for 'trust-constr' method. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite difference scheme
for numerical estimation. Or, objects implementing
`HessianUpdateStrategy` interface can be used to approximate
the Hessian. Available quasi-Newton methods implementing
this interface are:
- `BFGS`;
- `SR1`.
Whenever the gradient is estimated via finite-differences,
the Hessian cannot be estimated with options
{'2-point', '3-point', 'cs'} and needs to be
estimated using one of the quasi-Newton strategies.
Finite-difference options {'2-point', '3-point', 'cs'} and
`HessianUpdateStrategy` are available only for 'trust-constr' method.
hessp : callable, optional
Hessian of objective function times an arbitrary vector p. Only for
Newton-CG, trust-ncg, trust-krylov, trust-constr.
Only one of `hessp` or `hess` needs to be given. If `hess` is
provided, then `hessp` will be ignored. `hessp` must compute the
Hessian times an arbitrary vector:
``hessp(x, p, *args) -> ndarray shape (n,)``
where x is a (n,) ndarray, p is an arbitrary vector with
dimension (n,) and `args` is a tuple with the fixed
parameters.
bounds : sequence or `Bounds`, optional
Bounds on variables for L-BFGS-B, TNC, SLSQP and
trust-constr methods. There are two ways to specify the bounds:
1. Instance of `Bounds` class.
2. Sequence of ``(min, max)`` pairs for each element in `x`. None
is used to specify no bound.
constraints : {Constraint, dict} or List of {Constraint, dict}, optional
Constraints definition (only for COBYLA, SLSQP and trust-constr).
Constraints for 'trust-constr' are defined as a single object or a
list of objects specifying constraints to the optimization problem.
Available constraints are:
- `LinearConstraint`
- `NonlinearConstraint`
Constraints for COBYLA, SLSQP are defined as a list of dictionaries.
Each dictionary with fields:
type : str
Constraint type: 'eq' for equality, 'ineq' for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of `fun` (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to
be zero whereas inequality means that it is to be non-negative.
Note that COBYLA only supports inequality constraints.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific
options.
options : dict, optional
A dictionary of solver options. All methods accept the following
generic options:
maxiter : int
Maximum number of iterations to perform. Depending on the
method each iteration may use several function evaluations.
disp : bool
Set to True to print convergence messages.
For method-specific options, see :func:`show_options()`.
callback : callable, optional
Called after each iteration. For 'trust-constr' it is a callable with
the signature:
``callback(xk, OptimizeResult state) -> bool``
where ``xk`` is the current parameter vector. and ``state``
is an `OptimizeResult` object, with the same fields
as the ones from the return. If callback returns True
the algorithm execution is terminated.
For all the other methods, the signature is:
``callback(xk)``
where ``xk`` is the current parameter vector.
Returns
-------
res : OptimizeResult
The optimization result represented as a ``OptimizeResult`` object.
Important attributes are: ``x`` the solution array, ``success`` a
Boolean flag indicating if the optimizer exited successfully and
``message`` which describes the cause of the termination. See
`OptimizeResult` for a description of other attributes.
See also
--------
minimize_scalar : Interface to minimization algorithms for scalar
univariate functions
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *BFGS*.
**Unconstrained minimization**
Method :ref:`Nelder-Mead <optimize.minimize-neldermead>` uses the
Simplex algorithm [1]_, [2]_. This algorithm is robust in many
applications. However, if numerical computation of derivative can be
trusted, other algorithms using the first and/or second derivatives
information might be preferred for their better performance in
general.
Method :ref:`Powell <optimize.minimize-powell>` is a modification
of Powell's method [3]_, [4]_ which is a conjugate direction
method. It performs sequential one-dimensional minimizations along
each vector of the directions set (`direc` field in `options` and
`info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method :ref:`CG <optimize.minimize-cg>` uses a nonlinear conjugate
gradient algorithm by Polak and Ribiere, a variant of the
Fletcher-Reeves method described in [5]_ pp. 120-122. Only the
first derivatives are used.
Method :ref:`BFGS <optimize.minimize-bfgs>` uses the quasi-Newton
method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [5]_
pp. 136. It uses the first derivatives only. BFGS has proven good
performance even for non-smooth optimizations. This method also
returns an approximation of the Hessian inverse, stored as
`hess_inv` in the OptimizeResult object.
Method :ref:`Newton-CG <optimize.minimize-newtoncg>` uses a
Newton-CG algorithm [5]_ pp. 168 (also known as the truncated
Newton method). It uses a CG method to the compute the search
direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm. Suitable for large-scale
problems.
Method :ref:`dogleg <optimize.minimize-dogleg>` uses the dog-leg
trust-region algorithm [5]_ for unconstrained minimization. This
algorithm requires the gradient and Hessian; furthermore the
Hessian is required to be positive definite.
Method :ref:`trust-ncg <optimize.minimize-trustncg>` uses the
Newton conjugate gradient trust-region algorithm [5]_ for
unconstrained minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
Method :ref:`trust-krylov <optimize.minimize-trustkrylov>` uses
the Newton GLTR trust-region algorithm [14]_, [15]_ for unconstrained
minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
On indefinite problems it requires usually less iterations than the
`trust-ncg` method and is recommended for medium and large-scale problems.
Method :ref:`trust-exact <optimize.minimize-trustexact>`
is a trust-region method for unconstrained minimization in which
quadratic subproblems are solved almost exactly [13]_. This
algorithm requires the gradient and the Hessian (which is
*not* required to be positive definite). It is, in many
situations, the Newton method to converge in fewer iteraction
and the most recommended for small and medium-size problems.
**Bound-Constrained minimization**
Method :ref:`L-BFGS-B <optimize.minimize-lbfgsb>` uses the L-BFGS-B
algorithm [6]_, [7]_ for bound constrained minimization.
Method :ref:`TNC <optimize.minimize-tnc>` uses a truncated Newton
algorithm [5]_, [8]_ to minimize a function with variables subject
to bounds. This algorithm uses gradient information; it is also
called Newton Conjugate-Gradient. It differs from the *Newton-CG*
method described above as it wraps a C implementation and allows
each variable to be given upper and lower bounds.
**Constrained Minimization**
Method :ref:`COBYLA <optimize.minimize-cobyla>` uses the
Constrained Optimization BY Linear Approximation (COBYLA) method
[9]_, [10]_, [11]_. The algorithm is based on linear
approximations to the objective function and each constraint. The
method wraps a FORTRAN implementation of the algorithm. The
constraints functions 'fun' may return either a single number
or an array or list of numbers.
Method :ref:`SLSQP <optimize.minimize-slsqp>` uses Sequential
Least SQuares Programming to minimize a function of several
variables with any combination of bounds, equality and inequality
constraints. The method wraps the SLSQP Optimization subroutine
originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into
large floating values.
Method :ref:`trust-constr <optimize.minimize-trustconstr>` is a
trust-region algorithm for constrained optimization. It swiches
between two implementations depending on the problem definition.
It is the most versatile constrained minimization algorithm
implemented in SciPy and the most appropriate for large-scale problems.
For equality constrained problems it is an implementation of Byrd-Omojokun
Trust-Region SQP method described in [17]_ and in [5]_, p. 549. When
inequality constraints are imposed as well, it swiches to the trust-region
interior point method described in [16]_. This interior point algorithm,
in turn, solves inequality constraints by introducing slack variables
and solving a sequence of equality-constrained barrier problems
for progressively smaller values of the barrier parameter.
The previously described equality constrained SQP method is
used to solve the subproblems with increasing levels of accuracy
as the iterate gets closer to a solution.
**Finite-Difference Options**
For Method :ref:`trust-constr <optimize.minimize-trustconstr>`
the gradient and the Hessian may be approximated using
three finite-difference schemes: {'2-point', '3-point', 'cs'}.
The scheme 'cs' is, potentially, the most accurate but it
requires the function to correctly handles complex inputs and to
be differentiable in the complex plane. The scheme '3-point' is more
accurate than '2-point' but requires twice as much operations.
**Custom minimizers**
It may be useful to pass a custom minimization method, for example
when using a frontend to this method such as `scipy.optimize.basinhopping`
or a different library. You can simply pass a callable as the ``method``
parameter.
The callable is called as ``method(fun, x0, args, **kwargs, **options)``
where ``kwargs`` corresponds to any other parameters passed to `minimize`
(such as `callback`, `hess`, etc.), except the `options` dict, which has
its contents also passed as `method` parameters pair by pair. Also, if
`jac` has been passed as a bool type, `jac` and `fun` are mangled so that
`fun` returns just the function values and `jac` is converted to a function
returning the Jacobian. The method shall return an `OptimizeResult`
object.
The provided `method` callable must be able to accept (and possibly ignore)
arbitrary parameters; the set of parameters accepted by `minimize` may
expand in future versions and then these parameters will be passed to
the method. You can find an example in the scipy.optimize tutorial.
.. versionadded:: 0.11.0
References
----------
.. [1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function
Minimization. The Computer Journal 7: 308-13.
.. [2] Wright M H. 1996. Direct search methods: Once scorned, now
respectable, in Numerical Analysis 1995: Proceedings of the 1995
Dundee Biennial Conference in Numerical Analysis (Eds. D F
Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK.
191-208.
.. [3] Powell, M J D. 1964. An efficient method for finding the minimum of
a function of several variables without calculating derivatives. The
Computer Journal 7: 155-162.
.. [4] Press W, S A Teukolsky, W T Vetterling and B P Flannery.
Numerical Recipes (any edition), Cambridge University Press.
.. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization.
Springer New York.
.. [6] Byrd, R H and P Lu and J. Nocedal. 1995. A Limited Memory
Algorithm for Bound Constrained Optimization. SIAM Journal on
Scientific and Statistical Computing 16 (5): 1190-1208.
.. [7] Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm
778: L-BFGS-B, FORTRAN routines for large scale bound constrained
optimization. ACM Transactions on Mathematical Software 23 (4):
550-560.
.. [8] Nash, S G. Newton-Type Minimization Via the Lanczos Method.
1984. SIAM Journal of Numerical Analysis 21: 770-778.
.. [9] Powell, M J D. A direct search optimization method that models
the objective and constraint functions by linear interpolation.
1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez
and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.
.. [10] Powell M J D. Direct search algorithms for optimization
calculations. 1998. Acta Numerica 7: 287-336.
.. [11] Powell M J D. A view of algorithms for optimization without
derivatives. 2007.Cambridge University Technical Report DAMTP
2007/NA03
.. [12] Kraft, D. A software package for sequential quadratic
programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace
Center -- Institute for Flight Mechanics, Koln, Germany.
.. [13] Conn, A. R., Gould, N. I., and Toint, P. L.
Trust region methods. 2000. Siam. pp. 169-200.
.. [14] F. Lenders, C. Kirches, A. Potschka: "trlib: A vector-free
implementation of the GLTR method for iterative solution of
the trust region problem", https://arxiv.org/abs/1611.04718
.. [15] N. Gould, S. Lucidi, M. Roma, P. Toint: "Solving the
Trust-Region Subproblem using the Lanczos Method",
SIAM J. Optim., 9(2), 504--525, (1999).
.. [16] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999.
An interior point algorithm for large-scale nonlinear programming.
SIAM Journal on Optimization 9.4: 877-900.
.. [17] Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the
implementation of an algorithm for large-scale equality constrained
optimization. SIAM Journal on Optimization 8.3: 682-706.
Examples
--------
Let us consider the problem of minimizing the Rosenbrock function. This
function (and its respective derivatives) is implemented in `rosen`
(resp. `rosen_der`, `rosen_hess`) in the `scipy.optimize`.
>>> from scipy.optimize import minimize, rosen, rosen_der
A simple application of the *Nelder-Mead* method is:
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1., 1., 1., 1., 1.])
Now using the *BFGS* algorithm, using the first derivative and a few
options:
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
... options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 26
Function evaluations: 31
Gradient evaluations: 31
>>> res.x
array([ 1., 1., 1., 1., 1.])
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess_inv
array([[ 0.00749589, 0.01255155, 0.02396251, 0.04750988, 0.09495377], # may vary
[ 0.01255155, 0.02510441, 0.04794055, 0.09502834, 0.18996269],
[ 0.02396251, 0.04794055, 0.09631614, 0.19092151, 0.38165151],
[ 0.04750988, 0.09502834, 0.19092151, 0.38341252, 0.7664427 ],
[ 0.09495377, 0.18996269, 0.38165151, 0.7664427, 1.53713523]])
Next, consider a minimization problem with several constraints (namely
Example 16.4 from [5]_). The objective function is:
>>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
There are three constraints defined as:
>>> cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},
... {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
... {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})
And variables must be positive, hence the following bounds:
>>> bnds = ((0, None), (0, None))
The optimization problem is solved using the SLSQP method as:
>>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
... constraints=cons)
It should converge to the theoretical solution (1.4 ,1.7).
###Markdown
Important parameters:- fun: the objective function $f(x)$; it must be defined before calling minimize, e.g. `def f(x): ... return ...`- x0: initial guess. A nonlinear function in general has multiple minima; depending on the seed, the solver will land in one of them. It is passed as $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$.- bounds: as in linprog.- constraints: functions defining the constraints $g_i(x)$ and $h_j(x)$. They are defined just like $f(x)$ and passed as dictionaries of the form {'type': 'ineq', 'fun': g_i} or {'type': 'eq', 'fun': h_j}. First we must build the objective function and the initial guess: $$\min_{\beta} \frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2$$$\hat{f}(x) = \beta_0+\beta_1 x$
###Code
# Definir funcion objetivo y punto inicial
def error_sq(beta, x, y):
n = len(x)
f = beta[0] + beta[1] * x
return ((y - f)**2).sum() / (2 * n)
beta_ini = [0, 0]
solucion1 = minimize(fun=error_sq,
x0=beta_ini,
args=(x, y))
solucion1
# Mostrar
beta = solucion1.x
beta
###Output
_____no_output_____
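###Markdown
As a quick sanity check, the same coefficients can also be obtained in closed form: the objective is an ordinary least-squares problem, so solving it with `np.linalg.lstsq` should give (up to numerical precision) the same result as `minimize`. This is only a minimal sketch, assuming the `x` and `y` arrays defined above; the names `X` and `beta_ls` are introduced here just for illustration.
###Code
# Closed-form least-squares check (sketch; assumes x and y from the cells above)
import numpy as np

X = np.column_stack((np.ones_like(x), x))      # design matrix with columns [1, x_i]
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_ls                                        # should be close to solucion1.x
###Output
_____no_output_____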
###Markdown
How good does the fit look?
###Code
# Grafica de los puntos y la recta ajustada
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'k', lw=3, label="Recta ajustada")
plt.legend()
plt.grid()
R_sq = 1 - np.var(y - y_fit) / np.var(y)
R_sq
###Output
_____no_output_____
###Markdown
Note that the slope is approximately $2$ and the intercept is approximately $10$.The same idea can be extended to polynomial fitting... 2. Polynomial fittingNow, consider the following data set...
###Code
# Generamos 100 puntos ruidosos a partir de una senoidal
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + np.random.normal(0, 0.25, (n,))
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4) # ms: Tamaño de puntos: Marker size
plt.grid()
###Output
_____no_output_____
###Markdown
2.1. Will a straight line fit well?
###Code
# Definir funcion objetivo y semilla
def error_sq1(beta, x, y):
n = len(x)
f = beta[0] + beta[1] * x
return ((y - f)**2).sum() / (2 * n)
beta_ini1 = [0, 0]
# Resolver
solucion1 = minimize(fun=error_sq1,
x0=beta_ini1,
args=(x, y))
solucion1
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the straight-line fit**
###Code
# Mostrar coeficientes
beta1 = solucion1.x
beta1
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
y_fit = beta1[0] + beta1[1] * x
plt.plot(x, y_fit, 'k', lw=3, label="Recta ajustada")
plt.legend()
plt.grid()
Rsq_poly1 = 1 - np.var(y_fit - y) / np.var(y)
Rsq_poly1
###Output
_____no_output_____
###Markdown
2.2. The straight line is not a good fit... Will a parabola fit well?
###Code
# Definir funcion objetivo y semilla
def error_sq2(beta, x, y):
n = len(x)
f = beta[0] + beta[1] * x + beta[2] * x**2
return ((y - f)**2).sum() / (2 * n)
beta_ini2 = [0, 0, 0]
# Resolver
solucion2 = minimize(fun=error_sq2,
x0=beta_ini2,
args=(x, y))
solucion2
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the parabola fit**
###Code
# Mostrar coeficientes
beta2 = solucion2.x
beta2
# Graficar recta y parabola ajustadas
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
y_fit1 = beta1[0] + beta1[1] * x
plt.plot(x, y_fit1, 'k', lw=3, label="Recta ajustada")
y_fit2 = beta2[0] + beta2[1] * x + beta2[2] * x**2
plt.plot(x, y_fit2, 'b', lw=3, label="Parábola ajustada")
plt.legend()
plt.grid()
beta1, beta2
###Output
_____no_output_____
###Markdown
2.3. Not that either. Perhaps a cubic polynomial...
###Code
beta = [0, 1, 2, 3]
np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(3 + 1)], axis=1).sum(axis=1)
# Definir funcion objetivo y semilla
def error_sq(beta, x, y, N):
n = len(x)
f = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
return ((y - f)**2).sum() / (2 * n)
beta_ini3 = np.zeros((4,))
# Resolver
solucion3 = minimize(fun=error_sq,
x0=beta_ini3,
args=(x, y, 3))
solucion3
###Output
_____no_output_____
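###Markdown
The list comprehension above builds the powers of $x$ by hand. As an alternative, here is a minimal sketch (assuming the same `x`, `y` and the `minimize` import from earlier cells): the polynomial can also be evaluated through a Vandermonde design matrix built with `np.vander`, which is often easier to read. The names `error_sq_vander` and `solucion3_vander` are introduced only for illustration.
###Code
# Alternative polynomial objective using a Vandermonde design matrix (sketch)
import numpy as np

def error_sq_vander(beta, x, y, N):
    n = len(x)
    X = np.vander(x, N + 1, increasing=True)   # columns: x**0, x**1, ..., x**N
    f = X @ beta                               # polynomial evaluated at every x_i
    return ((y - f)**2).sum() / (2 * n)

# Should give approximately the same coefficients as solucion3
solucion3_vander = minimize(fun=error_sq_vander,
                            x0=np.zeros((4,)),
                            args=(x, y, 3))
solucion3_vander.x
###Output
_____no_output_____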
###Markdown
**Let's look at $\beta$ for the cubic fit**
###Code
beta2
# Mostrar coeficientes
beta3 = solucion3.x
beta3
# Graficar recta, parabola y cubica
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
y_fit1 = beta1[0] + beta1[1] * x
plt.plot(x, y_fit1, 'k', lw=3, label="Recta ajustada")
y_fit2 = beta2[0] + beta2[1] * x + beta2[2] * x**2
plt.plot(x, y_fit2, 'b', lw=3, label="Parábola ajustada")
y_fit3 = np.concatenate([beta3[i] * x.reshape((len(x), 1))**i for i in range(3 + 1)], axis=1).sum(axis=1)
plt.plot(x, y_fit3, 'g', lw=3, label="Polinomio cúbico ajustado")
plt.legend()
plt.grid()
Rsq_poly3 = 1 - np.var(y - y_fit3) / np.var(y)
Rsq_poly3
###Output
_____no_output_____
###Markdown
Much better. So, does the approximation keep improving the higher we push the order? 2.4. Let's fit a degree-7 polynomial...
###Code
# Definimos funcion objetivo y semilla
solucion7 = minimize(fun=error_sq,
x0=np.zeros((8,)),
args=(x, y, 7)
)
solucion7
###Output
_____no_output_____
###Markdown
**Again, let's look at $\beta$**
###Code
# Resolvemos
beta7 = solucion7.x
beta7
###Output
_____no_output_____
###Markdown
**Careful! OVERFITTING...** Look at the size of some of the coefficients. What happens when the coefficients are very large?
###Code
# Grafica de ajustes
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
x_fit = np.linspace(-0.2, 1.1)
y_fit1 = beta1[0] + beta1[1] * x_fit
plt.plot(x_fit, y_fit1, 'k', lw=3, label="Recta ajustada")
y_fit2 = beta2[0] + beta2[1] * x_fit + beta2[2] * x_fit**2
plt.plot(x_fit, y_fit2, 'b', lw=3, label="Parábola ajustada")
y_fit3 = np.concatenate([beta3[i] * x_fit.reshape((len(x_fit), 1))**i for i in range(3 + 1)], axis=1).sum(axis=1)
plt.plot(x_fit, y_fit3, 'g', lw=3, label="Polinomio cúbico ajustado")
y_fit7 = np.concatenate([beta7[i] * x_fit.reshape((len(x_fit), 1))**i for i in range(7 + 1)], axis=1).sum(axis=1)
plt.plot(x_fit, y_fit7, 'm', lw=3, label="Polinomio grado 7 ajustado")
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
Rsq_poly7 = 1 - np.var(y - y_fit7) / np.var(y)
Rsq_poly7
Rsq_poly3
###Output
_____no_output_____
###Markdown
It is useful to look at the error as a function of the polynomial order... **model selection**
###Code
error_cuadratico_medio = []
for N in range(1, 11):
solucion = minimize(fun=error_sq,
x0=np.zeros((N + 1,)),
args=(x, y, N))
beta = solucion.x
y_fit = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
error_cuadratico_medio.append(((y - y_fit)**2).mean())
# Error cuadratico
plt.plot(range(1, 11), error_cuadratico_medio, '*')
plt.grid()
plt.xlabel("Grado del polinomio $N$")
plt.ylabel("Error cuadrático medio")
###Output
_____no_output_____
###Markdown
Indeed, degree $3$ seems to be enough.
###Code
beta3
beta7
###Output
_____no_output_____
###Markdown
How can we prevent *overfitting* regardless of the model order? 3. RegularizationWe saw that the least-squares solution is:$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$However, if we increase the model order we get overfitting, and some of the optimal coefficients $\boldsymbol{\beta}$ grow enormously. A very large coefficient means that a lot of weight is given to some feature (which may just be noise... useless for prediction).Regularization penalizes the magnitude of the coefficients $\boldsymbol{\beta}$ in the optimization problem, so that they do not grow so much. 3.1. Ridge$$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$
###Code
def error_sq_ridge(beta, x, y, N, l):
n = len(x)
f = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
return ((y - f)**2).sum() / (2 * n) + l * np.linalg.norm(beta[1:])**2
solucion7_ridge = minimize(fun=error_sq_ridge,
x0=np.zeros((8,)),
args=(x, y, 7, 0.0001))
beta7_ridge = solucion7_ridge.x
beta7_ridge
beta7
# Grafica de ajustes
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'+r',
ms=4, # ms: Tamaño de puntos: Marker size
label='Puntos')
x_fit = np.linspace(-0.2, 1.1)
y_fit1 = beta1[0] + beta1[1] * x_fit
plt.plot(x_fit, y_fit1, 'k', lw=3, label="Recta ajustada")
y_fit2 = beta2[0] + beta2[1] * x_fit + beta2[2] * x_fit**2
plt.plot(x_fit, y_fit2, 'b', lw=3, label="Parábola ajustada")
y_fit3 = np.concatenate([beta3[i] * x_fit.reshape((len(x_fit), 1))**i for i in range(3 + 1)], axis=1).sum(axis=1)
plt.plot(x_fit, y_fit3, 'g', lw=3, label="Polinomio cúbico ajustado")
y_fit7 = np.concatenate([beta7[i] * x_fit.reshape((len(x_fit), 1))**i for i in range(7 + 1)], axis=1).sum(axis=1)
plt.plot(x_fit, y_fit7, 'm', lw=3, label="Polinomio grado 7 ajustado")
y_fit7_ridge = np.concatenate([beta7_ridge[i] * x_fit.reshape((len(x_fit), 1))**i for i in range(7 + 1)], axis=1).sum(axis=1)
plt.plot(x_fit, y_fit7_ridge, 'y', lw=3, label="Polinomio grado 7 ajustado - regularización Ridge")
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
###Output
_____no_output_____
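###Markdown
To see what the penalty actually does, here is a minimal sketch (assuming `error_sq_ridge`, `x`, `y` and `minimize` from the cells above): sweep a few values of $\lambda$ and compare the size of the resulting degree-7 coefficients. Larger $\lambda$ should shrink the coefficients towards zero; the exact numbers will vary with the random data.
###Code
# Effect of the regularization strength on the degree-7 coefficients (sketch)
import numpy as np

for l in [0, 1e-4, 1e-2, 1]:
    sol = minimize(fun=error_sq_ridge, x0=np.zeros((8,)), args=(x, y, 7, l))
    # report the norm of the non-intercept coefficients for each lambda
    print(f"lambda = {l:8.4f} -> ||beta[1:]|| = {np.linalg.norm(sol.x[1:]):8.2f}")
###Output
_____no_output_____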
###Markdown
3.2. Lasso$$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$The 1-norm is nothing more than the sum of the absolute values of the components, $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$.
###Code
help(np.linalg.norm)
###Output
Help on function norm in module numpy.linalg:
norm(x, ord=None, axis=None, keepdims=False)
Matrix or vector norm.
This function is able to return one of eight different matrix norms,
or one of an infinite number of vector norms (described below), depending
on the value of the ``ord`` parameter.
Parameters
----------
x : array_like
Input array. If `axis` is None, `x` must be 1-D or 2-D, unless `ord`
is None. If both `axis` and `ord` are None, the 2-norm of
``x.ravel`` will be returned.
ord : {non-zero int, inf, -inf, 'fro', 'nuc'}, optional
Order of the norm (see table under ``Notes``). inf means numpy's
`inf` object. The default is None.
axis : {None, int, 2-tuple of ints}, optional.
If `axis` is an integer, it specifies the axis of `x` along which to
compute the vector norms. If `axis` is a 2-tuple, it specifies the
axes that hold 2-D matrices, and the matrix norms of these matrices
are computed. If `axis` is None then either a vector norm (when `x`
is 1-D) or a matrix norm (when `x` is 2-D) is returned. The default
is None.
.. versionadded:: 1.8.0
keepdims : bool, optional
If this is set to True, the axes which are normed over are left in the
result as dimensions with size one. With this option the result will
broadcast correctly against the original `x`.
.. versionadded:: 1.10.0
Returns
-------
n : float or ndarray
Norm of the matrix or vector(s).
Notes
-----
For values of ``ord <= 0``, the result is, strictly speaking, not a
mathematical 'norm', but it may still be useful for various numerical
purposes.
The following norms can be calculated:
===== ============================ ==========================
ord norm for matrices norm for vectors
===== ============================ ==========================
None Frobenius norm 2-norm
'fro' Frobenius norm --
'nuc' nuclear norm --
inf max(sum(abs(x), axis=1)) max(abs(x))
-inf min(sum(abs(x), axis=1)) min(abs(x))
0 -- sum(x != 0)
1 max(sum(abs(x), axis=0)) as below
-1 min(sum(abs(x), axis=0)) as below
2 2-norm (largest sing. value) as below
-2 smallest singular value as below
other -- sum(abs(x)**ord)**(1./ord)
===== ============================ ==========================
The Frobenius norm is given by [1]_:
:math:`||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}`
The nuclear norm is the sum of the singular values.
References
----------
.. [1] G. H. Golub and C. F. Van Loan, *Matrix Computations*,
Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15
Examples
--------
>>> from numpy import linalg as LA
>>> a = np.arange(9) - 4
>>> a
array([-4, -3, -2, ..., 2, 3, 4])
>>> b = a.reshape((3, 3))
>>> b
array([[-4, -3, -2],
[-1, 0, 1],
[ 2, 3, 4]])
>>> LA.norm(a)
7.745966692414834
>>> LA.norm(b)
7.745966692414834
>>> LA.norm(b, 'fro')
7.745966692414834
>>> LA.norm(a, np.inf)
4.0
>>> LA.norm(b, np.inf)
9.0
>>> LA.norm(a, -np.inf)
0.0
>>> LA.norm(b, -np.inf)
2.0
>>> LA.norm(a, 1)
20.0
>>> LA.norm(b, 1)
7.0
>>> LA.norm(a, -1)
-4.6566128774142013e-010
>>> LA.norm(b, -1)
6.0
>>> LA.norm(a, 2)
7.745966692414834
>>> LA.norm(b, 2)
7.3484692283495345
>>> LA.norm(a, -2)
0.0
>>> LA.norm(b, -2)
1.8570331885190563e-016 # may vary
>>> LA.norm(a, 3)
5.8480354764257312 # may vary
>>> LA.norm(a, -3)
0.0
Using the `axis` argument to compute vector norms:
>>> c = np.array([[ 1, 2, 3],
... [-1, 1, 4]])
>>> LA.norm(c, axis=0)
array([ 1.41421356, 2.23606798, 5. ])
>>> LA.norm(c, axis=1)
array([ 3.74165739, 4.24264069])
>>> LA.norm(c, ord=1, axis=1)
array([ 6., 6.])
Using the `axis` argument to compute matrix norms:
>>> m = np.arange(8).reshape(2,2,2)
>>> LA.norm(m, axis=(1,2))
array([ 3.74165739, 11.22497216])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(3.7416573867739413, 11.224972160321824)
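###Markdown
Following the Ridge example above, here is a minimal sketch of the Lasso-penalized objective (assuming the same `x`, `y`, polynomial construction and `minimize` import used earlier in this notebook); the only change is that the penalty is the 1-norm of the coefficients. The names `error_sq_lasso` and `solucion7_lasso` are introduced only for illustration, and since the 1-norm is not differentiable at zero, the default gradient-based solver returns only an approximate answer here.
###Code
# Lasso-penalized objective for a degree-N polynomial (sketch)
import numpy as np

def error_sq_lasso(beta, x, y, N, l):
    n = len(x)
    f = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
    return ((y - f)**2).sum() / (2 * n) + l * np.abs(beta[1:]).sum()   # 1-norm penalty

solucion7_lasso = minimize(fun=error_sq_lasso,
                           x0=np.zeros((8,)),
                           args=(x, y, 7, 0.0001))
solucion7_lasso.x
###Output
_____no_output_____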
###Markdown
4. Robust fittingNow, let us revisit the straight-line case with a couple of atypical points at the beginning and at the end...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
x = np.linspace(0, 10, 100)
y = 10 + 2 * x + np.random.normal(0, 1.5, (100,))
y[0] = 50
y[50] = 100
y[-1] = 0
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'or',
ms=6) # ms: Tamaño de puntos: Marker size
plt.grid()
###Output
_____no_output_____
###Markdown
We solve the problem in the usual way...
###Code
def error_sq(beta, x, y, N):
n = len(x)
f = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
return ((y - f)**2).sum() / (2 * n)
solucion_atipicos = minimize(fun=error_sq,
x0=np.zeros((2,)),
args=(x, y, 1))
solucion_atipicos
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'or',
ms=6) # ms: Tamaño de puntos: Marker size
beta_atipicos = solucion_atipicos.x
y_fit = beta_atipicos[0] + beta_atipicos[1] * x
plt.plot(x, y_fit)
plt.grid()
#plt.axis([0, 10, 10, 30])
###Output
_____no_output_____
###Markdown
If these seemingly atypical points actually come from a 'bad measurement', we can see that the fit we obtain to the remaining points is very poor...**How can we avoid this?** The answer is [*robust fitting*](https://en.wikipedia.org/wiki/Huber_loss).
###Code
# Huber-type loss: quadratic for small residuals, linear for large ones,
# so outliers have a limited influence on the fit
def huber(a, d):
    if np.abs(a) <= d:
        return a**2
    else:
        return d * (2 * np.abs(a) - d)
def error_sq_huber(beta, x, y, N):
n = len(x)
f = np.concatenate([beta[i] * x.reshape((len(x), 1))**i for i in range(N + 1)], axis=1).sum(axis=1)
return (np.vectorize(huber)(y - f, 1)).sum() / (2 * n)
solucion_robusto = minimize(fun=error_sq_huber,
x0=np.zeros((2,)),
args=(x, y, 1))
solucion_robusto
plt.figure(figsize=(6, 4))
plt.plot(x,
y,
'or',
ms=6) # ms: Tamaño de puntos: Marker size
beta_atipicos = solucion_atipicos.x
y_fit = beta_atipicos[0] + beta_atipicos[1] * x
plt.plot(x, y_fit, lw=4, label='Ajuste común')
beta_robusto = solucion_robusto.x
y_fit_robusto = beta_robusto[0] + beta_robusto[1] * x
plt.plot(x, y_fit_robusto, lw=4, label='Ajuste robusto')
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Better... 5. HomeworkThe following cell reads data with sizes $x$ ($ft^2$) and prices $y$ (USD) of houses in Portland, Oregon.1. Plot these data with prices on the $y$ axis and sizes on the $x$ axis.2. Fit polynomials of degree 1 up to degree 5.3. Plot the mean squared error against the polynomial degree, and choose a polynomial that fits well with the smallest possible degree.4. Suppose a friend of yours owns a $1250 ft^2$ house. According to your model, for how much could that house be sold?Open a new notebook named `Tarea3_ApellidoNombre` and upload it to Canvas in the designated space.
###Code
import pandas as pd
data = pd.read_csv("housing_prices.csv")
x = data['size'].values
y = data['price'].values
x
y
###Output
_____no_output_____
###Markdown
Curve fitting> **Curve fitting** is the process of constructing a curve (function) that best fits a series of points. Fitted curves can be used to aid data visualization, to infer values of a function where no data are available, and to summarize the relationship between variables.**Reference**:- https://en.wikipedia.org/wiki/Curve_fitting___ 0. IntroductionConsider a polynomial of degree one:$$y = \beta_1 x + \beta_0.$$This is a **straight line** with slope $\beta_1$. We know there is a line connecting any two points. Therefore, *a first-degree polynomial equation is a perfect fit between two points*.If we now consider a second-degree polynomial,$$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$it will fit exactly three points. If we increase the degree to a third-degree polynomial, we obtain:$$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$which will fit four points.**Examples**1. Find the straight line that passes exactly through the points $(0,1)$ and $(1,0)$.2. Find the parabola that passes exactly through the points $(-1,1)$, $(0,0)$ and $(1,1)$.**Solution**1. We consider $y=\beta_1 x + \beta_0$. Evaluating at the point $(0,1)$ gives $\beta_1(0) + \beta_0 = 1$. Evaluating at the point $(1,0)$ gives $\beta_1(1) + \beta_0 = 0$. Therefore,$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$Solving, $\beta_0=-\beta_1=1$.
###Code
# Importar numpy y el matplotlib.pyplot
# Encontrar beta_0 y beta_1 resolviendo el sistema
# Graficar la recta encontrada junto con los puntos
###Output
_____no_output_____
###Markdown
2. We consider $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluating at the point $(-1,1)$ gives $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Evaluating at the point $(0,0)$ gives $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finally, evaluating at the point $(1,1)$ gives $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. Therefore,$$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$Solving, $\beta_0=\beta_1=0$ and $\beta_2=1$.
###Code
# Encontrar beta_0, beta_1 y beta_2
# Graficar la parabola junto con los puntos
###Output
_____no_output_____
###Markdown
What do the previous problems have in common?The curves are completely determined by the points (clean data, exactly as many as needed).This means that, when the problem is written as a system of linear equations, there is a unique solution: **there is nothing to optimize, and no need to**.Will we get data this '*nice*' in real life?In practice, the data we will meet in our professional life look more like this...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
# Graficar
###Output
_____no_output_____
###Markdown
How do we fit a curve to this? 1. Basic problemSuppose we have a set of n ordered pairs of data $(x_i,y_i)$, for $i=1,2,3,\dots,n$. Which straight line best fits these data?We therefore consider fits of the form $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (straight lines).To say 'best', we have to define some sense in which one line fits the data *better* than another.**Least squares**: the goal is to choose the coefficients $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ so that the function evaluated at the points $x_i$ ($\hat{f}(x_i)$) approximates the corresponding values $y_i$.The least-squares formulation finds the $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ that minimizes$$\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$where $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, and $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ That is,$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ Note that this problem is not a linear program. Why not?To carry out this minimization, the `SciPy` library provides the function `minimize` in its `optimize` module.
###Code
# Importar el módulo optimize de la librería scipy
# Función minimize
###Output
_____no_output_____
###Markdown
Important parameters:- fun: the objective function $f(x)$; it must be defined before calling minimize, e.g. `def f(x): ... return ...`- x0: initial guess. A nonlinear function in general has multiple minima; depending on the seed, the solver will land in one of them. It is passed as $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$.- bounds: as in linprog.- constraints: functions defining the constraints $g_i(x)$ and $h_j(x)$. They are defined just like $f(x)$ and passed as dictionaries of the form {'type': 'ineq', 'fun': g_i} or {'type': 'eq', 'fun': h_j}. First we must build the objective function and the initial guess:
###Code
# Definir funcion objetivo y punto inicial
# Mostrar
###Output
_____no_output_____
###Markdown
How good does the fit look?
###Code
# Coeficientes \beta_0 y \beta_1
# Grafica de los puntos y la recta ajustada
###Output
_____no_output_____
###Markdown
Note that the slope is approximately $2$ and the intercept is approximately $10$.The same idea can be extended to polynomial fitting... 2. Polynomial fittingNow, consider the following data set...
###Code
# Generamos 100 puntos ruidosos a partir de una senoidal
###Output
_____no_output_____
###Markdown
2.1. Will a straight line fit well?
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the straight-line fit**
###Code
# Mostrar coeficientes
# Graficar
###Output
_____no_output_____
###Markdown
2.2. The straight line is not a good fit... Will a parabola fit well?
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the parabola fit**
###Code
# Mostrar coeficientes
# Graficar recta y parabola ajustadas
###Output
_____no_output_____
###Markdown
2.3. Not that either. Perhaps a cubic polynomial...
###Code
# Definir funcion objetivo y semilla
# Resolver
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the cubic fit**
###Code
# Mostrar coeficientes
# Graficar recta, parabola y cubica
###Output
_____no_output_____
###Markdown
Much better. So, does the approximation keep improving the higher we push the order? 2.4. Let's fit a degree-7 polynomial...
###Code
# Definimos funcion objetivo y semilla
# Resolvemos
###Output
_____no_output_____
###Markdown
**Again, let's look at $\beta$**
###Code
# Mostrar coeficientes
###Output
_____no_output_____
###Markdown
**Careful! OVERFITTING...** Look at the size of some of the coefficients. What happens when the coefficients are very large?
###Code
# Grafica de ajustes
###Output
_____no_output_____
###Markdown
It is useful to look at the error as a function of the polynomial order... **model selection**
###Code
# Error cuadratico
###Output
_____no_output_____
###Markdown
Indeed, degree $3$ seems to be enough. How can we prevent *overfitting* regardless of the model order? 3. RegularizationWe saw that the least-squares solution is:$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$However, if we increase the model order we get overfitting, and some of the optimal coefficients $\boldsymbol{\beta}$ grow enormously. A very large coefficient means that a lot of weight is given to some feature (which may just be noise... useless for prediction).Regularization penalizes the magnitude of the coefficients $\boldsymbol{\beta}$ in the optimization problem, so that they do not grow so much. 3.1. Ridge$$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$ 3.2. Lasso$$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$The 1-norm is nothing more than the sum of the absolute values of the components, $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$. 4. Robust fittingNow, let us revisit the straight-line case with a couple of atypical points at the beginning and at the end... We solve the problem in the usual way... If these seemingly atypical points actually come from a 'bad measurement', we can see that the fit we obtain to the remaining points is very poor...**How can we avoid this?** The answer is [*robust fitting*](https://en.wikipedia.org/wiki/Huber_loss).
###Code
def huber(a, d):
if np.abs(a) <= d:
return a**2
else:
return d * (2 * np.abs(a) - d)
###Output
_____no_output_____
###Markdown
Better... 5. HomeworkThe following cell reads data with sizes $x$ ($ft^2$) and prices $y$ (USD) of houses in Portland, Oregon.1. Plot these data with prices on the $y$ axis and sizes on the $x$ axis.2. Fit polynomials of degree 1 up to degree 5.3. Plot the cumulative squared error against the number of terms, and choose a polynomial that fits well with the smallest possible degree.4. Suppose a friend of yours owns a $1250 ft^2$ house. According to your model, for how much could that house be sold?Open a new notebook named `Tarea3_ApellidoNombre` and upload it to Canvas in the designated space.
###Code
import pandas as pd
data = pd.read_csv("housing_prices.csv")
x = data['size'].values
y = data['price'].values
###Output
_____no_output_____
###Markdown
Curve fitting> **Curve fitting** is the process of constructing a curve (function) that best fits a series of points. Fitted curves can be used to aid data visualization, to infer values of a function where no data are available, and to summarize the relationship between variables.**Reference**:- https://en.wikipedia.org/wiki/Curve_fitting___ 0. IntroductionConsider a polynomial of degree one:$$y = \beta_1 x + \beta_0.$$This is a **straight line** with slope $\beta_1$. We know there is a line connecting any two points. Therefore, *a first-degree polynomial equation is a perfect fit between two points*.If we now consider a second-degree polynomial,$$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$it will fit exactly three points. If we increase the degree to a third-degree polynomial, we obtain:$$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$which will fit four points.**Examples**1. Find the straight line that passes exactly through the points $(0,1)$ and $(1,0)$.2. Find the parabola that passes exactly through the points $(-1,1)$, $(0,0)$ and $(1,1)$.**Solution**1. We consider $y=\beta_1 x + \beta_0$. Evaluating at the point $(0,1)$ gives $\beta_1(0) + \beta_0 = 1$. Evaluating at the point $(1,0)$ gives $\beta_1(1) + \beta_0 = 0$. Therefore,$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$Solving, $\beta_0=-\beta_1=1$.
###Code
# Importar numpy y el matplotlib.pyplot
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
help(np.linalg.solve)
# Encontrar beta_0 y beta_1 resolviendo el sistema
A = np.array([[1, 0],
[1, 1]])
h = np.array([1, 0])
# h = A^{-1} * h
#beta = np.linalg.solve(A, h)
beta = np.linalg.inv(A).dot(h)
beta
# Graficar la recta encontrada junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(0, 1, 'ro', ms=10, label='$(0, 1)$')
plt.plot(1, 0, 'ro', ms=10, label='$(1, 0)$')
x_num = np.linspace(-1, 2)
y_num = beta[0] + beta[1] * x_num
plt.plot(x_num, y_num, 'b', lw=3,
label=f'$y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2. We consider $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluating at the point $(-1,1)$ gives $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Evaluating at the point $(0,0)$ gives $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finally, evaluating at the point $(1,1)$ gives $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. Therefore,$$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$Solving, $\beta_0=\beta_1=0$ and $\beta_2=1$.
###Code
# Encontrar beta_0, beta_1 y beta_2
A = np.array([[1, -1, 1],
[1, 0, 0],
[1, 1, 1]])
h = np.array([1, 0, 1])
beta = np.linalg.solve(A, h)
beta
# Graficar la parabola junto con los puntos
plt.figure(figsize=(6, 4))
plt.plot(-1, 1, 'ro', ms=10, label='$(-1, 1)$')
plt.plot(0, 0, 'ro', ms=10, label='$(0, 0)$')
plt.plot(1, 1, 'ro', ms=10, label='$(1, 1)$')
x_num = np.linspace(-2, 2)
y_num = beta[0] + beta[1] * x_num + beta[2] * x_num**2
plt.plot(x_num, y_num, 'b', lw=3,
label=f'$y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$ + {np.round(beta[2], 2)}$x^2$')
plt.axvline(x=0, c='k', ls='--')
plt.axhline(y=0, c='k', ls='--')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
What do the previous problems have in common?The curves are completely determined by the points (clean data, exactly as many as needed).This means that, when the problem is written as a system of linear equations, there is a unique solution: **there is nothing to optimize, and no need to**.Will we get data this '*nice*' in real life?In practice, the data we will meet in our professional life look more like this...
###Code
# Crear un conjunto de puntos ruidosos a partir de una recta
N = 100
x = np.linspace(0, 10, N)
# y = ecn. recta + ruido
y = 10 + 2 * x + np.random.normal(loc=0, scale=2, size=(N,))
# Graficar
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
How do we fit a curve to this? 1. Basic problemSuppose we have a set of n ordered pairs of data $(x_i,y_i)$, for $i=1,2,3,\dots,n$. Which straight line best fits these data?We therefore consider fits of the form $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (straight lines).To say 'best', we have to define some sense in which one line fits the data *better* than another.**Least squares**: the goal is to choose the coefficients $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ so that the function evaluated at the points $x_i$ ($\hat{f}(x_i)$) approximates the corresponding values $y_i$.The least-squares formulation finds the $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ that minimizes$$\frac{1}{2n}\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-(\beta_0+ \beta_1x_i))^2=\frac{1}{2n}\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\frac{1}{2n}\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$where $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, and $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ That is,$$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ Note that this problem is not a linear program. Why not?To carry out this minimization, the `SciPy` library provides the function `minimize` in its `optimize` module.
###Code
# Importar el módulo optimize de la librería scipy
from scipy import optimize as opt
# Función minimize
help(opt.minimize)
###Output
Help on function minimize in module scipy.optimize._minimize:
minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
Minimization of scalar function of one or more variables.
Parameters
----------
fun : callable
The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and `args`
is a tuple of the fixed parameters needed to completely
specify the function.
x0 : ndarray, shape (n,)
Initial guess. Array of real elements of size (n,),
where 'n' is the number of independent variables.
args : tuple, optional
Extra arguments passed to the objective function and its
derivatives (`fun`, `jac` and `hess` functions).
method : str or callable, optional
Type of solver. Should be one of
- 'Nelder-Mead' :ref:`(see here) <optimize.minimize-neldermead>`
- 'Powell' :ref:`(see here) <optimize.minimize-powell>`
- 'CG' :ref:`(see here) <optimize.minimize-cg>`
- 'BFGS' :ref:`(see here) <optimize.minimize-bfgs>`
- 'Newton-CG' :ref:`(see here) <optimize.minimize-newtoncg>`
- 'L-BFGS-B' :ref:`(see here) <optimize.minimize-lbfgsb>`
- 'TNC' :ref:`(see here) <optimize.minimize-tnc>`
- 'COBYLA' :ref:`(see here) <optimize.minimize-cobyla>`
- 'SLSQP' :ref:`(see here) <optimize.minimize-slsqp>`
- 'trust-constr':ref:`(see here) <optimize.minimize-trustconstr>`
- 'dogleg' :ref:`(see here) <optimize.minimize-dogleg>`
- 'trust-ncg' :ref:`(see here) <optimize.minimize-trustncg>`
- 'trust-exact' :ref:`(see here) <optimize.minimize-trustexact>`
- 'trust-krylov' :ref:`(see here) <optimize.minimize-trustkrylov>`
- custom - a callable object (added in version 0.14.0),
see below for description.
If not given, chosen to be one of ``BFGS``, ``L-BFGS-B``, ``SLSQP``,
depending if the problem has constraints or bounds.
jac : {callable, '2-point', '3-point', 'cs', bool}, optional
Method for computing the gradient vector. Only for CG, BFGS,
Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov,
trust-exact and trust-constr. If it is a callable, it should be a
function that returns the gradient vector:
``jac(x, *args) -> array_like, shape (n,)``
where x is an array with shape (n,) and `args` is a tuple with
the fixed parameters. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite
difference scheme for numerical estimation of the gradient. Options
'3-point' and 'cs' are available only to 'trust-constr'.
If `jac` is a Boolean and is True, `fun` is assumed to return the
gradient along with the objective function. If False, the gradient
will be estimated using '2-point' finite difference estimation.
hess : {callable, '2-point', '3-point', 'cs', HessianUpdateStrategy}, optional
Method for computing the Hessian matrix. Only for Newton-CG, dogleg,
trust-ncg, trust-krylov, trust-exact and trust-constr. If it is
callable, it should return the Hessian matrix:
``hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)``
where x is a (n,) ndarray and `args` is a tuple with the fixed
parameters. LinearOperator and sparse matrix returns are
allowed only for 'trust-constr' method. Alternatively, the keywords
{'2-point', '3-point', 'cs'} select a finite difference scheme
for numerical estimation. Or, objects implementing
`HessianUpdateStrategy` interface can be used to approximate
the Hessian. Available quasi-Newton methods implementing
this interface are:
- `BFGS`;
- `SR1`.
Whenever the gradient is estimated via finite-differences,
the Hessian cannot be estimated with options
{'2-point', '3-point', 'cs'} and needs to be
estimated using one of the quasi-Newton strategies.
Finite-difference options {'2-point', '3-point', 'cs'} and
`HessianUpdateStrategy` are available only for 'trust-constr' method.
hessp : callable, optional
Hessian of objective function times an arbitrary vector p. Only for
Newton-CG, trust-ncg, trust-krylov, trust-constr.
Only one of `hessp` or `hess` needs to be given. If `hess` is
provided, then `hessp` will be ignored. `hessp` must compute the
Hessian times an arbitrary vector:
``hessp(x, p, *args) -> ndarray shape (n,)``
where x is a (n,) ndarray, p is an arbitrary vector with
dimension (n,) and `args` is a tuple with the fixed
parameters.
bounds : sequence or `Bounds`, optional
Bounds on variables for L-BFGS-B, TNC, SLSQP and
trust-constr methods. There are two ways to specify the bounds:
1. Instance of `Bounds` class.
2. Sequence of ``(min, max)`` pairs for each element in `x`. None
is used to specify no bound.
constraints : {Constraint, dict} or List of {Constraint, dict}, optional
Constraints definition (only for COBYLA, SLSQP and trust-constr).
Constraints for 'trust-constr' are defined as a single object or a
list of objects specifying constraints to the optimization problem.
Available constraints are:
- `LinearConstraint`
- `NonlinearConstraint`
Constraints for COBYLA, SLSQP are defined as a list of dictionaries.
Each dictionary with fields:
type : str
Constraint type: 'eq' for equality, 'ineq' for inequality.
fun : callable
The function defining the constraint.
jac : callable, optional
The Jacobian of `fun` (only for SLSQP).
args : sequence, optional
Extra arguments to be passed to the function and Jacobian.
Equality constraint means that the constraint function result is to
be zero whereas inequality means that it is to be non-negative.
Note that COBYLA only supports inequality constraints.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific
options.
options : dict, optional
A dictionary of solver options. All methods accept the following
generic options:
maxiter : int
Maximum number of iterations to perform. Depending on the
method each iteration may use several function evaluations.
disp : bool
Set to True to print convergence messages.
For method-specific options, see :func:`show_options()`.
callback : callable, optional
Called after each iteration. For 'trust-constr' it is a callable with
the signature:
``callback(xk, OptimizeResult state) -> bool``
        where ``xk`` is the current parameter vector and ``state``
is an `OptimizeResult` object, with the same fields
as the ones from the return. If callback returns True
the algorithm execution is terminated.
For all the other methods, the signature is:
``callback(xk)``
where ``xk`` is the current parameter vector.
Returns
-------
res : OptimizeResult
The optimization result represented as a ``OptimizeResult`` object.
Important attributes are: ``x`` the solution array, ``success`` a
Boolean flag indicating if the optimizer exited successfully and
``message`` which describes the cause of the termination. See
`OptimizeResult` for a description of other attributes.
See also
--------
minimize_scalar : Interface to minimization algorithms for scalar
univariate functions
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *BFGS*.
**Unconstrained minimization**
Method :ref:`Nelder-Mead <optimize.minimize-neldermead>` uses the
Simplex algorithm [1]_, [2]_. This algorithm is robust in many
applications. However, if numerical computation of derivative can be
trusted, other algorithms using the first and/or second derivatives
information might be preferred for their better performance in
general.
Method :ref:`Powell <optimize.minimize-powell>` is a modification
of Powell's method [3]_, [4]_ which is a conjugate direction
method. It performs sequential one-dimensional minimizations along
each vector of the directions set (`direc` field in `options` and
`info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method :ref:`CG <optimize.minimize-cg>` uses a nonlinear conjugate
gradient algorithm by Polak and Ribiere, a variant of the
Fletcher-Reeves method described in [5]_ pp. 120-122. Only the
first derivatives are used.
Method :ref:`BFGS <optimize.minimize-bfgs>` uses the quasi-Newton
method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [5]_
pp. 136. It uses the first derivatives only. BFGS has proven good
performance even for non-smooth optimizations. This method also
returns an approximation of the Hessian inverse, stored as
`hess_inv` in the OptimizeResult object.
Method :ref:`Newton-CG <optimize.minimize-newtoncg>` uses a
Newton-CG algorithm [5]_ pp. 168 (also known as the truncated
Newton method). It uses a CG method to compute the search
direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm. Suitable for large-scale
problems.
Method :ref:`dogleg <optimize.minimize-dogleg>` uses the dog-leg
trust-region algorithm [5]_ for unconstrained minimization. This
algorithm requires the gradient and Hessian; furthermore the
Hessian is required to be positive definite.
Method :ref:`trust-ncg <optimize.minimize-trustncg>` uses the
Newton conjugate gradient trust-region algorithm [5]_ for
unconstrained minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
Method :ref:`trust-krylov <optimize.minimize-trustkrylov>` uses
the Newton GLTR trust-region algorithm [14]_, [15]_ for unconstrained
minimization. This algorithm requires the gradient
and either the Hessian or a function that computes the product of
the Hessian with a given vector. Suitable for large-scale problems.
On indefinite problems it usually requires fewer iterations than the
`trust-ncg` method and is recommended for medium and large-scale problems.
Method :ref:`trust-exact <optimize.minimize-trustexact>`
is a trust-region method for unconstrained minimization in which
quadratic subproblems are solved almost exactly [13]_. This
algorithm requires the gradient and the Hessian (which is
*not* required to be positive definite). In many situations it is the
Newton method that converges in the fewest iterations, and it is the most
recommended for small and medium-size problems.
**Bound-Constrained minimization**
Method :ref:`L-BFGS-B <optimize.minimize-lbfgsb>` uses the L-BFGS-B
algorithm [6]_, [7]_ for bound constrained minimization.
Method :ref:`TNC <optimize.minimize-tnc>` uses a truncated Newton
algorithm [5]_, [8]_ to minimize a function with variables subject
to bounds. This algorithm uses gradient information; it is also
called Newton Conjugate-Gradient. It differs from the *Newton-CG*
method described above as it wraps a C implementation and allows
each variable to be given upper and lower bounds.
**Constrained Minimization**
Method :ref:`COBYLA <optimize.minimize-cobyla>` uses the
Constrained Optimization BY Linear Approximation (COBYLA) method
[9]_, [10]_, [11]_. The algorithm is based on linear
approximations to the objective function and each constraint. The
method wraps a FORTRAN implementation of the algorithm. The
constraints functions 'fun' may return either a single number
or an array or list of numbers.
Method :ref:`SLSQP <optimize.minimize-slsqp>` uses Sequential
Least SQuares Programming to minimize a function of several
variables with any combination of bounds, equality and inequality
constraints. The method wraps the SLSQP Optimization subroutine
originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into
large floating values.
Method :ref:`trust-constr <optimize.minimize-trustconstr>` is a
trust-region algorithm for constrained optimization. It switches
between two implementations depending on the problem definition.
It is the most versatile constrained minimization algorithm
implemented in SciPy and the most appropriate for large-scale problems.
For equality constrained problems it is an implementation of Byrd-Omojokun
Trust-Region SQP method described in [17]_ and in [5]_, p. 549. When
inequality constraints are imposed as well, it switches to the trust-region
interior point method described in [16]_. This interior point algorithm,
in turn, solves inequality constraints by introducing slack variables
and solving a sequence of equality-constrained barrier problems
for progressively smaller values of the barrier parameter.
The previously described equality constrained SQP method is
used to solve the subproblems with increasing levels of accuracy
as the iterate gets closer to a solution.
**Finite-Difference Options**
For Method :ref:`trust-constr <optimize.minimize-trustconstr>`
the gradient and the Hessian may be approximated using
three finite-difference schemes: {'2-point', '3-point', 'cs'}.
The scheme 'cs' is, potentially, the most accurate, but it
requires the function to correctly handle complex inputs and to
be differentiable in the complex plane. The scheme '3-point' is more
accurate than '2-point' but requires twice as many operations.
**Custom minimizers**
It may be useful to pass a custom minimization method, for example
when using a frontend to this method such as `scipy.optimize.basinhopping`
or a different library. You can simply pass a callable as the ``method``
parameter.
The callable is called as ``method(fun, x0, args, **kwargs, **options)``
where ``kwargs`` corresponds to any other parameters passed to `minimize`
(such as `callback`, `hess`, etc.), except the `options` dict, which has
its contents also passed as `method` parameters pair by pair. Also, if
`jac` has been passed as a bool type, `jac` and `fun` are mangled so that
`fun` returns just the function values and `jac` is converted to a function
returning the Jacobian. The method shall return an `OptimizeResult`
object.
The provided `method` callable must be able to accept (and possibly ignore)
arbitrary parameters; the set of parameters accepted by `minimize` may
expand in future versions and then these parameters will be passed to
the method. You can find an example in the scipy.optimize tutorial.
.. versionadded:: 0.11.0
References
----------
.. [1] Nelder, J A, and R Mead. 1965. A Simplex Method for Function
Minimization. The Computer Journal 7: 308-13.
.. [2] Wright M H. 1996. Direct search methods: Once scorned, now
respectable, in Numerical Analysis 1995: Proceedings of the 1995
Dundee Biennial Conference in Numerical Analysis (Eds. D F
Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK.
191-208.
.. [3] Powell, M J D. 1964. An efficient method for finding the minimum of
a function of several variables without calculating derivatives. The
Computer Journal 7: 155-162.
.. [4] Press W, S A Teukolsky, W T Vetterling and B P Flannery.
Numerical Recipes (any edition), Cambridge University Press.
.. [5] Nocedal, J, and S J Wright. 2006. Numerical Optimization.
Springer New York.
.. [6] Byrd, R H and P Lu and J. Nocedal. 1995. A Limited Memory
Algorithm for Bound Constrained Optimization. SIAM Journal on
Scientific and Statistical Computing 16 (5): 1190-1208.
.. [7] Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm
778: L-BFGS-B, FORTRAN routines for large scale bound constrained
optimization. ACM Transactions on Mathematical Software 23 (4):
550-560.
.. [8] Nash, S G. Newton-Type Minimization Via the Lanczos Method.
1984. SIAM Journal of Numerical Analysis 21: 770-778.
.. [9] Powell, M J D. A direct search optimization method that models
the objective and constraint functions by linear interpolation.
1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez
and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.
.. [10] Powell M J D. Direct search algorithms for optimization
calculations. 1998. Acta Numerica 7: 287-336.
.. [11] Powell M J D. A view of algorithms for optimization without
derivatives. 2007.Cambridge University Technical Report DAMTP
2007/NA03
.. [12] Kraft, D. A software package for sequential quadratic
programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace
Center -- Institute for Flight Mechanics, Koln, Germany.
.. [13] Conn, A. R., Gould, N. I., and Toint, P. L.
Trust region methods. 2000. Siam. pp. 169-200.
.. [14] F. Lenders, C. Kirches, A. Potschka: "trlib: A vector-free
implementation of the GLTR method for iterative solution of
the trust region problem", https://arxiv.org/abs/1611.04718
.. [15] N. Gould, S. Lucidi, M. Roma, P. Toint: "Solving the
Trust-Region Subproblem using the Lanczos Method",
SIAM J. Optim., 9(2), 504--525, (1999).
.. [16] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999.
An interior point algorithm for large-scale nonlinear programming.
SIAM Journal on Optimization 9.4: 877-900.
.. [17] Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the
implementation of an algorithm for large-scale equality constrained
optimization. SIAM Journal on Optimization 8.3: 682-706.
Examples
--------
Let us consider the problem of minimizing the Rosenbrock function. This
function (and its respective derivatives) is implemented in `rosen`
(resp. `rosen_der`, `rosen_hess`) in the `scipy.optimize`.
>>> from scipy.optimize import minimize, rosen, rosen_der
A simple application of the *Nelder-Mead* method is:
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1., 1., 1., 1., 1.])
Now using the *BFGS* algorithm, using the first derivative and a few
options:
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
... options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 26
Function evaluations: 31
Gradient evaluations: 31
>>> res.x
array([ 1., 1., 1., 1., 1.])
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess_inv
array([[ 0.00749589, 0.01255155, 0.02396251, 0.04750988, 0.09495377], # may vary
[ 0.01255155, 0.02510441, 0.04794055, 0.09502834, 0.18996269],
[ 0.02396251, 0.04794055, 0.09631614, 0.19092151, 0.38165151],
[ 0.04750988, 0.09502834, 0.19092151, 0.38341252, 0.7664427 ],
[ 0.09495377, 0.18996269, 0.38165151, 0.7664427, 1.53713523]])
Next, consider a minimization problem with several constraints (namely
Example 16.4 from [5]_). The objective function is:
>>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
There are three constraints defined as:
>>> cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},
... {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
... {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})
And variables must be positive, hence the following bounds:
>>> bnds = ((0, None), (0, None))
The optimization problem is solved using the SLSQP method as:
>>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
... constraints=cons)
It should converge to the theoretical solution (1.4, 1.7).
###Markdown
Important parameters:- fun: the function $f(x)$; it must be defined before calling minimize, e.g. `def f(x): ... return ...`- x0: initial value. A nonlinear function generally has multiple minima, and depending on the seed the solver falls into one of them. It is passed as $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$.- bounds: as in linprog.- constraints: functions defining the constraints $g_i(x)$ and $h_j(x)$. They are defined just like $f(x)$ and are passed as dictionaries of the form {'type': 'ineq', 'fun': g_i} or {'type': 'eq', 'fun': h_j}. First we must build the objective function and the initial guess:
###Code
# Define the objective function and the initial point
def min_sq(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * ((y_points - recta)**2).sum()
beta_ini = [0, 0]
solucion = opt.minimize(fun=min_sq,
x0=beta_ini,
args=(x, y))
# Show the result
solucion
###Output
_____no_output_____
###Markdown
How good does the fit look?
###Code
# Coefficients \beta_0 and \beta_1
beta = solucion.x
beta
# Plot of the data points and the fitted line
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
Note that the slope is approximately $2$ and the intercept is approximately $10$. The same idea can be extended to polynomial fitting... 2. Polynomial fitting. Now, consider the following data set...
###Code
# Generate 100 noisy points from a sinusoid
N = 100
x = np.linspace(0, 1, N)
y = np.sin(2 * np.pi * x) + np.random.normal(loc=0, scale=0.3, size=(N,))
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.1. Will a straight line fit well?
###Code
# Define the objective function and the initial guess
def min_sq_1(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * ((y_points - recta)**2).sum()
beta_ini_1 = [0, 0]
# Solve
solucion_1 = opt.minimize(fun=min_sq_1,
x0=beta_ini_1,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the straight-line fit**
###Code
# Show the coefficients
beta_1 = solucion_1.x
beta_1
# Plot
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit, 'b', lw=3,
         label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.2. The line is not a good fit... Will a parabola fit well?
###Code
# Define the objective function and the initial guess
def min_sq_2(beta, x_points, y_points):
n = len(x_points)
parabola = beta[0] + beta[1] * x_points + beta[2] * x_points**2
return (1 / (2 * n)) * ((y_points - parabola)**2).sum()
beta_ini_2 = [0, 0, 0]
# Solve
solucion_2 = opt.minimize(fun=min_sq_2,
x0=beta_ini_2,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the parabola fit**
###Code
# Show the coefficients
beta_2 = solucion_2.x
beta_2
# Plot the fitted line and parabola
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
2.3. Not that either. Perhaps a cubic polynomial...
###Code
# Define the objective function and the initial guess
def min_sq_3(beta, x_points, y_points):
n = len(x_points)
cubico = beta[0] + beta[1] * x_points + beta[2] * x_points**2 + beta[3] * x_points**3
return (1 / (2 * n)) * ((y_points - cubico)**2).sum()
beta_ini_3 = [0, 0, 0, 0]
# Solve
solucion_3 = opt.minimize(fun=min_sq_3,
x0=beta_ini_3,
args=(x, y))
###Output
_____no_output_____
###Markdown
**Let's look at $\beta$ for the cubic fit**
###Code
# Show the coefficients
beta_3 = solucion_3.x
beta_3
# Plot the line, parabola, and cubic fits
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x + beta_3[2] * x**2 + beta_3[3] * x**3
plt.plot(x, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
###Output
_____no_output_____
###Markdown
Much better. So, does raising the order always improve the approximation? 2.4. Let's fit a degree-7 polynomial...
###Code
# Define the objective function and the initial guess
def min_sq_7(beta, x_points, y_points):
n = len(x_points)
poli_7 = np.array([beta[i] * x_points**i for i in range(8)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_7)**2).sum()
beta_ini_7 = np.zeros(8)
# Solve
solucion_7 = opt.minimize(fun=min_sq_7,
x0=beta_ini_7,
args=(x, y))
###Output
_____no_output_____
###Markdown
**De nuevo, veamos $\beta$**
###Code
beta_1
beta_2
beta_3
# Show the coefficients
beta_7 = solucion_7.x
beta_7
###Output
_____no_output_____
###Markdown
**Careful! OVERFITTING...** Look at the size of some of the coefficients. What happens when the coefficients are large?
###Code
# Plot of the fits
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit1 = beta_1[0] + beta_1[1] * x
plt.plot(x, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x + beta_2[2] * x**2
plt.plot(x, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x + beta_3[2] * x**2 + beta_3[3] * x**3
plt.plot(x, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
y_fit7 = np.array([beta_7[i] * x**i for i in range(8)]).sum(axis=0)
plt.plot(x, y_fit7, lw=3, label=f'Polinomio de grado 7 ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
solucion_1
solucion_2
solucion_7
###Output
_____no_output_____
###Markdown
It is useful to look at the error as a function of the polynomial order... **model selection**
###Code
# Objective function for a degree-N polynomial fit
def min_sq_N(beta, x_points, y_points, N):
n = len(x_points)
poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_N)**2).sum()
error = []
for i in range(1, 10):
beta_ini = np.zeros(i + 1)
solucion = opt.minimize(fun=min_sq_N, x0=beta_ini, args=(x, y, i))
error.append(solucion.fun)
# Squared error
plt.figure(figsize=(6, 4))
plt.plot(range(1, 10), error)
plt.xlabel('Orden del polinomio ajustado')
plt.ylabel('Error de mínimos cuadrados')
plt.grid()
###Output
_____no_output_____
###Markdown
Indeed, it seems that degree $3$ is enough.
###Code
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
x_num = np.linspace(-0.1, 1.2)
y_fit1 = beta_1[0] + beta_1[1] * x_num
plt.plot(x_num, y_fit1, lw=3,
label=f'Recta ajustada: $y=${np.round(beta_1[0], 2)} + {np.round(beta_1[1], 2)}$x$')
y_fit2 = beta_2[0] + beta_2[1] * x_num + beta_2[2] * x_num**2
plt.plot(x_num, y_fit2, lw=3,
label=f'Parabola ajustada: '
f'$y=${np.round(beta_2[0], 2)} + {np.round(beta_2[1], 2)}$x$ + {np.round(beta_2[2], 2)}$x^2$')
y_fit3 = beta_3[0] + beta_3[1] * x_num + beta_3[2] * x_num**2 + beta_3[3] * x_num**3
plt.plot(x_num, y_fit3, lw=3,
label=f'Polinomio cúbico ajustado: '
f'$y=${np.round(beta_3[0], 2)} + {np.round(beta_3[1], 2)}$x$ + {np.round(beta_3[2], 2)}$x^2$ + '
f'{np.round(beta_3[3], 2)}$x^3$')
y_fit7 = np.array([beta_7[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7, '--', lw=3, label=f'Polinomio de grado 7 ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
beta_3
beta_7
###Output
_____no_output_____
###Markdown
How can we prevent *overfitting* regardless of the model order? 3. Regularization. We saw that the least-squares solution is $$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$ However, if we increase the model order there is overfitting and some of the optimal coefficients $\boldsymbol{\beta}$ grow enormously. A very large coefficient means a lot of importance is given to some feature (which may be just noise and useless for prediction). Regularization consists of penalizing the magnitude of the coefficients $\boldsymbol{\beta}$ in the optimization problem so that they do not grow so much. 3.1. Ridge $$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$
###Code
def min_sq_N_ridge(beta, x_points, y_points, N, l):
n = len(x_points)
poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
return (1 / (2 * n)) * ((y_points - poli_N)**2).sum() + l * np.linalg.norm(beta)**2
solucion = opt.minimize(fun=min_sq_N_ridge,
x0=np.zeros(8),
args=(x, y, 7, 0.0003))
beta_7_ridge = solucion.x
solucion = opt.minimize(fun=min_sq_N_ridge,
x0=np.zeros(4),
args=(x, y, 3, 0.00003))
beta_3_ridge = solucion.x
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
x_num = np.linspace(-0.1, 1.1)
y_fit7 = np.array([beta_7[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7, '--', lw=3, label=f'Polinomio de grado 7 ajustado')
y_fit7_ridge = np.array([beta_7_ridge[i] * x_num**i for i in range(8)]).sum(axis=0)
plt.plot(x_num, y_fit7_ridge, '--', lw=3, label=f'Polinomio de grado 7 regularizado ajustado')
y_fit3_ridge = np.array([beta_3_ridge[i] * x_num**i for i in range(4)]).sum(axis=0)
plt.plot(x_num, y_fit3_ridge, '--', lw=3, label=f'Polinomio de grado 3 regularizado ajustado')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
plt.grid()
beta_7
beta_7_ridge
beta_3
beta_3_ridge
###Output
_____no_output_____
###Markdown
3.2. Lasso $$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$ The 1-norm is nothing more than the sum of the absolute values of the components, $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$; a minimal sketch of this objective is included at the top of the next code cell. 4. Robust fitting. Now, let us consider again the straight-line case with a couple of outliers at the beginning and at the end...
###Code
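# A minimal sketch of the Lasso objective from section 3.2 above (not part of the original
# notebook): the same squared-error term as min_sq_N, plus an l1 penalty on beta. The name
# `min_sq_N_lasso` and the example penalty weight below are illustrative choices.
def min_sq_N_lasso(beta, x_points, y_points, N, l):
    n = len(x_points)
    poli_N = np.array([beta[i] * x_points**i for i in range(N + 1)]).sum(axis=0)
    return (1 / (2 * n)) * ((y_points - poli_N)**2).sum() + l * np.abs(beta).sum()
# Example usage (commented out, hypothetical penalty weight):
# beta_7_lasso = opt.minimize(fun=min_sq_N_lasso, x0=np.zeros(8), args=(x, y, 7, 0.0003)).x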
# Create a set of noisy points from a straight line
N = 20
x = np.linspace(0, 10, N)
# y = line equation + noise
y = 10 + 2 * x + np.random.normal(loc=0, scale=2, size=(N,))
y[0] = 30
y[-1] = 10
# Plot
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.grid()
###Output
_____no_output_____
###Markdown
We solve the problem in the usual way...
###Code
solucion = opt.minimize(fun=min_sq_1,
x0=np.zeros(2),
args=(x, y))
beta = solucion.x
beta
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
###Output
_____no_output_____
###Markdown
If these seemingly atypical points come from a 'bad measurement', we see that the fit we obtain for the remaining points is very poor... **How can we avoid this?** The answer is [*robust fitting*](https://en.wikipedia.org/wiki/Huber_loss).
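For reference, the loss implemented in the `huber(a, d)` function below (a scaled version of the standard Huber loss, with threshold $d$) is $$L_d(a)=\begin{cases}a^2, & |a|\le d,\\ d\,(2|a|-d), & |a|>d,\end{cases}$$ quadratic for small residuals and only linear for large ones, so outliers pull the fit far less than under plain least squares.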
###Code
def huber(a, d):
if np.abs(a) <= d:
return a**2
else:
return d * (2 * np.abs(a) - d)
def min_sq_rob(beta, x_points, y_points):
n = len(x_points)
recta = beta[0] + beta[1] * x_points
return (1 / (2 * n)) * (np.vectorize(huber)(y_points - recta, 5)).sum()
solucion = opt.minimize(fun=min_sq_rob,
x0=np.zeros(2),
args=(x, y))
beta_rob = solucion.x
beta_rob
plt.figure(figsize=(6, 4))
plt.plot(x, y, 'xr', label='datos')
y_fit = beta[0] + beta[1] * x
plt.plot(x, y_fit, 'b', lw=3,
label=f'Recta ajustada: $y=${np.round(beta[0], 2)} + {np.round(beta[1], 2)}$x$')
y_fit_rob = beta_rob[0] + beta_rob[1] * x
plt.plot(x, y_fit_rob, 'g', lw=3,
label=f'Recta ajustada robusta: $y=${np.round(beta_rob[0], 2)} + {np.round(beta_rob[1], 2)}$x$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1))
plt.grid()
###Output
_____no_output_____
###Markdown
Better... 5. Homework. The following cell reads data with sizes $x$ ($ft^2$) and prices $y$ (USD) of houses in Portland, Oregon. 1. Plot these data with prices on the $y$ axis and sizes on the $x$ axis. 2. Fit polynomials from degree 1 up to degree 5. 3. Plot the accumulated squared error against the number of terms, and choose a polynomial that fits well with the smallest possible degree. 4. Suppose a friend of yours has a house of $1250 ft^2$. According to your model, for how much could that house be sold? Open a new notebook named `Tarea3_ApellidoNombre` and upload it to Canvas in the space provided.
###Code
import pandas as pd
data = pd.read_csv("housing_prices.csv")
x = data['size'].values
y = data['price'].values
x
y
###Output
_____no_output_____ |
Day30_MLR_pickle.ipynb | ###Markdown
Multiple Linear Regression: multiple independent variables in Linear Regression
###Code
from sklearn.linear_model import LinearRegression
import pandas as pd
# Read csv file
car = pd.read_csv('./files/auto-mpg.csv', header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model year', 'origin', 'name'])
car.head()
# Set x, y data
x = car[['weight', 'cylinders']]
y = car[['mpg']]
# Split train & test data
from sklearn.model_selection import train_test_split
# Confirm split data
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.85, random_state=0)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
# Set model object
lm = LinearRegression()
# Fit data
lm.fit(x_train, y_train)
# Predict y_train_hat with linear model already fitted
y_train_hat = lm.predict(x_train)
y_train_hat[:5]
# Compare with actual y_train value
y_train.iloc[:5]
# Get coefficient & intercept
lm.coef_, lm.intercept_
###Output
_____no_output_____
###Markdown
The above outcome means the following:* **y (mpg) = - 0.0620024 * (weight) - 0.73757226 * (cylinders) + 45.89966215**
###Code
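# Hypothetical sanity check (not in the original notebook): reproduce lm.predict() by hand
# for the first training row as intercept + sum(coefficient * feature); `manual_hat` is an
# illustrative name, and the values should match the equation stated above.
manual_hat = (x_train.iloc[[0]].values * lm.coef_).sum(axis=1) + lm.intercept_
manual_hat, lm.predict(x_train.iloc[[0]])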
# R-squared for train data
lm.score(x_train, y_train)
# R-squared for whole data with already fitted linear model
# R-squared of self.predict(x) and y
lm.score(x, y)
# R-squared for test data with already fitted linear model
lm.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Dump Linear Model into Pickle. This answers the question: "How can I save the model outcome and use it later without importing and running everything again?"
###Code
import pickle
pickle.dump(lm, open('./storage/car_lm.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Load Pickle of Linear Model
###Code
lm_loaded = pickle.load(open('./storage/car_lm.pkl', 'rb'))
lm_loaded
###Output
_____no_output_____
###Markdown
It looks like just a plain LinearRegression() object, but it is exactly the same as the fitted model above.
###Code
lm_loaded.predict(x_train)[:5] # The outcome is exactly the same as the previous one
###Output
_____no_output_____ |
Capstone.ipynb | ###Markdown
IBM Data Science Capstone Project
###Code
# Libraries
import numpy as np
import pandas as pd
print('Hello Capstone Project Course!')
###Output
Hello Capstone Project Course!
###Markdown
This notebook will be mainly used for the capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Capstone Project Capstone project guided by Coursera. All subprojects are given in one notebook.
###Code
import pandas as pd
import numpy as np
print('Hello Capstone Project Course')
###Output
Hello Capstone Project Course
###Markdown
Part 1. Clustering neighborhoods in Toronto
###Code
# Private information, which will be used in this notebook
key = ' ???? ' # Google Geocoder API key
CLIENT_ID = ' ???? ' # my Foursquare ID
CLIENT_SECRET = ' ???? ' # my Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100 # A default Foursquare API limit value
###Output
_____no_output_____
###Markdown
Task 1. Create Dataframe.
###Code
from bs4 import BeautifulSoup as bs
import pandas as pd
import numpy as np
import requests
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
raw_data = requests.get(url).text
soup = bs(raw_data, 'html5lib')
table_contents=[]
table=soup.find('table')
for row in table.findAll('td'):
cell = {}
if (row.span.text=='Not assigned'):
pass
else:
cell['PostalCode'] = row.p.text[:3]
cell['Borough'] = (row.span.text).split('(')[0]
cell['Neighborhood'] = (((((row.span.text).split('(')[1]).strip(')')).replace(' /',',')).replace(')',' ')).strip(' ')
table_contents.append(cell)
df = pd.DataFrame(table_contents)
df['Borough']=df['Borough'].replace({'Downtown TorontoStn A PO Boxes25 The Esplanade':'Downtown Toronto Stn A',
'East TorontoBusiness reply mail Processing Centre969 Eastern':'East Toronto Business',
'EtobicokeNorthwest':'Etobicoke Northwest','East YorkEast Toronto':'East York/East Toronto',
'MississaugaCanada Post Gateway Processing Centre':'Mississauga'})
df.dropna(how = 'any', inplace = True)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Task 2. Getting location data. Note! Please be kindly informed that because I am using the Google API and not the Python Geocoder package, I can collect more precise locations for every neighborhood, even those sharing the same postal code. So my final dataframe shape will probably be longer than for those who are using Python Geocoder.
###Code
# I am using Google Geocoding Api, but I need to hide my API key for privacy reasons
import googlemaps
gmaps = googlemaps.Client(key=key)
df_with_coord = {'index':[], 'pc':[], 'borough':[], 'neighborhood':[], 'lat':[], 'long':[]}
index = 0 # for being sure that neighbors with the same Postal Code will not collapse,
# indexing will be used
for hood, borough, pc in zip(df.Neighborhood, df.Borough, df.PostalCode):
# Firstly, care about hoods with two or more names per row
if ',' in hood:
hood_splitted = hood.split(',')
for name in hood_splitted:
index += 1
# for every name find location, using Geocodong API
location_str = f'{name}, {borough}'
geocode_result = gmaps.geocode(location_str)
lat = geocode_result[0]['geometry']['location']['lat']
long = geocode_result[0]['geometry']['location']['lng']
# update dict
df_with_coord['index'].append(index)
df_with_coord['pc'].append(pc)
df_with_coord['borough'].append(borough)
df_with_coord['neighborhood'].append(name)
df_with_coord['lat'].append(lat)
df_with_coord['long'].append(long)
else:
index += 1
# for every hood find location, using Geocodong API
location_str = f'{hood}, {borough}'
geocode_result = gmaps.geocode(location_str)
lat = geocode_result[0]['geometry']['location']['lat']
long = geocode_result[0]['geometry']['location']['lng']
# update dict
df_with_coord['index'].append(index)
df_with_coord['pc'].append(pc)
df_with_coord['borough'].append(borough)
df_with_coord['neighborhood'].append(hood)
df_with_coord['lat'].append(lat)
df_with_coord['long'].append(long)
# Creating final dataframe from dictionary with latitudes and longitudes
final_data = pd.DataFrame(df_with_coord)
final_data.drop('index', axis = 1, inplace = True)
final_data.columns = ['Postal_Code', 'Borough', 'Neighborhood', 'Latitude', 'Longitude']
final_data.head()
###Output
_____no_output_____
###Markdown
Task 3. Clustering the neighborhoods of one borough.
###Code
# I want to find borough with highest number of neighborhoods and use it
# for analysis
boroughs = list(final_data.Borough.unique())
length = []
for i in range(len(boroughs)):
length.append(len(final_data[(final_data.Borough == boroughs[i])]))
index_of_search = length.index(max(length))
if length.count(max(length)) > 1:
print(f'There are {length.count(max(length))} Boroughs with max number of neighborhoods')
else:
print (f'We will use this borough with highest number of neighborhoods: \n {boroughs[index_of_search]}')
data_etobicoke = final_data[(final_data.Borough == 'Etobicoke')].reset_index(drop=True)
data_etobicoke.head()
# function that finds nearby venues
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
etobicoke_venues = getNearbyVenues(data_etobicoke.Neighborhood, data_etobicoke.Latitude,
data_etobicoke.Longitude, radius=500)
# Venues dataframe
print(etobicoke_venues.shape)
etobicoke_venues.head()
###Output
(388, 7)
###Markdown
Let's check how many venues were returned for each neighborhood
###Code
etobicoke_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
Let's find out how many unique categories can be curated from all the returned venues
###Code
print('There are {} unique categories.'.format(len(etobicoke_venues['Venue Category'].unique())))
###Output
There are 97 unique categories.
###Markdown
Neighborhood analysis
###Code
# one hot encoding
etobicoke_onehot = pd.get_dummies(etobicoke_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
etobicoke_onehot['Neighborhood'] = etobicoke_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [etobicoke_onehot.columns[-1]] + list(etobicoke_onehot.columns[:-1])
etobicoke_onehot = etobicoke_onehot[fixed_columns]
etobicoke_onehot.head()
etobicoke_onehot.shape
###Output
_____no_output_____
###Markdown
Next, let's group the rows by neighborhood by taking the mean of the frequency of occurrence of each category
###Code
etobicoke_grouped = etobicoke_onehot.groupby('Neighborhood').mean().reset_index()
etobicoke_grouped
###Output
_____no_output_____
###Markdown
New size:
###Code
etobicoke_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in etobicoke_grouped['Neighborhood']:
print("----"+hood+"----")
temp = etobicoke_grouped[etobicoke_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
---- Albion Gardens----
venue freq
0 Caribbean Restaurant 0.4
1 Asian Restaurant 0.2
2 Pharmacy 0.2
3 Supermarket 0.2
4 American Restaurant 0.0
---- Beaumond Heights----
venue freq
0 Indian Restaurant 0.3
1 Caribbean Restaurant 0.2
2 Bank 0.1
3 Pizza Place 0.1
4 Ice Cream Shop 0.1
---- Bloordale Gardens----
venue freq
0 Convenience Store 0.25
1 Deli / Bodega 0.12
2 Park 0.12
3 Coffee Shop 0.12
4 Sandwich Place 0.12
---- Cloverdale----
venue freq
0 Fast Food Restaurant 0.10
1 Coffee Shop 0.10
2 Supermarket 0.10
3 Department Store 0.05
4 Gas Station 0.05
---- Humber Bay----
venue freq
0 Coffee Shop 0.16
1 Italian Restaurant 0.05
2 Smoothie Shop 0.05
3 Pizza Place 0.05
4 Farmers Market 0.05
---- Humber Bay Shores----
venue freq
0 Italian Restaurant 0.20
1 Park 0.13
2 Sushi Restaurant 0.07
3 Field 0.07
4 Café 0.07
---- Humbergate----
venue freq
0 Bus Line 1.0
1 American Restaurant 0.0
2 Moving Target 0.0
3 Pool Hall 0.0
4 Plaza 0.0
---- Islington----
venue freq
0 Restaurant 0.29
1 Vietnamese Restaurant 0.14
2 Ice Cream Shop 0.14
3 Rental Car Location 0.14
4 Concert Hall 0.14
---- Jamestown----
venue freq
0 Hockey Arena 0.33
1 Gym Pool 0.33
2 Pizza Place 0.33
3 Movie Theater 0.00
4 Pool Hall 0.00
---- King's Mill Park----
venue freq
0 Park 0.33
1 Metro Station 0.33
2 Bus Line 0.33
3 American Restaurant 0.00
4 Moving Target 0.00
---- Kingsway Park South East----
venue freq
0 Bakery 0.17
1 Bar 0.17
2 Coffee Shop 0.17
3 Liquor Store 0.17
4 Park 0.17
---- Kingsway Park South West----
venue freq
0 Bakery 0.17
1 Bar 0.17
2 Coffee Shop 0.17
3 Liquor Store 0.17
4 Park 0.17
---- Long Branch----
venue freq
0 Coffee Shop 0.19
1 Bank 0.12
2 Italian Restaurant 0.06
3 Grocery Store 0.06
4 Pharmacy 0.06
---- Markland Wood----
venue freq
0 Discount Store 0.2
1 Pizza Place 0.2
2 Bank 0.2
3 Park 0.2
4 Fast Food Restaurant 0.2
---- Martin Grove----
venue freq
0 Hardware Store 0.2
1 Intersection 0.2
2 Restaurant 0.2
3 Skating Rink 0.2
4 Burger Joint 0.2
---- Martin Grove Gardens----
venue freq
0 American Restaurant 0.25
1 Bakery 0.25
2 Park 0.25
3 Bus Line 0.25
4 Moving Target 0.00
---- Mimico NE----
venue freq
0 Convenience Store 0.29
1 Skating Rink 0.14
2 Bank 0.14
3 Bar 0.14
4 Grocery Store 0.14
---- Mimico South----
venue freq
0 Convenience Store 0.29
1 Skating Rink 0.14
2 Bank 0.14
3 Bar 0.14
4 Grocery Store 0.14
---- Montgomery Road----
venue freq
0 Pub 0.17
1 Gym 0.17
2 Grocery Store 0.17
3 Caribbean Restaurant 0.17
4 Park 0.17
---- Mount Olive----
venue freq
0 Sandwich Place 1.0
1 American Restaurant 0.0
2 Print Shop 0.0
3 Plaza 0.0
4 Playground 0.0
---- Old Burnhamthorpe----
venue freq
0 Shopping Plaza 0.1
1 Electronics Store 0.1
2 Convenience Store 0.1
3 Liquor Store 0.1
4 Pizza Place 0.1
---- Old Mill North----
venue freq
0 Park 0.25
1 American Restaurant 0.12
2 Italian Restaurant 0.12
3 River 0.12
4 Event Space 0.12
---- Princess Gardens----
venue freq
0 Construction & Landscaping 0.50
1 Gym / Fitness Center 0.25
2 Intersection 0.25
3 American Restaurant 0.00
4 Music Store 0.00
---- Richview Gardens----
venue freq
0 Intersection 0.08
1 Pharmacy 0.08
2 Shopping Mall 0.08
3 Smoothie Shop 0.08
4 Coffee Shop 0.08
---- Royal York South East----
venue freq
0 Mobile Phone Shop 0.06
1 Indie Movie Theater 0.06
2 Seafood Restaurant 0.06
3 French Restaurant 0.06
4 Café 0.06
---- Royal York South West----
venue freq
0 Mobile Phone Shop 0.06
1 Indie Movie Theater 0.06
2 Seafood Restaurant 0.06
3 French Restaurant 0.06
4 Café 0.06
---- Silverstone----
venue freq
0 Indian Restaurant 0.33
1 Pizza Place 0.33
2 Sandwich Place 0.33
3 American Restaurant 0.00
4 Moving Target 0.00
---- South of Bloor----
venue freq
0 Coffee Shop 0.12
1 Pub 0.08
2 Fast Food Restaurant 0.08
3 Pizza Place 0.08
4 Grocery Store 0.04
---- St. Phillips----
venue freq
0 Restaurant 0.25
1 Grocery Store 0.25
2 Coffee Shop 0.25
3 Pizza Place 0.25
4 American Restaurant 0.00
---- Sunnylea----
venue freq
0 Park 0.50
1 Furniture / Home Store 0.25
2 Other Great Outdoors 0.25
3 American Restaurant 0.00
4 Movie Theater 0.00
---- The Queensway East----
venue freq
0 Restaurant 0.24
1 BBQ Joint 0.10
2 Yoga Studio 0.05
3 Burrito Place 0.05
4 Liquor Store 0.05
---- The Queensway West----
venue freq
0 Restaurant 0.24
1 BBQ Joint 0.10
2 Yoga Studio 0.05
3 Burrito Place 0.05
4 Liquor Store 0.05
---- Thistletown----
venue freq
0 Indian Restaurant 0.25
1 Caribbean Restaurant 0.17
2 Supermarket 0.08
3 Pharmacy 0.08
4 Pizza Place 0.08
----Alderwood----
venue freq
0 Pizza Place 0.33
1 Pub 0.17
2 Gym 0.17
3 Coffee Shop 0.17
4 Sandwich Place 0.17
----Eringate----
venue freq
0 Hockey Arena 0.17
1 Convenience Store 0.17
2 Coffee Shop 0.17
3 Chinese Restaurant 0.17
4 Pizza Place 0.17
----Islington Avenue----
venue freq
0 Moving Target 0.2
1 Intersection 0.2
2 Baseball Field 0.2
3 Park 0.2
4 Food Service 0.2
----Kingsview Village----
venue freq
0 Park 1.0
1 American Restaurant 0.0
2 Hockey Arena 0.0
3 Pool Hall 0.0
4 Plaza 0.0
----Mimico NW----
venue freq
0 Convenience Store 0.29
1 Skating Rink 0.14
2 Bank 0.14
3 Bar 0.14
4 Grocery Store 0.14
----New Toronto----
venue freq
0 Café 0.10
1 Pharmacy 0.10
2 Record Shop 0.05
3 Italian Restaurant 0.05
4 Gym 0.05
----Old Mill South----
venue freq
0 Park 0.25
1 American Restaurant 0.12
2 Italian Restaurant 0.12
3 River 0.12
4 Event Space 0.12
----South Steeles----
venue freq
0 Residential Building (Apartment / Condo) 0.25
1 BBQ Joint 0.25
2 Bank 0.25
3 Gas Station 0.25
4 American Restaurant 0.00
----The Kingsway----
venue freq
0 Bakery 0.17
1 Bar 0.17
2 Coffee Shop 0.17
3 Liquor Store 0.17
4 Park 0.17
----West Deane Park----
venue freq
0 Park 0.50
1 Convenience Store 0.25
2 Skating Rink 0.25
3 Pool Hall 0.00
4 Plaza 0.00
----Westmount----
venue freq
0 Convenience Store 0.2
1 Plaza 0.2
2 Bakery 0.2
3 Pizza Place 0.2
4 Café 0.2
###Markdown
Now we will put that into _pandas_ dataframe
###Code
# function for sorting the venues in descending order.
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____
###Markdown
New dataframe with the top 10 venues for each neighborhood.
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = etobicoke_grouped['Neighborhood']
for ind in np.arange(etobicoke_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(etobicoke_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
And finally, clustering
###Code
from sklearn.cluster import KMeans
import folium
import matplotlib.cm as cm
import matplotlib.colors as colors
# set number of clusters
kclusters = 5
etobicoke_grouped_clustering = etobicoke_grouped.drop('Neighborhood', axis=1)
etobicoke_grouped_clustering.shape
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(etobicoke_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
###Output
_____no_output_____
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood.
###Code
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
etobicoke_merged = data_etobicoke
# merge etobicoke_grouped with data_etobicoke to add latitude/longitude for each neighborhood
etobicoke_merged = etobicoke_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
etobicoke_merged.drop('Postal_Code', axis = 1, inplace=True)
etobicoke_merged.head() # check the last columns!
###Output
_____no_output_____
###Markdown
Visualization
###Code
geocode_result = gmaps.geocode('Etobicoke, Toronto')
latitude = geocode_result[0]['geometry']['location']['lat']
longitude = geocode_result[0]['geometry']['location']['lng']
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(etobicoke_merged['Latitude'], etobicoke_merged['Longitude'], etobicoke_merged['Neighborhood'], etobicoke_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
In case the map is not showing on GitHub, I took a screenshot: ![map.jpg](attachment:b3b4c801-049d-4c7f-8a20-b121d049f4f9.jpg)
###Code
etobicoke_merged[(etobicoke_merged['Cluster Labels'] == 0)]
etobicoke_merged.head()
###Output
_____no_output_____
###Markdown
Coursera Capstone Notebook
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Capstone project This notebook will be mainly used for the capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
VAGABOND STUDIOS MARKET ANALYSIS TOOL SALES PER YEAR BY GENRE SELECT WHICH GENRE YOU WOULD LIKE TO VIEW SALES FOR
###Code
# GRAPH: TOTAL NORTH AMERICAN SALES PER YEAR BY GENRE
def changeGenre(Genre):
plt.plot(genreSales[Genre])
plt.title('North American Sales\nGenre: ' + str(Genre))
plt.xlabel('Year')
plt.ylabel('Sales in Millions')
plt.show()
interact(changeGenre, Genre=GENRES)
###Output
_____no_output_____
###Markdown
GAMES RELEASED PER YEAR BY PLATFORM SELECT WHICH PLATFORM YOU WOULD LIKE TO SEE NUMBER OF RELEASES ON
###Code
# GRAPH: SHOWS NUMBER OF GAMES RELEASED FOR A SELECTED PLATFORM EACH YEAR
def changePlatform(Platform):
plt.plot(platformSales[Platform])
plt.title('North American Sales\nPlatform: ' + str(Platform))
plt.xlabel('Year')
plt.ylabel('Number of Games Released')
plt.show()
interact(changePlatform, Platform=PLATFORMS)
###Output
_____no_output_____
###Markdown
PUBLISHERS' CONTROL OF MARKET SELECT WHICH YEAR YOU WOULD LIKE TO SEE THE TOP PUBLISHERS
###Code
# PIE CHART SHOWING PUBLISHERS NUMBER OF GAMES RELEASED FOR A GIVEN YEAR
# THIS WILL SHOW WHICH PUBLISHERS ARE POPULAR IN THE GIVEN YEAR
def topPublishers(Year):
given_year = publisher_count_by_year[publisher_count_by_year['Year'] == Year]
year_counts = given_year['Publisher'].value_counts()
publishers = pd.DataFrame({'Publisher':year_counts.index, 'Count':year_counts.values})
top = publishers.iloc[:5]
lab = top['Publisher']
plot = top.plot.pie(y='Count', labels=lab, autopct='%1.1f%%')
plot.legend(title="Top Publishers: ", loc="center left", bbox_to_anchor=(1, 0, 0.5, 1))
plot.set_title("Percent of Publishers' Market Share")
publisher_count_by_year = video_game_sales.filter(['Year', 'Publisher'], axis=1)
interact(topPublishers, Year=YEARS)
###Output
_____no_output_____
###Markdown
SALES PREDICTION MODEL SELECT A GENRE AND THE YEAR YOU WOULD LIKE THE PREDICTED SALES FOR
###Code
# PREDICTION MODEL:
# SELECT A GENRE AND YEAR TO GET A PREDICTION FOR SALES
interact(makePrediction, Genre=GENRES, Year=PREDICTION_YEARS)
###Output
_____no_output_____
###Markdown
Coursera Capstone, IBM Data Science Course. This notebook is built as a part of the final course of the IBM Data Science Certificate. It will be updated as we proceed in the course.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Import necessary packages
###Code
import dill
from bokeh.io import curdoc,output_notebook, show, output_file
from bokeh.plotting import figure
from bokeh.models import (LinearColorMapper, ColorBar,GMapOptions, Patches,GMapPlot,Range1d,HoverTool,
WheelZoomTool,PanTool,TapTool,CustomJS,BoxZoomTool,OpenURL)
from bokeh.palettes import brewer
from bokeh.plotting import gmap,curdoc
from bokeh.models.widgets import RadioGroup
from bokeh.layouts import widgetbox, row, column
from bokeh.models import CustomJS
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the datasets to be used
###Code
merged=dill.load(open('static/merged.pkd','rb'))
Y_2024=dill.load(open('static/Y_2024.pkd','rb'))
###Output
_____no_output_____
###Markdown
The function to convert polygon shapes to list objects
###Code
def get_coords(poly):
if poly.type == 'Polygon':
x,y=poly.exterior.xy
return [list(x),list(y)]
else:
X=[]
Y=[]
for p in poly:
x,y=p.exterior.xy
X.append(list(x))
Y.append(list(y))
return [X,Y]
###Output
_____no_output_____
###Markdown
Building our DataSource
###Code
merged=pd.merge(merged, Y_2024, left_on='boro_cd', right_on='cd')
from bokeh.models import ColumnDataSource
X=[]
Y=[]
Need_1=[]
Need_2=[]
Need_3=[]
CD=[]
Pov_rate=[]
Need_1_2024=[]
Need_2_2024=[]
Need_3_2024=[]
for i in range(55):
coords=get_coords(merged['geometry'][i])
if len(coords[0])>50:
X.append(coords[0])
Y.append(coords[1])
Need_1.append(merged['son_issue_1'][i])
Need_2.append(merged['son_issue_2'][i])
Need_3.append(merged['son_issue_3'][i])
CD.append(merged['boro_cd'][i])
Pov_rate.append(merged['poverty_rate'][i])
Need_1_2024.append(merged['top3'][i][0])
Need_2_2024.append(merged['top3'][i][1])
Need_3_2024.append(merged['top3'][i][2])
else:
for j in range(len(coords[0])):
X.append(coords[0][j])
Y.append(coords[1][j])
Need_1.append(merged['son_issue_1'][i])
Need_2.append(merged['son_issue_2'][i])
Need_3.append(merged['son_issue_3'][i])
CD.append(merged['boro_cd'][i])
Pov_rate.append(merged['poverty_rate'][i])
Need_1_2024.append(merged['top3'][i][0])
Need_2_2024.append(merged['top3'][i][1])
Need_3_2024.append(merged['top3'][i][2])
source= ColumnDataSource(
data=dict(
lat=Y,
lon=X,
son_issue_1=Need_1,
son_issue_2=Need_2,
son_issue_3=Need_3,
cd=CD,
X=CD,
PR=Pov_rate,
pred_1=Need_1_2024,
pred_2=Need_2_2024,
pred_3=Need_3_2024
)
)
###Output
_____no_output_____
###Markdown
creating palette and color mapper for the map
###Code
palette = brewer['Pastel2'][5]
color_mapper=LinearColorMapper(palette=palette,low=100,high=500)
###Output
_____no_output_____
###Markdown
tooltips for the hover tool
###Code
TOOLTIPS="""
<div>
<div>
<span style="font-size: 16px; font-weight:bold; color: #00BFFF;">District:</span> <span style="font-size: 14px; color: #000000"> @cd </span><br>
<span style="font-size: 14px; font-weight:bold; color: #00BFFF;">1st need:</span> <span style="font-size: 14px; color: #000000"> @son_issue_1 </span><br>
<span style="font-size: 12px; font-weight:bold; color: #00BFFF;">2nd need: </span> <span style="font-size: 12px; color: #000000">@son_issue_2</span><br>
<span style="font-size: 10px; font-weight:bold; color: #00BFFF;">3rd need: </span> <span style="font-size: 10px; color: #000000">@son_issue_3</span>
</div>
</div>
"""
TOOLTIPS_PRED="""
<div>
<div>
<span style="font-size: 16px; font-weight:bold; color: #00BFFF;">District:</span> <span style="font-size: 14px; color: #000000"> @cd </span><br>
<span style="font-size: 14px; font-weight:bold; color: #00BFFF;">1st need:</span> <span style="font-size: 14px; color: #000000"> @pred_1 </span><br>
<span style="font-size: 12px; font-weight:bold; color: #00BFFF;">2nd need: </span> <span style="font-size: 12px; color: #000000">@pred_2</span><br>
<span style="font-size: 10px; font-weight:bold; color: #00BFFF;">3rd need: </span> <span style="font-size: 10px; color: #000000">@pred_3</span>
</div>
</div>
"""
TOOLTIPS_PR="""
<div>
<div>
<span style="font-size: 16px; font-weight:bold; color: #00BFFF;">District:</span> <span style="font-size: 14px; color: #000000"> @cd </span><br>
<span style="font-size: 14px; font-weight:bold; color: #00BFFF;">Poverty Rate:</span> <span style="font-size: 14px; color: #000000"> @PR </span><br>
</div>
</div>
"""
###Output
_____no_output_____
###Markdown
Radio group's callback function
###Code
#taptool_callback=OpenURL(url='https://www.google.com/')
def radio_handler(new):
if new==0:
#attr=radio_group.labels[new]
source.data['X']=source.data['cd']
color_mapper.low=min(source.data['cd'])
color_mapper.high=max(source.data['cd'])
hover.tooltips=TOOLTIPS
layout.children[0].map_options.lng=-74.00712
layout.children[0].map_options.lat=40.71455
layout.children[0].width=1200
layout.children[0].height=1000
layout.children[0].map_options.zoom=11
if new==1:
source.data['X']=source.data['cd']
color_mapper.low=min(source.data['cd'])
color_mapper.high=max(source.data['cd'])
hover.tooltips=TOOLTIPS_PRED
layout.children[0].map_options.lng=-74.00712
layout.children[0].map_options.lat=40.71455
layout.children[0].width=1200
layout.children[0].height=1000
layout.children[0].map_options.zoom=11
if new==2:
source.data['X']=source.data['cd']
color_mapper.low=min(source.data['cd'])
color_mapper.high=max(source.data['cd'])
hover.tooltips=TOOLTIPS_PRED
layout.children[0].map_options.lng=-73.9712
layout.children[0].map_options.lat=40.7831
layout.children[0].width=600
layout.children[0].height=1200
layout.children[0].map_options.zoom=13
###Output
_____no_output_____
###Markdown
Generating the map
###Code
map_options=GMapOptions(lat=40.71455, lng=-74.00712,map_type="roadmap",zoom=11)
plot=GMapPlot(x_range=Range1d(), y_range=Range1d(), map_options=map_options,width=1200,height=1000)
plot.api_key="AIzaSyAG6g5nqyGVnwHjvA-l4bpG0sBoOJZ75yA"
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = None
#Add patch renderers to figure.
patch=Patches(xs='lon',ys='lat',fill_color={'field':'X', 'transform' : color_mapper},line_color = 'black', fill_alpha = 0.5)
plot.add_glyph(source,patch)
# patch_Pov_rate=Patches(xs='lon',ys='lat',legend='Poverty Rates',fill_color={'field':'PR', 'transform' : color_mapper_Pov_rate},line_color = 'black', fill_alpha = 0.5)
# plot.add_glyph(source_Pov_rate,patch_Pov_rate)
#Add hover tool
hover = HoverTool(tooltips=TOOLTIPS)
plot.add_tools(hover,WheelZoomTool(), PanTool(),BoxZoomTool())
#Adding Radio Group to switch glyphs
radio_group = RadioGroup(labels=["Current", "2024 Predictions",'Manhattan'],active=0)
radio_group.on_click(radio_handler)
# taptool=TapTool(callback=taptool_callback)
# plot.add_tools(taptool)
layout = column(plot,widgetbox(radio_group),sizing_mode='fixed')
curdoc().add_root(layout)
#Add tap tool
# taptool=plot.select(type=TapTool)
# taptool.callback=callback
# output_notebook()
# show(layout)
###Output
_____no_output_____
###Markdown
Clean the data and pre-process the data
###Code
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
stops = stopwords.words('english')
stops.extend('b,.,[,],(,),;,/,-,\',?,",:,<,>,n\'t,|,#,\'s,\",\'re,\'ve,\'ll,\'d,\'re'.split(','))
stops.extend(',')
stops[:10]
train_raw = df_data_1[df_data_1['Date'] < '2015-01-01'].iloc[:,2:27]
train_target = df_data_1[df_data_1['Date'] < '2015-01-01']['Label']
test_raw = df_data_1[df_data_1['Date'] > '2014-12-31'].iloc[:,2:27]
test_target = df_data_1[df_data_1['Date'] > '2014-12-31']['Label']
train_down = np.sum(train_target == 0)
train_up = np.sum(train_target == 1)
test_down = np.sum(test_target == 0)
test_up = np.sum(test_target == 1)
print('In train data set {} days is down and {} days is up'.format(train_down,train_up))
print('In test data set {} days is down and {} days is up'.format(test_down,test_up))
train_raw.isnull().sum().sum() # there is some columns with na value
test_raw.isnull().sum().sum()
train_raw.head()
train_raw.shape
test_raw.shape
def pre_process(df):
for row in range(df.shape[0]):
for col in range(df.shape[1]):
try:
df.iloc[row,col] = [word.lower() for word in word_tokenize(df.iloc[row,col]) if word not in stops and word.isalpha()]
except:
print("Row {} and colum {} is Na".format(row,col))
print('Finish text pre-processing')
return df
train_pre = pre_process(train_raw)
train_pre.head() # all the string has been token and clean stopwords.
test_pre = pre_process(test_raw)
test_pre.head()
###Output
_____no_output_____
###Markdown
Baseline model: NaiveBayesClassifier (NLTK version)
###Code
from tqdm import tqdm
from nltk.classify import NaiveBayesClassifier
import nltk.classify.util
def get_train_feature(df,num_news,trainable=True):
trainheadlines = []
for row in range(0,len(df)):
top_news = []
for x in df.iloc[row,0:num_news]:
top_news += x
trainheadlines.append(top_news) # combine the number of top news word as one list
train_feature = []
for i in tqdm(range(len(trainheadlines))):
d = {}
for word in trainheadlines[i]:
d[word] = True
if trainable == True:
train_feature.append((d,train_target[i])) # change the word into true lable, and sentence lable
else:
train_feature.append((d,test_target[i+len(train_pre)]))
print("Finish...")
print("We use top {} news as feature".format(num_news))
return train_feature
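# Each element returned above is a (feature_dict, label) pair, e.g. ({'stocks': True, 'fall': True}, 0),
# which is the labelled-featureset format that nltk's NaiveBayesClassifier.train expects (example words are illustrative).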
num_news = 20
train_feature = get_train_feature(train_pre,num_news,trainable=True)
test_feature = get_train_feature(test_pre,num_news,trainable=False)
# Training a NaiveBayesClassifier with our training feature words.
classifier = NaiveBayesClassifier.train(train_feature)
print('Accuracy for test data by using {} top news is: {}'.format(num_news,nltk.classify.util.accuracy(classifier, test_feature)))
# We can see which words fit best in each class.
classifier.show_most_informative_features()
NB_acc = []
for i in range(1,23):
train_feature = get_train_feature(train_pre,i,trainable=True)
test_feature = get_train_feature(test_pre,i,trainable=False)
classifier = NaiveBayesClassifier.train(train_feature)
NB_acc.append(nltk.classify.util.accuracy(classifier, test_feature))
import matplotlib.pyplot as plt
plt.plot(NB_acc)
plt.title('NavieBayes Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(NB_acc)), np.arange(1, len(NB_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that using the top 2 news headlines as training features gives the highest accuracy for the Naive Bayes model, around 52%, which is slightly better than a random guess. Most of the other accuracies sit near 50%, i.e. the random-guess probability. However, the most informative features show that negative words such as "hints", "vanished", and "missed" take a noticeably higher ratio in label 0 (down). We may need to switch to an n-gram model, or move to a bag-of-words representation with machine-learning classifiers.
BOW using CountVectorizer and TF-IDF (sklearn version)
###Code
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
from time import time
train1 = df_data_1[df_data_1['Date'] < '2015-01-01'].iloc[:,2:27]
train_target1 = df_data_1[df_data_1['Date'] < '2015-01-01']['Label']
test1 = df_data_1[df_data_1['Date'] > '2014-12-31'].iloc[:,2:27]
test_target1 = df_data_1[df_data_1['Date'] > '2014-12-31']['Label']
def data_prepare(train,test,num_news,type_vertor,ngram=False):
trainheadlines = []
for row in range(0,len(train.index)):
trainheadlines.append(' '.join(str(x) for x in train.iloc[row,:num_news])) # combine all news into one sentence dependence how many news you want
testheadlines = []
for row in range(0,len(test.index)):
testheadlines.append(' '.join(str(x) for x in test.iloc[row,:num_news]))
if type_vertor == 'count':
if ngram == True:
count_vec = CountVectorizer(stop_words='english',ngram_range=(2,2))
else:
count_vec = CountVectorizer(stop_words='english')
count_train = count_vec.fit_transform(trainheadlines)
count_test = count_vec.transform(testheadlines)
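# Note: the occurrence table printed below is built from the first training row only (toarray()[0]),
# so it previews one day's headline counts rather than corpus-wide totals; the tf-idf branch behaves the same way.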
count_occur_df = pd.DataFrame((count, word) for word, count in zip(count_train.toarray().tolist()[0], count_vec.get_feature_names()))
count_occur_df.columns = ['Word', 'Count']
count_occur_df.sort_values('Count', ascending=False, inplace=True)
print("The count matrix shape is {}".format(count_train.shape))
print("Top 5 count occurence is: ")
print(count_occur_df.head())
return count_train, count_test
if type_vertor == 'tf-idf':
if ngram == True:
tfidf_vec = TfidfVectorizer(max_df=0.97, max_features = 100000, stop_words='english',ngram_range=(2,2))
else:
tfidf_vec = TfidfVectorizer(max_df=0.97, max_features = 200000, stop_words='english')
tfidf_train = tfidf_vec.fit_transform(trainheadlines)
tfidf_test = tfidf_vec.transform(testheadlines)
tfidf_count_occur_df = pd.DataFrame((count, word) for word, count in zip(tfidf_train.toarray().tolist()[0], tfidf_vec.get_feature_names()))
tfidf_count_occur_df.columns = ['Word', 'TF-IDF']
tfidf_count_occur_df.sort_values('TF-IDF', ascending=False, inplace=True)
print("The TF-IDF matrix shape is {}".format(tfidf_train.shape))
print("Top 5 TF-IDF is: ")
print(tfidf_count_occur_df.head())
return tfidf_train, tfidf_test
basic_count_train,basic_count_test = data_prepare(train1,test1,20,'count')
basic_tfidf_train,basic_tfidf_test = data_prepare(train1,test1,20,'tf-idf')
###Output
_____no_output_____
###Markdown
Logistic Regression with different vector way
###Code
def model(train,test,train_label,test_label,num_news):
start = time()  # `time` was imported as a function (from time import time)
model = LogisticRegression()
LR = model.fit(train,train_label)
predictions = LR.predict(test)
acc = np.mean(predictions == test_label)
print('This model takes {} seconds by using sklearn.'.format(time()-start))
print('Accuracy for test data by using {} top news is: {}'.format(num_news,acc))
return LR,acc
Basic_LR_count,count_acc = model(basic_count_train,basic_count_test,train_target1,test_target1,num_news)
# This is basic model by using count occurence which is not good as navie
Basic_LR_tfidf,tfidf_acc = model(basic_tfidf_train,basic_tfidf_test,train_target1,test_target1,num_news)
# This is basic model by using cTF-IDF which is same as navie
basic_count_acc = []
for i in range(1,26): # this is basic count vector by using logistic regression
basic_count_train,basic_count_test = data_prepare(train1,test1,i,'count')
Basic_LR_count,count_acc = model(basic_count_train,basic_count_test,train_target1,test_target1,i)
basic_count_acc.append(count_acc)
import matplotlib.pyplot as plt
plt.plot(basic_count_acc)
plt.title('Logistic Regression With Count Vector Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(basic_count_acc)), np.arange(1, len(basic_count_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that the accuracy decreases as the number of top news headlines increases. I think this is because we add too much noise to the model, and logistic regression performs best only when n >> p, even with a lasso penalty. The best accuracy is around 53%, obtained using only the top 1 news headline, which is better than Naive Bayes.
###Code
basic_tfidf_acc = []
for i in range(1,26): # this is basic tf-idf vector by using logistic regression
basic_tfidf_train,basic_tfidf_test = data_prepare(train1,test1,i,'tf-idf')
Basic_LR_tfidf,tfidf_acc = model(basic_tfidf_train,basic_tfidf_test,train_target1,test_target1,i)
basic_tfidf_acc.append(tfidf_acc)
plt.plot(basic_tfidf_acc)
plt.title('Logistic Regression With TF-IDF Vector Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(basic_tfidf_acc)), np.arange(1, len(basic_tfidf_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that the accuracy first decreases and then increases as the number of top news headlines grows. I think this is because we add too much noise to the model, and logistic regression performs best only when n >> p, even with a lasso penalty. The best accuracy is around 51%, obtained using the top 2 news headlines, which is not as good as Naive Bayes.
###Code
advance_count_acc = []
for i in range(1,26): # this is advance count vector in ngram by using logistic regression
advance_count_train,advance_count_test = data_prepare(train1,test1,i,'count',ngram=True)
Advance_LR_count,Advance_count_acc = model(advance_count_train,advance_count_test,train_target1,test_target1,i)
advance_count_acc.append(Advance_count_acc)
plt.plot(advance_count_acc)
plt.title('Logistic Regression With Count Vector Use Ngram Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(advance_count_acc)), np.arange(1, len(advance_count_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that most of the accuracies are above 50%, and the best accuracy is near 54%, obtained using the top 18 news headlines; so far this is the best model.
###Code
advance_tfidf_acc = []
for i in range(1,26): # this is advance TD-IDF vector in ngram by using logistic regression, for this one I shrink the number of feature to 100000 due to CPU
advance_tfidf_train,advance_tfidf_test = data_prepare(train1,test1,i,'tf-idf',ngram=True)
Advance_LR_tfidf,Advance_tfidf_acc = model(advance_tfidf_train,advance_tfidf_test,train_target1,test_target1,i)
advance_tfidf_acc.append(Advance_tfidf_acc)
plt.plot(advance_tfidf_acc)
plt.title('Logistic Regression With TF-IDF Vector Use Ngram Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(advance_tfidf_acc)), np.arange(1, len(advance_tfidf_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that the best accuracy is near 51.33%, obtained using the top 1 or top 2 news headlines. Since I shrank the number of features to 100,000, some information may have been excluded.
XGB model with different vectorization methods
###Code
from xgboost import XGBClassifier
def xgb_model(train,test,train_label,test_label,num_news):
start = time()  # use the imported time() function
xgb_model = XGBClassifier()
XGB = xgb_model.fit(train,train_label)
predictions = XGB.predict(test)
acc = np.mean(predictions == test_label)
print('This model takes {} seconds by using sklearn.'.format(time()-start))
print('Accuracy for test data by using {} top news is: {}'.format(num_news,acc))
return XGB,acc
advance_count_acc = []
for i in range(1,26): # this is advance count vector in ngram by using XGB model
advance_count_train,advance_count_test = data_prepare(train1,test1,i,'count',ngram=True)
Advance_XGB_count,Advance_count_acc = xgb_model(advance_count_train,advance_count_test,train_target1,test_target1,i)
advance_count_acc.append(Advance_count_acc)
plt.plot(advance_count_acc)
plt.title('XGB With Count Vector Use Ngram Result')
plt.ylabel('Accuracy')
plt.xlabel('Number of Top New Use')
plt.xticks(np.arange(len(advance_count_acc)), np.arange(1, len(advance_count_acc)+1))
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that most of the accuracies are below 50% and the best is near 52%, which is not very good, and XGB takes longer to train. For future improvement, I would like to tune the hyper-parameters of the ML models, or perhaps switch to a deep-learning model.
Below is the Spark version
###Code
import ibmos2spark
# @hidden_cell
credentials = {
'endpoint': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
'service_id': 'iam-ServiceId-8ccbe184-10b7-4c6c-94e6-2edebc3056d0',
'iam_service_endpoint': 'https://iam.ng.bluemix.net/oidc/token',
'api_key': 'CfTULjIbd6PYKlQASmTULw8xS8emXJanwHA3k3M7nESJ'
}
configuration_name = 'os_5d5d24a96f0d417089f2601d83de16a9_configs'
cos = ibmos2spark.CloudObjectStorage(sc, credentials, configuration_name, 'bluemix_cos')
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df_data_2 = spark.read\
.format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
.option('header', 'true')\
.load(cos.url('Combined_News_DJIA.csv', 'default-donotdelete-pr-zdopmag1jdqncb'))
df_data_2.take(5)
df_data_2.printSchema()
new_df2 = df_data_2.dropna() # clean all Na value
new_df2.count()
from pyspark.sql.types import * # Change column type
new_df2 = new_df2.withColumn("Label", new_df2["Label"].cast(IntegerType()))
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer, Tokenizer
from pyspark.ml.classification import LogisticRegression
from pyspark.sql.functions import concat, col, lit, concat_ws, udf, array
from pyspark.sql.types import StringType
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
def df_combine(num_news):
col_list = new_df2.columns[2:][:num_news]
df = new_df2.withColumn('concat_cols',concat(*col_list)) # select how many top news you want to concat as one column
return df
com_df2 = df_combine(24)
com_df2.count(),len(com_df2.columns)
com_df2.select('concat_cols').show(5)
stops.extend(["http","https","amp","rt","t","c","the","b"])
stops[:10]
# regular expression tokenizer
tokenizer = Tokenizer(inputCol="concat_cols", outputCol="words")
# stop words
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(stops)
# bag of words count
countVectors = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=200000, minDF=2) # keep only terms that appear in at least 2 documents
#label_stringIdx = StringIndexer(inputCol = "Category", outputCol = "label")
pipeline = Pipeline(stages=[tokenizer, stopwordsRemover, countVectors])
# Fit the pipeline to training documents.
pipelineFit = pipeline.fit(com_df2)
dataset = pipelineFit.transform(com_df2)
dataset.show(5)
train_df2 = dataset[dataset['Date'] < '2015-01-01']
test_df2 = dataset[dataset['Date'] > '2014-12-31']
print("Training Dataset Count: " + str(train.count()))
print("Test Dataset Count: " + str(test.count()))
train_new_df2 = train_df2.dropna()
test_new_df2 = test_df2.dropna()
train_new_df2.count(), test_new_df2.count()
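# In Spark ML's LogisticRegression, elasticNetParam=0 corresponds to a pure L2 (ridge) penalty and
# regParam controls its strength; elasticNetParam=1 would give an L1 (lasso) penalty instead.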
lr = LogisticRegression(labelCol="Label", featuresCol="features", maxIter=20, regParam=0.8, elasticNetParam=0)
lrModel = lr.fit(train_new_df2)
predictions = lrModel.transform(test_new_df2)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(labelCol='Label',predictionCol="prediction")
lr_acc = evaluator.evaluate(predictions)
print('Logistic Regression with spark version is {0:0.3f}'.format(lr_acc))
###Output
_____no_output_____
###Markdown
Try Deep Learning model
###Code
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, Conv1D, GRU, CuDNNGRU, CuDNNLSTM, BatchNormalization
from keras.layers import Bidirectional, GlobalMaxPool1D, MaxPooling1D, Add, Flatten
from keras.layers import GlobalAveragePooling1D, GlobalMaxPooling1D, concatenate, SpatialDropout1D
from keras.models import Model, load_model, Sequential
from keras import initializers, regularizers, constraints, optimizers, layers, callbacks
from keras import backend as K
from keras.engine import InputSpec, Layer
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, TensorBoard, Callback, EarlyStopping
train2 = df_data_1[df_data_1['Date'] < '2015-01-01'].iloc[:,2:27]
train_target2 = df_data_1[df_data_1['Date'] < '2015-01-01']['Label']
test2 = df_data_1[df_data_1['Date'] > '2014-12-31'].iloc[:,2:27]
test_target2 = df_data_1[df_data_1['Date'] > '2014-12-31']['Label']
train2.shape
test2.shape
def data_prepare2(train,test,num_news,m_len):
trainheadlines2 = []
for row in range(0,len(train.index)):
trainheadlines2.append('00'.join(str(x) for x in train.iloc[row,:num_news])) # combine all news into one sentence dependence how many news you want
testheadlines2 = []
for row in range(0,len(test.index)):
testheadlines2.append('00'.join(str(x) for x in test.iloc[row,:num_news]))
tk = Tokenizer(lower = True)
tk.fit_on_texts(trainheadlines2)
X_train = tk.texts_to_sequences(trainheadlines2)
X_test = tk.texts_to_sequences(testheadlines2)
print('Number of unique word is {}'.format(len(tk.word_index)))
max_len = m_len
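# Pad or truncate every day's concatenated-headline sequence to max_len tokens so Keras can batch them.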
X_train = pad_sequences(X_train, maxlen=max_len, padding='post', truncating='post')
X_test = pad_sequences(X_test, maxlen=max_len, padding='post', truncating='post')
print(X_train.shape,X_test.shape)
print('Finish data preparing...')
return X_train, X_test
X_train, X_test = data_prepare2(train2,test2,20,200)
max_features = 100000
model1=Sequential()
model1.add(Embedding(max_features,200,mask_zero=True))
model1.add(Bidirectional(LSTM(128,dropout=0.4, recurrent_dropout=0.4,return_sequences=True)))
model1.add(Bidirectional(LSTM(64,dropout=0.5, recurrent_dropout=0.5,return_sequences=False)))
model1.add(Dense(1,activation='sigmoid')) # a single-unit softmax always outputs 1; sigmoid is needed with binary cross-entropy
model1.compile(loss='binary_crossentropy',optimizer=Adam(lr=0.001),metrics=['accuracy'])
model1.summary()
model1.fit(X_train, train_target2, validation_data=(X_test, test_target2),epochs=3, batch_size=32, verbose=1)
###Output
_____no_output_____
###Markdown
The accuracy is near 51%, with no further improvement after the first epoch. I will try other models later.
Unsupervised Learning Model
###Code
import re # For preprocessing
import pandas as pd # For data handling
from time import time # To time our operations
from collections import defaultdict # For word frequency
import spacy # For preprocessing
import logging # Setting up the loggings to monitor gensim
logging.basicConfig(format="%(levelname)s - %(asctime)s: %(message)s", datefmt= '%H:%M:%S', level=logging.INFO)
from gensim.models.phrases import Phrases, Phraser
train3 = df_data_1[df_data_1['Date'] < '2015-01-01'].iloc[:,2:27]
train_target3 = df_data_1[df_data_1['Date'] < '2015-01-01']['Label']
test3 = df_data_1[df_data_1['Date'] > '2014-12-31'].iloc[:,2:27]
test_target3 = df_data_1[df_data_1['Date'] > '2014-12-31']['Label']
def data_prepare3(train,test,num_news):
trainheadlines3 = []
for row in range(0,len(train.index)):
trainheadlines3.append('0'.join(str(x[2:]) for x in train.iloc[row,:num_news])) # combine all news into one sentence dependence how many news you want
# clean the b' or each news
testheadlines3 = []
for row in range(0,len(test.index)):
testheadlines3.append('0'.join(str(x[2:]) for x in test.iloc[row,:num_news]))
print('Finish combine the news...')
nlp = spacy.load('en', disable=['ner', 'parser']) # disabling Named Entity Recognition for speed
def cleaning(doc):
# Lemmatizes and removes stopwords, punctuation
# doc needs to be a spacy Doc object
txt = [token.lemma_ for token in doc if not token.is_stop and not token.is_punct]
# Word2Vec uses context words to learn the vector representation of a target word,
# if a sentence is only one or two words long,
# the benefit for the training is very small
if len(txt) > 2:
return ' '.join(txt)
brief_cleaning_train = (re.sub("[^A-Za-z']+", ' ', str(row)).lower() for row in trainheadlines3)
brief_cleaning_test = (re.sub("[^A-Za-z']+", ' ', str(row)).lower() for row in testheadlines3)
t = time()
txt_train = [cleaning(doc) for doc in nlp.pipe(brief_cleaning_train, batch_size=50, n_threads=-1)]
txt_test = [cleaning(doc) for doc in nlp.pipe(brief_cleaning_test, batch_size=50, n_threads=-1)]
print('Time to clean up everything: {} mins'.format(round((time() - t) / 60, 2)))
df_clean_train = pd.DataFrame({'clean': txt_train})
df_clean_train = df_clean_train.dropna().drop_duplicates()
df_clean_test = pd.DataFrame({'clean': txt_test})
df_clean_test = df_clean_test.dropna().drop_duplicates()
print('Finish cleaning...')
return df_clean_train, df_clean_test
df_clean_train, df_clean_test = data_prepare3(train3,test3,22)
df_clean_train.clean[0]
df_clean_train.head()
df_clean_test.head()
def data_bigrams(df):
t = time()
sent = [row.split() for row in df['clean']] # seperate the each word for each sentence
phrases = Phrases(sent, min_count=5, progress_per=100) # detect the bigrams combinetion, Creates the relevant phrases from the list of sentences:
bigram = Phraser(phrases) # save memory for bigrams detection
sentences = bigram[sent] # Transform the corpus based on the bigrams detected
print('Time to detect everything: {} mins'.format(round((time() - t) / 60, 2)))
return sentences
sentences = data_bigrams(df_clean_train)
word_freq = defaultdict(int)
for sent in sentences:
for i in sent:
word_freq[i] += 1
len(word_freq)
sorted(word_freq, key=word_freq.get, reverse=True)[:10]
# mainly a sanity check of the effectiveness of the lemmatization, removal of stopwords, punctuation, and addition of bigrams.
import multiprocessing
from gensim.models import Word2Vec
cores = multiprocessing.cpu_count() # Count the number of cores in a computer
w2v_model = Word2Vec(min_count=5,
window=2,
size=300,
sample=6e-5,
alpha=0.003,
min_alpha=0.0007,
negative=20,
workers=cores-1)
t = time()
w2v_model.build_vocab(sentences, progress_per=100)
print('Time to build vocab: {} mins'.format(round((time() - t) / 60, 2)))
t = time()
w2v_model.train(sentences, total_examples=w2v_model.corpus_count, epochs=40, report_delay=1)
print('Time to train the model: {} mins'.format(round((time() - t) / 60, 2)))
print('The size of word2vector model has {} unique word.'.format(len(w2v_model.wv.vocab)))
w2v_model.init_sims(replace=True)
w2v_model.wv.most_similar('good')
import pandas as pd
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score, f1_score
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
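# Cluster the Word2Vec word vectors into 2 groups, which are assumed below to correspond to positive and negative sentiment.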
model = KMeans(n_clusters=2, max_iter=1000, random_state=True, n_init=60).fit(X=w2v_model.wv.vectors)
w2v_model.wv.similar_by_vector(model.cluster_centers_[0], topn=10, restrict_vocab=None)
positive_cluster_center = model.cluster_centers_[0]
negative_cluster_center = model.cluster_centers_[1]
words = pd.DataFrame(w2v_model.wv.vocab.keys())
words.columns = ['words']
words['vectors'] = words.words.apply(lambda x: w2v_model.wv[f'{x}']) # give each word with vector(300,1) value
words['cluster'] =words.vectors.apply(lambda x: model.predict([np.array(x)])) # give each word cluester label, here 1 as negative, 0 as positive
words.cluster = words.cluster.apply(lambda x: x[0])
words.head()
words['cluster_value'] = [1 if i==0 else -1 for i in words.cluster] # positive as 1, negative as -1
words['closeness_score'] = words.apply(lambda x: 1/(model.transform([x.vectors]).min()), axis=1) # calculate the distance of each word to each cluster and find closest
words['sentiment_coeff'] = words.closeness_score * words.cluster_value
words.head(10)
sentiment_dict = dict(zip(words.words.values, words.sentiment_coeff.values)) # create sentiment dictionary
file_weighting = df_clean_train.copy()
file_weighting_test = df_clean_test.copy()
tfidf = TfidfVectorizer()
tfidf.fit(file_weighting.clean)
features = pd.Series(tfidf.get_feature_names())
transformed = tfidf.transform(file_weighting.clean) # replace each word with their corresponding tfidf score.
def create_tfidf_dictionary(x, transformed_file, features):
'''
create dictionary for each input sentence x, where each word has assigned its tfidf score
inspired by function from this wonderful article:
https://medium.com/analytics-vidhya/automated-keyword-extraction-from-articles-using-nlp-bfd864f41b34
x - row of dataframe, containing sentences, and their indexes,
transformed_file - all sentences transformed with TfidfVectorizer
features - names of all words in corpus used in TfidfVectorizer
'''
vector_coo = transformed_file[x.name].tocoo()
vector_coo.col = features.iloc[vector_coo.col].values
dict_from_coo = dict(zip(vector_coo.col, vector_coo.data))
return dict_from_coo
def replace_tfidf_words(x, transformed_file, features):
'''
replacing each word with it's calculated tfidf dictionary with scores of each word, if the word is not in the tfidf dictionary, then use 0 replace
x - row of dataframe, containing sentences, and their indexes,
transformed_file - all sentences transformed with TfidfVectorizer
features - names of all words in corpus used in TfidfVectorizer
'''
dictionary = create_tfidf_dictionary(x, transformed_file, features)
return list(map(lambda y: dictionary[f'{y}'] if (y in dictionary) else 0, x.clean.split()))
replaced_tfidf_scores = file_weighting.apply(lambda x: replace_tfidf_words(x, transformed, features), axis=1)
def replace_sentiment_words(word, sentiment_dict):
'''
replacing each word with its associated sentiment score from sentiment dict
'''
try:
out = sentiment_dict[word]
except KeyError:
out = 0
return out
replaced_closeness_scores = file_weighting.clean.apply(lambda x: list(map(lambda y: replace_sentiment_words(y, sentiment_dict), x.split())))
replaced_closeness_scores_test = file_weighting_test.clean.apply(lambda x:
list(map(lambda y: replace_sentiment_words(y, sentiment_dict), x.split())))
replaced_closeness_scores.head()
replaced_closeness_scores_test.head()
###Output
_____no_output_____
###Markdown
Using closeness scores
###Code
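# For each headline, the positive rate / negative rate are the fractions of its words whose
# cluster-based sentiment coefficient is non-negative / negative.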
positive_rate = replaced_closeness_scores.apply(lambda x: np.mean(np.array(x) >= 0))
negative_rate = replaced_closeness_scores.apply(lambda x: np.mean(np.array(x) < 0))
positive_rate_test = replaced_closeness_scores_test.apply(lambda x: np.mean(np.array(x) >= 0))
negative_rate_test = replaced_closeness_scores_test.apply(lambda x: np.mean(np.array(x) < 0))
replacement_new_df = pd.DataFrame(data=[file_weighting.clean, positive_rate, negative_rate,train_target3]).T
replacement_new_df.columns = ['sentence', 'positive rate', 'negative rate', 'Label']
replacement_new_df_test = pd.DataFrame(data=[file_weighting_test.clean, positive_rate_test, negative_rate_test]).T
replacement_new_df_test.columns = ['sentence', 'positive rate', 'negative rate']
replacement_new_df.head()
replacement_new_df_test.head()
replacement_new_df.dtypes
replacement_new_df = replacement_new_df.astype({'Label': 'int','positive rate':'float','negative rate':'float'})
replacement_new_df_test = replacement_new_df_test.astype({'positive rate':'float','negative rate':'float'})
replacement_new_df.dtypes
replacement_new_df_test.dtypes
model = LogisticRegression()
LR = model.fit(replacement_new_df[['positive rate','negative rate']],replacement_new_df['Label'])
predictions = LR.predict(replacement_new_df[['positive rate','negative rate']])
predicted_classes = predictions
y_test = replacement_new_df['Label']
conf_matrix = pd.DataFrame(confusion_matrix(replacement_new_df['Label'], predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
predictions = LR.predict(replacement_new_df_test[['positive rate','negative rate']])
predicted_classes = predictions
y_test = test_target3
conf_matrix = pd.DataFrame(confusion_matrix(test_target3, predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
from xgboost import XGBClassifier
xgb_model = XGBClassifier()
XGB = xgb_model.fit(replacement_new_df[['positive rate','negative rate']],replacement_new_df['Label'])
XGB_predictions = XGB.predict(replacement_new_df[['positive rate','negative rate']])
XGB_predictions_test = XGB.predict(replacement_new_df_test[['positive rate','negative rate']])
predicted_classes = XGB_predictions
y_test = replacement_new_df['Label']
conf_matrix = pd.DataFrame(confusion_matrix(replacement_new_df['Label'], XGB_predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
predicted_classes = XGB_predictions_test
y_test = test_target3
conf_matrix = pd.DataFrame(confusion_matrix(test_target3, XGB_predictions_test))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
###Output
Confusion Matrix
###Markdown
Try another approach using NLTK (VADER) for unsupervised sentiment analysis, since VADER is good at dealing with social-media text
###Code
import nltk
nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
def get_vader_score(sent):
# Polarity score returns dictionary
ss = sid.polarity_scores(sent)
for k in sorted(ss):
print('{0}: {1}, '.format(k, ss[k]), end='')
print()
return list(ss.values())
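# Quick illustration of the VADER output format on a made-up sentence (the 'neg'/'neu'/'pos'/'compound'
# values depend on the VADER lexicon):
print(sid.polarity_scores('Markets rally as fears ease'))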
replacement_vader_train = df_clean_train.clean.apply(get_vader_score)
replacement_vader_test = df_clean_test.clean.apply(get_vader_score)
replacement_df_vader_train = pd.DataFrame(list(replacement_vader_train),columns=['neg','neu','pos','com'])
replacement_df_vader_train.head()
replacement_df_vader_test = pd.DataFrame(list(replacement_vader_test),columns=['neg','neu','pos','com'])
replacement_df_vader_test.head()
vader_model = LogisticRegression()
vader_LR = vader_model.fit(replacement_df_vader_train[['neg','neu','pos','com']],train_target3)
vader_predictions = vader_LR.predict(replacement_df_vader_train[['neg','neu','pos','com']])
predicted_classes = vader_predictions
y_test = train_target3
conf_matrix = pd.DataFrame(confusion_matrix(train_target3, vader_predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
vader_predictions_test = vader_LR.predict(replacement_df_vader_test[['neg','neu','pos','com']])
predicted_classes = vader_predictions_test
y_test = test_target3
conf_matrix = pd.DataFrame(confusion_matrix(test_target3, vader_predictions_test))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
xgb_vader_model = XGBClassifier()
XGB_vader = xgb_vader_model.fit(replacement_df_vader_train[['neg','neu','pos','com']],train_target3)
XGB_vader_predictions = XGB_vader.predict(replacement_df_vader_train[['neg','neu','pos','com']])
XGB_vader_predictions_test = XGB_vader.predict(replacement_df_vader_test[['neg','neu','pos','com']])
predicted_classes = XGB_vader_predictions
y_test = train_target3
conf_matrix = pd.DataFrame(confusion_matrix(train_target3, XGB_vader_predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
predicted_classes = XGB_vader_predictions_test
y_test = test_target3
conf_matrix = pd.DataFrame(confusion_matrix(test_target3, XGB_vader_predictions_test))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
replacement_vader_train[0]
df_clean_train.head()
###Output
_____no_output_____
###Markdown
Using the original dataset without cleaning
###Code
def get_vader_comp(sent):
# Polarity score returns dictionary
ss = sid.polarity_scores(sent)
return ss['compound']
def original_data_pre(train,test,num_news):
replacement_comp_train = train.iloc[:,:num_news].applymap(lambda sent: get_vader_comp(sent[2:]))
replacement_comp_test = test.iloc[:,:num_news].applymap(lambda sent: get_vader_comp(sent[2:])) # strip the leading b' prefix, same as the training set
return replacement_comp_train, replacement_comp_test
replacement_comp_train, replacement_comp_test = original_data_pre(train3,test3,10)
replacement_comp_train.head()
train_columns = list(replacement_comp_train.columns)
xgb_vader_comp_model = XGBClassifier()
XGB_vader_comp = xgb_vader_comp_model.fit(replacement_comp_train[train_columns],train_target3)
XGB_vader_comp_predictions = XGB_vader_comp.predict(replacement_comp_train[train_columns])
XGB_vader_comp_predictions_test = XGB_vader_comp.predict(replacement_comp_test[train_columns])
predicted_classes = XGB_vader_comp_predictions
y_test = train_target3
conf_matrix = pd.DataFrame(confusion_matrix(train_target3, XGB_vader_comp_predictions))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
predicted_classes = XGB_vader_comp_predictions_test
y_test = test_target3
conf_matrix = pd.DataFrame(confusion_matrix(test_target3, XGB_vader_comp_predictions_test))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
XGB_vader_comp.get_xgb_params
xgb_vader_comp = Pipeline(steps=[('classifier', XGBClassifier())])
xgb_vader_comp_param_grid ={
'classifier__objective':['binary:logistic'],
'classifier__learning_rate': [0.05,0.1,0.2,0.3], #so called `eta` value
'classifier__max_depth': [3,4,5,6],
'classifier__min_child_weight': [1,2,3],
'classifier__subsample': [0.6,0.8],
'classifier__colsample_bytree': [0.5,0.6,0.7],
'classifier__n_estimators': [100,200,300]}
xgb_vader_comp_CV = GridSearchCV(xgb_vader_comp, xgb_vader_comp_param_grid, n_jobs= 1)
xgb_vader_comp_CV.fit(replacement_comp_train[train_columns],train_target3)
print(xgb_vader_comp_CV.best_params_)
print('Training accuracy:{0:.3f}'.format(xgb_vader_comp_CV.best_score_))
print('Validation accuracy: {0:.3f}'.format(xgb_vader_comp_CV.best_estimator_.score(replacement_comp_test[train_columns],test_target3)))
###Output
C:\Users\Administrator\Anaconda3\envs\envTF113\lib\site-packages\sklearn\model_selection\_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
So far, using XGB with the unsupervised learning method (replacing each row's whole sentence with a positive rate and a negative rate) on 20 news headlines has the best result, over 74% accuracy.
Merging sentiment scores and tf-idf scores
The dot product of these two sentence vectors indicates whether the overall sentiment is positive or negative (if the dot product is positive, the sentiment is positive; otherwise it is negative).
###Code
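# Toy illustration of the sign rule described above (numbers are made up, not taken from the data):
# word sentiment coefficients [0.8, -0.3, 0.5] and tf-idf weights [0.2, 0.6, 0.4] give a dot product of
# 0.8*0.2 + (-0.3)*0.6 + 0.5*0.4 = 0.18 > 0, so that sentence would be scored as positive.
toy_score = np.array([0.8, -0.3, 0.5]) @ np.array([0.2, 0.6, 0.4])
print(toy_score, int(toy_score > 0))  # 0.18 1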
replacement_df = pd.DataFrame(data=[replaced_closeness_scores, replaced_tfidf_scores, file_weighting.clean, train_target3]).T
replacement_df.columns = ['sentiment_coeff', 'tfidf_scores', 'sentence', 'Label']
replacement_df['sentiment_rate'] = replacement_df.apply(lambda x: np.array(x.loc['sentiment_coeff']) @ np.array(x.loc['tfidf_scores']), axis=1)
replacement_df['prediction'] = (replacement_df.sentiment_rate>0).astype('int8')
replacement_df['sentiment'] = [1 if i==1 else 0 for i in replacement_df.Label]
replacement_df.head(10)
predicted_classes = replacement_df.prediction
y_test = replacement_df.sentiment
conf_matrix = pd.DataFrame(confusion_matrix(replacement_df.sentiment, replacement_df.prediction))
print('Confusion Matrix')
display(conf_matrix)
test_scores = accuracy_score(y_test,predicted_classes), precision_score(y_test, predicted_classes), recall_score(y_test, predicted_classes), f1_score(y_test, predicted_classes)
print('\n \n Scores')
scores = pd.DataFrame(data=[test_scores])
scores.columns = ['accuracy', 'precision', 'recall', 'f1']
scores = scores.T
scores.columns = ['scores']
display(scores)
###Output
Confusion Matrix
###Markdown
The Capstone project will look at neighborhoods of Toronto. This repository will hold the analysis of Toronto neighborhoods.
###Code
# Import the libraries.
import pandas as pd
import numpy as np
import wikipedia as wp
# The wikipedia URL for where to get the data.
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
# Use pandas to read in the wikipedia table.
tables = pd.read_html(url, header=0)
# Turn the list from the last step into a pandas df, and remove the boroughs that are not assigned.
toronto_df = tables[0]
toronto_df = toronto_df[toronto_df.Borough != 'Not assigned']
toronto_df.head(5)
# Get the dimensions of the dataframe
toronto_df.shape
###Output
_____no_output_____
###Markdown
Part 2 - Get the latitude and longitude of each neighborhood
###Code
### This can get the latitude and longitude of each neighborhood.
### However it is running and timing out, so I will use the CSV file instead.
#import geocoder # import geocoder
### initialize your variable to None
lat_lng_coords = None
postal_code = 'M5G'
### loop until you get the coordinates
#while(lat_lng_coords is None):
# g = geocoder.google('{}, Toronto, Ontario'.format(postal_code))
# lat_lng_coords = g.latlng
#latitude = lat_lng_coords[0]
#longitude = lat_lng_coords[1]
# Get the latitude and longitude from the remote csv file.
lat_lng_df = pd.read_csv("https://cocl.us/Geospatial_data")
# Merge the two files on the postal code. This will create a new data frame with postal code,
# borough, neighborhood, latitude, and longitude.
data_df = toronto_df.merge(lat_lng_df, left_on='Postal Code', right_on='Postal Code')
# Take a look at the combined data frame.
data_df.head()
# Only using the postal codes where the Borough contains 'Toronto'. Reset the index so it is sequential.
data_df = data_df[data_df.Borough.str.contains('Toronto')]
data_df.reset_index(drop=True, inplace=True)
data_df.head()
###Output
_____no_output_____
###Markdown
Part 3 - Perform analysis on the Toronto dataset.
Step 1: Create a base map of Toronto using latitude and longitude
Using Folium, we will create an empty map of Toronto. To do this, we first need to get the latitude and longitude of Toronto using the geopy library.
###Code
# This is the city we want to locate.
city = 'Toronto'
import geopy
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(city)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinates of Toronto are {}, {}.'.format(latitude, longitude))
# Create map of Toronto using latitude and longitude from part 2. Using zoom level 12 to see the appropriate level of detail.
import folium
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=12)
# add markers to map
for lat, lng, label in zip(data_df['Latitude'], data_df['Longitude'], data_df['Neighbourhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
CLIENT_ID = 'Y550BMMC2OBAN3LRQKB3UECJ1HM40LFY2PE0KJFAYZDGTYAK'
CLIENT_SECRET = 'MDXCMLZ3DQBJXQHOUEPILQYW1BPGAA21LITYAILLOALL1N0I'
LIMIT = 100 # A default Foursquare API limit value
data_df.loc[0, 'Neighbourhood']
neighbourhood_latitude = data_df.loc[0, 'Latitude'] # neighborhood latitude value
neighbourhood_longitude = data_df.loc[0, 'Longitude'] # neighborhood longitude value
neighbourhood_name = data_df.loc[0, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighbourhood_name,
neighbourhood_latitude,
neighbourhood_longitude))
# Define the radius to be within 500 meters and create the URL. Limit to 100 responses.
radius = 500
lmt = 100
version = 20201220
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
version,
neighbourhood_latitude,
neighbourhood_longitude,
radius,
lmt)
print(url)
import json
import requests
from pandas.io.json import json_normalize
results = requests.get(url).json()
results
# Function that gets the venue's category
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
# Function to get the nearby venues for each neighbourhood within a radius (500m).
def getNearbyVenues(names, latitudes, longitudes, radius=500, lmt=100, version=20201220):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
version,
lat,
lng,
radius,
lmt)
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
#Create the dataframe.
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighbourhood Latitude',
'Neighbourhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
# Need to turn this into a pandas df.
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues)
# Get name, categories, lat, lng columns only.
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
toronto_venues = getNearbyVenues(names=data_df['Neighbourhood'],
latitudes=data_df['Latitude'],
longitudes=data_df['Longitude']
)
# Take a look at the dataframe with venues.
print(toronto_venues.shape)
toronto_venues.head()
print(toronto_venues['Venue Category'].unique())
toronto_venues.groupby('Neighbourhood').count()
###Output
['Bakery' 'Coffee Shop' 'Distribution Center' 'Spa' 'Restaurant'
'Breakfast Spot' 'Gym / Fitness Center' 'Historic Site' 'Park'
'Chocolate Shop' 'Farmers Market' 'Performing Arts Venue' 'Dessert Shop'
'Pub' 'French Restaurant' 'Event Space' 'Mexican Restaurant'
'Yoga Studio' 'Café' 'Theater' 'Shoe Store' 'Brewery' 'Art Gallery'
'Cosmetics Shop' 'Electronics Store' 'Beer Store' 'Bank' 'Hotel'
'Antique Shop' 'Italian Restaurant' 'Creperie' 'Sushi Restaurant'
'Beer Bar' 'Hobby Shop' 'Diner' 'Burrito Place' 'Fried Chicken Joint'
'Nightclub' 'Japanese Restaurant' 'Smoothie Shop' 'Sandwich Place' 'Gym'
'Bar' 'College Auditorium' 'College Cafeteria' 'Clothing Store'
'Comic Shop' 'Plaza' 'Ramen Restaurant' 'Music Venue' 'Pizza Place'
'Thai Restaurant' 'Burger Joint' 'College Rec Center' 'Shopping Mall'
'New American Restaurant' 'Tanning Salon' 'Fast Food Restaurant'
'Steakhouse' 'Bookstore' 'Sporting Goods Shop'
'Modern European Restaurant' 'Gastropub' 'Miscellaneous Shop' 'Lake'
'Tea Room' 'Department Store' 'Lounge' 'Furniture / Home Store'
'Ethiopian Restaurant' 'Chinese Restaurant' 'Middle Eastern Restaurant'
'Bubble Tea Shop' 'Seafood Restaurant' 'Video Game Store' 'Wine Bar'
'Other Great Outdoors' 'Poutine Place' 'Lingerie Store' 'Movie Theater'
'Office' 'Vietnamese Restaurant' 'Ice Cream Shop' 'Smoke Shop' 'Pharmacy'
'Hookah Bar' 'Food Truck' 'BBQ Joint' 'American Restaurant'
'Cocktail Bar' 'Vegetarian / Vegan Restaurant' 'Fountain' 'Tailor Shop'
'Grocery Store' 'Cheese Shop' 'German Restaurant'
'Comfort Food Restaurant' 'Salon / Barbershop' 'Irish Pub'
'Asian Restaurant' 'Moroccan Restaurant' 'Bistro' 'Belgian Restaurant'
'Trail' 'Health Food Store' 'Neighborhood' 'Liquor Store' 'Museum'
'Concert Hall' 'Basketball Stadium' 'Jazz Club' 'Fish Market'
'Greek Restaurant' 'Bagel Shop' 'Beach' 'Gourmet Shop'
'Indian Restaurant' 'Eastern European Restaurant' 'Juice Bar'
'Art Museum' 'Poke Place' 'Portuguese Restaurant' 'Discount Store'
'Falafel Restaurant' 'Salad Place' 'Donut Shop' 'Korean Restaurant'
'Candy Store' 'Baby Store' 'Athletics & Sports' 'Speakeasy'
'Monument / Landmark' 'Colombian Restaurant' 'Mediterranean Restaurant'
'Noodle House' 'Gluten-free Restaurant' 'Brazilian Restaurant'
'Deli / Bodega' 'Latin American Restaurant' 'Gift Shop' 'Cupcake Shop'
'Building' 'Soup Place' 'Supermarket' 'Pet Store' 'Skating Rink'
'IT Services' 'Roof Deck' 'Dance Studio' 'Aquarium' 'Sports Bar'
'Train Station' 'Scenic Lookout' 'Baseball Stadium' 'History Museum'
'Food Court' 'Indie Movie Theater' 'Hotel Bar' 'Cuban Restaurant'
'Record Shop' "Men's Store" 'Fruit & Vegetable Store'
'Tibetan Restaurant' 'Caribbean Restaurant' 'Frozen Yogurt Shop'
'General Travel' 'General Entertainment' 'Taco Place' 'Garden'
'Climbing Gym' 'Stadium' 'Intersection' 'Convenience Store'
'Fish & Chips Shop' 'Food & Drink Shop' 'Gay Bar' 'Stationery Store'
'Coworking Space' 'Swim School' 'Bus Line' 'Business Service' 'Dog Run'
'Jewelry Store' 'Flea Market' 'Arts & Crafts Store'
'Cajun / Creole Restaurant' 'Toy / Game Store' 'Gas Station'
'College Gym' 'College Arts Building' 'Post Office' 'Organic Grocery'
'Snack Place' 'Gaming Cafe' 'Filipino Restaurant' 'Doner Restaurant'
'Massage Studio' 'Hospital' 'Bed & Breakfast' 'Light Rail Station'
'Airport' 'Airport Lounge' 'Harbor / Marina' 'Airport Food Court'
'Airport Terminal' 'Boutique' 'Airport Service' 'Rental Car Location'
'Sculpture Garden' 'Boat or Ferry' 'Playground'
'Molecular Gastronomy Restaurant' 'Church' 'Optical Shop' 'Butcher'
'Taiwanese Restaurant' 'Market' 'Opera House' 'Theme Restaurant'
'Martial Arts School' 'Escape Room' 'Adult Boutique' 'Sake Bar'
'Health & Beauty Service' 'Strip Club' 'Skate Park' 'Garden Center'
'Auto Workshop']
###Markdown
One-hot encode the venue categories
This is where we need to one-hot encode the categories for the venues before we run analysis on them.
###Code
# One hot encoding of the venue categorical variables
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# Add neighbourhood column back to the df
toronto_onehot['Neighbourhood'] = toronto_venues['Neighbourhood']
# Move neighbourhood column to the first column for display purposes.
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
print(toronto_onehot.shape)
toronto_onehot.head()
# Group by neighborhood
toronto_grouped = toronto_onehot.groupby('Neighbourhood').mean().reset_index()
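# Taking the mean of the one-hot columns per neighbourhood gives, for each venue category,
# the fraction of that neighbourhood's venues that fall into the category.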
print(toronto_grouped.shape)
toronto_grouped
# Function that sorts and returns top venues.
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
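# Illustrative usage (venue names are hypothetical): return_most_common_venues(row, 3) might
# return array(['Coffee Shop', 'Café', 'Park']) for a row of category frequencies.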
# Create a dataframe of the top 10 venues by neighbourhood.
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighbourhoods_venues_sorted = pd.DataFrame(columns=columns)
neighbourhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighbourhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighbourhoods_venues_sorted.head()
# Run k-means clustering on the neighbourhood.
from sklearn.cluster import KMeans
# Number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', 1)
# Run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels
neighbourhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = data_df
# merge manhattan_grouped with manhattan_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighbourhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
toronto_merged.head() # check the last columns!
# Create Map
import matplotlib.cm as cm
import matplotlib.colors as colors
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighbourhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
# To view the cluster details, just change the cluster label value to between 0 and 4.
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Udacity Machine Learning Nanodegree Capstone Project
Hello there! This is my capstone project on building a model to make better prosthetics. This project is for an open-source prosthetic control system which would enable prosthetic devices to have multiple degrees of freedom. https://github.com/cyber-punk-me
The system is built of several components. It connects a muscle activity (EMG, Electromyography) sensor to a user Android/Android Things app. The app collects data, then a server builds a model specifically for this user. After that, the model can be downloaded and executed on the device to control motors or other appendages.
This dataset can be used to map a user's residual muscle gestures to certain actions of a prosthetic, such as open/close hand or rotate wrist.
This document is divided into 4 parts:
- Data Exploration
- Data Preprocessing
- Evaluating & Comparing Models
- Model Tuning
1. Data Exploration
In this section, we will look at the type of data we are dealing with, and at some visualizations that shall help us better understand it. In addition, we shall load the data and process it into a form suitable for performing the above operations.
1.1 Loading the data
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
import seaborn as sns
from IPython.display import display # Allows the use of display() for DataFrames
import matplotlib.pyplot as plt
import pandas
from pandas.plotting import scatter_matrix
###Output
_____no_output_____
###Markdown
Our data is present in 4 separate files, one for each class. First, we shall load them all into dataframes and take a peek into each of the datasets.
###Code
# read dataframes - header=None is given as we do not have the headers in the .csv files
data0 = pd.read_csv("0.csv", header=None)
data1 = pd.read_csv("1.csv", header=None)
data2 = pd.read_csv("2.csv", header=None)
data3 = pd.read_csv("3.csv", header=None)
# Display the first record
display(data0.head(n=1))
display(data1.head(n=1))
display(data2.head(n=1))
display(data3.head(n=1))
###Output
_____no_output_____
###Markdown
As we can see above, the last column contains the category we are attempting to classify, i.e. the target variable. Now, we need to combine these 4 dataframes into 1 big dataframe so we can visualize the different features and work further with the data. The code below does that, and also shows us the shape of the resulting dataframe.
###Code
# append the dataframes into one unified dataset
data = [data0, data1, data2, data3]
data = pd.concat(data, sort=False)
data.shape
###Output
_____no_output_____
###Markdown
Now, we take a small peek into the resulting dataset.
###Code
data.head(n=1)
###Output
_____no_output_____
###Markdown
Finally, since we concatenated the separate datasets into our current dataset, the data is arranged by the output variable in the order of concatenation. We need to shuffle the data so that our algorithms do not become biased in any one direction. If we do not, and we use techniques like K-fold validation to help our classifiers better model the data, the models will not learn from the data properly because of the uneven spread of classes across folds.
###Code
# Shuffle the dataframe to randomize
data = data.sample(frac=1)
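# sample(frac=1) returns all rows in a random order but keeps the original index;
# an optional data.reset_index(drop=True) could be added if a clean 0..n-1 index is preferred.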
data.shape
###Output
_____no_output_____
###Markdown
1.2 Data Visualization
Now, we shall visualize the data to get a better understanding of how it is distributed. Before that, we split the data into the independent variables X (the features) and the dependent variable y (the target variable).
###Code
X = data.drop([64],axis=1)
y = data[[64]]
###Output
_____no_output_____
###Markdown
Below, we plot histograms for each of the 64 features we have in our dataset. This will help us get a better understanding of how the data is distributed.
###Code
%matplotlib inline
fig = plt.figure(figsize = (15,20))
ax = fig.gca()
X.hist(ax = ax)
plt.show()
###Output
_____no_output_____
###Markdown
Now, we will plot a heatmap of the features in our dataset. From here, we can intuitively tell if correlations exist between the different features.
###Code
plt.figure(figsize=(15, 10))
sns.heatmap(X.corr())
plt.xticks(rotation=90)
plt.yticks(rotation=0)
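# A quick, hedged check of how much PCA could compress this data: the cumulative explained-variance
# ratio of a PCA fit on the (unscaled) features. This is only a sanity check, not part of the final pipeline.
from sklearn.decomposition import PCA
pca_check = PCA().fit(X)
print(np.cumsum(pca_check.explained_variance_ratio_)[:10])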
###Output
_____no_output_____
###Markdown
From the above, we can see that there is not much correlation between the different features in the dataset. This suggests that PCA might not be of much help here, as there is little correlation to draw on to build eigenvectors that are representative of the variance in the data (see the quick explained-variance check in the code cell above).
1.3 Training and Testing Sets
Below, we shall split the data into train and test sets.
###Code
# Train test split to get train and test sets
from sklearn.model_selection import train_test_split
# Split
X_tr, X_test, y_tr, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
1.4 Benchmark Model
###Code
from sklearn import tree
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
clf = tree.DecisionTreeClassifier()
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_test)
print("Accuracy \n")
print(accuracy_score(y_pred, y_test))
print("\nF1 Score \n")
print(f1_score(y_pred, y_test, average = 'micro'))
###Output
Accuracy
0.791095890410959
F1 Score
0.791095890410959
###Markdown
2. Data Preprocessing

Here, we shall perform 2 major tasks:
- Feature Scaling: As the data is numerical sensor data reported from different sensors, some features could be disproportionately larger than others, which would cause issues for the algorithms we'll be using to model this problem.
- Principal Component Analysis: As the data is collected for a whole hand, some of the features might be correlated with one another, which could also affect the model. We shall test how useful PCA is here by comparing the efficacy of the same algorithm before and after applying it.

2.0 Cleaning The Data

Here, we shall check whether there are any empty or null values in our dataset. If there are, we can replace them with a representative statistic of our choice (e.g. mean or median).
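If any missing values turn up, a minimal hypothetical sketch of such a fill could look like this (the check in the next cell shows there are in fact none, so this is illustrative only):
###Code
# Hypothetical sketch: fill any NaNs with the per-column median before modelling.
# (This dataset turns out to have no missing values, so this cell is illustrative only.)
X_filled = X.fillna(X.median())
###Output
_____no_output_____
###Markdown
Now let us confirm whether any null values are actually present.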
###Code
X.isnull().sum()
###Output
_____no_output_____
###Markdown
As we can see above, we do not have any null values in our dataset, so we can go ahead and use it straight away. 2.1 Feature Scaling We shall now perform feature scaling on our data, and then look at a visualization similar to the earlier one.
###Code
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X = data.drop([64],axis=1)
rescaled_X = scaler.fit_transform(X)
rescaled_X = pd.DataFrame(rescaled_X)
rescaled_X.head()
X = rescaled_X
import matplotlib.pyplot as plt
import pandas
from pandas.plotting import scatter_matrix
%matplotlib inline
fig = plt.figure(figsize = (15,20))
ax = fig.gca()
X.hist(ax = ax)
plt.show()
###Output
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py:3267: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
Now we have scaled all of our features; the graphs above show that they lie within the range [0,1]. 2.2 Testing with a model before PCA As we are unsure, based on the heatmap plotted earlier, how well PCA will work, we will check how well the same classifier models the data before and after PCA. We shall now train a Random Forest classifier on this data.
###Code
# Train test split to get train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
clf = RandomForestClassifier(random_state=100, max_depth=7)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy \n")
print(accuracy_score(y_pred, y_test))
print("\nF1 Score \n")
print(f1_score(y_pred, y_test, average = 'micro'))
###Output
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\sklearn\ensemble\forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\ipykernel_launcher.py:5: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
"""
###Markdown
So, as we can see above, we get an accuracy of about 82% and an F1 score of about 82% on our testing set. 2.3 PCA Here, we shall apply PCA to the data and check how well it captures the variance in the data.
###Code
# PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=32)
pca.fit(X)
print(pca.explained_variance_ratio_)
print(sum(pca.explained_variance_ratio_))
###Output
[0.06355192 0.06068093 0.05232303 0.04682559 0.0444351 0.04126497
0.03950002 0.03874431 0.03412771 0.02693582 0.02604259 0.0253622
0.024697 0.0241161 0.02279142 0.02208147 0.02161344 0.02055763
0.01881507 0.01807716 0.01709244 0.01645739 0.01576337 0.01516021
0.01362832 0.01282726 0.01255563 0.01174782 0.011478 0.01116256
0.01077317 0.01029567]
0.8314853121498679
###Markdown
The principal component with the highest explained variance captures only ~6% of the variance in the data, which is not a great result. This corroborates what we saw in the heatmap earlier: there is not much correlation between the features in this dataset.
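As a quick side check (a minimal sketch, not part of the original analysis), we can ask how many components a full PCA would need in order to reach a chosen variance threshold such as 95%:
###Code
# Sketch: cumulative explained variance of a full PCA on the scaled features.
import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA().fit(X)  # X is still the 64 scaled features at this point
cum_var = np.cumsum(pca_full.explained_variance_ratio_)
print("Components needed for 95% of the variance:", int(np.argmax(cum_var >= 0.95)) + 1)
###Output
_____no_output_____
###Markdown
We now transform the data with the 32-component PCA fitted above so we can compare model performance with and without it.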
###Code
x = X
nx = pca.transform(X)
X=pd.DataFrame(nx)
y = data[[64]]
###Output
_____no_output_____
###Markdown
2.4 Testing with a model after PCA So, as we can see above, the features obtained as a result of PCA do not seem to help in reducing the dimensionality of the data, nor do they seem to adequately capture the variance in our dataset.Let us continue by checking how the same Random Forest performs on the features obtained through PCA.
###Code
# Train test split to get train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
clf = RandomForestClassifier(random_state=42, max_depth=7)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy \n")
print(accuracy_score(y_pred, y_test))
print("\nF1 Score \n")
print(f1_score(y_pred, y_test, average = 'micro'))
###Output
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\sklearn\ensemble\forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\ipykernel_launcher.py:5: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
"""
###Markdown
So, it looks like PCA did not help much after all in our case. In fact, it has reduced the accuracy of our classifier.In this scenario, it would be better to leave the cleaned data as it is and not perform PCA.
###Code
X = rescaled_X
X.head()
###Output
_____no_output_____
###Markdown
We shall save the cleaned data here as we may be requiring it later.
###Code
# Train test split to get train and test sets
from sklearn.model_selection import train_test_split
# First time
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
SaveX_train = X_train
SaveX_test = X_test
Savey_train = y_train
Savey_test = y_test
###Output
_____no_output_____
###Markdown
3. Evaluating and Comparing Models

Now, we shall take several different classifiers, use them to model the data, and compare how well they perform. We shall check the following models:
- Logistic Regression
- Decision Trees
- Random Forests
- AdaBoost

For these models, we can use K-fold cross-validation so that we do not set aside a separate validation split, which helps the models learn better as they will have a larger training set available to them. We shall also experiment with a deep learning Multi Layer Perceptron (MLP) and see how well it performs. To compare the models, we shall use 2 metrics:
- Accuracy
- F1 Score

3.1 Model Comparison

We are not sure which models will perform well, so we shall compare a number of them to see which performs best.
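As a brief aside (a minimal sketch with made-up labels, not our data), the macro-averaged F1 score used below is simply the unweighted mean of the per-class F1 scores:
###Code
# Sketch: macro-F1 equals the mean of the per-class F1 scores.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])
y_hat = np.array([0, 1, 2, 0, 0, 1, 1, 3])
per_class = f1_score(y_true, y_hat, average=None)
print(per_class, per_class.mean(), f1_score(y_true, y_hat, average='macro'))
###Output
_____no_output_____
###Markdown
Now we run the comparison itself.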
###Code
from sklearn import model_selection
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn import metrics
scoring = ['accuracy', 'f1_macro']
models = []
models.append(('1. Logistic Regression', LogisticRegression()))
models.append(('2. Decision Tree', DecisionTreeClassifier()))
models.append(('3. Random Forest', RandomForestClassifier(n_estimators=100)))
models.append(('4. AdaBoost', AdaBoostClassifier(RandomForestClassifier(n_estimators=100))))
op = ""
for name, model in models:
kfold = model_selection.KFold(n_splits = 4, random_state = 47, shuffle=True)
cv_results = model_selection.cross_validate(model, X_train, y_train, cv = kfold, scoring=scoring, return_train_score=True )
print(name+"\nThe accuracy and F1 score are:\n")
op+=name+"\nThe accuracy and F1 score are:\n"
for met in scoring:
key = 'test_'+met
print(np.mean(cv_results[key]))
op=op+str(np.mean(cv_results[key]))+"\n"
print(op)
###Output
1. Logistic Regression
The accuracy and F1 score are:
0.3395396237276701
0.33945200953222576
2. Decision Tree
The accuracy and F1 score are:
0.7688922204540788
0.7696804477111725
3. Random Forest
The accuracy and F1 score are:
0.9146866108357044
0.9144864141906249
4. AdaBoost
The accuracy and F1 score are:
0.9156494291015225
0.9154543043569198
###Markdown
3.2 MLP Classifier Below, we shall create a Multi Layer Perceptron (MLP) with 3 hidden layers that use the ReLU activation function, interleaved with dropout layers. We will one-hot encode the target variable so that the MLP's output layer can have 4 nodes with the softmax activation function. In addition, we shall split the data twice to get training, cross-validation and testing sets.
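For illustration (a minimal sketch, separate from the encoder used below), one-hot encoding turns each of the 4 class labels into a 4-element indicator vector, which matches the 4-node softmax output layer:
###Code
# Sketch: one-hot encoding of the four class labels.
import pandas as pd
print(pd.get_dummies(pd.Series([0, 1, 2, 3, 2])))
###Output
_____no_output_____
###Markdown
Now we build and train the MLP.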
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
X = data.drop([64],axis=1)
y = data[[64]]
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(y)
enc.categories_
y2=enc.transform(y)
y2 = pd.DataFrame(y2.todense())
y2.head()
# Train test split twice to get train, cross validation and test sets
from sklearn.model_selection import train_test_split
# First time
X_train, X_test, y_train, y_test = train_test_split(X, y2, test_size=0.2, random_state=0)
# Second time
X_train, X_crossval, y_train, y_crossval = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
# Build the model architecture
model = Sequential()
model.add(Dense(32, activation="relu", input_shape=(64,)))
model.add(Dropout(0.25))
model.add(Dense(16, activation="relu"))
model.add(Dropout(.2))
model.add(Dense(8, activation="relu"))
model.add(Dropout(.1))
model.add(Dense(4, activation="softmax"))
# Compile the model using a loss function and an optimizer.
model.compile(loss = "categorical_crossentropy", optimizer='adam', metrics=['accuracy'])
model.summary()
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
from keras.callbacks import ModelCheckpoint
epochs = 50
checkpointer = ModelCheckpoint(filepath='weights.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
model.fit(X_train, y_train,
validation_data=(X_crossval, y_crossval),
epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
y_pred = model.predict(X_test)
print(y_pred)
print("Accuracy \n")
print(accuracy_score(y_pred.round(), y_test))
print("\nF1 Score \n")
print(f1_score(y_pred.round(), y_test, average = 'micro'))
###Output
Accuracy
0.8831335616438356
F1 Score
0.8932669408962979
###Markdown
So, now we have trained several models. Taking into account that the neural network takes much longer to train, and that the AdaBoost metrics are higher, we shall pick AdaBoost as our classifier for this problem. 4. Model Tuning We have decided on AdaBoost, so we shall now optimize it further to improve its prediction metrics. 4.1 Pre-tuning metrics We take the cleaned data that we used earlier when comparing models.
###Code
X_train = SaveX_train
X_test = SaveX_test
y_train = Savey_train
y_test = Savey_test
clf = AdaBoostClassifier(RandomForestClassifier(n_estimators=100))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy \n")
print(accuracy_score(y_pred.round(), y_test))
print("\nF1 Score \n")
print(f1_score(y_pred, y_test, average = 'micro'))
print("Confusion Matrix: \n",confusion_matrix(y_test, y_pred))
print("Classification Report: \n",classification_report(y_test, y_pred))
###Output
c:\users\athithya\anaconda3\envs\tf_gpu\lib\site-packages\sklearn\utils\validation.py:761: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
4.2 Tuning Now, we shall use grid search to optimize our AdaBoost classifier over different hyperparameters.
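Before running the search, a small sketch of what it will do: the grid below contains 2 x 8 = 16 hyperparameter combinations, each of which is evaluated with cross-validation.
###Code
# Sketch: enumerate the grid that GridSearchCV will evaluate.
from sklearn.model_selection import ParameterGrid
grid = {'n_estimators': [50, 100],
        'learning_rate': [0.01, 0.05, 0.1, 0.3, 1, 3, 5, 10]}
print(len(list(ParameterGrid(grid))))  # 16 combinations
###Output
_____no_output_____
###Markdown
Now we run the grid search itself.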
###Code
from sklearn import model_selection
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn import metrics
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
clf = AdaBoostClassifier(RandomForestClassifier(n_estimators=100))
clf.fit(X_train, y_train)
param_dist = {
'n_estimators': [50, 100],
'learning_rate' : [0.01,0.05,0.1,0.3,1, 3, 5, 10]
}
scorer = make_scorer(accuracy_score)
gridsearch = GridSearchCV(clf, param_dist, scoring=scorer)
gridsearch.fit(X_train, y_train)
print(gridsearch.best_params_)
print("===========================================================================================")
print(gridsearch.best_score_)
###Output
{'learning_rate': 1, 'n_estimators': 100}
===========================================================================================
0.9154356668807536
###Markdown
Finally, we shall check the final scores achieved by our optimized model.
###Code
# Use the best hyperparameters found by the grid search above
tuned_clf = AdaBoostClassifier(RandomForestClassifier(n_estimators=100),
                               n_estimators=100, learning_rate=1)
tuned_clf.fit(X_train, y_train)
y_pred = tuned_clf.predict(X_test)
print("Accuracy \n")
print(accuracy_score(y_pred.round(), y_test))
print("\nF1 Score \n")
print(f1_score(y_pred, y_test, average = 'micro'))
print("Confusion Matrix: \n",confusion_matrix(y_test, y_pred))
print("Classification Report: \n",classification_report(y_test, y_pred))
#plot graph of feature importances
fig = plt.figure(figsize = (15,20))
feat_importances = pd.Series(tuned_clf.feature_importances_, index=X.columns)
feat_importances.plot(kind='barh')
plt.show()
###Output
_____no_output_____
###Markdown
IBM Applied Data Science Capstone Import Packages
###Code
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
###Output
_____no_output_____
###Markdown
Load dataset from CSV File
###Code
data = pd.read_csv("winemag-data_first150k.csv")
###Output
_____no_output_____
###Markdown
Inspect dataset
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150930 entries, 0 to 150929
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 150930 non-null int64
1 country 150925 non-null object
2 description 150930 non-null object
3 designation 105195 non-null object
4 points 150930 non-null int64
5 price 137235 non-null float64
6 province 150925 non-null object
7 region_1 125870 non-null object
8 region_2 60953 non-null object
9 variety 150930 non-null object
10 winery 150930 non-null object
dtypes: float64(1), int64(2), object(8)
memory usage: 12.7+ MB
###Markdown
View dataset
###Code
data.head(20)
###Output
_____no_output_____
###Markdown
Drop unnecessary columns with low correlation
###Code
data.drop(columns=['Unnamed: 0', 'description', 'region_2'], inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Convert string values to numbers using pd.factorize
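A note on pd.factorize (a small sketch with made-up values): it returns both the integer codes and the array of unique labels, so the encoding can be reversed if needed.
###Code
# Sketch: pd.factorize returns (codes, uniques); uniques[codes] recovers the strings.
import pandas as pd
codes, uniques = pd.factorize(pd.Series(['red', 'white', 'red', 'rose']))
print(codes)
print(uniques[codes])
###Output
_____no_output_____
###Markdown
Now we apply it to each of the string columns.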
###Code
factor = pd.factorize(data['variety'])
data.variety = factor[0]
factor = pd.factorize(data['country'])
data.country = factor[0]
factor = pd.factorize(data['province'])
data.province = factor[0]
factor = pd.factorize(data['region_1'])
data.region_1 = factor[0]
factor = pd.factorize(data['winery'])
data.winery = factor[0]
factor = pd.factorize(data['designation'])
data.designation = factor[0]
data.head()
###Output
_____no_output_____
###Markdown
Show correlations of the data
###Code
data.corr()
###Output
_____no_output_____
###Markdown
Assign X and Y Values
###Code
X = data[['country', 'province', 'region_1', 'winery', 'designation']].values
Y = data['variety'].values
###Output
_____no_output_____
###Markdown
Split dataset to train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Normalize X values
###Code
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
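###Markdown
As a quick sanity check (a sketch, not required for the analysis): after standardisation each training column should have a mean of roughly 0 and a standard deviation of roughly 1.
###Code
# Sketch: verify the effect of StandardScaler on the training features.
print(X_train.mean(axis=0).round(3))
print(X_train.std(axis=0).round(3))
###Output
_____no_output_____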
###Markdown
Create, Train, and Predict Random Forest Classifier
###Code
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy')
classifier.fit(X_train, y_train)
y = classifier.predict(X_test)
###Output
_____no_output_____
###Markdown
Evaluate Model Performance
###Code
print("Accuracy: " + str(round(accuracy_score(y_test, y) * 100, 2)) + "%")
print("F1 Score: " + str(f1_score(y_test, y, average='weighted')))
###Output
F1 Score: 0.6521461861477633
###Markdown
IBM Data Science Capstone Project I am going to use this notebook for the Capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Analyzing Mental Illness in Tech Import libraries
###Code
import pandas as pd
from sklearn.model_selection import train_test_split,cross_val_score
import io
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader, TensorDataset
from tqdm.autonotebook import tqdm
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer,MultiLabelBinarizer
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier,XGBRFClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score, roc_curve
from sklearn import preprocessing
###Output
_____no_output_____
###Markdown
Import the csv file
###Code
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
Read the csv file
###Code
df = pd.read_csv('mental-heath-in-tech-2016_20161114.csv')
###Output
_____no_output_____
###Markdown
Look at the first five entries
###Code
df.head()
###Output
_____no_output_____
###Markdown
Get statistics from the dataframe
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Get more statistics from the dataframe
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
Get unique values of target column
###Code
df["Do you currently have a mental health disorder?"].unique()
index_names = df[df['Do you currently have a mental health disorder?'] == "Maybe"].index
df.drop(index_names, inplace= True)
###Output
_____no_output_____
###Markdown
Get the unique values for the conditions column so I can recode some of the values
###Code
df["If yes, what condition(s) have you been diagnosed with?"].unique()
###Output
_____no_output_____
###Markdown
Recode some values
###Code
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(["Autism (Aspergers)",'PDD-NOS','Pervasive developmental disorder'],"Austism Spectrum Disorder")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Anxiety Disorder (Generalized, Social, Phobia, etc)|Asperges'],"Anxiety Disorder (Generalized, Social, Phobia, etc)|Autism Spectrum Disorder")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Anxiety Disorder (Generalized, Social, Phobia, etc)|Mood Disorder (Depression, Bipolar Disorder, etc)|Post-traumatic Stress Disorder|Addictive Disorder|Autism'],'Anxiety Disorder (Generalized, Social, Phobia, etc)|Mood Disorder (Depression, Bipolar Disorder, etc)|Post-traumatic Stress Disorder|Addictive Disorder|Autism Spectrum Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(["I haven't been formally diagnosed, so I felt uncomfortable answering, but Social Anxiety and Depression."],"Anxiety Disorder (Generalized, Social, Phobia, etc|Mood Disorder (Depression, Bipolar Disorder, etc)")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Combination of physical impairment (strongly near-sighted) with a possibly mental one (MCD / "ADHD", though its actually a stimulus filtering impairment)'],"Attention Deficit Hyperactivity Disorder")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Attention Deficit Hyperactivity Disorder|PTSD (undiagnosed)'],'Attention Deficit Hyperactivity Disorder|Post-traumatic Stress Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Depression'],'Mood Disorder (Depression, Bipolar Disorder, etc)')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Attention Deficit Hyperactivity Disorder|Pervasive Developmental Disorder (Not Otherwise Specified)'],'Attention Deficit Hyperactivity Disorder|Autism Spectrum Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Transgender|Mood Disorder (Depression, Bipolar Disorder, etc)|Anxiety Disorder (Generalized, Social, Phobia, etc)'],"Gender Dysphoria|Mood Disorder (Depression, Bipolar Disorder, etc)|Anxiety Disorder (Generalized, Social, Phobia, etc)")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Anxiety Disorder (Generalized, Social, Phobia, etc)|Dissociative Disorder|Autism'],'Anxiety Disorder (Generalized, Social, Phobia, etc)|Dissociative Disorder|Autism Spectrum Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Anxiety Disorder (Generalized, Social, Phobia, etc)|Mood Disorder (Depression, Bipolar Disorder, etc)|Dissociative Disorder|Autism'],'Anxiety Disorder (Generalized, Social, Phobia, etc)|Mood Disorder (Depression, Bipolar Disorder, etc)|Dissociative Disorder|Autism Spectrum Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Psychotic Disorder (Schizophrenia, Schizoaffective, etc)|Obsessive-Compulsive Disorder|ADD (w/o Hyperactivity)'],'Psychotic Disorder (Schizophrenia, Schizoaffective, etc)|Obsessive-Compulsive Disorder|Attention Deficit Hyperactivity Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Schizotypal Personality Disorder'],'Psychotic Disorder (Schizophrenia, Schizoaffective, etc)')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Anxiety Disorder (Generalized, Social, Phobia, etc)|Post-traumatic Stress Disorder|Stress Response Syndromes|Autism spectrum disorder'],'Anxiety Disorder (Generalized, Social, Phobia, etc)|Post-traumatic Stress Disorder|Stress Response Syndromes|Autism Spectrum Disorder')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(["Sexual addiction"],"Other")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Traumatic Brain Injury'],"Other")
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Burn out'],'Other')
df["If yes, what condition(s) have you been diagnosed with?"] = df["If yes, what condition(s) have you been diagnosed with?"].replace(['Seasonal Affective Disorder'], 'Other')
###Output
_____no_output_____
###Markdown
Get value counts of conditions values
###Code
df["If yes, what condition(s) have you been diagnosed with?"].value_counts()
###Output
_____no_output_____
###Markdown
Drop the rows that are NaN. Fix errors in the age column
###Code
df["What is your age?"].unique()
df.loc[(df['What is your age?'] >90 ), "What is your age?"] = 34
df.loc[(df["What is your age?"] < 17),"What is your age?"] = 34
###Output
_____no_output_____
###Markdown
Get the unique values for gender
###Code
df["What is your gender?"].unique()
###Output
_____no_output_____
###Markdown
Recode the gender column into three distinct categories: male (0), female (1), and genderqueer/other (2)
###Code
df["What is your gender?"]=df["What is your gender?"].replace(["male","Male ","Male","male ","M","m","man","Cis male","Male.","Male (cis)", "Man", "Sex is male", "cis male","Malr","Dude", "I'm a man why didn't you make this a drop down question. You should have asked sex? And I would of answered yes please. Seriously how much text can this take?","mail","M|","male","Cis Male","Male (trans, FtM)","cisdude","cis man","MALE"], 0)
df["What is your gender?"]=df["What is your gender?"].replace(['female',"Female","I identify as female.","female ","Female assigned at birth ","F","Woman","fm","f","Cis female","Transitioned, M2F","Female or Multi-Gender Femme","Female ","woman","female/woman","Cisgender Female","mtf","fem", "Female (props for making this a freeform field, though)"," Female", "Cis-woman", "AFAB","Transgender woman", "Cis female "], 1)
df["What is your gender?"]=df["What is your gender?"].replace(["Bigender","non-binary","Genderfluid (born female)","Other/Transfeminine","Androgynous","male 9:1 female, roughly", "Other","nb masculine","none of your business","genderqueer","Human","Genderfluid","'Enby","genderqueer woman","Queer","Agender","Fluid","Male/genderqueer","Nonbinary","human","Unicorn","Genderqueer","Genderflux demi-girl","female bodied; no feelings about gender"], 2)
###Output
_____no_output_____
###Markdown
Replace the one NaN with the mode Male
###Code
df["What is your gender?"]=df["What is your gender?"].replace(np.NaN,0)
###Output
_____no_output_____
###Markdown
Fix an error in the gender column
###Code
df["What is your gender?"]=df["What is your gender?"].replace('Male',0)
df['What is your gender?'].unique()
###Output
_____no_output_____
###Markdown
Drop all columns where more than half of the observations have missing values
###Code
cols = (df.isna().sum() >= df.shape[0]/2).tolist()
drop = df.columns[cols]
df.drop(labels=drop, axis=1, inplace= True)
###Output
_____no_output_____
###Markdown
Impute NaN values with the most frequent value (mode) of each column
###Code
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imp.fit(df)
imp_data = pd.DataFrame(data = imp.transform(df), columns = df.columns)
###Output
_____no_output_____
###Markdown
Describe the dataset
###Code
imp_data.describe()
imp_data
###Output
_____no_output_____
###Markdown
Visualizations Get Gender statistics
###Code
df["What is your gender?"].value_counts()
###Output
_____no_output_____
###Markdown
Charts and Figures
###Code
fig,ax1 = plt.subplots(figsize=(6,3), subplot_kw=dict(aspect="equal") )
plt.figure(figsize = (16,5))
fig.set_figheight(5)
fig.set_figwidth(20)
plt.subplots_adjust(wspace = 0)
fig.suptitle("Proportions of genders in tech", fontsize = 25, y=1.08)
all_techs = imp_data['What is your gender?'].count()
males = imp_data[imp_data["What is your gender?"]==0]["What is your gender?"].count()
females = imp_data[imp_data["What is your gender?"]==1]["What is your gender?"].count()
other = imp_data[imp_data["What is your gender?"]==2]["What is your gender?"].count()
labels = "Male","Female","Genderqueer/Other"
sizes = [males/all_techs, females/all_techs, other/all_techs]
colors = ['red','green','pink']
explode = (.03,0,0)
ax1.pie(sizes,explode=explode,labels=labels, colors=colors,autopct='%1.1f%%',shadow = False, startangle=140)
ax1.set_title("Overall gender prop%", pad = 20, fontsize=20)
sns.countplot(x=imp_data["How many employees does your company or organization have?"],order= ['1-5','6-25','26-100','100-500','500-1000','>1000'])
imp_data['What is your age?'].describe()
imp_data['What is your age?'].unique()
country_count = df['What country do you live in?'].value_counts().sort_values(ascending=False).to_frame()[:10]
country_count = country_count.rename(columns={'What country do you live in?': 'count'})
plt.figure(figsize=(15,5))
ax = sns.barplot(country_count.index,y='count', data=country_count, palette="ch:.25")
for p in ax.patches:
ax.annotate(format(p.get_height(),'.1f'),(p.get_x()+p.get_width()/2.,p.get_height()), ha = 'center', va = 'center',xytext=(0,9), textcoords = 'offset points')
ax = ax.set_title("Top 10 countries")
state_count = df['What US state or territory do you live in?'].value_counts().sort_values(ascending=False).to_frame()[:10]
state_count = state_count.rename(columns={'What US state or territory do you live in?':'count'})
plt.figure(figsize=(15,5))
ax = sns.barplot(state_count.index, y='count', data=state_count, palette="ch:.25")
for p in ax.patches:
ax.annotate(format(p.get_height(),'.1f'),(p.get_x()+p.get_width()/2.,p.get_height()), ha='center',va='center',xytext=(0,9), textcoords="offset points")
ax = ax.set_title('Top 10 States')
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
Encoding the data
###Code
cols = [x for x in imp_data.columns if x not in ["What is your gender?","What is your age?","Why or why not?","Why or why not?.1","What country do you live in?","What US state or territory do you live in?","What country do you work in?","What US state or territory do you work in?","Which of the following best describes your work position?"]]
data_to_encode = imp_data[cols]
data_not_encode = imp_data[["What is your gender?","What is your age?","Why or why not?","Why or why not?.1","What country do you live in?","What US state or territory do you live in?","What country do you work in?","What US state or territory do you work in?","Which of the following best describes your work position?"]]
data_not_encode["What is your gender?"] = data_not_encode["What is your gender?"].astype('int64')
def encode(data):
cat_columns = list(data.select_dtypes(include=['category','object']))
mlb = MultiLabelBinarizer()
for col in cat_columns:
data[col] = data[col].astype('str')
data[col]=mlb.fit_transform(data[col])
return data
encode(data_to_encode)
matrix = encode(data_to_encode)
encoded_data = pd.DataFrame(matrix)
encoded_data.columns = data_to_encode.columns
prep_data = pd.concat(objs = [encoded_data, data_not_encode], axis = 1)
prep_data
model_data = prep_data.copy()
col_numeric = [cols for cols in model_data.columns if model_data[cols].dtype in ['int64','float64']]
model_data = model_data[col_numeric]
model_data
###Output
_____no_output_____
###Markdown
Demographic Data
###Code
prep_data[["What is your gender?","Do you currently have a mental health disorder?"]].value_counts(normalize=True)*100
###Output
_____no_output_____
###Markdown
Making Predictions
###Code
y = model_data['Do you currently have a mental health disorder?']
cols = [col for col in model_data.columns if col not in ['Do you currently have a mental health disorder?']]
X = model_data[cols]
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = .2, random_state = 42)
X_test["What is your gender?"].unique()
def model_assess(model,name='Default'):
model.fit(X_train,y_train)
preds = model.predict(X_test)
print('---',name,'---','\n',confusion_matrix(y_test,preds),'\n','Accuracy:', round(accuracy_score(y_test, preds),5),'\n')
nb = GaussianNB()
model_assess(nb, name='Naive Bayes')
cross_val_score(nb,X,y,n_jobs = -1)
sgd = SGDClassifier(max_iter=5000, random_state=42)
model_assess(sgd, name='SGD')
cross_val_score(sgd,X,y,n_jobs = -1)
tree = DecisionTreeClassifier()
model_assess(tree,"Decision Trees")
cross_val_score(tree,X,y,n_jobs = -1)
rforest = RandomForestClassifier(max_depth= 10, random_state=42)
model_assess(rforest,"Random Forest")
cross_val_score(rforest,X,y,n_jobs=-1)
###Output
--- Random Forest ---
[[112 6]
[ 23 81]]
Accuracy: 0.86937
###Markdown
This notebook will be mainly used for the capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
_____no_output_____
###Markdown
This notebook is for the capstone project "Battle of the Neighbourhoods"
###Code
import numpy as np
import pandas as pd
print("Hello Capstone Project Coursera !")
###Output
Hello Capstone Project Coursera !
###Markdown
Introduction In this project I analyze 15 top-performing tech stocks in an attempt to determine an optimal portfolio to recommend to potential investors. The rolling window used tracks 100 days of stock performance and compares a number of statistical factors including variance, standard deviation, covariance, correlation and more. This notebook can be rerun to collect the most recent 100 days of stock data. The information gathered uses Pandas and Pandas Data-reader, Python libraries used to request data from Yahoo Finance's API for me to wrangle and develop models with. All of my code sits in a Jupyter Notebook environment, so data does not need to be stored or maintained in a database. If in the future a database were to be used for deployment purposes, I would most likely move the data to a Heroku Dyno and set up a Postgres environment to act as the managed database. In doing so I could potentially create an interactive tool for investors to use online. 1. Import Packages Data Manipulation Packages Importing all of the required dependencies.
###Code
import datetime as dt
import emoji
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import plotly.graph_objs as go
import plotly.offline as offline_py
from rf import return_portfolios, optimal_portfolio
offline_py.init_notebook_mode(connected=True)
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Load the adjusted closings for the 15 tech stocks.Using Pandas data-reader to pull stock data from Yahoo Finance API, create dates, retrieve data, and view data.
###Code
symbols = ["MSFT", "AMZN", "AAPL", "GOOG", "FB",
"CRM", "CSCO", "NVDA", "AMD", "NFLX",
"DOCU", "SQ", "ORCL", "TSLA", "TWTR"]
delta = dt.timedelta(days=365)
end_date = dt.datetime.now()
start_date = dt.datetime.now() - delta
stock_data = web.get_data_yahoo(symbols, start_date, end_date)
###Output
_____no_output_____
###Markdown
Focusing on Adjusted Close for analysis
###Code
close = stock_data['Adj Close']
close.head()
###Output
_____no_output_____
###Markdown
Here's our Tech Stock Universe:
###Code
close.plot()
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.title('Daily Prices');
###Output
_____no_output_____
###Markdown
Stock ExampleIn order to demonstrate the use of log returns and resampling, I've used Google's stock (GOOG). However, these transformations are applied to all assets in my stock universe.
###Code
color_scheme = {
'index': '#B6B2CF',
'etf': '#2D3ECF',
'tracking_error': '#6F91DE',
'df_header': 'silver',
'df_value': 'white',
'df_line': 'silver',
'heatmap_colorscale': [(0, '#6F91DE'), (0.5, 'grey'), (1, 'red')],
'background_label': '#9dbdd5',
'low_value': '#B6B2CF',
'high_value': '#2D3ECF',
'y_axis_2_text_color': 'grey',
'shadow': 'rgba(0, 0, 0, 0.75)',
'major_line': '#2D3ECF',
'minor_line': '#B6B2CF',
'main_line': 'black'}
def _generate_stock_trace(prices):
return go.Scatter(
name='Index',
x=prices.index,
y=prices,
line={'color': color_scheme['major_line']})
def plot_stock(prices, title):
    # build the layout directly (the generate_config() helper is not defined in this notebook)
    layout = go.Layout(title=title)
stock_trace = _generate_stock_trace(prices)
offline_py.iplot({'data': [stock_trace], 'layout': layout})
goog_ticker = 'GOOG'
plot_stock(close[goog_ticker], f'{goog_ticker} Stock')
###Output
_____no_output_____
###Markdown
Resample Adjusted PricesResampling the daily adjusted closing prices into monthly buckets, and selecting the last observation of each month.
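To make the resampling rule concrete, here is a tiny sketch on made-up data showing that freq='M' keeps the last observation of each calendar month:
###Code
# Sketch: month-end resampling on a toy series spanning late January into February.
import pandas as pd
s = pd.Series(range(6), index=pd.date_range('2021-01-28', periods=6, freq='D'))
print(s.resample('M').last())
###Output
_____no_output_____
###Markdown
The helper below applies the same rule to the adjusted close prices.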
###Code
def resample_prices(close_prices, freq='M'):
"""
Resample close prices for each ticker and return month end prices.
"""
return close_prices.resample(freq).last()
def _generate_traces(name_df_color_data):
traces = []
for name, df, color in name_df_color_data:
traces.append(go.Scatter(
name=name,
x=df.index,
y=df,
            mode='lines',  # 'lines' (not 'line') is the valid Plotly scatter mode
line={'color': color}))
return traces
def plot_resampled_prices(df_resampled, df, title):
    # build the layout directly (generate_config() is not defined in this notebook)
    layout = go.Layout(title=title)
traces = _generate_traces([
('Monthly Close', df_resampled, color_scheme['major_line']),
('Close', df, color_scheme['minor_line'])])
offline_py.iplot({'data': traces, 'layout': layout})
monthly_close = resample_prices(close)
plot_resampled_prices(
monthly_close.loc[:, goog_ticker],
close.loc[:, goog_ticker],
f'{goog_ticker} Stock - Close Vs Monthly Close')
def compute_log_returns(prices):
"""
Compute log returns for each ticker.
"""
return np.log(prices) - np.log(prices.shift(1))
def plot_returns(returns, title):
layout = go.Layout(title=title)
traces = _generate_traces([
('Returns', returns, color_scheme['major_line'])])
offline_py.iplot({'data': traces, 'layout': layout})
monthly_close_returns = compute_log_returns(monthly_close)
plot_returns(
monthly_close_returns.loc[:, goog_ticker],
f'Log Returns of {goog_ticker} Stock (Monthly)')
###Output
_____no_output_____
###Markdown
Log returns are used because they are additive across periods and are commonly modelled as approximately normally distributed, which makes them convenient for building return distributions.
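A quick illustration of that additivity (a sketch on made-up prices): the total log return of a move 100 -> 110 -> 121 equals the sum of the two single-period log returns.
###Code
# Sketch: log returns add across periods.
import numpy as np
prices = np.array([100.0, 110.0, 121.0])
single = np.diff(np.log(prices))  # per-period log returns
print(single.sum(), np.log(prices[-1] / prices[0]))  # the two values match
###Output
_____no_output_____
###Markdown
With that in mind, here are the monthly log returns of our entire data set.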
###Code
monthly_close_returns
def shift_returns(returns, shift_n):
"""
Generate shifted returns
"""
return returns.shift(shift_n)
prev_returns = shift_returns(monthly_close_returns, 1)
###Output
_____no_output_____
###Markdown
Here I have shifted the monthly log returns forward by one month, so that each row holds the previous month's return; these shifted returns are what we use to rank the stocks:
###Code
prev_returns
###Output
_____no_output_____
###Markdown
Determine which stocks to take a long or short position in: below is a function I have written to flag the stocks that have risen the most over the past year, i.e. the ones most worth taking a long position in over the given timeframe.
###Code
def get_top_n(prev_returns, top_n):
"""
Select the top performing stocks
"""
res = pd.DataFrame(columns=prev_returns.columns)
for index, row in prev_returns.iterrows():
curr_month = row
curr_top = pd.Series(curr_month).nlargest(top_n)
top = list(curr_top.index.values)
for col in res.columns:
if(col in top):
res.loc[index, col] = True
else:
res.loc[index, col] = False
for index, row in res.iterrows():
res.loc[index] = res.loc[index].astype('int64')
#print(res.head())
return res
def print_top(df, name, top_n=5):
print('{} Most {}:'.format(top_n, name))
print(', '.join(df.sum().sort_values(ascending=False).index[:top_n].values.tolist()))
###Output
_____no_output_____
###Markdown
Calculate and Display the Most Longed Stocks and Most Shorted Stocks. By simply multiplying the returns by -1, I can use the same function to generate the most shorted stocks over the provided timeframe.
###Code
top_bottom_n = 10
df_long = get_top_n(prev_returns, top_bottom_n)
df_short = get_top_n(-1*prev_returns, top_bottom_n)
print_top(df_long, 'Longed Stocks')
print_top(df_short, 'Shorted Stocks')
###Output
5 Most Longed Stocks:
MSFT, AAPL, NVDA, FB, DOCU
5 Most Shorted Stocks:
AMZN, ORCL, NFLX, GOOG, CRM
###Markdown
What if we create a portfolio of the 5 most longed stocks?
1. Visualize the stock prices using matplotlib
2. Calculate and visualize the daily simple rate of return
3. Calculate and visualize the mean rates of return
4. Calculate and visualize the variances of the returns
5. Calculate and visualize the standard deviations of the returns
6. Write a short thesis based on the correlations between the tech stocks
###Code
long_stocks = ["MSFT", "AAPL", "NVDA", "FB", "DOCU"]
stock_data_daily_returns = stock_data['Adj Close'][long_stocks].pct_change()
stock_data_daily_returns.plot()
plt.xlabel("Date")
plt.ylabel("ROR")
plt.title("Daily Simple Rate of Return Over time")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.);
###Output
_____no_output_____
###Markdown
Daily simple rates of return:
###Code
["MSFT", "AAPL", "NVDA", "FB", "DOCU"]
fig = plt.figure(figsize=(15,15))
ax1 = fig.add_subplot(321)
ax2 = fig.add_subplot(322)
ax3 = fig.add_subplot(323)
ax4 = fig.add_subplot(324)
ax5 = fig.add_subplot(325)
ax1.plot(stock_data['Adj Close']['MSFT'].pct_change())
ax1.set_title("Microsoft")
ax2.plot(stock_data['Adj Close']['AAPL'].pct_change())
ax2.set_title("Apple")
ax3.plot(stock_data['Adj Close']['NVDA'].pct_change())
ax3.set_title("Nvidia")
ax4.plot(stock_data['Adj Close']['FB'].pct_change())
ax4.set_title("Facebook")
ax5.plot(stock_data['Adj Close']['DOCU'].pct_change())
ax5.set_title("Docusign")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Daily mean ROR:
###Code
# calculate daily mean
daily_mean = stock_data_daily_returns.mean()
daily_mean
# daily mean index for the x axis
daily_mean.keys()
# grab each daily mean value for the y axis
height = []
for key in daily_mean.keys():
height.append(daily_mean[key])
# arrange keys on x axis based on length
x_pos = np.arange(len(daily_mean.keys()))
# plot bars
plt.bar(x_pos, height)
# create names on the x-axis
plt.xticks(x_pos, daily_mean.keys())
# label chart
plt.xlabel("Tech Stocks")
plt.ylabel("Mean")
plt.title("Daily Mean Rate of Return")
plt.show()
###Output
_____no_output_____
###Markdown
Daily Variance:
###Code
# calculate variance
daily_var = stock_data_daily_returns.var()
daily_var
# variance index for the x axis
daily_var.keys()
# grab each variance value for the y axis
height = []
for key in daily_var.keys():
height.append(daily_var[key])
# plot bars
plt.bar(x_pos, height)
# create names on the x-axis
plt.xticks(x_pos, daily_var.keys())
# label chart
plt.xlabel("Tech Stocks")
plt.ylabel("Variance")
plt.title("Daily Variance")
# show graphic
plt.show()
###Output
_____no_output_____
###Markdown
Standard Deviation:
###Code
# calculate standard deviation
daily_std = stock_data_daily_returns.std()
daily_std
# grab each standard deviation value for the y axis
height = []
for key in daily_std.keys():
height.append(daily_std[key])
# plot bars
plt.bar(x_pos, height)
# create names on the x-axis
plt.xticks(x_pos, daily_std.keys())
# label chart
plt.xlabel("Tech Stocks")
plt.ylabel("Std. Dev.")
plt.title("Daily Standard Deviation")
# show graphic
plt.show()
###Output
_____no_output_____
###Markdown
Correlation in our portfolio:
###Code
corr = stock_data_daily_returns.corr()
corr
plt.imshow(corr)  # plot the correlation matrix (cov is only defined in the next cell)
plt.colorbar()
plt.xticks(rotation='horizontal')
plt.xticks(range(len(corr)), corr.columns)
plt.yticks(range(len(corr)), corr.columns);
###Output
_____no_output_____
###Markdown
Covariance:
###Code
cov = stock_data_daily_returns.cov()
cov
plt.imshow(cov)
plt.colorbar()
plt.xticks(rotation='horizontal')
plt.xticks(range(len(cov)), cov.columns)
plt.yticks(range(len(cov)), cov.columns);
###Output
_____no_output_____
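###Markdown
A small consistency check (sketch): correlation is just covariance scaled by the two standard deviations, so the two matrices above agree once rescaled.
###Code
# Sketch: corr(i, j) == cov(i, j) / (std_i * std_j), shown for one pair of stocks.
i, j = 'MSFT', 'AAPL'
print(cov.loc[i, j] / (daily_std[i] * daily_std[j]), corr.loc[i, j])
###Output
_____no_output_____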
###Markdown
Measuring the efficiency of this portfolio:
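Before handing things to the optimiser, here is a minimal sketch (equal weights, purely illustrative) of the two quantities it trades off: a portfolio's expected return is the weighted average of the individual expected returns, and its volatility is the square root of w'Σw.
###Code
# Sketch: expected return and volatility of an equally weighted portfolio
# of the five longed stocks, using the daily return data from above.
import numpy as np
w = np.repeat(1 / 5, 5)
mu = stock_data_daily_returns.mean().values
sigma = stock_data_daily_returns.cov().values
print("daily expected return:", w @ mu)
print("daily volatility:", np.sqrt(w @ sigma @ w))
###Output
_____no_output_____
###Markdown
Now we compute the monthly covariance and expected returns that the portfolio functions use.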
###Code
# use the covariance
cov_monthly = monthly_close_returns[long_stocks][1:].cov()
# find the expected return of each stock (mean of its monthly log returns)
expected_returns = monthly_close_returns[long_stocks][1:].mean()
# create a set of random portfolios
random_portfolios = return_portfolios(expected_returns, cov_monthly)
###Output
_____no_output_____
###Markdown
Using Python's Cvxopt library to generate 5000 random portfolios:
###Code
# plot the set of random portfolios
random_portfolios.plot.scatter(x='Volatility', y='Returns', fontsize=12)
# calculate the set of portfolios on the EF
weights, returns, risks = optimal_portfolio(cov_monthly[1:])
###Output
pcost dcost gap pres dres
0: -4.4350e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 1e-16 3e-02
2: -4.8139e-03 -7.0983e-03 2e-03 9e-17 6e-03
3: -6.8256e-03 -7.8126e-03 1e-03 4e-15 9e-04
4: -6.9895e-03 -7.0053e-03 2e-05 2e-16 1e-05
5: -6.9971e-03 -6.9972e-03 2e-07 1e-19 1e-07
6: -6.9971e-03 -6.9971e-03 2e-09 2e-21 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4350e-03 -1.0070e+00 1e+00 0e+00 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 9e-17 3e-02
2: -4.8139e-03 -7.0982e-03 2e-03 2e-16 6e-03
3: -6.8254e-03 -7.8123e-03 1e-03 2e-15 9e-04
4: -6.9892e-03 -7.0050e-03 2e-05 2e-16 1e-05
5: -6.9968e-03 -6.9970e-03 2e-07 2e-16 1e-07
6: -6.9969e-03 -6.9969e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4350e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 9e-17 3e-02
2: -4.8139e-03 -7.0980e-03 2e-03 6e-17 6e-03
3: -6.8252e-03 -7.8119e-03 1e-03 1e-15 9e-04
4: -6.9890e-03 -7.0048e-03 2e-05 1e-16 1e-05
5: -6.9965e-03 -6.9967e-03 2e-07 1e-16 1e-07
6: -6.9966e-03 -6.9966e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4350e-03 -1.0070e+00 1e+00 0e+00 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 2e-16 3e-02
2: -4.8138e-03 -7.0979e-03 2e-03 8e-17 6e-03
3: -6.8250e-03 -7.8115e-03 1e-03 2e-15 9e-04
4: -6.9887e-03 -7.0045e-03 2e-05 3e-16 1e-05
5: -6.9962e-03 -6.9964e-03 2e-07 2e-19 1e-07
6: -6.9963e-03 -6.9963e-03 2e-09 2e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4349e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 5e-17 3e-02
2: -4.8138e-03 -7.0977e-03 2e-03 7e-17 6e-03
3: -6.8247e-03 -7.8110e-03 1e-03 1e-15 9e-04
4: -6.9883e-03 -7.0041e-03 2e-05 1e-16 1e-05
5: -6.9959e-03 -6.9960e-03 2e-07 6e-16 1e-07
6: -6.9960e-03 -6.9960e-03 2e-09 4e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4349e-03 -1.0070e+00 1e+00 0e+00 3e+00
1: -4.4404e-03 -1.6998e-02 1e-02 4e-17 3e-02
2: -4.8138e-03 -7.0976e-03 2e-03 1e-16 6e-03
3: -6.8245e-03 -7.8105e-03 1e-03 3e-15 9e-04
4: -6.9879e-03 -7.0037e-03 2e-05 1e-16 1e-05
5: -6.9955e-03 -6.9956e-03 2e-07 3e-16 1e-07
6: -6.9956e-03 -6.9956e-03 2e-09 2e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4349e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4403e-03 -1.6998e-02 1e-02 3e-17 3e-02
2: -4.8137e-03 -7.0974e-03 2e-03 7e-17 6e-03
3: -6.8242e-03 -7.8100e-03 1e-03 9e-16 9e-04
4: -6.9875e-03 -7.0033e-03 2e-05 2e-16 1e-05
5: -6.9951e-03 -6.9952e-03 2e-07 8e-20 1e-07
6: -6.9951e-03 -6.9951e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4349e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4403e-03 -1.6998e-02 1e-02 2e-16 3e-02
2: -4.8137e-03 -7.0971e-03 2e-03 1e-16 6e-03
3: -6.8238e-03 -7.8093e-03 1e-03 2e-15 9e-04
4: -6.9870e-03 -7.0028e-03 2e-05 2e-16 1e-05
5: -6.9946e-03 -6.9947e-03 2e-07 2e-16 1e-07
6: -6.9946e-03 -6.9946e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4349e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4403e-03 -1.6997e-02 1e-02 4e-17 3e-02
2: -4.8136e-03 -7.0969e-03 2e-03 1e-16 6e-03
3: -6.8234e-03 -7.8086e-03 1e-03 4e-16 9e-04
4: -6.9865e-03 -7.0022e-03 2e-05 2e-16 1e-05
5: -6.9940e-03 -6.9942e-03 2e-07 3e-16 1e-07
6: -6.9941e-03 -6.9941e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4348e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4403e-03 -1.6997e-02 1e-02 1e-16 3e-02
2: -4.8135e-03 -7.0966e-03 2e-03 1e-16 6e-03
3: -6.8230e-03 -7.8078e-03 1e-03 4e-16 9e-04
4: -6.9858e-03 -7.0016e-03 2e-05 1e-16 1e-05
5: -6.9934e-03 -6.9935e-03 2e-07 1e-16 1e-07
6: -6.9935e-03 -6.9935e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4348e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4402e-03 -1.6997e-02 1e-02 7e-17 3e-02
2: -4.8135e-03 -7.0963e-03 2e-03 8e-17 6e-03
3: -6.8225e-03 -7.8069e-03 1e-03 2e-15 9e-04
4: -6.9851e-03 -7.0009e-03 2e-05 2e-16 1e-05
5: -6.9927e-03 -6.9928e-03 2e-07 2e-16 1e-07
6: -6.9928e-03 -6.9928e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4348e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4402e-03 -1.6997e-02 1e-02 1e-16 3e-02
2: -4.8134e-03 -7.0960e-03 2e-03 8e-17 6e-03
3: -6.8219e-03 -7.8059e-03 1e-03 2e-16 9e-04
4: -6.9844e-03 -7.0001e-03 2e-05 2e-16 1e-05
5: -6.9919e-03 -6.9921e-03 2e-07 2e-16 1e-07
6: -6.9920e-03 -6.9920e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4347e-03 -1.0070e+00 1e+00 0e+00 3e+00
1: -4.4402e-03 -1.6996e-02 1e-02 1e-16 3e-02
2: -4.8133e-03 -7.0956e-03 2e-03 8e-17 6e-03
3: -6.8213e-03 -7.8048e-03 1e-03 7e-16 9e-04
4: -6.9835e-03 -6.9992e-03 2e-05 2e-18 1e-05
5: -6.9910e-03 -6.9912e-03 2e-07 1e-16 1e-07
6: -6.9911e-03 -6.9911e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4347e-03 -1.0070e+00 1e+00 0e+00 3e+00
1: -4.4401e-03 -1.6996e-02 1e-02 8e-17 3e-02
2: -4.8132e-03 -7.0951e-03 2e-03 2e-16 6e-03
3: -6.8206e-03 -7.8035e-03 1e-03 6e-16 9e-04
4: -6.9825e-03 -6.9983e-03 2e-05 2e-16 1e-05
5: -6.9900e-03 -6.9902e-03 2e-07 1e-16 1e-07
6: -6.9901e-03 -6.9901e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4346e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4401e-03 -1.6996e-02 1e-02 3e-17 3e-02
2: -4.8131e-03 -7.0946e-03 2e-03 2e-16 6e-03
3: -6.8198e-03 -7.8021e-03 1e-03 6e-16 9e-04
4: -6.9814e-03 -6.9972e-03 2e-05 2e-16 1e-05
5: -6.9889e-03 -6.9891e-03 2e-07 2e-16 1e-07
6: -6.9890e-03 -6.9890e-03 2e-09 2e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4346e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4400e-03 -1.6995e-02 1e-02 6e-17 3e-02
2: -4.8130e-03 -7.0941e-03 2e-03 2e-16 6e-03
3: -6.8189e-03 -7.8005e-03 1e-03 2e-15 9e-04
4: -6.9802e-03 -6.9959e-03 2e-05 3e-16 1e-05
5: -6.9877e-03 -6.9879e-03 2e-07 8e-20 1e-07
6: -6.9878e-03 -6.9878e-03 2e-09 2e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4345e-03 -1.0070e+00 1e+00 2e-16 3e+00
1: -4.4399e-03 -1.6995e-02 1e-02 9e-17 3e-02
2: -4.8128e-03 -7.0935e-03 2e-03 3e-17 6e-03
3: -6.8179e-03 -7.7987e-03 1e-03 1e-15 9e-04
4: -6.9788e-03 -6.9945e-03 2e-05 1e-16 1e-05
5: -6.9863e-03 -6.9865e-03 2e-07 2e-19 1e-07
6: -6.9864e-03 -6.9864e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4345e-03 -1.0070e+00 1e+00 1e-16 3e+00
1: -4.4399e-03 -1.6994e-02 1e-02 5e-17 3e-02
2: -4.8127e-03 -7.0928e-03 2e-03 5e-17 6e-03
3: -6.8168e-03 -7.7967e-03 1e-03 2e-15 9e-04
4: -6.9773e-03 -6.9930e-03 2e-05 2e-16 1e-05
5: -6.9848e-03 -6.9849e-03 2e-07 1e-16 1e-07
6: -6.9848e-03 -6.9848e-03 2e-09 1e-21 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4344e-03 -1.0070e+00 1e+00 1e-16 3e+00
1: -4.4398e-03 -1.6994e-02 1e-02 5e-17 3e-02
2: -4.8125e-03 -7.0920e-03 2e-03 3e-17 6e-03
3: -6.8155e-03 -7.7944e-03 1e-03 8e-16 9e-04
4: -6.9755e-03 -6.9912e-03 2e-05 1e-16 1e-05
5: -6.9830e-03 -6.9832e-03 2e-07 2e-19 1e-07
6: -6.9831e-03 -6.9831e-03 2e-09 1e-16 1e-09
Optimal solution found.
pcost dcost gap pres dres
0: -4.4343e-03 -1.0070e+00 1e+00 4e-16 3e+00
1: -4.4397e-03 -1.6993e-02 1e-02 1e-16 3e-02
 2: -4.8123e-03 -7.0911e-03  2e-03  5e-17  6e-03
 3: -6.8141e-03 -7.7919e-03  1e-03  3e-15  9e-04
 4: -6.9736e-03 -6.9892e-03  2e-05  3e-16  1e-05
 5: -6.9811e-03 -6.9812e-03  2e-07  1e-16  1e-07
 6: -6.9811e-03 -6.9811e-03  2e-09  2e-16  1e-09
Optimal solution found.
[... the same interior-point solver log repeats for each remaining portfolio solve; every run converges and ends with "Optimal solution found." ...]
     pcost       dcost       gap    pres   dres
 0: -4.4351e-03 -1.0070e+00  1e+00  1e-16  3e+00
 1: -4.4405e-03 -1.6999e-02  1e-02  1e-16  3e-02
 2: -4.8141e-03 -7.0991e-03  2e-03  1e-16  6e-03
 3: -6.8270e-03 -7.8151e-03  1e-03  2e-15  9e-04
 4: -6.9914e-03 -7.0072e-03  2e-05  2e-16  1e-05
 5: -6.9990e-03 -6.9992e-03  2e-07  2e-16  1e-07
 6: -6.9991e-03 -6.9991e-03  2e-09  3e-16  1e-09
Optimal solution found.
###Markdown
Above, I have generated a pool of 5000 random portfolios comprised of different combinations of the five longest stocks. The following line (if uncommented) will generate a CSV file called "all_five.csv" that contains the risk and return values for all data points associated with our "longest portfolio".
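For reference, each plotted point follows the usual mean-variance definitions (the portfolio-generation code appears in earlier cells, so this is stated here only as a reminder): for a weight vector $w$, the expected return is $\mu_p = w^\top \mu$ and the risk is the volatility $\sigma_p = \sqrt{w^\top \Sigma w}$, where $\mu$ holds the expected monthly returns and $\Sigma$ is the monthly covariance matrix.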
###Code
# pd.DataFrame({'Risks': risks, 'Returns': returns}).to_csv('all_five.csv', index=False)
all_five_EF = pd.read_csv('all_five.csv')
# plot the set of portfolios on the EF
plt.plot(risks, returns, 'y-o')
plt.ylabel('Expected Returns',fontsize=14)
plt.xlabel('Volatility (Std. Deviation)',fontsize=14)
plt.title('Efficient Frontier')
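# single-asset volatilities: square roots of the diagonal of the monthly covariance matrix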
single_asset_std=np.sqrt(np.diagonal(cov_monthly))
plt.scatter(single_asset_std,expected_returns,marker='X',color='red',s=200)
plt.plot(all_five_EF['Risks'], all_five_EF['Returns'], 'g-o')
plt.show();
###Output
_____no_output_____
###Markdown
Finding the table from which the data will be retrieved.
###Code
right_table=soup.find('table', class_='wikitable sortable')
right_table
###Output
_____no_output_____
###Markdown
Storing the table column values in separate lists
###Code
#Generate lists
A=[]
B=[]
C=[]
for row in right_table.findAll("tr"):
states = row.findAll('th') #To store second column data
cells = row.findAll('td')
if len(cells)==3: #Only extract table body not heading
A.append(cells[0].find(text=True))
#B.append(states[0].find(text=True))
B.append(cells[1].find(text=True))
C.append(cells[2].find(text=True))
###Output
_____no_output_____
###Markdown
Make a Pandas Dataframe from the above lists
###Code
#import pandas to convert list to data frame
import pandas as pd
df=pd.DataFrame(A,columns=['Postcode'])
df['Borough']=B
df['Neighbourhood']=C
df
###Output
_____no_output_____
###Markdown
Removing those rows whose Borough value is 'Not assigned'
###Code
df = df.drop(df[(df.Borough == 'Not assigned')].index)
# reset the index, because we dropped rows
df.reset_index(drop = True, inplace = True)
df
###Output
_____no_output_____
###Markdown
Combining rows that share a postal code area so that all of their neighborhoods appear in one row, separated by commas.
###Code
aggregations = {
#'Neighbourhood': {lambda x: x.str.cat(x, sep =", ")}
'Neighbourhood': {lambda x: ",".join(tuple(x.str.rstrip()))}
}
df_final = df.groupby(['Postcode', 'Borough'], as_index=False).agg(aggregations)
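# Note: passing a set of lambdas inside the aggregation dict (as above) is deprecated in
# newer pandas releases; an equivalent named-aggregation form (assumed, not executed here) is:
# df_final = (df.groupby(['Postcode', 'Borough'], as_index=False)
#               .agg(Neighbourhood=('Neighbourhood', lambda x: ",".join(x.str.rstrip()))))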
df_final
###Output
_____no_output_____
###Markdown
Setting proper column names
###Code
df_final.columns = ['Postcode', 'Borough', 'Neighbourhood']
df_final
###Output
_____no_output_____
###Markdown
Replacing the Neighbourhood value with the Borough value when the Neighbourhood is 'Not assigned'.
###Code
df_final.loc[df_final['Neighbourhood'] == 'Not assigned', 'Neighbourhood'] = df_final['Borough']
df_final
###Output
_____no_output_____
###Markdown
Showing Dimension of the Dataframe
###Code
df_final.shape
###Output
_____no_output_____
###Markdown
This notebook will be mainly used for the Coursera Applied Data Science Capstone project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
###Markdown
Introduction
###Code
# Import libraries and check the versions
import pandas as pd
import sys
import numpy as np
import sklearn
import matplotlib as mpl
import seaborn as sns
import missingno as msno
import xgboost as xgb
print('Python version: {}'.format(sys.version))
print('Numpy version {}'.format(np.__version__))
print('Pandas version {}'.format(pd.__version__))
print('Matplotlib version {}'.format(mpl.__version__))
print('Seaborn version {}'.format(sns.__version__))
print('Sklearn version: {}'.format(sklearn.__version__))
print('Missingno version: {}'.format(msno.__version__))
print("Xgboost version: {}".format(xgb.__version__))
# Pretty display for notebooks
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# for more clear plots
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
###Output
Python version: 3.6.5 |Anaconda custom (64-bit)| (default, Apr 26 2018, 08:42:37)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version 1.13.3
Pandas version 0.23.0
Matplotlib version 2.2.2
Seaborn version 0.8.1
Sklearn version: 0.19.1
Missingno version: 0.3.5
Xgboost version: 0.7
###Markdown
1. Data Collection This dataset can be found on Kaggle's website. The first column of the dataset is the index column, and we specify that with index_col = 0. Let's see the first five records of the dataset.
###Code
# retrieve the data
df = pd.read_csv('h1b_kaggle.csv', index_col=[0])
df.head()
###Output
/anaconda/lib/python3.6/site-packages/numpy/lib/arraysetops.py:463: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
mask |= (ar1 == a)
###Markdown
2. Data Wrangling Before we do exploratory data analysis, we need to select the necessary features and clean the data.
###Code
# select the features that will be used creating the model
data = df[['CASE_STATUS', 'SOC_NAME',
'FULL_TIME_POSITION', 'PREVAILING_WAGE', 'WORKSITE']]
###Output
_____no_output_____
###Markdown
Missingno is a library that allows us to visualize missing data in the dataset.
###Code
# missing values
msno.matrix(data.sample(1000))
msno.dendrogram(data)
#check the missing data
data.isnull().sum()
# remove the missing values
data = data.dropna()
# convert all strings to uppercase
data['SOC_NAME'] = data['SOC_NAME'].str.upper()
# remove everything after the comma from the job title
data['SOC_NAME'] = data['SOC_NAME'].apply(lambda x: x.split(', ')[0])
# There are SOC_NAME entries containing 'CARPI'; inspect them, then drop those records
data[data['SOC_NAME'].str.contains('CARPI')]
data = data[~data['SOC_NAME'].str.contains('CARPI')]
###Output
_____no_output_____
###Markdown
The current format of the worksite column is **City Name, State**; for this study we will focus only on the state.
###Code
# remove city names from worksite column
data['WORKSITE'] = data['WORKSITE'].apply(lambda x: x.split(', ')[1])
pd.options.display.float_format = '{:,.2f}'.format
data['PREVAILING_WAGE'].describe()
###Output
_____no_output_____
###Markdown
Clearly, there are outliers in the dataset.
###Code
data[(data['PREVAILING_WAGE'] > 500000) | (data['PREVAILING_WAGE'] < 25000)].shape
###Output
_____no_output_____
###Markdown
Approximately 12,000 wages are below 25,000 or above 500,000 dollars; those records will be removed.
###Code
cleaned_data = data[(data['PREVAILING_WAGE'] < 500000)]
cleaned_data = cleaned_data[(cleaned_data['PREVAILING_WAGE'] > 25000)]
###Output
_____no_output_____
###Markdown
3. Data Exploring **CASE_STATUS**: This is our target feature. There were 7 possible values in the dataset and we reduced them to 2, because only one status has a positive result and the rest of the statuses have a negative result. **SOC_NAME**: Type of the job. There are 1584 unique jobs in the dataset. **FULL_TIME_POSITION**: This column indicates whether the job is full time or not. **WORKSITE**: Location of the job. The original column had both state and city information; I removed the cities, so the model is going to make predictions based on the state information.
###Code
# type of columns
cleaned_data.dtypes
print ('Number of records: ', cleaned_data.shape[0])
print ('Number of positive cases: ', cleaned_data['CASE_STATUS'].value_counts()[0])
print ('Number of negative cases: ', cleaned_data['CASE_STATUS'].value_counts()[1])
print ('Percentage of positive cases: ', \
cleaned_data['CASE_STATUS'].value_counts()[0] * 100 / cleaned_data.shape[0])
###Output
Number of records: 2972646
Number of positive cases: 2593332
Number of negative cases: 200845
Percentage of positive cases: 87.2398529795
###Markdown
After removing the null values, we still have close to 3 million records. There are 4 features which are SOC_NAME, FULL_TIME_POSITION, PREVAILING_WAGE and WORKSITE. Our target value is CASE_STATUS.
###Code
cleaned_data['CASE_STATUS'].value_counts().plot(kind='bar', alpha=0.5)
plt.title('Distribution of case statuses')
plt.ylabel('Frequency')
plt.savefig('Distribution_of_case_status.png');
###Output
_____no_output_____
###Markdown
We have more positive case results than negative results.
###Code
# number of unique values in each column
for column in cleaned_data:
print(column, cleaned_data[column].nunique())
cleaned_data['WORKSITE'].groupby(cleaned_data['WORKSITE']).count()\
.sort_values(ascending=False).head(10).plot(kind='bar', alpha=0.5)
plt.title('Top 10 cities for H1-B visa')
plt.savefig('Top_cities.png');
cleaned_data['FULL_TIME_POSITION'].value_counts().plot(kind='bar', alpha=0.5)
plt.title('Distribution of Full Time - Part Time')
plt.ylabel('Frequency');
cleaned_data.groupby(['CASE_STATUS','FULL_TIME_POSITION']).count()['SOC_NAME'].\
unstack().plot(kind='barh',figsize=(12,5), alpha=0.5)
plt.title('Case Status versus Type of position')
plt.ylabel('Frequency');
cleaned_data.pivot_table(values=['CASE_STATUS'], index=['FULL_TIME_POSITION'], aggfunc=('count'))
i = 'PREVAILING_WAGE'
plt.figure(figsize=(10,8))
plt.subplot(211)
plt.xlim(cleaned_data[i].min(), cleaned_data[i].max()*1.1)
ax = cleaned_data[i].plot(kind='kde')
plt.subplot(212)
plt.xlim(cleaned_data[i].min(), cleaned_data[i].max()*1.1)
sns.boxplot(x=cleaned_data[i]);
###Output
_____no_output_____
###Markdown
Here we have two plots: the density plot and the box plot. This is a good way to view the data: in the density plot (top) we can see that there are some data points in the tails, although they are difficult to make out there; they are much clearer in the box plot. 4. Data Transformation and Processing 4.1 Data Transformation For highly skewed features, it is usually good to apply a transformation. The **PREVAILING_WAGE** column has a tail on the right, so we will apply a logarithmic transformation to it.
###Code
# log transform the data
cleaned_data['Log_' + i] = np.log(cleaned_data[i])
i = 'Log_PREVAILING_WAGE'
plt.figure(figsize=(10,8))
plt.subplot(211)
plt.xlim(cleaned_data[i].min(), cleaned_data[i].max()*1.1)
ax = cleaned_data[i].plot(kind='kde')
plt.subplot(212)
plt.xlim(cleaned_data[i].min(), cleaned_data[i].max()*1.1)
sns.boxplot(x=cleaned_data[i]);
###Output
_____no_output_____
###Markdown
Time to scale the transformed data.
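A note on the choice of scaler (standard scikit-learn behavior, stated here for reference): RobustScaler centers each feature on its median and divides by the interquartile range, $x_{scaled} = \frac{x - \mathrm{median}(x)}{\mathrm{IQR}(x)}$, which makes it less sensitive to any remaining outliers than min-max scaling.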
###Code
# Import sklearn.preprocessing.RobustScaler
from sklearn.preprocessing import RobustScaler
# Initialize a scaler, then apply it to the features
scaler = RobustScaler()  # scales using statistics robust to outliers (median and IQR)
numerical = ['Log_PREVAILING_WAGE']
transformed_data = pd.DataFrame(data = cleaned_data)
transformed_data[numerical] = scaler.fit_transform(cleaned_data[numerical])
# remove original wage column
del transformed_data['PREVAILING_WAGE']
transformed_data['Log_PREVAILING_WAGE'].plot(kind='hist');
###Output
_____no_output_____
###Markdown
4.2 Data Processing
###Code
transformed_data['CASE_STATUS'].unique()
###Output
_____no_output_____
###Markdown
There are seven types of case statuses, but only "CERTIFIED" has a positive result.
###Code
# only certified is 1 others 0
transformed_data['CASE_STATUS'] = transformed_data['CASE_STATUS'].apply(lambda x: 1 if x == 'CERTIFIED' else 0)
# One-hot encode the transformed data using pandas.get_dummies()
features_final = pd.get_dummies(transformed_data, columns=['SOC_NAME', 'FULL_TIME_POSITION', 'WORKSITE'])
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print ("total features after one-hot encoding: ", len(encoded))
# name of features after one-hot encoding
#print (encoded)
print ("Shape of final features: ", (features_final.shape))
#first 5 rows
features_final.head()
###Output
Shape of final features: (2972646, 1307)
###Markdown
4.3 Train-Test Split
###Code
# select 500,000 samples
features_final = features_final.sample(n=500000)
X = features_final.iloc[:,1:]
y = features_final['CASE_STATUS']
# Import train_test_split
from sklearn.model_selection import train_test_split
# Split the features and target data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print ("Training set has samples: ", (X_train.shape[0]))
print ("Testing set has samples: ", (X_test.shape[0]))
###Output
Training set has samples: 400000
Testing set has samples: 100000
###Markdown
5. Data Modeling
###Code
import time
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
# Train logistic regression model
start = time.time()
clf_log = LogisticRegression(random_state = 0)
clf_log.fit(X_train, y_train)
end = time.time()
training_time = end - start
print ("Trainig time - Logistic Regression: ",training_time)
start = time.time()
clf_random = RandomForestClassifier(random_state = 0)
clf_random.fit(X_train, y_train)
end = time.time()
training_time = end - start
print ("Trainig time - Random Forest: ",training_time)
start = time.time()
clf_xg = XGBClassifier(random_state = 0)
clf_xg.fit(X_train, y_train)
end = time.time()
training_time = end - start
print ("Trainig time - XGBoost: ",training_time)
training_times = {'model': ['Logistic Regression', 'Random Forest', 'XGBoost'],
'time': [24, 70, 3038]
}
training_times_df = pd.DataFrame(training_times, columns = ['model','time'])
training_times_df.plot('model', 'time', kind='bar');
###Output
_____no_output_____
###Markdown
6. Model Evaluation Naive predictor
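For reference, the F-score used below is the standard weighted harmonic mean of precision and recall, $F_{\beta} = (1 + \beta^2)\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\cdot\mathrm{precision} + \mathrm{recall}}$, so $\beta = 0.5$ weights precision more heavily than recall; this matches the formula coded in the next cell.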
###Code
# Calculate accuracy, precision and recall
TP = np.sum(y) # positive resulted visas
TN = 0
FP = y.count() - np.sum(y) # negative visas
FN = 0
accuracy = TP / (TP + FP)
recall = TP / (TP + FN)
precision = TP / (TP + FP)
# Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
beta = 0.5
fscore = (1 + beta**2) * (precision * recall) / ((beta**2 * precision) + recall)
# Print the results
print ("Naive Predictor\nAccuracy score:", accuracy, "\nF(0.5)-score:" ,fscore)
###Output
Naive Predictor
Accuracy score: 0.872764
F(0.5)-score: 0.895553324561
###Markdown
Measuring accuracy using Cross Validation
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, fbeta_score, roc_curve, roc_auc_score, accuracy_score
cross_val_accuracy = cross_val_score(clf_log, X_train, y_train, cv=3, scoring="accuracy").mean()
print ("CV accuracy score:", cross_val_accuracy)
y_train_pred = cross_val_predict(clf_log, X_train, y_train, cv=3)
print ("")
plt.figure(figsize=(10,5))
mat = confusion_matrix(y_train, y_train_pred)
sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False)
plt.title('Confusion Matrix - Logistic Regression')
plt.ylabel('True labels')
plt.xlabel('Predicted labels');
print ("Precision score: ",precision_score(y_train, y_train_pred))
print ("F(0.5) score", fbeta_score(y_train, y_train_pred, beta=0.5))
y_scores_log = cross_val_predict(clf_log, X_train, y_train, cv=3, method='predict_proba')
y_scores_log = y_scores_log[:,1] # ROC curve requires scores not probability
fpr_log, tpr_log, thresholds_log = roc_curve(y_train, y_scores_log)
plt.plot(fpr_log, tpr_log)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for logistic regression')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print ("ROC AUC score: ", roc_auc_score(y_train, y_scores_log))
cross_val_accuracy = cross_val_score(clf_random, X_train, y_train, cv=3, scoring="accuracy").mean()
print ("CV accuracy score:", cross_val_accuracy)
y_train_pred = cross_val_predict(clf_random, X_train, y_train, cv=3)
print ("")
plt.figure(figsize=(10,5))
mat = confusion_matrix(y_train, y_train_pred)
sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False)
plt.title('Random Forest')
plt.ylabel('True labels')
plt.xlabel('Predicted labels');
print ("Precision_score: ",precision_score(y_train, y_train_pred))
print ("f0.5_score", fbeta_score(y_train, y_train_pred, beta=0.5))
y_scores_random = cross_val_predict(clf_random, X_train, y_train, cv=3, method='predict_proba')
y_scores_random = y_scores_random[:,1] # ROC curve requires scores not probability
fpr_random, tpr_random, thresholds_random = roc_curve(y_train, y_scores_random)
plt.plot(fpr_random, tpr_random)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for random forest')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print ("ROC AUC score: ", roc_auc_score(y_train, y_scores_random))
cross_val_accuracy = cross_val_score(clf_xg, X_train, y_train, cv=3, scoring="accuracy").mean()
print ("CV accuracy score:", cross_val_accuracy)
y_train_pred = cross_val_predict(clf_xg, X_train, y_train, cv=3)
print ("")
plt.figure(figsize=(10,5))
mat = confusion_matrix(y_train, y_train_pred)
sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False)
plt.title('XGBoost')
plt.ylabel('True labels')
plt.xlabel('Predicted labels');
print ("Precision_score: ",precision_score(y_train, y_train_pred))
print ("f0.5_score", fbeta_score(y_train, y_train_pred, beta=0.5))
y_scores_xg = cross_val_predict(clf_xg, X_train, y_train, cv=3, method='predict_proba')
y_scores_xg = y_scores_xg[:,1] # ROC curve requires scores not probability
fpr_xg, tpr_xg, thresholds_xg = roc_curve(y_train, y_scores_xg)
plt.plot(fpr_xg, tpr_xg)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for XGBoost')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print ("ROC AUC score: ", roc_auc_score(y_train, y_scores_xg))
plt.figure()
plt.plot(fpr_log, tpr_log, "b", label='Logistic Regression')
plt.plot(fpr_random, tpr_random, "r", label='Random Forest')
plt.plot(fpr_xg, tpr_xg, "g", label='XGBoost')
plt.plot([0,1], [0,1], "k--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for all classifiers')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
Model tuning
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
# Initialize the classifier
clf = LogisticRegression(random_state=0)
# Create the parameters list you wish to tune, using a dictionary if needed.
parameters = {'penalty':['l1','l2']
,'C':[0.1, 1, 5, 10]
,'tol':[0.00001, 0.0001, 0.001]
}
# Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=0.5)
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf, param_grid=parameters, scoring=scorer)
# Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train, y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
print ("Best clf's hyperparameters:\n")
print (best_clf)
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print ("\nUnoptimized model\n------")
print ("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print ("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print ("\nOptimized Model\n------")
print ("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print ("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
###Output
Best clf's hyperparameters:
LogisticRegression(C=1, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=0, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
Unoptimized model
------
Accuracy score on testing data: 0.8726
F-score on testing data: 0.8954
Optimized Model
------
Final accuracy score on the testing data: 0.8726
Final F-score on the testing data: 0.8954
###Markdown
Train the model with smaller subsets of the training data
###Code
accuracy_scores = []
f_scores = []
sample_size = [100, 1000, 10000, 100000]
for i in sample_size:
    # sample row indices once so the features stay aligned with their labels
    sample_index = X_train.sample(n=i, random_state=0).index
    X_train_small = X_train.loc[sample_index]
    y_train_small = y_train.loc[sample_index]
    # train the best classifier on the small dataset
    best_clf.fit(X_train_small, y_train_small)
    # make predictions
    predictions_small = best_clf.predict(X_test)
    accuracy_scores.append(accuracy_score(y_test, predictions_small))
    f_scores.append(fbeta_score(y_test, predictions_small, beta = 0.5))
accuracy_scores
f_scores
###Output
_____no_output_____
###Markdown
Feature importance
###Code
features = X_train.columns
importances = clf_random.feature_importances_
# indices of the 10 most important features (sorted from least to most important)
indices = np.argsort(importances)[-10:]
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), features[indices])
plt.xlabel('Relative Importance')
plt.show()
###Output
_____no_output_____
###Markdown
Data Engineering Capstone Project

Project Summary

The Capstone project analyzes and processes data from the Brazilian stock market from 2013 until now, together with Brazilian economic data from the [World Bank](https://data.worldbank.org/). With those data in mind, it is possible to evaluate and extract insights, relating country-level data to the stock market, and to answer questions like:

* Does the stock market influence the real economy?
* Is the population's education rate related to the increase in GDP?
* Does the rise of companies in the stock market help us somehow?

The project follows these steps:

* Step 1: Scope the Project and Gather Data
* Step 2: Explore and Assess the Data
* Step 3: Define the Data Model
* Step 4: Run ETL to Model the Data
* Step 5: Complete Project Write Up
###Code
!pip install bovespa
!pip install pyspark
!pip install wbgapi
import os
import bovespa
import etl_functions
import wbgapi as wb
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DateType, DoubleType, MapType
from pyspark.sql import types as T
from pyspark.sql import functions as F
###Output
_____no_output_____
###Markdown
Step 1: Scope the Project and Gather Data

Scope

The Capstone project analyzes and processes data from the Brazilian stock market from 2013 until now, together with Brazilian economic data from the [World Bank](https://data.worldbank.org/).

Describe and Gather Data

Describe the data sets you're using: where did they come from, and what type of information is included?

Bovespa is the main Brazilian stock exchange; all trading data since 2013 is listed there. It includes the day of the trade, the company name, the stock id, and stock data such as price open, price close, price high, and price low. The World Bank collects data from every country in the world since 1960; it has information about the environment, education, GDP, the economy, and social indicators.

http://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/equities/historical-quotes/

https://api.worldbank.org/v2/en/country/BRA?downloadformat=csv

The result will be written to disk, to keep things simple.
###Code
os.environ['JAVA_HOME'] = '/home/charl3ff/.sdkman/candidates/java/current'
spark = SparkSession.builder.enableHiveSupport()\
.appName("Capstone")\
.getOrCreate()
###Output
_____no_output_____
###Markdown
ETL stock market
###Code
# Load trading data from Bovespa
input_data = "sample_data/"
trading_file = input_data + "COTAHIST*.txt"
df_data = spark.read.text(trading_file)
schema = StructType([
StructField('date', DateType(), True),
StructField('year', IntegerType(), True),
StructField('month', IntegerType(), True),
StructField('day', IntegerType(), True),
StructField('money_volume', T.DoubleType(), True),
StructField('volume', T.IntegerType(), True),
StructField('stock_code', StringType(), True),
StructField('company_name', StringType(), True),
StructField('price_open', DoubleType(), True),
StructField('price_close', DoubleType(), True),
StructField('price_mean', DoubleType(), True),
StructField('price_high', DoubleType(), True),
StructField('price_low', DoubleType(), True),
StructField('variation', DoubleType(), True)
])
# Parse data
@F.udf(returnType=schema)
def record_to_dict(row):
"""
Transform string into bovespa.Record
:param row: (string) positional (fixed-width) record string from Bovespa.
:return: parsed Record
"""
try:
record = bovespa.Record(row)
except:
return None
return {
'date': record.date, 'year': record.date.year,
'month': record.date.month, 'day': record.date.day,
'money_volume': record.volume,
'volume': record.quantity,
'stock_code': record.stock_code, 'company_name': record.company_name,
'price_open': record.price_open, 'price_close': record.price_close,
'price_mean': record.price_mean, 'price_high': record.price_high,
'price_low': record.price_low
}
dict_df = df_data.select(record_to_dict('value').alias('dict'))
trading_df = dict_df.select("dict.*", "*").drop('dict')
trading_df =trading_df.withColumn('variation', (trading_df['price_close'] - trading_df['price_open']) / 100)
trading_df.printSchema()
trading_df = trading_df.repartition("stock_code")
trading_df.orderBy(F.col('variation'), ascending=False).show(10)
trading_df.orderBy(F.col('money_volume'), ascending=False).show(10)
trading_df.show(10)
###Output
+----------+----+-----+---+------------+--------+----------+------------+----------+-----------+----------+----------+---------+--------------------+
| date|year|month|day|money_volume| volume|stock_code|company_name|price_open|price_close|price_mean|price_high|price_low| variation|
+----------+----+-----+---+------------+--------+----------+------------+----------+-----------+----------+----------+---------+--------------------+
|2020-01-02|2020| 1| 2|2.80483738E8|13591400| GGBR4| GERDAU| 20.13| 20.76| 20.63| 20.79| 20.12|0.006300000000000025|
|2020-01-02|2020| 1| 2| 23011.02| 1751| NVHO11|FII NOVOHORI| 13.01| 13.02| 13.14| 13.25| 13.0|9.999999999999786E-5|
|2020-01-02|2020| 1| 2| 10306.76| 73| CGAS3F| COMGAS| 142.88| 140.3| 141.18| 142.88| 140.0|-0.02579999999999984|
|2020-01-02|2020| 1| 2| 34460.3| 11854| DMMO3F| DOMMO| 3.0| 2.92| 2.9| 3.0| 2.86|-8.00000000000000...|
|2020-01-02|2020| 1| 2| 4109.24| 509| FRTA3F| POMIFRUTAS| 7.51| 7.81| 8.07| 8.56| 7.51|0.002999999999999...|
|2020-01-03|2020| 1| 3| 19101.5| 136| CGAS3F| COMGAS| 142.83| 138.0| 140.45| 142.88| 138.0|-0.04830000000000013|
|2020-01-06|2020| 1| 6| 29925.62| 212| CGAS3F| COMGAS| 142.0| 140.0| 141.15| 145.99| 138.89| -0.02|
|2020-01-03|2020| 1| 3| 55974.95| 19383| DMMO3F| DOMMO| 2.92| 2.72| 2.88| 2.99| 2.72|-0.00199999999999...|
|2020-01-06|2020| 1| 6| 65411.91| 25245| DMMO3F| DOMMO| 2.79| 2.45| 2.59| 2.9| 2.43|-0.00339999999999...|
|2020-01-03|2020| 1| 3| 2186.44| 262| FRTA3F| POMIFRUTAS| 8.49| 8.47| 8.34| 8.49| 7.84|-1.99999999999995...|
+----------+----+-----+---+------------+--------+----------+------------+----------+-----------+----------+----------+---------+--------------------+
only showing top 10 rows
###Markdown
So we can extract insights from it, such as:
###Code
# Most traded stocks in terms of money
stocks = trading_df.orderBy('money_volume', ascending=False).show(10)
# Most up variation in a day
trading_df.orderBy('variation', ascending=False).show(10)
# Rows
print('Amount of rows', trading_df.count())
###Output
Amount of rows 1251648
###Markdown
ETL World Bank

The `wbgapi` package is the official Python/R library for interacting with the [World Bank open data](https://data.worldbank.org/); it makes it easy to retrieve data instead of downloading it ourselves. The purpose is to fetch data about Brazil, so that it will be possible to compare the stock market with the real economy.
###Code
# Where data comes from
wb.source.info().items[:3]
metric_to_schema = {
'GC.DOD.TOTL.GD.ZS': 'debt',
'GC.XPN.TOTL.CN': 'total_expense',
'NY.GDP.MKTP.KD.ZG': 'gdp_growth',
'NY.GDP.MKTP.CD': 'gdp',
'NY.GDP.PCAP.CD': 'gdp_per_capita',
'SP.POP.TOTL': 'population',
'SP.DYN.LE00.IN': 'life_expectancy',
'GC.XPN.TOTL.GD.ZS': 'expense_per_gdp',
'FI.RES.TOTL.CD': 'total_reserves',
'SE.ADT.LITR.ZS': 'pop_literacy_rate',
'SE.XPD.TOTL.GD.ZS': 'expenditure_education_per_gdp',
'BX.KLT.DINV.CD.WD': 'foreign_investment'
}
wb.series.info().items[:3]
# Create spark dataframe
df_raw = wb.data.DataFrame(metric_to_schema.keys(), economy='BRA', numericTimeKeys=True)
economy_data = df_raw.rename(metric_to_schema, axis=0)
years= list(economy_data.columns)
economy_data = economy_data.transpose()
economy_data['year'] = years
economy_data
economy_df = spark.createDataFrame(economy_data)
economy_df.printSchema()
economy_df.show(10)
###Output
+------------------+--------------+----+-------------+---------------+-------------------+-----------------+----------------+-----------------+-----------------------------+---------------+-----------+----+
|foreign_investment|total_reserves|debt|total_expense|expense_per_gdp| gdp| gdp_growth| gdp_per_capita|pop_literacy_rate|expenditure_education_per_gdp|life_expectancy| population|year|
+------------------+--------------+----+-------------+---------------+-------------------+-----------------+----------------+-----------------+-----------------------------+---------------+-----------+----+
| NaN| 4.328488E8| NaN| NaN| NaN|1.51655699125199E10| NaN|210.109899384623| NaN| NaN| 54.143|7.2179226E7|1960|
| NaN| 4.7115615E8| NaN| NaN| NaN| 1.5236854859469E10| 10.275911554301| 205.04076826426| NaN| NaN| 54.634|7.4311343E7|1961|
| NaN| 3.360009E8| NaN| NaN| NaN|1.99262938390163E10| 5.21605942017898|260.425653075282| NaN| NaN| 55.13|7.6514328E7|1962|
| NaN| 3.5504232E8| NaN| NaN| NaN|2.30214772922093E10|0.874672592408302|292.252136324528| NaN| NaN| 55.627|7.8772657E7|1963|
| NaN| 2.454876E8| NaN| NaN| NaN|2.12118922599904E10| 3.4855823042772| 261.66661956418| NaN| NaN| 56.121|8.1064571E7|1964|
| NaN| 4.8425112E8| NaN| NaN| NaN| 2.179003511719E10| 3.05348789366924|261.354354519834| NaN| NaN| 56.61| 8.337353E7|1965|
| NaN| 4.2579662E8| NaN| NaN| NaN|2.70627165779111E10| 4.15036023303348|315.797202907062| NaN| NaN| 57.091|8.5696505E7|1966|
| NaN| 2.001496E8| NaN| NaN| NaN|3.05918340539653E10| 4.91526567501124|347.493056109702| NaN| NaN| 57.563|8.8035814E7|1967|
| NaN| 2.664229E8| NaN| NaN| NaN|3.38758818763672E10| 11.4272823832672|374.786775401462| NaN| NaN| 58.025|9.0387079E7|1968|
| NaN| 6.567732E8| NaN| NaN| NaN|3.74588982438609E10| 9.73582688991277|403.884267342212| NaN| NaN| 58.475|9.2746614E7|1969|
+------------------+--------------+----+-------------+---------------+-------------------+-----------------+----------------+-----------------+-----------------------------+---------------+-----------+----+
only showing top 10 rows
###Markdown
Step 2: Explore and Assess the Data

Explore the Data

Identify data quality issues, like missing values, duplicate data, etc.

Cleaning Steps

Document the steps necessary to clean the data.
###Code
# Not necessary
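# (A minimal, illustrative null check, left commented out so it does not alter this cell's
# output; the dataframe and column names refer to the tables built earlier in this notebook.)
# trading_df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c)
#                    for c in trading_df.columns]).show()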
###Output
_____no_output_____
###Markdown
Step 3: Define the Data Model

3.1 Conceptual Data Model

Map out the conceptual data model and explain why you chose that model.

Trading Table

| Feature | Type |
| ------------- |:-------------:|
| date | date |
| year | integer |
| month | integer |
| day | integer |
| money_volume | double |
| volume | integer |
| stock_code | string |
| company_name | string |
| price_open | double |
| price_close | double |
| price_mean | double |
| price_high | double |
| price_low | double |
| variation | double |

Economy Table

| Feature | Type |
| ------------- |:-------------:|
| foreign_investment | double |
| total_reserves | double |
| debt | double |
| total_expense | double |
| expense_per_gdp | double |
| gdp | double |
| gdp_growth | double |
| gdp_per_capita | double |
| pop_literacy_rate | double |
| expenditure_education_per_gdp | double |
| life_expectancy | double |
| population | double |
| year | long |

3.2 Mapping Out Data Pipelines

1. Read trading data from Bovespa
2. Parse it, then load it into Spark
3. Process the data, creating custom columns
4. Write it as Parquet
5. Read economic data from the World Bank as pandas
6. Parse it, then load it into Spark
7. Process the data, creating custom columns
8. Write it as Parquet

Step 4: Run Pipelines to Model the Data

4.1 Create the data model

Build the data pipelines to create the data model. The complete flow can be found in `etl.py`.
###Code
# Pipeline functions; the complete flow can be found in etl.py.
# `etl_functions` (the project's helper module) and `spark` (the SparkSession
# used above) are assumed to have been imported/created in earlier cells.
import os


def process_economy_data(spark, output_data):
    """
    ETL Brazilian economy data from the World Bank.
    :param spark: (SparkSession) spark session instance
    :param output_data: (string) output file path
    :return: spark dataframe representing the economy table
    """
    economy_df = etl_functions.create_economy_df()
    economy_df = spark.createDataFrame(economy_df)
    return etl_functions.create_economy_table(economy_df, output_data)


def process_trading_data(spark, trading_files, output_data):
    """
    ETL trading data.
    :param spark: (SparkSession) spark session instance
    :param trading_files: (string) input file path
    :param output_data: (string) output file path
    :return: spark dataframe of trading data
    """
    trading_df = spark.read.text(paths=trading_files)
    trading_df = etl_functions.raw_trading_to_spark(trading_df)
    trading_df = etl_functions.trading_columns(trading_df)
    return etl_functions.create_trading_table(trading_df, output_data)


def etl():
    input_data = "sample_data"
    output_data = "sample_data/output"
    trading_files = os.path.join(input_data, "COTAHIST_A*.txt")

    # Build both tables...
    trading_df = process_trading_data(spark, trading_files, output_data)
    economy_df = process_economy_data(spark, output_data)

    # ...and run the quality checks on them.
    etl_functions.quality_check(economy_df, 'economy')
    etl_functions.quality_check(trading_df, 'trading')
    etl_functions.quality_check_column(trading_df, 'stock_code')
    etl_functions.quality_check_column(trading_df, 'date')
    etl_functions.quality_check_column(trading_df, 'volume')
    etl_functions.quality_check_column(economy_df, 'gdp')
    etl_functions.quality_check_column(economy_df, 'year')
###Output
_____no_output_____
###Markdown
4.2 Data Quality Checks
Explain the data quality checks you'll perform to ensure the pipeline ran as expected. These could include:
* Integrity constraints on the relational database (e.g., unique key, data type, etc.)
* Unit tests for the scripts to ensure they are doing the right thing
* Source/count checks to ensure completeness

Run Quality Checks
###Code
# Perform quality checks here
etl_functions.quality_check(economy_df, 'economy')
etl_functions.quality_check(trading_df, 'trading')
etl_functions.quality_check_column(trading_df, 'stock_code')
etl_functions.quality_check_column(trading_df, 'date')
etl_functions.quality_check_column(trading_df, 'volume')
etl_functions.quality_check_column(economy_df, 'gdp')
etl_functions.quality_check_column(economy_df, 'year')
###Output
Data quality check passed for economy with 61 records.
Data quality check passed for trading with 1251648 records.
Data quality check passed for table stock_code with 1251648 records.
Data quality check passed for table date with 1251648 records.
Data quality check passed for table volume with 1251646 records.
Data quality check passed for table gdp with 61 records.
Data quality check passed for table year with 61 records.
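###Markdown
For reference, a rough, hypothetical sketch of what count-based helpers such as `quality_check` and `quality_check_column` could look like; the project's actual implementation lives in `etl_functions` / `etl.py`, and the `sketch_` names below are mine.
###Code
# Hypothetical sketch, not the project's real code.
def sketch_quality_check(df, table_name):
    """Fail if the table is empty; otherwise report the record count."""
    count = df.count()
    if count == 0:
        raise ValueError(f"Data quality check failed: {table_name} is empty")
    print(f"Data quality check passed for {table_name} with {count} records.")


def sketch_quality_check_column(df, column):
    """Fail if a key column has no non-null values."""
    count = df.filter(df[column].isNotNull()).count()
    if count == 0:
        raise ValueError(f"Data quality check failed: column {column} is all null")
    print(f"Data quality check passed for table {column} with {count} records.")
###Output
_____no_output_____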
###Markdown
This Notebook will be mainly used for the Capstone Project.
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
|
dora/models/LGBM AccFeature.ipynb | ###Markdown
LGBM - Accumulated Sales of Category 3
###Code
import numpy as np
import pandas as pd
from utils import read_data, process_time, merge_data, promo_detector, promo_detector_fixed, promotionAggregation, dataset_builder, cumulative_sale_by_category
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as mse
import sys
import xgboost as xgb
import lightgbm as lgb
from datetime import datetime
NUMBER_OF_LAGS = 4
sys.path.append("../../main/datasets/")
!ls ../../main/datasets/
###Output
1.0v 1.0v.zip
###Markdown
Defining metrics
Baseline_score function
###Code
def baseline_score(prediction, target, simulatedPrice):
    """Asymmetric profit score: each predicted unit that is actually sold earns
    its simulated price; each over-predicted unit costs 0.6 of that price
    (hence the 1.6 factor)."""
    prediction = prediction.astype(int)
    return np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulatedPrice)
###Output
_____no_output_____
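###Markdown
A tiny worked example (made-up numbers) of how the asymmetric penalty behaves: an under-prediction only earns the units actually stocked, while every over-predicted unit costs 0.6 of its simulated price.
###Code
# Toy illustration of baseline_score with hypothetical numbers.
toy_prediction = np.array([3, 5])    # predicted demand per item
toy_target = np.array([4, 2])        # true demand per item
toy_price = np.array([10.0, 1.0])    # simulated prices
# item 1: 3 <= 4, so it contributes 3 * 10 = 30.0
# item 2: 5 > 2, so it contributes (5 - 3 * 1.6) * 1 = 0.2
baseline_score(toy_prediction, toy_target, toy_price)  # 30.2
###Output
_____no_output_____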
###Markdown
Evaluation Metric
###Code
def feval(prediction, dtrain):
    """Custom LightGBM eval metric: the same asymmetric profit as
    baseline_score, with the simulated prices carried as sample weights.
    The trailing True flags the metric as higher-is-better."""
    prediction = prediction.astype(int)
    target = dtrain.get_label()
    simulatedPrice = dtrain.get_weight()
    return 'feval', np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulatedPrice), True
###Output
_____no_output_____
###Markdown
Objective Metric
###Code
def gradient(predt, dtrain):
    """First-derivative term for the custom objective; the sample weights
    hold the simulated prices."""
    y = dtrain.get_label()
    sp = dtrain.get_weight()
    return -2 * (predt - np.maximum(predt - y, 0) * 1.6) * (1 - (predt > y) * 1.6) * sp


def hessian(predt, dtrain):
    """Second-derivative term for the custom objective."""
    y = dtrain.get_label()
    sp = dtrain.get_weight()
    return -2 * ((1 - (predt > y) * 1.6) ** 2) * sp


def objective(predt, dtrain):
    """Custom LightGBM objective (gradient, hessian) -- defined here but left
    commented out in the training call further below."""
    grad = gradient(predt, dtrain)
    hess = hessian(predt, dtrain)
    return grad, hess
###Output
_____no_output_____
###Markdown
Building our dataset
This notebook makes this step cleaner than the previous versions, so it'll be tidier and shorter than before!
###Code
infos, items, orders = read_data("../../main/datasets/")
print("Sanity checks...", infos.shape, items.shape, orders.shape)
# Changing our time signatures
process_time(orders)
df = dataset_builder(orders, items)
###Output
_____no_output_____
###Markdown
Feature building
###Code
# percentage_accum_cat_3 feature...
df = cumulative_sale_by_category(df)

# This cell lags and diffs our feature 'orderSum'
shifting = df.copy()

for i in range(1, NUMBER_OF_LAGS + 1):
    # Carrying the data of week t-i
    shifting[f'orderSum_{i}'] = shifting.groupby('itemID')['orderSum'].shift(i)
    # shifting[f'percentage_accum_cat_3_{i}'] = shifting.groupby('itemID')['percentage_accum_cat_3'].shift(i)

    # Getting the difference of the orders between consecutive lagged weeks...
    shifting[f'orderSum_diff_{i}'] = shifting.groupby('itemID')[f'orderSum_{i}'].diff()
    # shifting[f'percentage_accum_cat_3_{i}'] = shifting.groupby('itemID')[f'percentage_accum_cat_3_{i}'].diff()

# LightGBM's docs say it handles missing (NaN) values natively; we fill them
# with -1 anyway to keep an explicit sentinel.
shifting.fillna(-1, inplace=True)
shifting
###Output
_____no_output_____
###Markdown
Maximum error
As a reference for the worst acceptable result, the cell below simply takes the mean of our sales over weeks 1 to 12 and uses it as the prediction for every item in the target week, measuring the resulting RMSE.
###Code
worst_possible_prediction = shifting.loc[shifting.group_backwards < 13]['orderSum'].mean()
prediction = np.full(shifting.loc[shifting.group_backwards == 13]['orderSum'].shape, worst_possible_prediction) # Array filled with the mean...
target = shifting.loc[shifting.group_backwards == 13]['orderSum']
print("Guessing the mean of 'orderSum' for all items in target", mse(target, prediction) ** 0.5)
###Output
Guessing the mean of 'orderSum' for all items in target 90.29706562119341
###Markdown
Dataset Splitting
All my experiments will use weeks 13 to 3 as the train set, week 2 as the validation set and week 1 as the test set.
###Code
train = shifting.loc[shifting.group_backwards >= 3]
val = shifting.loc[shifting.group_backwards == 2]
test = shifting.loc[shifting.group_backwards == 1]
weights = infos.set_index('itemID')['simulationPrice'].to_dict()
w_train = train['itemID'].map(weights)
w_val = val['itemID'].map(weights)
# I recommend that the other members of the team keep our datasets as
# pandas DataFrames instead of NumPy arrays, since it is easier to use
# boosting-analysis frameworks with them.
y_train = train['orderSum']
y_val = val['orderSum']
X_train = train.drop(columns=["orderSum"])
X_val = val.drop(columns=["orderSum"])
params = {
    # "objective": "poisson",
    "objective": "l1",
    "metric": "rmse",
    "learning_rate": 0.1,
    "verbosity": 1,
    "max_depth": 6,
    "num_leaves": 15,
    "min_data_in_leaf": 2000,
}
lgbtrain = lgb.Dataset(X_train, label = y_train, weight=w_train)
lgbvalid = lgb.Dataset(X_val, label = y_val, weight=w_val)
num_round = 1000
model = lgb.train(params,
                  lgbtrain,
                  num_round,
                  valid_sets=[lgbtrain, lgbvalid],
                  verbose_eval=5,
                  early_stopping_rounds=5,
                  # fobj=objective,
                  feval=feval,
                  )
###Output
Training until validation scores don't improve for 5 rounds
[5] training's rmse: 39.6231 training's feval: 5.28752e+06 valid_1's rmse: 44.5311 valid_1's feval: 622657
[10] training's rmse: 39.2267 training's feval: 9.97676e+06 valid_1's rmse: 44.0393 valid_1's feval: 1.15422e+06
[15] training's rmse: 38.4458 training's feval: 1.572e+07 valid_1's rmse: 43.0619 valid_1's feval: 1.85649e+06
[20] training's rmse: 37.0015 training's feval: 2.28787e+07 valid_1's rmse: 41.2627 valid_1's feval: 2.91295e+06
[25] training's rmse: 34.7292 training's feval: 2.97199e+07 valid_1's rmse: 38.2173 valid_1's feval: 3.9056e+06
[30] training's rmse: 33.0462 training's feval: 3.46339e+07 valid_1's rmse: 36.089 valid_1's feval: 4.58112e+06
[35] training's rmse: 31.0925 training's feval: 3.89827e+07 valid_1's rmse: 33.835 valid_1's feval: 5.05089e+06
[40] training's rmse: 28.9986 training's feval: 4.28315e+07 valid_1's rmse: 31.3516 valid_1's feval: 5.4625e+06
[45] training's rmse: 27.9759 training's feval: 4.44193e+07 valid_1's rmse: 30.102 valid_1's feval: 5.63644e+06
[50] training's rmse: 26.7289 training's feval: 4.61641e+07 valid_1's rmse: 28.5622 valid_1's feval: 5.84053e+06
[55] training's rmse: 26.4101 training's feval: 4.67021e+07 valid_1's rmse: 28.158 valid_1's feval: 5.89381e+06
[60] training's rmse: 26.1238 training's feval: 4.71472e+07 valid_1's rmse: 27.7698 valid_1's feval: 5.91117e+06
[65] training's rmse: 26.0425 training's feval: 4.7234e+07 valid_1's rmse: 27.6532 valid_1's feval: 5.92273e+06
[70] training's rmse: 26.0082 training's feval: 4.73459e+07 valid_1's rmse: 27.5905 valid_1's feval: 5.93221e+06
Early stopping, best iteration is:
[69] training's rmse: 26.0185 training's feval: 4.7322e+07 valid_1's rmse: 27.6122 valid_1's feval: 5.93243e+06
###Markdown
Utilities
**Predicting at test time**
###Code
y_test = test['orderSum']
X_test = test.drop(columns=["orderSum"])
final_predictions = model.predict(X_test)
final_predictions
final_predictions[final_predictions < 0] = 0
###Output
_____no_output_____
###Markdown
**Baseline calculation**
###Code
baseline_score(final_predictions, y_test.values, infos['simulationPrice'])
###Output
_____no_output_____
###Markdown
**Creating our Kaggle CSV**
###Code
final = pd.Series(0, index=np.arange(1, len(items) + 1))
final[items.itemID] = final_predictions.astype(int)
final.to_csv("lgbm_kaggle_df.csv", header=["demandPrediction"],
             index_label="itemID", sep="|")
###Output
_____no_output_____
###Markdown
**Saving our model to disk**
###Code
now = datetime.now().strftime("%d-%m-%Y-%Hh%Mm%Ss")
modelName = 'lgbm-' + now
# Persist the trained LightGBM booster to disk (text format).
model.save_model(modelName)
###Output
_____no_output_____ |
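###Markdown
For completeness, a small hedged sketch of loading the saved booster back later; LightGBM can rebuild a Booster from the file written by `save_model`, and `reloaded` is just an illustrative name.
###Code
# Sketch: reload the persisted LightGBM model; `modelName` is the file
# written in the cell above.
reloaded = lgb.Booster(model_file=modelName)
# reloaded.predict(X_test) should reproduce final_predictions before the
# negative values were clipped to zero.
###Output
_____no_output_____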