path | content_id | detected_licenses | license_type | repo_name | repo_url | star_events_count | fork_events_count | gha_license_id | gha_event_created_at | gha_updated_at | gha_language | language | is_generated | is_vendor | conversion_extension | size | script | script_size |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
/assignments/01_Intro2images-stVer.ipynb | 4c19b92543a0a199321351440743bd0c5a184586 | [] | no_license | iknyazeva/ML2020 | https://github.com/iknyazeva/ML2020 | 0 | 13 | null | 2020-09-21T09:56:38 | 2020-09-21T09:53:56 | Jupyter Notebook | Jupyter Notebook | false | false | .py | 10,090,911 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + Collapsed="false"
import numpy as np
from skimage import data, io, color
import matplotlib.patches as patches
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams["figure.figsize"] = (6,3)
# + [markdown] Collapsed="false"
# # Images as arrays: getting started
#
# Python has the skimage module for working with images. With it alone, without installing anything extra, you can already do quite a lot. Your task is to study on your own how human vision works:
# see [our reference course on computer vision](https://courses.cs.washington.edu/courses/cse455/18sp/) and watch the lecture on human vision and visual perception. If you do not like this lecturer's material, feel free to use any other source. As a result you should produce a Markdown note (use images with direct links, or upload them to your repository and load them from there). Add this note to your site before the next class and also submit it to Google Classroom. You also need to finish this notebook and submit it for review as well.
#
# + Collapsed="false"
print("Images available in skimage \n")
'; '.join(data.__all__[2:])
# + [markdown] Collapsed="false"
# ## How do we read an image?
# Many packages provide tools for working with images; let's look at the built-in options
# - matplotlib.image (which uses Pillow (PIL))
# - skimage
#
# Where the image is read from matters: if it is behind a URL, PIL will not open it directly, while skimage can read straight from a link
# + Collapsed="false"
# if the file is on disk
fig, ax = plt.subplots(ncols = 2, figsize = (10,8))
ax[0].imshow(io.imread('imgs/Kyoto.jpg'));ax[0].axis('off');
ax[1].imshow(mpimg.imread('imgs/Kyoto.jpg')); ax[1].axis('off');
# + Collapsed="false"
import requests
from io import BytesIO
from PIL import Image
# + Collapsed="false"
Image.open('imgs/Kyoto.jpg')
# + [markdown] Collapsed="false"
# And if we want to read directly from a URL, PIL will not manage it, while skimage will
# + Collapsed="false"
im_link = 'https://images.squarespace-cdn.com/content/v1/55ee34aae4b0bf70212ada4c/1479762511721-P1Z10B8ZJDWMPJO9C9TY/ke17ZwdGBToddI8pDm48kPmLlvCIXgndBxNq9fzeZb1Zw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIFMLRh9LbupWL4Bv1SDYZc4lRApws2Snwk0j_RSxbNHMKMshLAGzx4R3EDFOm1kBS/Kyoto+3.jpg'
fig, ax = plt.subplots(ncols = 2, figsize = (10,8))
ax[0].imshow(io.imread(im_link));ax[0].axis('off');
ax[1].imshow(mpimg.imread(im_link)); ax[1].axis('off');
# + Collapsed="false"
Image.open(im_link)
# + Collapsed="false"
response = requests.get(im_link)
rdata = BytesIO(response.content)
Image.open(rdata)
# + Collapsed="false"
kyoto = np.array(Image.open(rdata))
# + [markdown] Collapsed="false"
# That picture will be part of your assignment; for now let's practice on cats, which even come built in
# + [markdown] Collapsed="false"
# ## An image as a matrix
# An image is a numpy array, so we can apply any per-pixel transformation. We will look at how the array is organized a bit later; for now let's learn to apply different masks
# + Collapsed="false"
# read the image
image = data.chelsea()
# display it
io.imshow(image);
print('Image dimensions:', image.shape)
print('Image size:', image.size)
# + Collapsed="false"
from skimage.draw import ellipse
# + Collapsed="false"
rr, cc = ellipse(120, 170, 40, 50, image.shape)
img = image.copy()
mask = np.zeros_like(img)
mask[rr,cc] = 1
fig, ax = plt.subplots(ncols = 2, figsize = (10,8))
img[mask==0] = 1
ax[0].imshow(img); ax[0].axis('off');
img = image.copy()
img[mask==1] = 255
ax[1].imshow(img); ax[1].axis('off');
# + [markdown] Collapsed="false"
# ## Task 1. Bounding box
# Very often an object (a face, for example) is highlighted with a box.
# Take any picture, draw a bounding box on it, and plot the contents of the box next to it (a hedged sketch is given below)
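# + [markdown] Collapsed="false"
# Not the graded answer, just a hedged sketch of one way to do it: draw a rectangle with `matplotlib.patches.Rectangle` and crop the same region with numpy slicing. The box coordinates below are arbitrary assumptions chosen for the built-in cat image.
# + Collapsed="false"
x0, y0, w, h = 120, 80, 120, 100  # assumed box position and size in pixel coordinates
img = data.chelsea()
fig, ax = plt.subplots(ncols=2, figsize=(10, 4))
ax[0].imshow(img)
# Rectangle takes the top-left corner (x, y), then width and height
ax[0].add_patch(patches.Rectangle((x0, y0), w, h, edgecolor='red', facecolor='none', lw=2))
ax[0].axis('off')
# the crop uses numpy slicing: rows correspond to y, columns to x
ax[1].imshow(img[y0:y0 + h, x0:x0 + w])
ax[1].axis('off');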
# + [markdown] Collapsed="false"
# ## Playing with colors
#
# Color models will be covered in more detail later. For now let's see what we can do with them.
# A color image is usually loaded in the RGB model, i.e. the image is three stacked matrices: {Red, Green, Blue}
# + Collapsed="false"
image = data.coffee()
f, ax = plt.subplots(1, 3, figsize=(20,10))
chans = ['R','G','B']
for i in range(3):
ax[i].set_title(chans[i]+' channel')
ax[i].imshow(image[:,:,i], cmap='gray')
ax[i].axis('off')
# + [markdown] Collapsed="false"
# How do we get a grayscale image from a color one? And can we go back?
# skimage has converters in skimage.color. Something happened in the cell below. What did I want to show with this example?
# + Collapsed="false"
grayscale = color.rgb2gray(image)
rgb = color.gray2rgb(grayscale)
fig, ax = plt.subplots(1,3, figsize = (20,10))
ax[0].imshow(image);ax[1].imshow(grayscale);ax[2].imshow(rgb);
# + [markdown] Collapsed="false"
# ## RGB to HUE
#
# A good tool for seeing how the two color maps correspond to each other:
# http://math.hws.edu/graphicsbook/demos/c2/rgb-hsv.html
# + Collapsed="false"
from skimage.color import rgb2hsv, hsv2rgb
# + Collapsed="false"
rgb_img = data.coffee()
hsv_img = rgb2hsv(rgb_img)
hue_img = hsv_img[:, :, 0]
sat_img = hsv_img[:, :, 1]
value_img = hsv_img[:, :, 2]
fig, ax = plt.subplots(ncols=4, figsize=(10, 4))
titles = ["RGB image","Hue channel","Saturation channel", "Value channel"]
imgs = [rgb_img, hue_img,sat_img,value_img]
cmaps = [None,'hsv', None,None]
for i in range(4):
ax[i].imshow(imgs[i], cmap = cmaps[i])
ax[i].set_title(titles[i])
ax[i].axis('off')
fig.tight_layout()
# + Collapsed="false"
data.camera().shape
# + [markdown] Collapsed="false"
# Suppose you have a favorite color and want to see everything in it; the HSV color model is perfectly suited for that.
# Let's see how our coffee looks in different hues
# + Collapsed="false"
def colorize(image, hue):
hsv = color.rgb2hsv(color.gray2rgb(image))
hsv[:, :, 0] = hue
return color.hsv2rgb(hsv)
image = data.coffee()
hue_rotations = np.linspace(0, 1, 6)
colorful_images = [colorize(image, hue) for hue in hue_rotations]
fig, axes = plt.subplots(nrows=2, ncols=3, figsize = (10,8))
for ax, array, hue in zip(axes.flat, colorful_images,hue_rotations):
ax.imshow(array, vmin=0, vmax=1)
ax.set_title(f'Hue equal to {round(hue,3)}')
ax.set_axis_off()
fig.tight_layout()
# + [markdown] Collapsed="false"
# ### Saving an image
# This part is simple:
#
# `io.imsave(filename, source)`
#
# But if you look at the internal representation, every channel has been rescaled to the 0-1 range, while the standard representation is the 0-255 range, so you either convert it yourself or leave it to the function, which will then issue a warning
# + Collapsed="false"
plt.imshow(colorize(image, 0.5))
io.imsave('imgs/blue_coffee.png', colorize(image, 0.5))
io.imsave('imgs/blue_coffee.png', (255*colorize(image, 0.5)).astype(np.uint8))
# + [markdown] Collapsed="false"
# ## What if we want not just one base color but a range of shades?
#
# Write a modified version of the colorize function in which a range of base hues can be specified via hue_min and hue_max (a hedged sketch of one possible variant follows the stub below)
# + Collapsed="false"
def colorize_band(image, hue_min,hue_max):
#to do
pass
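# + [markdown] Collapsed="false"
# Not the graded solution, only a hedged sketch of one possible `colorize_band` variant: instead of a single hue it spreads hues linearly between `hue_min` and `hue_max` across the image columns (the column-wise spread is an assumption; any mapping into the band would do).
# + Collapsed="false"
def colorize_band_sketch(image, hue_min, hue_max):
    hsv = color.rgb2hsv(color.gray2rgb(image))
    # assumption: vary the hue linearly from hue_min to hue_max along the columns
    hues = np.linspace(hue_min, hue_max, hsv.shape[1])
    hsv[:, :, 0] = hues[np.newaxis, :]
    return color.hsv2rgb(hsv)
plt.imshow(colorize_band_sketch(data.coffee(), 0.7, 0.85));  # roughly the purple range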
# + [markdown] Collapsed="false"
# All the shades of purple should look something like this
# + Collapsed="false"
Image.open('imgs/purple_kyoto.jpg')
# + Collapsed="false"
| 7,627 |
/Featureselection1.ipynb | 6436b45c27083bb89cf6a9ae3aa1bd3bcc1ff9f1 | [] | no_license | gabibu/unsupervisedLearning | https://github.com/gabibu/unsupervisedLearning | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 247,907 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Secret-key or symmetric cryptography
#
# ## 1 DES S-box $S_3$
#
# The input to the DES S-box $S_3$ is $110111$. What’s the output? Use Wikipedia, google, a book or some other source to find the table for $S_3$.
# Source: http://celan.informatik.uni-oldenburg.de/kryptos/info/des/sbox/
# ![Des-Box3.png](img/Des-Box3.png)
#
# Output: 0011
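# A small sketch beyond what the task asks for, showing how the row and column indices are read off the 6-bit input (outer bits select the row, the middle four bits select the column); the value itself is then looked up in the $S_3$ table linked above.
# +
bits = '110111'                    # the S-box input from the exercise
row = int(bits[0] + bits[-1], 2)   # outer bits -> row 3
col = int(bits[1:-1], 2)           # middle four bits -> column 11
print(row, col)                    # the table above gives S3[3][11] = 3 = 0b0011
# -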
# ## 2 3DES
#
# What is the effective key size of 3DES and why is it not 168 bits?
# + active=""
# The effective key size is 112 bits, not 168, because the meet-in-the-middle attack reduces the effort of breaking three chained 56-bit keys to roughly 2^112 operations.
# -
# ## 3 Differences between AES and Rijndael
#
# What are the differences between the AES candidate Rijndael and AES with respect to block size, key size and number of rounds?
# As described in "[The Design of Rijndael](https://www.springer.com/us/book/9783540425809)": "The _only_ difference between Rijndael and the AES is the range of supported values for the block length and cipher key length".
#
# Rijndael is a block cipher with both a variable block length and a variable key length. The block length and the key length can be independently specified to any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits. It would be possible to define versions of Rijndael with a higher block length or key length, but currently there seems no need for it.
#
# The AES fixes the block length to 128 bits, and supports key lengths of 128, 192 or 256 bits only. The extra block and key lengths in Rijndael were not evaluated in the AES selection process, and consequently they are not adopted in the current FIPS standard.
# ## 4 AES S-box
#
# If we input the byte $11011101$ into the AES S-box, what’s the output? Use the table in slides!
# high nibble $1101 \rightarrow \mathrm{D} \rightarrow$ row
#
# low nibble $1101 \rightarrow \mathrm{D} \rightarrow$ column
#
# $11011101 \rightarrow \mathrm{C1} \rightarrow 11000001$
#
# ![AES-S-Box.png](img/AES-S-Box.png)
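# A short sketch, not required by the task, of how the two nibbles are extracted from the input byte; the lookup itself still uses the S-box table from the slides.
# +
byte = 0b11011101
row = byte >> 4      # high nibble -> 0xD
col = byte & 0x0F    # low nibble  -> 0xD
print(hex(row), hex(col))  # the table entry at (D, D) is 0xC1 = 0b11000001
# -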
# ## 5 Other Block ciphers
#
# Compare DES, 3DES and AES with other block ciphers like IDEA, Blowfish, Twofish, RC5, RC6, Serpent and three more of your choice. Make a table that shows key size, effective key size, block size, number of rounds, and the relative speed of a hardware or software implementation.
# - https://pdfs.semanticscholar.org/e684/4c748d38997bf0de71cd7d05e58b09e310f6.pdf
# - https://www.cse.wustl.edu/~jain/cse567-06/ftp/encryption_perf/
# - http://www.ijcseonline.org/pub_paper/IJCSE-00187.pdf
#
# |Ciphers|key size|effective key size|block size|number of rounds|relative speed|
# |:--- |:--- |:--- |:--- |:--- |:--- |
# |DES|56 bits||64 bits|16|1|
# |3DES|112 bits||64 bits|48|0.3-0.5|
# |AES|128, 192 or 256 bits||128 bits|10, 12 or 14|0.6|
# |IDEA|128 bits||64 bits|8.5||
# |Blowfish|32-448 bits||64 bits|16|1.2-3|
# |Twofish||||||
# |RC5||||||
# |RC6|128, 192 or 256 bits||128 bits|20||
# ## 6 Modes of operation
#
# You should be able to produce sketches of the 5 modes of operation and to write down the equations relating IVs (if any), plaintext blocks, the key, ciphertext blocks, encryption, decryption and XOR.
# You should also understand the influence of a one-bit error in the ciphertext block.
# | Modes of Operation | Long Name | Cipher Type |
# |:--- |:--- |:--- |
# | ECB | Electronic Code Book Mode | Block |
# | CBC | Cipher Block Chaining Mode | Block |
# | CFB | Cipher FeedBack Mode | Stream |
# | OFB | Output FeedBack Mode| Stream |
# | CTR | Counter Mode | Stream |
# ### ECB
#
# ![Electronic CodeBook Mode Diagram](img/ECB_Diagram.png)
#
# #### Encryption
# $c_k = E(k, m_k),\ k=1,2,3,...$
#
# #### Decryption
# $m_k = D(k, c_k),\ k=1,2,3,...$
#
# #### Error Propagation
# An error in the ciphertext produces garbage output but does not propagate.
# ### CBC
#
# ![Cipher Block Chaining Mode Diagram](img/CBC_Diagram.png)
#
# #### Encryption
# $c_0 = IV$<br/>
# $c_k = E(k,m_k\oplus c_{k-1}),\ k = 1,2,3,...$
#
# #### Decryption
# $c_0 = IV$<br/>
# $m_k = D(k, c_k)\oplus c_{k-1},\ k = 1,2,3,...$
#
# #### Error Propagation
# An error in the ciphertext $c_k$ affects all bits of the corresponding plaintext $m_k$ and the one bit of $m_{k+1}$ with which the erroneous bit in $c_k$ is XOR-ed
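# To make the CBC equations above concrete, here is a toy sketch (not part of the exercise): the block cipher $E$ is replaced by a stand-in XOR with the key, which is of course not secure; only the chaining structure is illustrated.
# +
def toy_E(key, block):
    # stand-in "block cipher": XOR with the key (insecure, for illustration only, and its own inverse)
    return bytes(b ^ k for b, k in zip(block, key))
def cbc_encrypt(key, iv, blocks):
    prev, out = iv, []
    for m in blocks:
        c = toy_E(key, bytes(a ^ b for a, b in zip(m, prev)))  # c_k = E(k, m_k XOR c_{k-1})
        out.append(c)
        prev = c
    return out
def cbc_decrypt(key, iv, blocks):
    prev, out = iv, []
    for c in blocks:
        out.append(bytes(a ^ b for a, b in zip(toy_E(key, c), prev)))  # m_k = D(k, c_k) XOR c_{k-1}
        prev = c
    return out
key, iv = b'\x13\x37\x42\x99', b'\x00\x01\x02\x03'
message = [b'ABCD', b'EFGH']
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, message)) == message
# -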
# ### CFB
#
# ![Cipher FeedBack Mode Diagram](img/CFB_Diagram.png)
#
# #### Encryption
# $c_0 = IV$<br/>
# $c_i = m_i \oplus E(k, c_{i-1}),\ i=1,2,3,...$
#
# #### Decryption
# $c_0 = IV$<br/>
# $m_i = c_i \oplus E(k, c_{i-1}),\ i=1,2,3,...$
#
# #### Error Propagation
# An error in the cipher block $c_k$ produces one error in the plaintext block $m_k$ at the bit position where the error has occurred (as it is XOR-ed), and produces garbage in the next plaintext block $m_{k+1}$ as $E(k,c_{k_{faulty}})$ should produce a completely different output than $E(k, c_k)$, and therefore $c_{k+1}\oplus E(k,c_{k_{faulty}})$ should be complete gibberish.
# ### OFB
#
# ![Output FeedBack Mode Diagram](img/OFB_Diagram.png)
#
# #### Encryption
# $z_0 = IV$<br/>
# $z_i = E_k(z_{i-1}),\ i=1,2,3,...$<br/>
# $c_i = m_i\oplus z_i,\ i=1,2,3,...$
#
# #### Decryption
# $z_0 = IV$<br/>
# $z_i = E_k(z_{i-1}),\ i=1,2,3,...$<br/>
# $m_i = c_i\oplus z_i,\ i=1,2,3,...$
#
# #### Error Propagation
# An error in cipher bit $c_i$ leads to an erroneous bit $m_i$ but does not propagate.
# ### CTR
#
# ![Counter Mode Diagram](img/CTR_Diagram.png)
#
# #### Encryption
# $z_0 = IV$<br/>
# $z_i = IV\oplus i,\ i=1,2,3,...$<br/>
# $y_i = x_i\oplus E_k(z_i),\ i=1,2,3,...$
#
# #### Decryption
# $z_0 = IV$<br/>
# $z_i = IV\oplus i,\ i=1,2,3,...$<br/>
# $x_i = y_i\oplus E_k(z_i),\ i=1,2,3,...$
#
# #### Note on the IV
# The IV should be a nonce, but the same nonce can be used throughout the session. Its main goal is to offset the counter's starting point, so that using the same key and first message does not generate the same ciphertext (think of handshakes/authentication).
#
# #### Error Propagation
# An error in $y_0$ generates one error in the decrypted $x_0$, but does not propagate.
# ## 7 RC4
#
# Use python in Jupyter Notebook to programm RC4. Do some research on RC4 and find out, why it should not be used any more!
# See also [Webbrowser: Endgültig Schluss mit RC4](https://www.heise.de/security/meldung/Webbrowser-Endgueltig-Schluss-mit-RC4-2805770.html) and [Der Lange Abschied von RC4](https://www.golem.de/news/verschluesselung-der-lange-abschied-von-rc4-1507-114877.html).
# +
def KSA(key):
keylength = len(key)
S = list(range(256))
j = 0
for i in range(256):
j = (j + S[i] + key[i % keylength]) % 256
S[i], S[j] = S[j], S[i]
return S
def PRGA(S):
i = 0
j = 0
while True:
i = (i + 1) % 256
j = (j + S[i]) % 256
S[i], S[j] = S[j], S[i]
yield S[(S[i] + S[j]) % 256]
def RC4(key):
S = KSA(key)
return PRGA(S)
def convert_key(s):
return [ord(c) for c in s]
# +
key = "Key"
plaintext = "Plaintext"
# ciphertext should be BBF316E8D940AF0AD3
key = convert_key(key)
keystream = RC4(key)
import sys
for c in plaintext:
sys.stdout.write("%02X" % (ord(c) ^ next(keystream)))
# -
# Vulnerabilities:
#
# - Pseudo Random Number Generator PRNG has higher probabilities for some numbers to appear.<br/>
# This lets an attacker analyse some input/output-pairs and find out the key
# - No nonce as input therefore it needs a new key for each stream.<br/>
# Since most applications just concatenate the nonce and the key, this is a problem because "over all possible RC4 keys, the statistics for the first few bytes of output keystream are strongly non-random, leaking information about the key."
# ## 8 Trivium
#
# Use python in Jupyter Notebook to programm Trivium. This is not an easy task: do it in groups of two!
#
# Use $0x00000000000000000000000000000000$ for the key, IV, and plaintext for initial testing.
#
# The expected ciphertext for this should be $0xFBE0BF265859051B517A2E4E239FC97F$.
#
# In the algorithm on slide “_Trivium — Initialization_”, the $+$ represents XOR (which in python is “^”), ·
# represents logical AND (which in python is “&”). The key-stream is
#
# $z_i = t_1 + t_2 + t_3$
#
# and the $i$th byte of the ciphertext $c_i$ of the plaintext $m_i$ is
#
# $c_i = z_i \oplus m_i$
#
# The following [site](https://asecuritysite.com/encryption/trivium) might be of some help!
# +
from collections import deque
from itertools import repeat
from sys import version_info
class Trivium:
def __init__(self, key, iv):
"""in the beginning we need to transform the key as well as the IV.
Afterwards we initialize the state."""
self.state = None
self.counter = 0
self.key = key # self._setLength(key)
self.iv = iv # self._setLength(iv)
# Initialize state
# len 93
init_list = list(map(int, list(self.key)))
init_list += list(repeat(0, 13))
# len 84
init_list += list(map(int, list(self.iv)))
init_list += list(repeat(0, 4))
# len 111
init_list += list(repeat(0, 108))
init_list += list([1, 1, 1])
self.state = deque(init_list)
# Do 4 full cycles, drop output
for i in range(4*288):
self._gen_keystream()
def encrypt(self, message):
"""To be implemented"""
pass
def decrypt(self, cipher):
"""To be implemented"""
#maybe with code from here https://github.com/mortasoft/Trivium/blob/master/trivium.py
# Line 119
pass
def keystream(self):
"""output keystream
only use this when you know what you are doing!!"""
while self.counter < 2**64:
self.counter += 1
yield self._gen_keystream()
def _setLength(self, input_data):
"""we cut off after 80 bits, alternatively we pad these with zeros."""
input_data = "{0:080b}".format(input_data)
if len(input_data) > 80:
input_data = input_data[:(len(input_data)-81):-1]
else:
input_data = input_data[::-1]
return input_data
def _gen_keystream(self):
"""this method generates triviums keystream"""
t_1 = self.state[65] ^ self.state[92]
t_2 = self.state[161] ^ self.state[176]
t_3 = self.state[242] ^ self.state[287]
out = t_1 ^ t_2 ^ t_3
u_1 = t_1 ^ self.state[90] & self.state[91] ^ self.state[170]
u_2 = t_2 ^ self.state[174] & self.state[175] ^ self.state[263]
u_3 = t_3 ^ self.state[285] & self.state[286] ^ self.state[68]
self.state.rotate(1)
self.state[0] = u_3
self.state[93] = u_1
self.state[177] = u_2
return out
import sys
k1="00000000000000000000"
i1="00000000000000000000"
print ("Key: "+k1)
print ("IV: "+i1)
def main():
KEY = hex_to_bits(k1)[::-1]
IV = hex_to_bits(i1)[::-1]
trivium = Trivium(KEY, IV)
next_key_bit = trivium.keystream().__next__
for i in range(1):
keystream = []
for j in range(128):
keystream.append(next_key_bit())
print ("Stream: "+bits_to_hex(keystream))
# Convert strings of hex to strings of bytes and back, little-endian style
_allbytes = dict([("%02X" % i, i) for i in range(256)])
def _hex_to_bytes(s):
return [_allbytes[s[i:i+2].upper()] for i in range(0, len(s), 2)]
def hex_to_bits(s):
return [(b >> i) & 1 for b in _hex_to_bytes(s)
for i in range(8)]
def bits_to_hex(b):
return "".join(["%02X" % sum([b[i + j] << j for j in range(8)])
for i in range(0, len(b), 8)])
if __name__ == "__main__":
main()
# -
# ## 9 OTP
#
# Make your own example with one-time pad. Why is it perfectly secure? Make sure, the key is truly random not used more than once and kept secret from adversaries.
# $m = 0110100001100101011011000110110001101111001000000111011101101111011100100110110001100100$<br />
# $k = 0110110111011101100100110001101100000001010001110010110111101010101110010001101100011100$
# +
m = '0110100001100101011011000110110001101111001000000111011101101111011100100110110001100100'
k = '0110110111011101100100110001101100000001010001110010110111101010101110010001101100011100'
c = int(m,2)^int(k,2)
print('m: ' + m)
print('k: ' + k)
print('c: ' + bin(c)[2:].zfill(len(m)))
print('d: ' + bin(c^int(k,2))[2:].zfill(len(m)))
print('m: ' + m)
| 12,425 |
/code_back_up/backuped_on_sharefolder_2021-01-06_000/00227_Performance_measurement_updated_1221.ipynb | 8335f10935d9168b511306fb2b27f3a6bd5534ca | [] | no_license | TrellixVulnTeam/jian_projects_AAPL | https://github.com/TrellixVulnTeam/jian_projects_AAPL | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 82,546 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Coursework 2: Data Processing
#
# ## Task 1
# This coursework will assess your understanding of using NoSQL to store and retrieve data. You will perform operations on data from the Enron email dataset in a MongoDB database, and write a report detailing the suitability of different types of databases for data science applications. You will be required to run code to answer the given questions in the Jupyter notebook provided, and write a report describing alternative approaches to using MongoDB.
#
# Download the JSON version of the Enron data (use “Download as zip” to get the data file from http://edshare.soton.ac.uk/19548/; the file is about 380MB) and import it into a collection called messages in a database called enron. You do not need to set up any authentication. In the Jupyter notebook provided, perform the following tasks, using the Python PyMongo library.
#
# Answers should be efficient in terms of speed. Answers which are less efficient will not get full marks.
import pymongo
from pymongo import MongoClient
from datetime import datetime
from pprint import pprint
# ### 1)
# Write a function which returns a MongoDB connection object to the "messages" collection. [4 points]
# + nbgrader={"grade": false, "grade_id": "get_collection", "locked": false, "schema_version": 1, "solution": true}
def get_collection():
"""
Connects to the server, and returns a collection object
of the `messages` collection in the `enron` database
"""
# YOUR CODE HERE
return None
# -
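# For reference only (not the graded answer): a minimal sketch of what the PyMongo call chain could look like, assuming MongoDB runs locally on the default port 27017 with no authentication, as stated in the task.
# +
def get_collection_sketch():
    """Reference sketch: connect to a local, unauthenticated MongoDB and return the messages collection."""
    client = MongoClient('localhost', 27017)
    return client['enron']['messages']
# -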
# ### 2)
#
# Write a function which returns the amount of emails in the messages collection in total. [4 points]
# + nbgrader={"grade": false, "grade_id": "get_amount_of_messages", "locked": false, "schema_version": 1, "solution": true}
def get_amount_of_messages(collection):
"""
:param collection A PyMongo collection object
:return the amount of documents in the collection
"""
# YOUR CODE HERE
pass
# -
# ### 3)
#
# Write a function which returns each person who was BCCed on an email. Include each person only once, and display only their name according to the X-To header. [4 points]
#
#
# + nbgrader={"grade": false, "grade_id": "get_bcced_people", "locked": false, "schema_version": 1, "solution": true}
def get_bcced_people(collection):
"""
:param collection A PyMongo collection object
:return the names of the people who have received an email by BCC
"""
# YOUR CODE HERE
pass
# -
# ### 4)
#
# Write a function with parameter subject, which gets all emails in a thread with that subject and orders them by date (ascending). “An email thread is an email message that includes a running list of all the succeeding replies starting with the original email.”; see https://www.techopedia.com/definition/1503/email-thread for a detailed description. [4 points]
# + nbgrader={"grade": false, "grade_id": "get_emails_in_thread", "locked": false, "schema_version": 1, "solution": true}
def get_emails_in_thread(collection, subject):
"""
:param collection A PyMongo collection object
:return All emails in the thread with that subject
"""
# YOUR CODE HERE
pass
# -
# ### 5)
#
# Write a function which returns the percentage of emails sent on a weekend (i.e., Saturday and Sunday) as a `float` between 0 and 1. [6 points]
# + nbgrader={"grade": false, "grade_id": "get_percentage_sent_on_weekend", "locked": false, "schema_version": 1, "solution": true}
def get_percentage_sent_on_weekend(collection):
"""
:param collection A PyMongo collection object
:return A float between 0 and 1
"""
# YOUR CODE HERE
pass
# -
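# For reference only, a hedged client-side sketch (not the graded answer): it assumes each message stores its RFC-2822 date string under the `headers.Date` field of this JSON dump and checks the weekday in Python; a server-side aggregation would be faster.
# +
from email.utils import parsedate
def get_percentage_sent_on_weekend_sketch(collection):
    weekend, total = 0, 0
    for doc in collection.find({}, {'headers.Date': 1}):
        parsed = parsedate(doc.get('headers', {}).get('Date', ''))
        if parsed is None:
            continue
        total += 1
        # datetime.weekday(): Monday == 0 ... Sunday == 6
        if datetime(*parsed[:6]).weekday() >= 5:
            weekend += 1
    return float(weekend) / total if total else 0.0
# -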
# ### 6)
#
# Write a function with parameter limit. The function should return for each email account: the number of emails sent, the number of emails received, and the total number of emails (sent and received). Use the following format: [{"contact": "[email protected]", "from": 42, "to": 92, "total": 134}] and the information contained in the To, From, and Cc headers. Sort the output in descending order by the total number of emails. Use the parameter limit to specify the number of results to be returned. If limit is null, the function should return all results. If limit is higher than null, the function should return the number of results specified as limit. limit cannot take negative values. [10 points]
# + nbgrader={"grade": false, "grade_id": "get_emails_between_contacts", "locked": false, "schema_version": 1, "solution": true}
def get_emails_between_contacts(collection, limit):
"""
Shows the communications between contacts
Sort by the descending order of total emails using the To, From, and Cc headers.
:param `collection` A PyMongo collection object
:param `limit` An integer specifying the amount to display, or
if null will display all outputs
:return A list of objects of the form:
[{
'contact': <<Another email address>>
'from':
'to':
'total':
},{.....}]
"""
# YOUR CODE HERE
pass
# -
# ### 7)
# Write a function to find out the number of senders who were also direct receivers. Direct receiver means the email is sent to the person directly, not via cc or bcc. [4 points]
def get_from_to_people(collection):
"""
:param collection A PyMongo collection object
:return the NUMBER of the people who have sent emails and received emails as direct receivers.
"""
# YOUR CODE HERE
pass
# ### 8)
# Write a function with parameters start_date and end_date, which returns the number of email messages that have been sent between those specified dates, including start_date and end_date [4 points]
def get_emails_between_dates(collection, start_date, end_date):
"""
:param collection A PyMongo collection object
:return All emails between the specified start_date and end_date
"""
# YOUR CODE HERE
pass
# ## Task 2
# This task will assess your ability to use the Hadoop Streaming API and MapReduce to process data. For each of the questions below, you are expected to write two python scripts, one for the Map phase and one for the Reduce phase. You are also expected to provide the correct parameters to the `hadoop` command to run the MapReduce process. Write down your answers in the specified cells below.
#
# To get started, you need to download and unzip the YouTube dataset (available at http://edshare.soton.ac.uk/19547/) onto the machine where you have Hadoop installed (this should be the virtual machine provided).
#
# To help you, `%%writefile` has been added to the top of the cells, automatically writing them to "mapper.py" and "reducer.py" respectively when the cells are run.
# ### 1)
# Using Youtube01-Psy.csv, find the hourly interval in which most spam was sent. The output should be in the form of a single key-value pair, where the value is a datetime at the start of the hour with the highest number of spam comments. [9 points]
from datetime import datetime
import csv
import sys
# +
# DEBUGGING SCRIPT FOR MAPPER
dates = [
'2013-11-07T06:20:48',
'2013-11-07T12:37:15',
'2014-01-19T04:27:18',
'2014-01-19T08:55:53',
'2014-01-19T20:31:10'
]
spam_class = [1,1,0,0,1]
for x in range(len(dates)):
if spam_class[x] == 1:
date = dates[x].strip()
date_as_date = datetime.strptime(date, '%Y-%m-%dT%H:%M:%S')
day = date_as_date.date().day
month = date_as_date.date().month
year = date_as_date.date().year
hour = date_as_date.hour
print (str(day) + '|' + str(month) + '|' + str(year) + '|' + str(hour) + '\t' + '1')
# +
test = [1,2,3]
test = test[1:]
# +
# %%writefile mapper.py
# #!/usr/bin/env python
# MAPPER
import csv
import sys
from datetime import datetime
lines = sys.stdin.readlines()
csvreader = csv.reader(lines)
dates = []
spam_class = []
input_for_reducer = []
counter = 0
for row in csvreader:
if counter > 0:
dates.append(row[2])
spam_class.append(row[4])
counter += 1
if (len(dates) != len(spam_class)):
print ('Unequal number of entries in Date and Class columns... Aborting...')
sys.exit()
for x in range(len(dates)):
if spam_class[x] == '1':
date = dates[x].strip()
date_as_date = datetime.strptime(date, '%Y-%m-%dT%H:%M:%S')
day = date_as_date.date().day
month = date_as_date.date().month
year = date_as_date.date().year
hour = date_as_date.hour
print (str(day) + '|' + str(month) + '|' + str(year) + '|' + str(hour) + '\t' + '1')
# -
# If the dates in our input file are arranged such that the dates (at an hourly interval) occur in groups, we can perform the Reduce operation in linear time.
#
# It is observed in the data that the column 'Date' is indeed sorted in ascending order
#
# So the dates (at an hourly interval) are in groups
#
#
#
# +
# DEBUGGING SCRIPT FOR REDUCER
input_pairs = [
'7|11|2013|6 1',
'7|11|2013|6 1',
'7|11|2013|12 1',
'7|11|2013|12 1',
'7|11|2013|12 1',
'19|1|2014|20 1'
]
dates_list = []
date_count_dict = dict()
final_dict = {
'hour_with_most_spam': None,
'value_of_max_spam_count': 0
}
for input_pair in input_pairs:
input_list = input_pair.split('\t', 1)
if (len(input_list) != 2):
continue
dates_list.append(input_list[0])
dates_list
for date in dates_list:
if date in date_count_dict.keys():
date_count_dict[date] += 1
else:
date_count_dict[date] = 1
date_count_dict_sorted = sorted(date_count_dict.items(), key=lambda date_count_value: date_count_value[1],
reverse=True)
final_dict['hour_with_most_spam'] = date_count_dict_sorted[0][0]
final_dict['value_of_max_spam_count'] = date_count_dict_sorted[0][1]
final_dict
# +
# %%writefile reducer.py
# #!/usr/bin/env python
# REDUCER
import sys
from datetime import datetime
input_pairs = sys.stdin.readlines()
dates_list = []
date_count_dict = dict()
final_dict = {
'hour_with_most_spam': None,
'value_of_max_spam_count': 0
}
for input_pair in input_pairs:
input_list = input_pair.split('\t', 1)
if (len(input_list) != 2):
continue
dates_list.append(input_list[0])
dates_list
for date in dates_list:
if date in date_count_dict.keys():
date_count_dict[date] += 1
else:
date_count_dict[date] = 1
date_count_dict_sorted = sorted(date_count_dict.items(), key=lambda date_count_value: date_count_value[1],
reverse=True)
final_dict['hour_with_most_spam'] = date_count_dict_sorted[0][0]
final_dict['value_of_max_spam_count'] = date_count_dict_sorted[0][1]
for key, value in final_dict.items():
print (key + "\t" + str(value))
# +
myList = [1,1,1,2,2,2,2,3,3]
max_count = 1
max_elem = myList[0]
curr_count = 1
for x in range(1, len(myList)):
if (myList[x] == myList[x-1]):
# same elem, inc counter
curr_count += 1
else:
# diff elem
if curr_count > max_count:
max_count = curr_count
max_elem = myList[x - 1]
curr_count = 1
# last element check
if curr_count > max_count:
max_count = curr_count
max_elem = myList[x - 1]
print (max_elem)
# + language="bash"
# cat ./Youtube01-Psy.csv | ./mapper.py | ./reducer.py
# + language="bash"
#
# # Clear output
# rm -rf output1
#
# # Make sure hadoop is in standalone mode
# hadoop-standalone-mode.sh
#
# # Main pipeline command
# hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
# -files mapper.py,reducer.py \
# -input Youtube01-Psy.csv \
# -mapper ./mapper.py \
# -reducer ./reducer.py \
# -output output1
# + language="bash"
# #Hadoop command to run the map reduce.
#
# hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
# -files mapper.py,reducer.py \
# -input Youtube01-Psy.csv \
# -mapper ./mapper.py \
# -reducer ./reducer.py \
# -output output
# +
#Expected key-value output format:
#hour_with_most_spam "2013-11-10T10:00:00"
#Additional key-value pairs are acceptable, as long as the hour_with_most_spam pair is correct.
# -
# ### 2)
# Find all comments associated with a username (the AUTHOR field). Return a JSON array of all comments associated with that username. (This should use the data from all 5 data files: Psy, KatyPerry, LMFAO, Eminem, Shakira) [11 points]
# +
# %%writefile mapper1.py
# #!/usr/bin/env python
#Answer for mapper.py
# importing the libraries
import csv
import sys
def mapper_function(required_username):
# function that accepts an username as input
# counter keeps track of number of rows left, so that we can skip the first row (headers)
counter = 0
for row in csvreader:
if counter > 0:
usernames.append(row[1])
comments.append(row[3])
counter += 1
if (len(usernames) != len(comments)):
print ('Unequal number of entries in Author and Content... Aborting...')
sys.exit()
# pass the required username and the comments for that username to reducer stage
for x in range(len(usernames)):
if required_username == usernames[x]:
print (str(usernames[x]) + '\t' + str(comments[x]))
lines = sys.stdin.readlines()
# read from csv
csvreader = csv.reader(lines)
usernames = []
comments = []
# get username from command line argument
required_username = str(sys.argv[1])
mapper_function(required_username)
# +
# %%writefile reducer1.py
# #!/usr/bin/env python
#Answer for reducer.py
import sys
final_dict = {
'username': None,
'comments': []
}
# get input from mapper job
input_pairs = sys.stdin.readlines()
for input_pair in input_pairs:
# split the tab separated input (username\tcomment)
input_list = input_pair.split('\t', 1)
if (len(input_list) != 2):
continue
# append each comment
final_dict['comments'].append(input_list[1])
# set the username if it is not set
if final_dict['username'] is None:
final_dict['username'] = input_list[0]
# print out the output in desired form: username\t[..comments..]
print (final_dict.values()[0] + '\t' + str(final_dict.values()[1]))
# + language="bash"
# cat ./test_files/Youtube02-KatyPerry.csv ./test_files/Youtube01-Psy.csv \
# ./test_files/Youtube03-LMFAO.csv ./test_files/Youtube04-Eminem.csv ./test_files/Youtube05-Shakira.csv | ./mapper1.py 'Mini' | ./reducer1.py
# + language="bash"
#
# # Clear output
# rm -rf output2
#
# # Make sure hadoop is in standalone mode
# hadoop-standalone-mode.sh
#
# # Main pipeline command
# hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
# -files mapper1.py,reducer1.py \
# -input ./test_files/Youtube01-Psy.csv ./test_files/Youtube02-KatyPerry.csv ./test_files/Youtube03-LMFAO.csv \
# -mapper 'mapper1.py Mini' -file ./mapper1.py \
# -reducer ./reducer1.py \
# -output output2
# +
#Expected key-value output format:
#John Smith ["Comment 1", "Comment 2", "Comment 3", "etc."]
#Jane Doe ["Comment 1", "Comment 2", "Comment 3", "etc."]
# -
cel(writer,"audience_by_group_both",index=False)
df_output_audience_tireonly.to_excel(writer,"audience_by_group_tireonly",index=False)
df_both_by_trans.to_excel(writer,"trans_detail",index=False)
df_output_by_store_period.to_excel(writer,"store_by_each_period",index=False)
writer.save()
# -
df_both_by_trans.shape
| 15,859 |
/Jupyter/02. Python para ciencia de datos intermedio/Python para ciencia de datos intermedio.ipynb | d86c329ce86a16fff6ee4355e0d700457fd55cb7 | [] | no_license | juanesoc/Curso-python | https://github.com/juanesoc/Curso-python | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 291,183 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Controlling a treasure-hunting robot in a maze
#
# In this project you will use the knowledge you have just learned to write code that controls a robot walking in a simulated environment and finding the target treasure.
#
# The robot's simulated environment contains the following elements: the robot's starting point, obstacles, and a treasure chest. Your tasks include:
#
# 1. Analyzing the simulated environment data
# 2. Making the robot move randomly
# 3. (Optional) Making the robot walk to the goal
#
#
# * Well-commented code is much more readable; try to add appropriate comments to your own code.
# ---
#
# ---
#
# ## Section 1: Analyzing the simulated environment data
#
# First of all, our robot can only find the target treasure if it knows its environment well enough, so let's start by analyzing the data of the environment the robot is placed in. This part tests your understanding of data structures and control flow.
#
# ### 1.1 Understanding how the simulated environment data is stored
#
# First, let's think about the following question: how can we store the data of the simulated environment?
#
# We abstract our simulated environment as a grid world where every cell is labeled by its coordinates; each cell can be in one of four states: an ordinary cell (passable), the robot's starting point (passable), an obstacle (impassable), or the treasure chest (the goal). For example, a simulated environment can be abstracted as a grid world with 3 rows and 4 columns and stored like this:
# ```
# environment = [[0,0,0,2],
# [1,2,0,0],
# [0,2,3,2]]
# ```
# We use a list to store the data of the virtual world. Every element of the outer list is itself a list that represents one row of the simulated environment, and every element of such a row is a number with the following meaning:
# - 0: ordinary cell (passable)
# - 1: the robot's starting point (passable)
# - 2: obstacle (impassable)
# - 3: treasure chest (the goal)
#
# So, according to the data above, the cell in row 2, column 1 of this maze is our robot's starting point.
#
# __Note: the maze positions we describe (e.g. row 1, column 1) are not the same as the maze index values (such as `(0,0)`); be careful with the indexing.__
#
#
# The code below uses a helper function to read the simulated environment data and store it in the `env_data` variable.
#
# +
import helper
env_data = helper.fetch_maze()
# -
# ---
#
#
# **Task 1:** In the code below, write code to obtain the following values:
#
# 1. The numbers of rows and columns of the simulated environment
# 2. The element in row 3, column 6 of the simulated environment
# +
#DONE 1: number of rows of the simulated environment
rows = len(env_data)
#DONE 2: number of columns of the simulated environment
columns = len(env_data[0])
#DONE 3: take the element in row 3, column 6 of the simulated environment
row_3_col_6 = env_data[2][5]
print("The maze has", rows, "rows and", columns, "columns; the element in row 3, column 6 is", row_3_col_6)
# -
# ---
#
# ## 1.2 Analyzing the simulated environment data
#
# Next we need to analyze the data of the simulated environment. Following the instructions below, compute the corresponding values.
#
# ---
#
# **Task 2:** In the code below, count the number of obstacles in the first row and in the third column of the simulated environment.
#
# Hint: *this can be done with a loop.*
# +
#DONE 4: count the number of obstacles in the first row of the simulated environment.
def count_row(list, num):
total = 0
for i in list:
if i == num:
total += 1
return total
number_of_barriers_row1 = count_row(env_data[0], 2)
#DONE in half 5: count the number of obstacles in the third column of the simulated environment.
def count_col(list, num, col):
total = 0
tem_col = []
for i in range(len(list)):
tem_col.append(list[i][col - 1])
    # compare against the requested value instead of shadowing the `num` parameter
    for item in tem_col:
        if item == num:
            total += 1
return total
number_of_barriers_col3 = count_col(env_data, 2, 3)
print("In the maze, the first row contains", number_of_barriers_row1, "obstacles and the third column contains", number_of_barriers_col3, "obstacles.")
# -
# %run -i -e test.py RobotControllortTestCase.test_cal_barriers
# ---
#
# **Task 3:** In the code below:
#
# 1. Create a dictionary named `loc_map` with two keys, `start` and `destination`, whose values are the coordinates of the starting point and of the goal, stored as tuples of the form `(0,0)`.
# 2. Take the value stored under `start` and save it in the variable `robot_current_loc`; this variable represents the robot's current location.
# +
loc_map = {'start':(0, 8), 'destination':(0, 0)} #Done 6: create the dictionary as described above
robot_current_loc = loc_map['start'] #done 7: save the robot's current location
# -
# %run -i -e test.py RobotControllortTestCase.test_cal_loc_map
#
# ---
#
# ---
#
# ## Section 2: Making the robot take a random walk
#
# In this step you will issue commands that make the robot move randomly in the environment. It tests your knowledge of control flow and of calling functions.
#
#
# ## 2.1 Controlling the robot's moves
#
# Our robot can execute four actions: move up `u`, move down `d`, move left `l` and move right `r`. However, because of the obstacles, a move will often fail. So here you need to implement a function that decides whether the robot, at a given position, can execute a given move.
#
# ---
#
# **Task 4:** In the code below, implement a function named `is_move_valid_special` with two inputs: the robot's location coordinates `loc` and the action to be executed `act`, e.g. `(1,1)` and `u`. It returns a boolean that tells whether the robot at location `loc` can execute the action `act`.
#
#
# Hint 1: *you can read the `env_data` variable defined above to access the simulated environment data.*
# Hint 2: *after implementing the function, please remove the `pass` statement below.*
#
# Hint 3: *we need to handle the boundary case: when the robot reaches the border of the virtual environment, it cannot step outside of it.*
# 提示3:*我们需要处理边界的情况,即机器人走到了虚拟环境边界时,是不能够走出虚拟环境的。*
# +
def is_move_valid_special(loc, act):
"""
Judge wether the robot can take action act
at location loc.
Keyword arguments:
loc -- tuple, robots current location
act -- string, robots meant action
"""
#DONE IN HALF
row = loc[0]
col = loc[1]
if act == 'u':
if row == 0:
return False
elif env_data[row - 1][col] != 2:
return True
elif act == 'd':
if row == len(env_data) - 1:
return False
elif env_data[row + 1][col] != 2:
return True
elif act == 'l':
if col == 0:
return False
elif env_data[row][col - 1] != 2:
return True
elif act == 'r':
if col == len(env_data[0]) - 1:
return False
elif env_data[row][col + 1] != 2:
return True
# -
# %run -i -e test.py RobotControllortTestCase.test_is_move_valid_special
# ---
# **Task 5:** In the code below, re-implement a function named `is_move_valid`, now with three inputs: the simulated environment data `env_data`, the robot's location coordinates `loc`, and the action to be executed `act`. As before, the return value is a boolean that tells whether the robot at location `loc` in the given virtual environment can execute the action `act`.
def is_move_valid(env_data, loc, act):
"""
Judge wether the robot can take action act
at location loc.
Keyword arguments:
env -- list, the environment data
loc -- tuple, robots current location
act -- string, robots meant action
"""
#TODO 9
pass
# %run -i -e test.py RobotControllortTestCase.test_is_move_valid
# ---
#
# **Task 6:** Please answer:
# 1. How does the variable `env_data` differ between the two functions implemented in Task 4 and Task 5?
# 2. When calling the ``is_move_valid`` function with arguments ``env_data_``, ``loc_``, ``act_``, will modifying ``env_data`` inside the function change the value of ``env_data_``? Why?
#
# Hint: _try answering question 1 from the point of view of variable scope._
#
#
# Hint: _try answering question 2 from the point of view of mutable and immutable types._
#
#
# **Answer:** (write your answer here)
# ---
#
# ## 2.2 The robot's valid actions
#
# ---
#
# **Task 7:** Write a function named `valid_actions`. It has two inputs: the virtual environment data `env_data` and the robot's location `loc`. The output is a list of all actions the robot can take at that location (a hedged sketch is given after the TODO cell below).
#
# Hint: *try calling the `is_move_valid` function defined above.*
#
# +
## TODO 10: define and implement your function from scratch
# -
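# A hedged sketch of how `valid_actions` could be built on top of `is_move_valid`; your own answer still belongs in the TODO cell above.
# +
def valid_actions_sketch(env_data, loc):
    """Return the list of actions the robot can take at location loc."""
    return [act for act in ['u', 'd', 'l', 'r'] if is_move_valid(env_data, loc, act)]
# -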
# %run -i -e test.py RobotControllortTestCase.test_valid_actions
# ---
#
# ## 2.3 Moving the robot
#
# When the robot receives an action, its location should change accordingly.
#
# **Task 8:** Write a function named `move_robot` with two inputs: the robot's current location `loc` and the action to be executed `act`. It returns the robot's new location `new_loc` after the action has been executed (a hedged sketch is given after the TODO cell below).
# +
##TODO 11: define and implement your function from scratch
# -
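# A hedged sketch (again, not the graded answer) of `move_robot`: each action simply shifts the row or column index by one.
# +
def move_robot_sketch(loc, act):
    """Return the new location reached from loc by executing action act."""
    offsets = {'u': (-1, 0), 'd': (1, 0), 'l': (0, -1), 'r': (0, 1)}
    d_row, d_col = offsets[act]
    return (loc[0] + d_row, loc[1] + d_col)
# -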
# %run -i -e test.py RobotControllortTestCase.test_move_robot
# ---
#
# ## 2.4 Moving the robot at random
#
# Next, let's try moving the robot around the virtual environment at random and see what happens.
#
# **Task 9:** Write a function named `random_choose_actions` with two inputs: the virtual environment data `env_data` and the robot's location `loc`. The robot executes a loop of 300 iterations, and in each iteration it performs the following steps (a hedged sketch is given after the TODO cell below):
#
# 1. Use the `valid_actions` function defined above to find the actions the robot can take at its current location;
# 2. Use the `choice` function from the `random` library to pick one action at random from the valid actions;
# 3. Then, based on this action, use the `move_robot` function defined above to move the robot and update its location;
# 4. When the robot reaches the goal, print "Found the treasure in round n!".
#
# Hint: if the robot cannot find the treasure within 300 rounds, try increasing this number; it may work nicely :P
# +
##TODO 12: implement your function from scratch
# -
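# A hedged sketch of the random walk described above; it assumes `valid_actions` and `move_robot` are already implemented and that reaching a cell whose value is 3 means the treasure has been found. Your own answer still belongs in the TODO cell above.
# +
import random
def random_choose_actions_sketch(env_data, loc):
    for i in range(1, 301):
        act = random.choice(valid_actions(env_data, loc))
        loc = move_robot(loc, act)
        if env_data[loc[0]][loc[1]] == 3:
            print("Found the treasure in round {}!".format(i))
            return
# -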
# Run
random_choose_actions(env_data, robot_current_loc)
#
# ---
#
# ---
#
# ## (Optional) Section 3: Making the robot walk to the goal
#
# ## 3.1 Making the robot walk to the goal
#
# Here you will combine all of the knowledge above and write code that makes the robot walk to the goal. This task may be somewhat challenging for a beginner, so it is optional.
#
# **Task 10**: Try to implement an algorithm that, for a given simulated environment, outputs a sequence of moves that brings the robot to the goal.
#
# Hint: _you may want to look at:_
# * depth-first / breadth-first search,
# with the following references:
# 1. https://blog.csdn.net/raphealguo/article/details/7523411
# 2. https://www.cnblogs.com/yupeng/p/3414736.html
# * the A* algorithm,
# with the following references:
# 1. https://baike.baidu.com/item/A%2A算法
# 2. https://blog.csdn.net/hitwhylz/article/details/23089415
# +
##TODO 13: implement your algorithm
# -
# > Note: once you have written all the code and answered all the questions, you can export your iPython Notebook as an HTML file. In the menu bar, choose **File -> Download as -> HTML (.html)** and submit this HTML together with the iPython notebook as your assignment.
| 6,789 |
/hw2/Alexander_Telepov_hw2_p2.ipynb | 05bf7d0796fb7a35670f223717f0f8d1a4c0d287 | [] | no_license | alexander-telepov/ml-course-skoltech | https://github.com/alexander-telepov/ml-course-skoltech | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 431,082 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="bWEEBnVC-Irv"
# # Home Assignment No. 2 - part two
#
# To solve this task, you will write a lot of code to try several machine learning methods for classification and regression.
# * You are **HIGHLY RECOMMENDED** to read relevant documentation, e.g. for [python](https://docs.python.org/3/), [numpy](https://docs.scipy.org/doc/numpy/reference/), [matplotlib](https://matplotlib.org/) and [sklearn](https://scikit-learn.org/stable/). Also remember that seminars, lecture slides, [Google](http://google.com) and [StackOverflow](https://stackoverflow.com/) are your close friends during this course (and, probably, whole life?).
#
# * If you want an easy life, you have to use **BUILT-IN METHODS** of `sklearn` library instead of writing tons of your own code. There exists a class/method for almost everything you can imagine (related to this homework).
#
# * You have to write **CODE** directly inside specified places marked by comments: **BEGIN/END Solution**. Do not create new cells.
#
# * In some problems you are asked to provide a short discussion of the results. For that find the specific place marked via **Your text answer: \<write your answer\>**.
#
# * For every separate problem or subproblem (if specified) you can get only 0 points or maximal points for this problem. There are **NO INTERMEDIATE scores**. So make sure that you did everything required in the task.
#
# * Your **SOLUTION** notebook **MUST BE REPRODUCIBLE**, i.e., if the reviewer decides to restart the notebook and run all cells, after all the computation he will obtain exactly the same solution (with all the corresponding plots) as in your uploaded notebook. For this purpose, we suggest fixing random `seed` or (better) define `random_state=` inside every algorithm that uses some pseudorandomness.
#
# * Your code must be clear to the reviewer. For this purpose, try to include necessary comments inside the code. But remember: **GOOD CODE MUST BE SELF-EXPLANATORY** without any additional comments.
#
# * Many `sklearn` algorithms support multithreading (Ensemble Methods, Cross-Validation, etc.). Check if the particular algorithm has `n_jobs` parameters and set it to `-1` to use all the cores.
#
# + [markdown] id="ddR3sf3P82Ht"
# ## Task 6. Deep ANNs. (3 points)
#
# - **(1 pt.)** Activation functions; **(sub tasks 6.1)**
# - **(2 pt.)** MNIST classification. **(sub tasks 6.2)**
#
#
#
# ### Task 6.1 Activation functions.
# Plot the following [activation functions](https://pytorch.org/docs/master/nn.html#non-linear-activation-functions) using their PyTorch implementation and their derivatives using [autograd](https://pytorch.org/docs/stable/autograd.html) functionality `grad()`:
#
# 1. Plot `ReLU`, `ELU` ($\alpha = 1$), `Softplus` ($\beta = 1$) and `Sign`, `Sigmoid`, `Softsign`, `Tanh`.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 369} id="hcF-2GHz8wMz" outputId="150fa83f-feb5-4fbd-9aa0-b9d02f47707f"
# %matplotlib inline
import torch.nn.functional as F
import matplotlib.pyplot as plt
import torch
x = torch.arange(-2, 2, .01, requires_grad=True)
x_np = x.detach().numpy()
x.sum().backward() # to create x.grad
f, axes = plt.subplots(2, 2, sharex=True, figsize=(14, 5))
axes[0, 0].set_title('Values')
axes[0, 1].set_title('Derivatives')
for i, function_set in (0, (('ReLU', F.relu), ('ELU', F.elu), ('Softplus', F.softplus))), \
(1, (('Sign', torch.sign), ('Sigmoid', torch.sigmoid), ('Softsign', F.softsign), ('Tanh', torch.tanh))):
for function_name, activation in function_set:
### BEGIN Solution
axes[i, 0].plot(x_np, activation(x).detach().numpy(), label=function_name)
x.grad.zero_()
activation(x).sum().backward()
axes[i, 1].plot(x_np, x.grad.detach().numpy(), label=function_name)
### END Solution
axes[i, 0].legend()
axes[i, 1].legend()
plt.tight_layout()
plt.show()
# + [markdown] id="_misNcjO8wXF"
# Which of these functions may be, and which definitely are, a poor choice as an activation function in a neural network? Why? Do not forget that the output of the current layer serves as an input for the following one. Imagine a situation where we have many layers: what happens with the activation values?
#
#
#
#
#
#
# + id="ribXsHDSmlYN"
# BEGIN SOLUTION (do not delete this comment!)
# * ReLU good choice, but have zero grad in big range
# * ELU good choice
# * Softplus good choice
# * Sign bad choice: almost everywhere zero derivative
# * Sigmoid bad choice: saturates fast - derivative nonzero in small range
# * SoftSign maybe bad choice: saturates but slowly then Sigmoid, Tanh
# * Tanh bad choice: saturates fast - derivative nonzero in small range
#END SOLUTION (do not delete this comment!)
# + [markdown] id="sW9OYyIw8wz4"
# ### Task 6.2 MNIST classification.
#
# At one of the seminars we have discussed an MLP (Multilayer perceptron) with one hidden layer, logistic activation functions and softmax. In this task, you are to:
#
# 1. Implement the MLP modules, including the Softmax cross entropy between `logits` and `labels` using numpy.
#
# 2. Train your numpy realization of MLP to classify the MNIST from `sklearn.datasets()`. The required accuracy on validation is `> 90%`.
#
# 3. Compare the accuracy of classification to your scores from `Part 1` with and without dimensionality reduction. Is this comparison fair?:) Derive the confusion matrix for all digit classes. Which digits are predicted better or worse than others, why?
# + id="RKxe88YT_p9P"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
# + id="E7_2lHue_r0x"
# fetch the dataset.
digits, targets = load_digits(return_X_y=True)
digits = digits.astype(np.float32) / 255
digits_train, digits_test, targets_train, targets_test = train_test_split(digits, targets, random_state=0)
train_size = digits_train.shape[0]
test_size = digits_test.shape[0]
input_size = 8*8
classes_n = 10
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="feWQtoKtn1vV" outputId="47edde13-7736-4ef6-f45a-8a315638e00e"
N = 10
sample_idx = np.random.choice(1797, N, replace=False)
digits_sample = digits[sample_idx]
targets_sample = targets[sample_idx]
f, ax = plt.subplots(1,10, figsize=(10, 5))
for i in range(N):
ax[i].imshow(digits_sample[i].reshape(8,8))
ax[i].set_title('label: '+str(targets_sample[i]))
# + [markdown] id="Pj6EctS6yTJK"
# A short recap on what we are going to achieve here.
# <br>
# 1. Forward pass:
# $$
# h_1 = X\theta_1+\beta_1
# $$
#
# $$
# O_1 = sig(h_1)
# $$
#
# $$
# h_2 = O_1\theta_2+\beta_2
# $$
# $$
# O_2 = softmax(h_2)
# $$
# $$
# Loss = CrossEntropy(O_2, true \space labels)
# $$
#
# 2. Compute gradients:
#
# To update weights first we need to compute loss gradients with respect to $\theta_1$ and $\theta_2$ and then update both $\theta$ and $\beta$.
#
# $$
# \frac{ \partial{loss} }{\partial{\theta_2}} = \frac{ \partial{loss} }{\partial{O_2}}\frac{ \partial{O_2} }{\partial{h_2}}\frac{ \partial{h_2} }{\partial{\theta_2}}
# $$
# Note, that $\frac{ \partial{h_2} }{\partial{\theta_2}}=O_1$, so we can cache this value during forward pass to speed up our computation.
# $$
# \frac{ \partial{loss} }{\partial{\theta_1}} = \frac{ \partial{loss} }{\partial{O_2}}\frac{ \partial{O_2} }{\partial{h_2}}\frac{ \partial{h_2} }{\partial{O_1}}\frac{ \partial{O_1} }{\partial{h_1}}\frac{ \partial{h_1} }{\partial{\theta_1}}
# $$
# Note, that $\frac{ \partial{h_1} }{\partial{\theta_1}}=X$.
#
# Since we are using the sigmoid activation function here and
# $$
# \frac{ \partial{sig} }{\partial{h}} = sig(h)(1 - sig(h))
# $$
# It also makes sense to cache sig(h) during forward pass.
#
# 3. Update weights:
#
# $\theta:= \theta - \frac{ \partial{loss} }{\partial{\theta}}\alpha$, where $\alpha$ is some learning rate.
#
# Note, it was not shown here how to update and compute $\beta$ but you can do it!
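# + [markdown]
# Optional sanity check, not part of the graded tasks: a tiny finite-difference test of the sigmoid derivative used in the backward pass above (the tolerance of 1e-6 is an arbitrary assumption).
# +
def sig_check(v):
    return 1.0 / (1.0 + np.exp(-v))
x0 = np.random.randn(5)
eps = 1e-6
numeric = (sig_check(x0 + eps) - sig_check(x0 - eps)) / (2 * eps)  # central finite differences
analytic = sig_check(x0) * (1 - sig_check(x0))                     # sig'(h) = sig(h) * (1 - sig(h))
assert np.allclose(numeric, analytic, atol=1e-6)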
# + [markdown] id="CaBenjDI_x6k"
# ### Implement the MLP with backprop
# + id="ffpXAKqQ_vfg"
### YOUR TASK STARTS HERE ###
#Here you should implement by yourself MLP class and its constituents including forward and backward propagation methods
class Linear:
def __init__(self, input_size, output_size):
# Trainable parameters of the layer and their gradients
self.thetas = np.random.randn(input_size, output_size) # the weight matrix of the layer (W)
self.thetas_grads = np.empty_like(self.thetas) # gradient w.r.t. the weight matrix of the layer
self.bias = np.random.randn(output_size) # bias terms of the layer (b)
self.bias_grads = np.empty_like(self.bias) # gradient w.r.t. bias terms of the linear layer
def forward(self, x):
# keep x for backward computation
self.x = x
output = np.matmul(x, self.thetas) + self.bias
return output
def backward(self, output_grad, learning_rate):
"""
Calculate and return gradient of the loss w.r.t. the input of linear layer given the input x and the gradient
w.r.t output of linear layer. You should also calculate and update gradients of layer parameters.
:param x: np.array, input tensor for linear layer;
:param output_grad: np.array, grad tensor w.r.t output of linear layer;
:return: np.array, grad w.r.t input of linear layer
"""
# BEGIN SOLUTION (do not delete this comment!)
input_grad = output_grad @ self.thetas.T
# calculate mean of gradients across batch w.r.t weights, bias
n = output_grad.shape[0]
self.thetas_grads = self.x.T @ output_grad / n
self.bias_grads = output_grad.mean(axis=0)
self.step(learning_rate)
# END Solution (do not delete this comment!)
return input_grad
def step(self, learning_rate):
self.thetas -= self.thetas_grads * learning_rate
self.bias -= self.bias_grads * learning_rate
class LogisticActivation:
def __init__(self):
# the layer has no parameters
pass
def sig(self, x):
return 1/(1 + np.exp(-x))
def forward(self, x):
# keep o for backward computation
self.o = self.sig(x)
return self.o
def backward(self, output_grad, learning_rate):
"""
Calculate and return the gradient of the loss w.r.t. the input
of logistic non-linearity (given input x and the gradient
w.r.t output of logistic non-linearity).
:param x: np.array, input tensor for logistic non-linearity;
:param output_grad: np.array, grad tensor w.r.t output of logistic non-linearity;
:return: np.array, grad w.r.t input of logistic non-linearity
"""
# BEGIN SOLUTION (do not delete this comment!)
o = self.o
input_grad = o * (1 - o) * output_grad
### END Solution (do not delete this comment!)
return input_grad
class MLP:
def __init__(self, input_size, hidden_layer_size, output_size):
self.linear1 = Linear(input_size, hidden_layer_size)
self.activation1 = LogisticActivation()
self.linear2 = Linear(hidden_layer_size, output_size)
def forward(self, x):
h1 = self.linear1.forward(x)
h1a = self.activation1.forward(h1)
out = self.linear2.forward(h1a)
return out
def backward(self, output_grad, learning_rate):
"""
Calculate and return the gradient of the loss w.r.t. the input of MLP given the input and the gradient
w.r.t output of MLP. You should also update gradients of paramerters of MLP layers.
Hint - you should chain backward operations of modules you have already implemented. You may also
need to calculate intermediate forward results.
:param x: np.array, input tensor for MLP;
:param output_grad: np.array, grad tensor w.r.t output of MLP;
:return: np.array, grad w.r.t input of MLP
"""
# BEGIN SOLUTION (do not delete this comment!)
linear2_input_grad = self.linear2.backward(output_grad, learning_rate)
activation1_input_grad = self.activation1.backward(linear2_input_grad, learning_rate)
out = self.linear1.backward(activation1_input_grad, learning_rate)
# END Solution (do not delete this comment!)
return out
# + id="07DUqp86_0To"
# BEGIN SOLUTION (do not delete this comment!)
def softmax_crossentropy_with_logits(logits, reference_answers):
reference_answers_ = np.zeros_like(logits)
I = np.arange(logits.shape[0])
reference_answers_[I, reference_answers] = 1
    # cross-entropy with a per-object log-sum-exp over the class dimension
    loss = np.sum(reference_answers_ * (-logits + np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))))
    return loss
def grad_softmax_crossentropy_with_logits(logits, reference_answers):
    reference_answers_ = np.zeros_like(logits)
    I = np.arange(logits.shape[0])
    reference_answers_[I, reference_answers] = 1
    # the gradient w.r.t. the logits is softmax(logits) minus the one-hot targets
    softmax = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)
    grad = softmax - reference_answers_
    return grad
# END Solution (do not delete this comment!)
# + colab={"base_uri": "https://localhost:8080/"} id="DWkD2V1y_4QU" outputId="d544a97a-41f5-4e64-afae-00b2737e4167"
np.random.seed(42)
mlp = MLP(input_size=input_size, hidden_layer_size=100, output_size=classes_n)
epochs_n = 100
learning_curve = [0] * epochs_n
test_curve = [0] * epochs_n
x_train = digits_train
x_test = digits_test
y_train = targets_train
y_test = targets_test
learning_rate = 1e-2
for epoch in range(epochs_n):
y_pred = []
for sample_i in range(train_size):
x = x_train[sample_i].reshape((1, -1))
target = np.array([y_train[sample_i]])
### BEGIN Solution
# ... perform forward pass and compute the loss
# ... compute the gradients w.r.t. the input of softmax layer
# ... perform backward pass
# ... and update the weights with weight -= grad * learning_rate
logits = mlp.forward(x)
loss = softmax_crossentropy_with_logits(logits, target)
logits_grad = grad_softmax_crossentropy_with_logits(logits, target)
mlp.backward(logits_grad, learning_rate)
### END Solution
y_pred.extend(logits.argmax(1))
if epoch % 10 == 0:
y_pred_test = []
for sample_i in range(test_size):
x = x_test[sample_i].reshape((1, -1))
target = np.array([y_test[sample_i]])
logits = mlp.forward(x)
y_pred_test.extend(logits.argmax(1))
print('Starting epoch {}'.format(epoch), \
', Loss : {:.3}'.format(loss), \
', Accuracy on train: {:.3}'.format(accuracy_score(y_train, y_pred)), \
', Accuracy on test: {:.3}'.format(accuracy_score(y_test, y_pred_test)) )
# + colab={"base_uri": "https://localhost:8080/"} id="0DNQhxXaARCy" outputId="df7b0f28-cb53-4e7b-d1a3-dfcd222636eb"
# BEGIN SOLUTION (do not delete this comment!)
# confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = np.empty_like(y_test)
for sample_i in range(test_size):
x = x_test[sample_i].reshape((1, -1))
target = np.array([y_test[sample_i]])
logits = mlp.forward(x)
y_pred[sample_i] = logits.argmax(1)
confusion_matrix(y_test.astype(np.int), y_pred.astype(np.int))
# END Solution (do not delete this comment!)
# + [markdown] id="MkSdyrpn8xdE"
# ## Task 7. Autoencoders on tabular data (2 points)
# **From now on we will be using pytorch for all the tasks.**
#
# We will build a latent representation for tabular data with a simple autoencoder (AE). We are going to work with the breast cancer dataset from the scikit-learn package. Follow the instructions below.
#
# 1. **(1 pt.)** Implement AE modules for tabular data. Train the AE to get a latent representation of the cancer dataset from `sklearn.datasets`. Use `MSE` loss and get < $0.3$ on validation, with an AE "bottleneck" of size $2$; **(sub tasks 7.1 - 7.5)**
#
# 2. **(1 pt.)** Plot the latent representation of the whole dataset in 2D, using colors to show objects of different classes. **(sub tasks: 7.6)**
#
# + id="Sg5fX833AX9q"
# imports
import torch
import torch.nn as nn
import torch.utils.data as torch_data
import sklearn.datasets as sk_data
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
# + [markdown] id="AYtA62xA8xgB"
# #### 7.1 Fetch the data. Scale it and split on train and test.
# + colab={"base_uri": "https://localhost:8080/"} id="BinFOZc7Abpx" outputId="fa0f0e2d-5114-4343-916a-39bff7ba0722"
cancer_dset = sk_data.load_breast_cancer()
X_train, X_val, y_train, y_val = train_test_split(cancer_dset['data'], cancer_dset['target'], test_size=0.2, random_state=42)
print('\nTrain size: ', len(X_train))
print('Validation size: ', len(X_val))
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
print('Features: ', list(cancer_dset['feature_names']))
print('\nShape:', X_train.shape)
# + [markdown] id="x7Dzo8VIAaaf"
# #### 7.2 Let us firstly create the dataset, which we'll be able to use with pytorch dataloader.
# Implement `__len__` and `__getitem__` methods.
# + id="Vi4Cq7DtAl8u"
### BEGIN Solution
class CancerData(torch_data.Dataset):
def __init__(self, X, y):
super(CancerData, self).__init__()
self.X = torch.tensor(X, dtype=torch.float32)
self.y = torch.tensor(y, dtype=torch.float32)
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
return self.X[idx], self.y[idx]
### END Solution
# + colab={"base_uri": "https://localhost:8080/"} id="f4emJDB1ApHh" outputId="d695a094-f426-4cf4-880b-c05037673a1e"
train_dset = CancerData(X_train, y_train)
val_dset = CancerData(X_val, y_val)
print(train_dset[5])
# + [markdown] id="ksiBurhhAapc"
# #### 7.3 Now, we'll make a base class for our autoencoder.
# The AE takes an encoder and a decoder as input (these will be two neural networks). Your task is to implement the forward pass.
# + id="wVlgW3_rAqgu"
class MyFirstAE(nn.Module):
def __init__(self, encoder, decoder):
super(MyFirstAE, self).__init__()
self.encoder = encoder
self.decoder = decoder
def forward(self, x):
"""
Take a mini-batch as an input, encode it to the latent space and decode back to the original space
x_out = decoder(encoder(x))
:param x: torch.tensor, (MB, x_dim)
:return: torch.tensor, (MB, x_dim)
"""
# BEGIN SOLUTION (do not delete this comment!)
x = self.encoder(x)
x = self.decoder(x)
# END Solution (do not delete this comment!)
return x
# + [markdown] id="rp39DZHDAzfQ"
# #### It is high time to create encoder and decoder neural networks!
# Make the hidden size of the network equal to `2`.
#
# **Hint.** You can use `nn.Sequential` to create your own architectures.
# + id="gQ5Zuro3Aqmu"
encoder = lambda hid: nn.Sequential(
nn.Linear(30, 20),
nn.LeakyReLU(inplace=True),
nn.Linear(20, 10),
nn.LeakyReLU(inplace=True),
nn.Linear(10, hid)
)
decoder = lambda hid: nn.Sequential(
nn.Linear(hid, 10),
nn.LeakyReLU(inplace=True),
nn.Linear(10, 20),
nn.LeakyReLU(inplace=True),
nn.Linear(20, 30),
)
# + id="NOiJCm00A2t5"
device = 'cpu'
from torch.optim.lr_scheduler import StepLR
net = MyFirstAE(encoder(2), decoder(2))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01, weight_decay=0.001)
scheduler = StepLR(optimizer, 30, gamma=0.5)
train_loader = torch_data.DataLoader(train_dset, batch_size=50, shuffle=True)
val_loader = torch_data.DataLoader(val_dset, batch_size=200, shuffle=False)
# + [markdown] id="hgOC1-iZA6ev"
# #### 7.4 Implement the missing parts of the `train` function
# + id="3SvYM1dxA5kI"
def train(epochs, net, criterion, optimizer, train_loader, val_loader,scheduler=None, verbose=True, save_dir=None):
freq = max(epochs//20,1)
net.to(device)
for epoch in range(1, epochs+1):
net.train()
losses_train = []
for X, _ in train_loader:
### BEGIN Solution
# Perform one step of minibatch stochastic gradient descent
reconstruction = net.forward(X)
optimizer.zero_grad()
loss = criterion(X, reconstruction)
loss.backward()
optimizer.step()
losses_train.append(loss.item())
# define NN evaluation, i.e. turn off dropouts, batchnorms, etc.
net.eval()
# validation loop
losses_val = []
for X, _ in val_loader:
# Compute the validation loss
with torch.no_grad():
reconstruction = net.forward(X)
loss = criterion(X, reconstruction)
losses_val.append(loss.item())
### END Solution
if scheduler is not None:
scheduler.step()
if verbose and epoch%freq==0:
mean_val = sum(losses_val)/len(losses_val)
mean_train = sum(losses_train)/len(losses_train)
print('Epoch {}/{} || Loss: Train {:.4f} | Validation {:.4f}'\
.format(epoch, epochs, mean_train, mean_val))
# + [markdown] id="IsrutdnJAasT"
# #### 7.5 Train your AE on the breast cancer dataset.
# Your goal is to get validation error <0.3.
#
# Some features that may help you to improve the performance:
# * `Dropout`
# * `Batchnorm`
# * lr scheduler
# * Batch size increase/decrease
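# + [markdown]
# For instance, a slightly regularized encoder/decoder pair could look like the sketch below.
# This is illustrative only - the layer sizes, the dropout rate and the use of BatchNorm are
# arbitrary choices, not required by the task.
# +
encoder_reg = lambda hid: nn.Sequential(
    nn.Linear(30, 20),
    nn.BatchNorm1d(20),
    nn.LeakyReLU(inplace=True),
    nn.Dropout(0.1),
    nn.Linear(20, 10),
    nn.LeakyReLU(inplace=True),
    nn.Linear(10, hid)
)
decoder_reg = lambda hid: nn.Sequential(
    nn.Linear(hid, 10),
    nn.LeakyReLU(inplace=True),
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),
    nn.LeakyReLU(inplace=True),
    nn.Linear(20, 30),
)
# net = MyFirstAE(encoder_reg(2), decoder_reg(2))  # drop-in replacement for the model above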
# + colab={"base_uri": "https://localhost:8080/"} id="Gj9Bk-RQBHcD" outputId="63f79d3b-ff91-4238-e8f3-fe4a32fdcb73"
# for `MSE` loss get < 0.3 on validation, with AE "bottleneck" = 2
train(100, net, criterion, optimizer, train_loader, val_loader, scheduler)
# + [markdown] id="9Tq4AMlDBCjW"
# #### 7.6 Let us take a look at the latent space.
# Encode the whole dataset using your AE, plot it in 2D and use colors to indicate objects of different classes
# + colab={"base_uri": "https://localhost:8080/", "height": 336} id="_DD8qANbBN1s" outputId="31562596-70df-4719-a1d0-7bf4ba625541"
### BEGIN Solution
plt.figure(figsize=(14, 5))
net.eval()
with torch.no_grad():
    # use only the encoder here: net.forward(...) would return the 30-D reconstruction,
    # while the task asks for the 2-D latent codes from the bottleneck
    enc = net.encoder(torch.from_numpy(scaler.transform(cancer_dset['data'])).float()).detach().cpu()
plt.scatter(enc[:, 0], enc[:, 1], c=cancer_dset['target'], alpha=0.7);
plt.title('Latent space from the autoencoder bottleneck; purple dots are malignant samples');
### END Solution
# + [markdown] id="ufty_3qKBCwD"
# ### Task 8. Autoencoder on kMNIST. (2 points)
#
#
# We will build a latent representation for the `kMNIST` dataset by using our AE.
#
# 1. **(1 pt.)** Train an AE to get a latent representation of the `kMNIST` dataset from `torchvision.datasets`. Follow the instructions. Use `MSE` loss and obtain < $0.035$ on validation, with an AE "bottleneck" $\leq 40$; **(sub tasks 8.1 - 8.2)**
# 2. **(1 pt.)** Plot 10 images and their reconstructions. **(sub tasks 8.3)**
# + colab={"base_uri": "https://localhost:8080/", "height": 437, "referenced_widgets": ["9ea812fbae8b49a394df152e81fd359b", "d5177c89f9f64109809b98e3f7d12dcc", "ce378fbbbb4241abb1ccef1dadb77d04", "ff482e7b527e469bb6b56ebeb86f1576", "36ed9e979101449ba503116f3edf275e", "5945a58a17d24d5693703666efa8d714", "ce0259dd657e42d3988c92f5342c77a2", "e5058123f36344a0a6fec92a14751443", "611486a8f5844a6b84cad64a25eddfd7", "9ed16aa24f52411d904293d9c133958c", "15be64c126e1483f97cbb316cdd1f387", "627c9bca8a3c48d8a0b2567060bffd06", "b1bd9def1565419eb45a9ed7791612d6", "3757c84e012f4185a54d142311afaa70", "b14c5c5076e740348e751c58947f309d", "af21568ed3de4580beecc796f2c1574f", "5128c5524e7a43bda2ee14b3ecf90cf2", "412eeabcdd4d44c5b57a6fef085f913a", "f44e9c37697b45be88e25cfaa8756d63", "bce0850f0c9447a3919601d75485fc2b", "6d6d75dc2de249a7b03f3f5395619313", "5f54d80b86044766a7dfe3e49a71b46a", "94b87f82e52c4a27972ed4ba92503310", "5756dd971756457abac2a4a8e717047b", "3efa3908aa3c4eacb1671f176bdff33e", "ee76ea60574d43bbbd2cde6d98bd60f3", "d38d3449e1844015819d76f560c5718e", "d078c6bc68d543d38cf0e56253c6d3d5", "f05bda5aaec34aeb99edfe3e648926d3", "9f78f5369e6644f79de3dd05d20328b4", "8a8dcda652ef41cea608d8537a4ae845", "6cb3a11cf2c24fbc89e14e0c054c2404"]} id="mldP_RZZN7bm" outputId="8124e750-c7ad-45c2-c0a3-50399581e997"
from torchvision.datasets import KMNIST
data_train = KMNIST(train=True, root='./kmnist', download=True)
data_test = KMNIST(train=False, root='./kmnist', download=True)
# + [markdown] id="KZsmSJ3vuQrd"
# #### 8.1 Prepare the data and necessary functions.
# + id="KoRlKg3vOZCW"
x_train = np.array(data_train.data)
y_train = np.array(data_train.targets)
x_test = np.array(data_test.data)
y_test = np.array(data_test.targets)
# + id="fM3bsc0wBWTH"
# Reshape the data and scale
from sklearn.preprocessing import MaxAbsScaler
scaler = MaxAbsScaler()
n_train, n_test = x_train.shape[0], x_test.shape[0]
scaler.fit(x_train.reshape((n_train, -1)))
x_train = scaler.transform(x_train.reshape((n_train, -1))).reshape(n_train, 1, 28, 28)
x_test = scaler.transform(x_test.reshape((n_test, -1))).reshape(n_test, 1, 28, 28)
# + colab={"base_uri": "https://localhost:8080/", "height": 125} id="Tz2892txBYJk" outputId="6473c845-2646-4555-e82f-8a9ccae48a1a"
fig, ax = plt.subplots(ncols=10, figsize=(20, 5))
for i in range(10):
ax[i].imshow(scaler.inverse_transform(x_train[i].reshape(1,-1)).reshape(28,28));
ax[i].axis('off')
# + id="b5sHmYVxBeCV"
# BEGIN SOLUTION (do not delete this comment!)
class kMNISTData(torch_data.Dataset):
def __init__(self, X, y):
super(kMNISTData, self).__init__()
self.X = torch.tensor(X, dtype=torch.float32)
self.y = torch.tensor(y, dtype=torch.float32)
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
return self.X[idx].to('cuda'), self.y[idx].to('cuda')
# END Solution (do not delete this comment!)
# + id="lfla6EKiBgNy"
train_kmnist = kMNISTData(x_train, y_train)
test_kmnist = kMNISTData(x_test, y_test)
# + [markdown] id="J1jIdmh9uI-r"
# #### 8.2 Create encoder and decoder network for kMNIST.
# You can either use convolutions or flatten the images and use linear layers. You can choose hidden size (not larger than 40) and any architecture you like.
# + id="uvECUsVcBkmB"
# BEGIN SOLUTION (do not delete this comment!)
class Reshape(nn.Module):
def __init__(self, *shape):
super(Reshape, self).__init__()
self.shape = shape
def forward(self, x):
return x.view(self.shape)
encoder = lambda hid: nn.Sequential(
nn.Conv2d(1, 12, 2, 2),
nn.BatchNorm2d(12),
nn.LeakyReLU(),
nn.Conv2d(12, 12, 3, 1, 1),
nn.BatchNorm2d(12),
nn.LeakyReLU(),
nn.Conv2d(12, 12, 2, 2),
nn.BatchNorm2d(12),
nn.LeakyReLU(),
nn.Conv2d(12, 6, 3, 1, 1),
nn.BatchNorm2d(6),
nn.Flatten(),
nn.Linear(6*7*7, hid)
)
decoder = lambda hid: nn.Sequential(
nn.Linear(hid, 6 * 7 * 7),
Reshape(-1, 6, 7, 7),
nn.BatchNorm2d(6),
nn.LeakyReLU(),
nn.Conv2d(6, 12, 3, 1, 1),
nn.BatchNorm2d(12),
nn.ConvTranspose2d(12, 12, 2, 2),
nn.BatchNorm2d(12),
nn.LeakyReLU(),
nn.Conv2d(12, 6, 3, 1, 1),
nn.BatchNorm2d(6),
nn.LeakyReLU(),
nn.ConvTranspose2d(6, 6, 2, 2),
nn.BatchNorm2d(6),
nn.LeakyReLU(),
nn.Conv2d(6, 1, 3, 1, 1),
)
# END Solution (do not delete this comment!)
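# + [markdown]
# Optional shape check (illustrative sketch): push a small dummy batch through the encoder and
# decoder defined above to confirm that the bottleneck is `hid`-dimensional and the reconstruction
# is again 1x28x28. The batch size here is arbitrary.
# +
_enc, _dec = encoder(40), decoder(40)
_z = _enc(torch.randn(2, 1, 28, 28))
print(_z.shape)          # expected: torch.Size([2, 40])
print(_dec(_z).shape)    # expected: torch.Size([2, 1, 28, 28])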
# + id="Id-iNSswBpe9"
# BEGIN SOLUTION (do not delete this comment!)
device = 'cuda'
epochs = 25
net = MyFirstAE(encoder(40), decoder(40))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01, weight_decay=0.001)
scheduler = StepLR(optimizer, 10, gamma=0.2)
train_loader = torch_data.DataLoader(train_kmnist, batch_size=100, shuffle=True)
val_loader = torch_data.DataLoader(test_kmnist, batch_size=500, shuffle=False)
# END Solution (do not delete this comment!)
# + colab={"base_uri": "https://localhost:8080/"} id="ytDZI0spBsFl" outputId="62697d23-46b4-4a44-a050-ad5199e96e81"
train(epochs, net, criterion, optimizer, train_loader, val_loader, scheduler)
# + [markdown] id="cXCR-eKBBuRI"
# #### 8.3 Plot any 10 images and their reconstructions.
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="ggGOxCc4Bvm5" outputId="fbfbc06a-2622-45ac-fd8e-3f99be6b581f"
# BEGIN SOLUTION (do not delete this comment!)
fig, ax = plt.subplots(ncols=10, nrows=2, figsize=(20, 5))
for i in range(10):
im = train_kmnist[i][0]
rec = net.forward(im.reshape(1,1,28,28)).detach().cpu().numpy()
ax[0, i].imshow(scaler.inverse_transform(im.cpu().reshape(1,-1)).reshape(28,28));
ax[1, i].imshow(scaler.inverse_transform(rec.reshape(1,-1)).reshape(28,28))
ax[0, i].set_title('original')
ax[1, i].set_title('reconstruction')
ax[0, i].axis('off')
ax[1, i].axis('off')
# END Solution (do not delete this comment!)
# + [markdown] id="1seXNwq3KoYM"
# ## Task 9. Convolutional NN (4 points)
#
#
# In this task, you will need to answer two questions and train a convolutional neural network for a sound classification task.
#
# - **(1 pt.)** Debug the given convolutional neural network and explain what's wrong with it and how to fix it. You will need to identify at least 4 problems; **(sub-tasks 9.1)**
#
# - **(1 pt.)** Compute the output shape of each layer by hand; when building a neural network we often need to know the output size of a layer before we add the next one; **(sub-tasks 9.2)**
#
# - **(2 pt.)** Build your own convolutional NN and train it for the task of sound classification. Your goal is to achieve the highest accuracy possible: > 70% gives 1 pt. and > 90% gives 2 pt. **(sub-tasks 9.3 - 9.6)**
# + [markdown] id="4fCPSsn3K22j"
# #### 9.1 Debug this convolutional neural network and write down the proposed fixes. There are at least four fixes that can be applied. Explain your answers.
# + id="jQDXJDhFLI6a"
# assuming input shape [batch, 3, 32, 32]
cnn = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=512, kernel_size=(3,3)), # 30
nn.Conv2d(in_channels=512, out_channels=128, kernel_size=(3,3)), # 28
nn.Conv2d(in_channels=128, out_channels=10, kernel_size=(3,3)), # 26
nn.ReLU(),
nn.MaxPool2d((1,1)),
nn.Conv2d(in_channels=10, out_channels=3, kernel_size=(10,10)), # 17
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(10,10)), # 8
nn.MaxPool2d((15,15)),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(10,10)),
nn.Softmax(),
Flatten(),
nn.Linear(64, 256),
nn.Softmax(),
nn.Linear(256, 10),
nn.Sigmoid(),
nn.Dropout(0.5)
)
# + id="0qomga4ALpdi"
# BEGIN SOLUTION (do not delete this comment!)
# Your answers:
# 0. The input has 3 channels (shape [batch, 3, 32, 32]), but the first convolution expects in_channels=1
# 1. Max pooling with kernel size 1 doesn't do anything
# 2. After the 5th convolution the spatial size is 8, yet a max pooling with kernel size 15 is applied;
#    there are also later size problems: even if the 6th convolution produced a 1x1 map, the first
#    linear layer should take 128 features instead of 64
# 3. Softmax should not appear inside the network (it saturates, much like sigmoid); for training it is
#    better to output raw logits and use cross-entropy loss, which is more numerically stable and
#    computationally efficient than an explicit softmax followed by NLL
# 4. Sigmoid is a bad output choice for a 10-class classifier and it saturates quickly; if a sigmoid
#    output were needed, BCEWithLogits is preferable to an explicit sigmoid + BCE
# 5. Dropout should be placed after intermediate linear layers, not after the final output
# 6. Shortcut (residual) connections might improve quality, although the network is not very deep
# 7. There are no normalization layers, which could improve training stability
# 8. Abrupt changes of channel counts / spatial sizes usually work worse than architectures where
#    the sizes change gradually
# END Solution (do not delete this comment!)
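# + [markdown]
# For illustration, one possible corrected architecture under the fixes above might look like the
# sketch below. The specific layer sizes are arbitrary; the point is 3 input channels, no
# softmax/sigmoid inside the network, raw logits as output, and pooling that matches the spatial sizes.
# +
cnn_fixed = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1),   # 32x32
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(2),                                                        # 16x16
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.MaxPool2d(2),                                                        # 8x8
    nn.Conv2d(128, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                                                # 1x1
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(128, 10),   # raw logits; train with nn.CrossEntropyLoss
)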
# + [markdown] id="uh8BusFMQm-2"
# #### 9.2 Convolutional warm-up: compute the output shapes of each layer by hand.
# + id="iGvJl1LxQoKw"
# Compute the output shape of each layer and of the final output without running the code.
# input size x = [8, 1, 300, 303].
conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(5, 5), padding=0, stride=2)
conv2 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=2, stride=1)
conv3 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(5, 5), padding=2, stride=2)
maxpool1 = nn.MaxPool2d((2, 2))
cnn = nn.Sequential(conv1, conv2, conv3, maxpool1)
# + id="2ly8YywpLMKh"
# BEGIN SOLUTION (do not delete this comment!)
# example:
# conv1
# output_h = (300 - 5+0) /2 +1 = 148
# output_w = (303 - 5+0) /2 +1 = 150
# Continue for all the layers:
### BEGIN Solution
# conv2
# output_h = (148 - 3 + 2 * 2) / 1 + 1 = 150
# output_w = (150 - 3 + 2 * 2) / 1 + 1 = 152
# conv3
# output_h = (150 - 5 + 2 * 2) / 2 + 1 = 75
# output_w = (152 - 5 + 2 * 2) / 2 + 1 = 76
# maxpool1 = MaxPool2d((2, 2))
# output_h = 75 / 2 = 37
# output_w = 76 / 2 = 38
# final layer output = [8, 16, 37, 38]
# END Solution (do not delete this comment!)
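# + [markdown]
# Optional check (illustrative): the hand-computed shapes can be verified by pushing a dummy
# tensor through the `cnn` defined in 9.2.
# +
with torch.no_grad():
    print(cnn(torch.zeros(8, 1, 300, 303)).shape)   # expected: torch.Size([8, 16, 37, 38])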
# + [markdown] id="1D1z6WEjfZwT"
# #### 9.3 Convolutional networks for sound classification
#
# - Now your task is to classify sounds using a convolutional network. You can use different network architectures, and your goal is to get the highest score possible.
#
# - First of all, we will preprocess the audio into spectrograms, so that you can treat them as images.
# + colab={"base_uri": "https://localhost:8080/"} id="t7BsAPwYfv6X" outputId="d48bf5f3-53b7-4ed5-fd1a-836473edae21"
# imports
import os
import torch
import numpy as np
import torch.nn as nn
from torch import Tensor
# !pip install torchaudio
import torchaudio
from torchaudio import transforms
from IPython.display import Audio
import torch.nn.functional as F
from torch.utils.data import DataLoader,random_split,Dataset
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["6d4a12e73a344069981447c9543053b9", "29c3287e1d1a476b89e8406f1c6f596a", "28af3e6e18824d7e9dcda0778c573883", "61216fae41b7448e8f724145e50d554e", "de7033ddcff84075be383e61ddb16c83", "c1996ab9c94d45c4aeea099f69441db3", "6b4ed0b03042452fa47a53f0313ddcba", "26aa4453751d4ced87119252e375ca11"]} id="lRCcakoVgFDK" outputId="1f9f1781-4e84-4cfb-dfe6-7dc549253328"
# Get the dataset
dataset = torchaudio.datasets.SPEECHCOMMANDS('./' , url = 'speech_commands_v0.02',
folder_in_archive= 'SpeechCommands', download = True)
# + [markdown] id="kUE5grAGj9SN"
# ### Let's look at the dataset.
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="85jqMauLhNER" outputId="076b5494-2ad8-41c1-d421-1a7caca8c449"
plt.figure()
plt.plot(dataset[0][0].t())
# + colab={"base_uri": "https://localhost:8080/", "height": 92} id="CYGRUGnghIYY" outputId="2ed60715-83b7-4903-d417-580d29ba17d9"
print('Label: ',dataset[11760][2])
Audio(np.array(dataset[11760][0].t()).reshape(-1), rate=16000)
# + [markdown] id="tMBl98nnkRcr"
# #### Actually, we could classify the raw, very long waveforms directly, but it is more convenient to work with spectrograms, so that we can use convolutional layers.
# + colab={"base_uri": "https://localhost:8080/", "height": 339} id="ZysQUugGkWPB" outputId="f56a9252-056e-4a17-be83-83d4a82a21e3"
specgram = torchaudio.transforms.Spectrogram(n_fft=200, normalized=True)(dataset[77][0])
print("Shape of spectrogram: {}".format(specgram.size()))
plt.figure(figsize=(10,5))
plt.imshow(specgram[0,:,:].numpy());
plt.colorbar()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="yylTFJjPo57s" outputId="7eac223f-3ec0-4c19-a027-9d8f973c2369"
# Some preprocessing routine:
# keep only the clips that are exactly 1 second long (16000 samples at a 16 kHz sampling rate)
wave = []
labels = []
for i in range(len(dataset)):
    if dataset[i][0].shape == (1, 16000):
        wave.append(dataset[i][0])
        labels.append(dataset[i][2])
# + colab={"base_uri": "https://localhost:8080/"} id="YD4Ds2BhqX4T" outputId="146e7799-5433-4f21-c313-d4c6d3512a31"
set_labels = list(set(labels))
labels_dict = {set_labels[i] :i for i in range(len(set_labels))}
labels_dict
# + [markdown] id="W5tkVQDTnMvD"
# #### 9.4 Your task right now is to implement a speech dataloader; it will be almost the same as in the previous tasks.
# + id="kuGvJ4EDm90d"
transformation = torchaudio.transforms.Spectrogram(n_fft=200, normalized=True)
### BEGIN Solution
class SpeechDataLoader(Dataset):
def __init__(self, data, labels, label_dict, transform=None):
self.data = data
self.labels = labels
self.label_dict = label_dict
self.transform = transform
def __len__(self):
return len(self.labels)
def __getitem__(self,idx):
waveform = self.data[idx]
specgram = self.transform(waveform)
if self.labels[idx] in self.label_dict:
label = self.label_dict[self.labels[idx]]
return specgram, label
# END Solution (do not delete this comment!)
# + id="OarKlZWooQbS"
torch.manual_seed(0)
dataset= SpeechDataLoader(wave, labels, labels_dict, transformation)
traindata, testdata = random_split(dataset, [round(len(dataset)*.8), round(len(dataset)*.2)], )
train_loader = DataLoader(traindata, batch_size=100, shuffle=True)
val_loader = DataLoader(testdata, batch_size=100, shuffle=True)
# + [markdown] id="gQxaAGoBuLRR"
# #### 9.5 Your task is to build a convolutional neural network that yields a high score.
# + id="ryDzP0l9s4Pi"
# BEGIN Solution (do not delete this comment!)
class BasicBlock(nn.Module):
def __init__(self, in_channels, out_channels, relu=True, cropw=None, croph=None):
super().__init__()
self.backbone = nn.Sequential(
nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=2, stride=2),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels)
)
self.shortcut = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=2)
self.cropw = cropw
self.croph = croph
self.relu = relu
def forward(self, x):
out = self.backbone(x)
if self.cropw is not None:
x = x[:, :, :, :-self.cropw]
if self.croph is not None:
x = x[:, :, :-self.croph, :]
out += self.shortcut(x)
if self.relu:
out = F.relu(out)
return out
class NN2D(nn.Module):
def __init__(self, num_class):
super(NN2D,self).__init__()
self.conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
self.backbone = nn.Sequential(
BasicBlock(32, 32, cropw=1, croph=1),
BasicBlock(32, 64),
BasicBlock(64, 128, croph=1),
BasicBlock(128, 256),
BasicBlock(256, 512, relu=False)
)
self.linear1 = nn.Linear(512, 128)
self.dropout = nn.Dropout(inplace=True)
self.linear2 = nn.Linear(128, num_class)
def forward(self, x):
out = self.conv(x)
out = self.backbone(out)
out = F.avg_pool2d(out, (3, 5))
out = out.view(out.shape[0], -1)
out = self.linear1(out)
out = self.dropout(out)
out = self.linear2(out)
return out
# END Solution (do not delete this comment!)
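# + [markdown]
# Optional sanity check (illustrative): with `n_fft=200` and the default hop length, a 1-second
# clip becomes a spectrogram of shape [1, 101, 161], so a dummy batch of that shape should come
# out of the network as one logit per class. The batch size of 2 here is arbitrary.
# +
with torch.no_grad():
    print(NN2D(len(set_labels))(torch.zeros(2, 1, 101, 161)).shape)  # expected: torch.Size([2, n_classes])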
# + id="_Ev8ZHfquAMH"
# BEGIN Solution (do not delete this comment!)
from torch.optim import Adam
from torch.nn.functional import cross_entropy
net = NN2D(len(set_labels))
num_epochs = 10
criterion = cross_entropy
optimizer = Adam(net.parameters(), lr=0.001, weight_decay=0.001)
scheduler = StepLR(optimizer, 7, gamma=0.2)
# END Solution (do not delete this comment!)
# + [markdown] id="NpgUGDXJvIhk"
# #### 9.6 Almost there - now we need to rewrite our training loop a little bit.
# + id="k7BqVHuSvEhZ"
def train(epochs, net, criterion, optimizer, train_loader, val_loader,scheduler=None, verbose=True, device='cpu'):
net.to(device)
freq = max(epochs//15,1)
for epoch in range(1, epochs+1):
net.train()
losses_train = []
for X, target in train_loader:
X, target = X.to(device), target.to(device)
### BEGIN Solution (do not delete this comment!)
# Perform one step of minibatch stochastic gradient descent
predict = net.forward(X)
optimizer.zero_grad()
loss = criterion(predict, target)
loss.backward()
optimizer.step()
losses_train.append(loss.item())
# END Solution (do not delete this comment!)
if scheduler is not None:
scheduler.step()
if verbose and epoch%freq==0:
y_pred_val = []
y_true_val = []
net.eval()
# validation loop
losses_val = []
for X, target in val_loader:
X, target = X.to(device), target.to(device)
# BEGIN Solution (do not delete this comment!)
# Compute the validation loss
with torch.no_grad():
target_hat_val = net.forward(X)
loss = criterion(target_hat_val, target)
losses_val.append(loss.item())
# END Solution (do not delete this comment!)
y_pred_val.extend(target_hat_val.argmax(1).tolist())
y_true_val.extend(target.tolist())
mean_val = sum(losses_val)/len(losses_val)
mean_train = sum(losses_train)/len(losses_train)
print('Val epoch {}'.format(epoch), \
', Loss : {:.3}'.format(mean_train), \
', Accuracy on test: {:.3}'.format(accuracy_score(y_true_val, y_pred_val)) )
# + colab={"base_uri": "https://localhost:8080/"} id="VjHENdc_uqc7" outputId="92400966-8348-46a9-f907-7f61234c963d"
train(num_epochs, net, criterion, optimizer, train_loader, val_loader, scheduler, device=0)
| 44,017 |
/data-structures/recursion/Staircase.ipynb | 2974220531bba573f441680fa2bd29175e5ba5e5 | [] | no_license | annahra/dsa-nanodegree | https://github.com/annahra/dsa-nanodegree | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 4,151 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_v5swjqy"
# ### Problem Statement
#
# Suppose there is a staircase that you can climb in either 1 step, 2 steps, or 3 steps. In how many possible ways can you climb the staircase if the staircase has `n` steps? Write a recursive function to solve the problem.
#
# **Example:**
#
# * `n == 1` then `answer = 1`
#
# * `n == 3` then `answer = 4`<br>
# The output is `4` because there are four ways we can climb the staircase:
# - 1 step + 1 step + 1 step
# - 1 step + 2 steps
# - 2 steps + 1 step
# - 3 steps
# * `n == 5` then `answer = 13`
#
# + [markdown] graffitiCellId="id_74s7rzj"
# ### Exercise - Write a recursive function to solve this problem
# + graffitiCellId="id_yv3ymjf"
"""
param: n - number of steps in the staircase
Return number of possible ways in which you can climb the staircase
"""
def staircase(n):
'''Hint'''
# Base Case - What holds for the smallest inputs, i.e., n == 0, 1, 2 or 3? Return the number of ways to climb n steps.
# Recursive Step - For n > 3, express the answer in terms of smaller subproblems.
pass
# +
# Solution
## Read input as specified in the question.
## Print output as specified in the question.
def staircase(n):
if n <= 0:
return 1
if n == 1:
return 1
elif n == 2:
return 2
elif n == 3:
return 4
return staircase(n - 1) + staircase(n - 2) + staircase(n - 3)
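# +
# The plain recursion above recomputes the same subproblems many times, so its running time grows
# exponentially with n. An illustrative memoized variant (not required by the exercise) keeps the
# same recurrence but runs in O(n) time:
def staircase_memo(n, cache=None):
    if cache is None:
        cache = {}
    if n <= 0:
        return 1
    if n == 1:
        return 1
    if n == 2:
        return 2
    if n == 3:
        return 4
    if n not in cache:
        cache[n] = (staircase_memo(n - 1, cache)
                    + staircase_memo(n - 2, cache)
                    + staircase_memo(n - 3, cache))
    return cache[n]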
# + [markdown] graffitiCellId="id_w7lklez"
# <span class="graffiti-highlight graffiti-id_w7lklez-id_brqvnra"><i></i><button>Show Solution</button></span>
# + graffitiCellId="id_qnr80ry"
def test_function(test_case):
n = test_case[0]
solution = test_case[1]
output = staircase(n)
if output == solution:
print("Pass")
else:
print("Fail")
# + graffitiCellId="id_6g7yxbj"
n = 3
solution = 4
test_case = [n, solution]
test_function(test_case)
# + graffitiCellId="id_1q0pz7y"
n = 4
solution = 7
test_case = [n, solution]
test_function(test_case)
# + graffitiCellId="id_p3uxb7h"
n = 7
solution = 44
test_case = [n, solution]
test_function(test_case)
| 2,403 |
/Visual Genome - Regions.ipynb | aaca619c4e7f2a2edbc64ced4d5207349578bf36 | ["MIT"] | permissive | MeMAD-project/statistical-tools | https://github.com/MeMAD-project/statistical-tools | 1 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 2,475,773 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Python: Lists, Iterations and Strings
#
# <img style="float: right; margin: 0px 0px 15px 15px;" src="https://www.python.org/static/community_logos/python-logo.png" width="200px" height="200px" />
#
# > We already know a bit more of Python's syntax, how to write functions and how to use conditionals. It is time to look at other variable types (arrays) and at how to write code that performs repetitive operations.
#
# References:
# - https://www.kaggle.com/learn/python
# ___
# # 1. Lists
#
# Lists are Python objects that represent ordered sequences of values.
#
# Let's see a couple of examples of how to create them:
# First prime numbers
primos = [2, 5, 3, 7]
# Planets of the solar system
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte',
'Jupiter', 'Saturno', 'Urano', 'Neptuno']
primos
planetas
# We see that lists are not only for numbers.
#
# We have already seen lists of numbers, but also lists of strings.
#
# We can even make lists of lists:
lista_primos_planetas = [primos, planetas]
lista_primos_planetas
# Moreover, we can make lists holding objects of different types:
lista_diferentes_tipos = [2, 0., 'Hola', help, primos]
lista_diferentes_tipos
# No doubt, it will often be more useful to have a single list storing several results than many results stored in individual objects.
#
# But, once things are in the list, how do we access the individual objects?
# ## 1.1 Indexing
#
# We can access the individual elements of a list using brackets ([]).
#
# For example, which planet is closest to the sun in our solar system?
#
# - An important note here: Python indices start at zero (0):
# Planet closest to the sun
planetas[0]
# Next planet
planetas[1]
# All good...
#
# Now, which planet is farthest from the sun?
#
# - The elements of a list can also be accessed from back to front, using negative numbers:
# Planet farthest from the sun
planetas[-1]
# Second farthest planet
planetas[-2]
# Very good...
#
# And what if we wanted to find out, for example, which are the three planets closest to the sun?
# First three planets
planetas[0:3]
# So `lista[a:b]` is our way of asking for all the elements of the list with index starting at `a` and going up to, but not including, `b` (that is, up to `b-1`).
#
# The start and end indices are optional:
# - If we omit the start index, it is assumed to be zero (0): `lista[:b] == lista[0:b]`
# Rewrite the previous expression
planetas[:3]
planetas[-3:]
# - Equivalently, if we omit the end index, it is assumed to be the length of the list:
# List of all planets starting from planet Earth
planetas[2:]
# We can also use negative indices when accessing several objects.
#
# For example, what do we get with the following expression?
planetas[-1]
# ```python
# lista[n:n + N] = [lista[n], lista[n + 1], ..., lista[n + N - 1]]
# ```
planetas[1:-1]
planetas[-3:]
planetas[:4]
planetas[5:]
planetas[:4] + planetas[5:]
# Slice:
#
# ```python
# lista[n:n+N:s] = [lista[n], lista[n + s], lista[n + 2 * s], ..., ]
# ```
primos
primos[::2]
# Elements of the list in reverse (backwards)
primos[::-1]
# ## 1.2 Modifying lists
#
# Lists are "mutable" objects, that is, their elements can be modified directly in the list.
#
# One way to modify a list is to assign to an index.
#
# For example, suppose the scientific community, with arguments based on the planet's composition, decided to rename "Planet Earth" to "Planet Water".
planetas
planetas[2] = 'Agua'
planetas
# We can also change several elements of the list at once:
planetas[:3] = ['mer', 'ven', 'tie']
planetas
# ## 1.3 Functions on lists
#
# Python has several extremely useful functions for working with lists.
#
# `len()` gives us the length (number of elements) of a list:
# the len() function
len(planetas)
len(primos)
# `sorted()` returns a sorted version of a list:
# Help on the sorted function
help(sorted)
primos
# Call the sorted function on primos
sorted(primos)
sorted(primos, reverse=True)
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte',
            'Jupiter', 'Saturno', 'Urano', 'Neptuno']
# Call the sorted function on planetas
sorted(planetas)
len('Jupiter')
def long_str(s):
return len(s)
long_str2 = lambda s: len(s)
long_str("Jupiter"), long_str2("Jupiter")
# **Aside: anonymous functions**
#
# Anonymous functions start with the keyword `lambda` followed by the argument(s) of the function. After the `:` we write what the function returns.
sorted(planetas, key=long_str)
sorted(planetas, key=lambda s: len(s))
# `sum()`, you can already guess what it does:
primos
# Help on the sum function
help(sum)
# sum
sum(primos)
# In the last class we used the `min()` and `max()` functions on several arguments.
#
# We can also pass them a single list-type argument.
# min
min(primos)
# max
max(primos)
# ___
# ## Pause: Objects
#
# Up to now I have been using the word **object** without giving it much importance. What does it actually mean?
#
# - If you have seen some Python before, you may have heard that everything in Python is an object.
#
# Next week we will study, at a very basic level, what object-oriented programming is.
#
# For now, it is enough to know that objects carry several "things" with them, and we can access those "things" using Python's "dot (.) syntax".
#
# For example, numbers in Python have an associated variable called `imag`, which represents their imaginary part:
# real and imag attributes
a = 7
a.imag, a.real
dir(a)
a.denominator, a.numerator
b = (6 + 5j) / 3
b.real, b.imag
dir(b.imag)
c = 5 / 3
c.as_integer_ratio()
7505999378950827 / 4503599627370496
from fractions import Fraction
Fraction(c).limit_denominator(10)
help(Fraction().limit_denominator)
# Entre las "cosas" que los objetos cargan, también pueden haber funciones.
#
# Una función asociada a un objeto se llama **método**.
#
# Las "cosas" asociadas a los objetos, que no son funciones, son llamados **atributos** (ejemplo: imag).
# Método conjugate()
b.conjugate()
# Y si no sabemos qué hace un método determinado en un objeto, también podemos pasar métodos a la función `help()`, de la misma manera en que le pasamos funciones:
# help(objeto.metodo)
help(b.conjugate)
# Bueno, ¿y esto de que nos sirve?
#
# Pues las listas tienen una infinidad de métodos útiles que estaremos usando...
# ___
# ## 1.4 List methods
#
# `list.append()` modifies a list by adding an element at the end:
planetas = ['Mercurio',
'Venus',
'Tierra',
'Marte',
'Jupiter',
'Saturno',
'Urano',
'Neptuno']
# Pluto is also a planet
variable = planetas.append("Pluton")
print(variable)
planetas
# Why didn't we get an output in the cell above?
#
# Let's check the documentation of the append method:
help(planetas.append)
help(list.append)
help(append)
# **Comment:** append is a method of all `list`-type objects, so we could have called `help(list.append)`. However, if we try to call `help(append)`, Python will tell us that nothing with the name "append" exists, since `append` only exists in the context of lists.
# `list.pop()` removes and returns the last element of a list:
# So Pluto is not a planet after all
planetas.pop()
planetas
help(planetas.pop)
planetas.pop(1)
planetas
# ### 1.4.1 Searching in lists
#
# Where among the planets is Earth? We can get its index using the `list.index()` method:
planetas = ['Mercurio',
'Venus',
'Tierra',
'Marte',
'Jupiter',
'Saturno',
'Urano',
'Neptuno']
planetas
# index of planet Earth
planetas.index("Tierra")
planetas[2]
planetas[planetas.index('Tierra'):]
# It is in third place (remember that indexing in Python starts at zero).
#
# Where is Pluto?
# index of planet Pluto
planetas.index('Pluton')
# <font color=red> Error ... </font> as it should be!
#
# To avoid this kind of error, the `in` operator exists to determine whether a particular element belongs to a list:
planetas
# Is Earth a planet?
'Tierra' in planetas
# Is Pluto a planet?
'Pluton' in planetas
# Use this to avoid the error above
if 'Pluton' in planetas:
    planetas.index("Pluton")
# There are other interesting list methods that we won't cover. If you want to learn more about all the methods and attributes of a particular object, we can call the `help()` function on the object.
#
# For example:
dir(list)
help(list)
primos
primos.extend([11, 13])
primos
# ## 1.5 Tuples
#
# Tuples are also arrays of objects, similar to lists. They differ in two ways:
#
# - The syntax to create tuples uses parentheses (or nothing) instead of brackets:
t = (1, 2, 3)
t
# Or equivalently
t = 1, 2, 3
t
t[1:]
# - Tuples, unlike lists, cannot be modified (they are immutable objects):
# Try to modify a tuple
t[1] = 5
# Tuples are commonly used for functions that return more than one value.
#
# For example, the `as_integer_ratio()` method of `float` objects returns the numerator and the denominator in the form of a tuple:
# as_integer_ratio
0.25.as_integer_ratio()
num, den = 0.25.as_integer_ratio()
num
den
# Help on the float.as_integer_ratio method
help(float.as_integer_ratio)
# They can also be used as a shortcut:
a = (1, 2)
b = (0, 'A')
a, b = b, a
print(a, b)
# # 2. Loops and iterations
#
# ## 2.1 `for` loops
#
# Iterations are a way to execute a certain block of code repeatedly:
# Planets, again
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte', 'Jupiter', 'Saturno', 'Urano', 'Neptuno']
# Print all planets on the same line
for planeta in planetas:
    print(planeta, end=', ')
# To build a `for` loop, you must specify:
#
# - the name of the variable that will iterate (planeta),
#
# - the set of values over which the variable will iterate (planetas).
#
# The word `in` is used, in this case, to tell Python that *planeta* will iterate over *planetas*.
#
# The object to the right of the word `in` can be any **iterable** object. Basically, an iterable is any array-like object (lists, tuples, sets, numpy arrays, pandas series...).
#
# For example, we want to find the product of all the elements of the following tuple.
multiplicandos = (2, 2, 2, 3, 3, 5)
# +
# Product as a loop
producto = 1
for number in multiplicandos:
    producto *= number
producto
# -
# We can even iterate over the characters of a string:
s = 'steganograpHy is the practicE of conceaLing a file, message, image, or video within another fiLe, message, image, Or video.'
# Print only the uppercase characters, without spaces, one after another
for char in s:
    print(char if char.isupper() else '', end='')
# ### 2.1.1 The `range()` function
#
# The `range()` function returns a sequence of numbers. It is extremely useful for writing for loops.
#
# For example, if we want to repeat an action 5 times:
# A for loop with 5 iterations
for i in range(5):
    print('Hola, ¡Mundo!')
help(range)
range(4, 8)
list(range(4, 8)), list(range(4, 8, 2))
# **Exercise:**
#
# 1. Write a function that returns the first $n$ elements of the Fibonacci sequence, using a `for` loop.
def fibonacci_for(n):
if n == 1:
fibonacci = [0]
elif n == 2:
fibonacci = [0, 1]
elif n >= 3:
fibonacci = [0, 1]
for i in range(n - 2):
fibonacci.append(fibonacci[-2] + fibonacci[-1])
return fibonacci
fibonacci_for(10)
# ## 2.2 `while` loops
#
# These are another kind of loop in Python; they iterate until a certain condition stops being true.
#
# For example:
i = 0
while i <= 5:
    print(i, end=' ')
    # i = i + 1 is equivalent to i += 1
    i += 1
# The argument of a `while` loop is evaluated as a logical condition, and the loop runs until that condition becomes **False**.
# **Exercise:**
#
# 1. Write a function that returns the first $n$ elements of the Fibonacci sequence, using a `while` loop.
#
# 2. Write a function that returns the elements of the Fibonacci sequence smaller than a given number $x$, using a `while` loop.
def fibonacci_while(n):
if n == 1:
fibonacci = [0]
elif n == 2:
fibonacci = [0, 1]
elif n >= 3:
i = 2
fibonacci = [0, 1]
while i < n:
fibonacci.append(fibonacci[-2] + fibonacci[-1])
i += 1
return fibonacci
fibonacci_while(10)
# ## Pause: Recursion
#
# An additional way to perform iterations is known as *recursion*, and it happens when we define a function in terms of itself.
#
# For example, the $n$-th number of the Fibonacci sequence, recursively, would be:
def fibonacci_recursive(n):
if n == 1:
return 0
elif n == 2:
return 1
else:
return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)
fibonacci_recursive(10)
# ## 2.3 List comprehensions
#
# List comprehensions are one of the coolest features of Python. The easiest way to understand them, as with many things, is by looking at examples:
# First, with a for loop: list the squares of the 10 digits
cuadrados = []
for d in range(10):
    cuadrados.append(d ** 2)
cuadrados
# Now with a list comprehension
[d ** 2 for d in range(10)]
# We can even add conditionals:
planetas
# Example with the planets
[planeta for planeta in planetas if len(planeta) < 7]
# It can be used for formatting:
# str.upper()
[planeta.upper() for planeta in planetas]
# It is extremely important to learn this, since it is widely used and helps to shorten code considerably.
#
# Example: write the following function using a for loop.
def cuantos_negativos(iterable):
    """
    Returns the number of negative numbers in the given iterable.
    >>> cuantos_negativos([5, -1, -2, 0, 3])
    2
    """
    contador = 0
    for x in iterable:
        if x < 0:
            contador += 1
    return contador
cuantos_negativos([5, -1, -2, 0, 3])
# Now, with a list comprehension:
def cuantos_negativos(iterable):
    """
    Returns the number of negative numbers in the given iterable.
    >>> cuantos_negativos([5, -1, -2, 0, 3])
    2
    """
    return len([x for x in iterable if x < 0])
# Test the function
cuantos_negativos([5, -1, -2, 0, 3])
# # 3. Strings and dictionaries
#
# ## 3.1 Strings
#
# If there is something Python excels at, it is manipulating strings. In this section we will look at some of the methods of string objects and at formatting operations (very useful when cleaning datasets, by the way).
# ### 3.1.1 String syntax
#
# We have already seen several examples involving strings. Just as a reminder:
x = 'Pluton es un planeta'
y = "Pluton es un planeta"
x == y
# There are particular cases in which to prefer one or the other:
#
# - Double quotes are convenient if your string contains an apostrophe.
#
# - Similarly, you can easily create a string that contains double quotes by wrapping it in single quotes.
#
# Examples:
print("Pluto's a planet!")
print('My dog is named "Pluto"')
print('Pluto\'s a planet!')
print("My dog is named \"Pluto\"")
# ### 3.1.2 Strings are iterable
#
# String objects are sequences of characters. Almost everything we saw we could apply to a list can also be applied to a string.
# example string
# Indexing
# Multiple indexing (slicing)
# How many characters does it have?
# We can also iterate over them
# However, one main difference with lists is that strings are immutable (we cannot modify them).
# ### 3.1.3 String methods
#
# Like lists, `str` objects have a large number of useful methods.
#
# Let's look at some:
# example string
# IN UPPERCASE
# in lowercase
# question: does it start with?
# question: does it end with?
# #### Between lists and strings: the `split()` and `join()` methods
#
# The `str.split()` method turns a string into a list of smaller strings.
#
# This is extremely useful for getting each of the words out of a string:
# Words of a sentence
# Or for extracting certain information:
# Year, month and day from a date given as a string
# `str.join()` lets us go back the other way.
#
# Having a list of small strings, we can turn it into a single string, using the string it is called on as a separator:
# With the date...
# ### 3.1.4 String concatenation
#
# Python lets us concatenate strings with the `+` operator:
# Example
# However, we must be careful:
# Concatenate a string with a number
# how can we concatenate the above?
# ## 3.2 Dictionaries
#
# Dictionaries are other Python objects that map keys to values:
numeros = {'uno': 1, 'dos': 2, 'tres': 3}
# In this case, the strings "uno", "dos" and "tres" are the keys, and the numbers 1, 2 and 3 are their corresponding values.
#
# Values are accessed with brackets, similarly to lists:
numeros['uno']
# We use a similar syntax to add another key-value pair
numeros['cuatro'] = 4
numeros
# Or to change the value associated with an existing key
numeros['uno'] = '1'
numeros
# ### Moving between lists, tuples and dictionaries: `zip`
# Suppose we have two lists that correspond to each other:
key_list = ['name', 'age', 'height', 'weight', 'hair', 'eyes', 'has dog']
value_list = ['Esteban', 30, 1.81, 75, 'black', 'brown', True]
# How can I associate these values in a dictionary? With `zip`:
# First, get the list of pairs
pares = list(zip(key_list, value_list))
# Then build the dictionary of relations
persona = dict(pares)
persona
# Since dictionaries are iterable, I can iterate over them
# Iterate over the dictionary (its keys)
for llave in persona:
    print(llave)
# Iterate over the values
for valor in persona.values():
    print(valor)
# Iterate over key-value pairs
for llave, valor in persona.items():
    print(llave, ':', valor)
# ___
# - Quiz 1 at the beginning of the next class. It covers classes 1 and 2.
# <script>
# $(document).ready(function(){
# $('div.prompt').hide();
# $('div.back-to-top').hide();
# $('nav#menubar').hide();
# $('.breadcrumb').hide();
# $('.hidden-print').hide();
# });
# </script>
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by Esteban Jiménez Rodríguez.
# </footer>
| 18,911 |
/Data_Structures/arrays/Duplicate-Number.ipynb | b14d3fc53738515f2f5384dd6594c4a844eec897 | [] | no_license | Data-Semi/DataStructure-LessonNotes | https://github.com/Data-Semi/DataStructure-LessonNotes | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 3,810 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_jjzm8pq"
# ### Problem Statement
#
# You have been given an array of `length = n`. The array contains integers from `0` to `n - 2`. Each number in the array is present exactly once except for one number which is present twice. Find and return this duplicate number present in the array
#
# **Example:**
# * `arr = [0, 2, 3, 1, 4, 5, 3]`
# * `output = 3` (because `3` is present twice)
#
# The expected time complexity for this problem is `O(n)` and the expected space-complexity is `O(1)`.
# + graffitiCellId="id_hjobo20"
def duplicate_number(arr):
"""
:param - array containing numbers in the range [0, len(arr) - 2]
return - the number that is duplicate in the arr
"""
pass
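# +
# One possible O(n)-time / O(1)-space approach (illustrative sketch, shown in addition to the
# course's own solution behind the "Show Solution" button): the array holds every value from
# 0 to n-2 exactly once plus one duplicate, so the duplicate equals sum(arr) minus 0 + 1 + ... + (n-2).
def duplicate_number(arr):
    n = len(arr)
    expected_sum = (n - 2) * (n - 1) // 2   # sum of 0..n-2
    return sum(arr) - expected_sum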
# + [markdown] graffitiCellId="id_t54gljc"
# <span class="graffiti-highlight graffiti-id_t54gljc-id_6q2yj6n"><i></i><button>Show Solution</button></span>
# + graffitiCellId="id_32apeg6"
def test_function(test_case):
arr = test_case[0]
solution = test_case[1]
output = duplicate_number(arr)
if output == solution:
print("Pass")
else:
print("Fail")
# + graffitiCellId="id_5b4ou9d"
arr = [0, 0]
solution = 0
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_kvkeije"
arr = [0, 2, 3, 1, 4, 5, 3]
solution = 3
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_vfijgc0"
arr = [0, 1, 5, 4, 3, 2, 0]
solution = 0
test_case = [arr, solution]
test_function(test_case)
# + graffitiCellId="id_w6gda6p"
arr = [0, 1, 5, 5, 3, 2, 4]
solution = 5
test_case = [arr, solution]
test_function(test_case)
| 1,881 |
/6_function.ipynb | f0f001ee2af6ead7a3055d9b4b0236603c7d6f9c | [] | no_license | wssunn/Python-Language | https://github.com/wssunn/Python-Language | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 7,201 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# ### Iterators
#
# ```python
# x = [1, 2, 3]; it = iter(x)
# print(it.__next__()) # prints 1
# print(it.__next__()) # prints 2
# print(it.__next__()) # prints 3
# print(it.__next__()) # raises StopIteration
# ```
# ## function
def myfunc():
'''the doc string'''
pass
print(myfunc.__doc__)  # prints: the doc string
# ### 1. Ways of calling a function
#
# +
# Parameters after the star (*) MUST be passed by keyword
def recorder(name, *, age):
    print(name, ' ', age)
#recorder('Gary', 32)    # error: the keyword argument age was not given
recorder('Gary', age=32) # correct: age passed by keyword
# Parameters with default values must come after those without defaults
def recorder(name, age=32): # correct
    pass
#def recorder(age=32, name): # error
# func(*args) unpacks a tuple or a list
# When unpacking a list or tuple, the unpacked arguments cannot be modified
def recorder(*person):
for a in person:
if not isinstance(a, (int, str)):
raise TypeError('bad operand type')
# note: the parameters cannot be modified
print(person[0], person[1])
mylist = ['Gary', 32]; recorder(*mylist)   # prints: Gary 32
mytuple = ['Gary', 32]; recorder(*mytuple) # prints: Gary 32
# func(**dict) unpacks a dictionary
def recorder(**person):
for a in person.values():
if not isinstance(a, (int, str)):
raise TypeError('bad operand type')
print(person['name'], person['age'])
mydict = {'age':32, 'name':'gary'}
recorder(**mydict)
recorder(age=32, name='gary')
# -
# #### 1.1 Mixing them
# +
# The single parameter comes first, then the list/tuple and dict parameters; the call does not need to name the single parameter
def recorder(ttt, *person1, **person2):
if len(person1) != 0:
print(person1[0], person1[1])
if len(person2) != 0:
print(person2['name'], person2['age'])
recorder('abc', 'Gary', 32)           # positional arguments are collected by person1
recorder('abc', name='Gary', age=32)  # keyword arguments are collected by person2
recorder(ttt='abc')                   # the extra arguments need not be given at all
# -
# ### 2. Generator functions (see section 5)
#
# A generator object can only be iterated once, so its results can only be consumed once. An iterator object (iter) can be iterated multiple times.
# +
# anonymous functions
myfunc = lambda x,y: x+y
print(myfunc(1, 2)) # prints 3
# reduce: walks the sequence in order, calling function repeatedly with two arguments:
# the current element of the sequence and the value function returned for the previous elements
from functools import reduce
a = reduce(lambda x,y: x+y, range(1, 101)); print(a) # prints 5050
b = map(lambda x: x**2, [1, 2, 3, 4, 5]); print(list(b)) # prints [1, 4, 9, 16, 25]
# map: can take several sequences; the lambda must take as many arguments as there are sequences
# when the sequences have different lengths, all of them are truncated to the shortest one
c = map(lambda x,y: x+y, [1, 2, 3], [4, 5, 6, 7]); print(list(c)) # prints [5, 7, 9]
# filter: feeds each element of the sequence to the given function and keeps those for which it returns True
t = filter(lambda x: x%2==0, range(10))
print(list(t)) # prints [0, 2, 4, 6, 8]
# the filter/generator object is exhausted after one pass, so its results can only be taken once
print(list(t)) # prints []
# -
# ### 3. Partial functions
# A partial function freezes part of the original function's behaviour. A single original function can be wrapped into several partial functions.
# +
from functools import partial
def recorder(name, age):
print(name, ' ', age)
partial_recorder = partial(recorder, name='Gary')
partial_recorder(age=32)
# -
# ### 4. The eval and exec functions
# eval returns the result of evaluation, so it suits expressions that produce a value; eval() can turn a string such as '[]' (a string containing a composite object) into the [] object itself
#
# exec does not return a result, so it suits statements that produce no value
a = exec('2+3'); print(a) # returns None
a = eval('2+3'); print(a) # returns 5
# eval() turns a string like '[1, 2, 3]' into the corresponding list object
b = '[1, 2, 3]'; print(eval(b)) # returns [1, 2, 3]
c = '"hello"'; print(eval(c)) # returns hello
# ### 5. Generator functions (generator)
#
# 1. An iterator keeps all of its contents in memory and is traversed with next(), which saves computation
# 2. A generator does not keep the contents in memory; each value is computed when next() is called and discarded right away, which saves memory
# +
# return values with the yield statement
def print_list(a):
for i in a:
yield i
for i in print_list([1, 2, 3]):
    print(i) # prints 1, 2, 3
# use () - a generator expression
a = (x**2 for x in range(3))
for i in a:
    print(i) # prints 0, 1, 4
# -
# ### 6. Variable scope
# 1. L: local scope, the current function
# 2. E: (for nested functions) the local scope of the enclosing def
# 3. G: global scope, not enclosed by any function
# 4. B: built-in scope, Python's internal namespace
#
# When the code uses some variable a, Python looks for a in these scopes in LEGB order and stops at the first match; otherwise it raises an error
# +
# the global statement
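# A small illustrative example of the LEGB rule and the global statement (the variable names here
# are arbitrary, chosen only for the demonstration):
contador = 0          # global scope (G)
def aumentar():
    global contador   # without this line, assigning to contador would create a local variable
    contador += 1
def externa():
    mensaje = 'enclosing'          # enclosing scope (E) for interna
    def interna():
        # no local 'mensaje' here, so Python finds the one in the enclosing scope
        print(mensaje, len('local'))   # len comes from the built-in scope (B)
    interna()
aumentar(); aumentar()
print(contador)   # prints 2
externa()         # prints: enclosing 5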
| 3,684 |
/2. SKKU/2.DeepLearningBasic/Practice2_Softmax_Classifier/.ipynb_checkpoints/Samsung_SDS_Practice2_Softmax_Classifier-checkpoint.ipynb | 759b6d3e793c77c17de695744cc5f6cc470097c4 | [] | no_license | LeeKeon/AI | https://github.com/LeeKeon/AI | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 125,936 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practice2. Softmax classifier
# +
import numpy as np
import random
import os
import matplotlib.pyplot as plt
import _pickle as pickle
import time
# set default plot options
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# -
# ## Data preprocessing
from utils import get_CIFAR10_data
X_tr, Y_tr, X_val, Y_val, X_te, Y_te, mean_img = get_CIFAR10_data()
print ('Train data shape : %s, Train labels shape : %s' % (X_tr.shape, Y_tr.shape))
print ('Val data shape : %s, Val labels shape : %s' % (X_val.shape, Y_val.shape))
print ('Test data shape : %s, Test labels shape : %s' % (X_te.shape, Y_te.shape))
# ## Visualize training images
# +
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
images_index = np.int32(np.round(np.random.rand(18,)*10000,0))
fig, axes = plt.subplots(3, 6, figsize=(18, 6),
subplot_kw={'xticks': [], 'yticks': []})
fig.subplots_adjust(hspace=0.3, wspace=0.05)
for ax, idx in zip(axes.flat, images_index):
img = (X_tr[idx,:3072].reshape(32, 32, 3) + mean_img.reshape(32, 32, 3))/255.
ax.imshow(img)
ax.set_title(class_names[Y_tr[idx]])
# -
# # 1. Softmax Classifier
# We will implement two versions of the loss function for a softmax classifier and test them on the CIFAR10 dataset.
#
# First, implement the naive softmax loss function with nested loops.
def naive_softmax_loss(Weights,X_data,Y_data,reg):
"""
Inputs have D dimension, there are C classes, and we operate on minibatches of N examples.
Inputs :
- Weights : A numpy array of shape (D,C) containing weights.
- X_data : A numpy array of shape (N,D) contatining a minibatch of data.
- Y_data : A numpy array of shape (N,) containing training labels;
Y[i]=c means that X[i] has label c, where 0<=c<C.
- reg : Regularization strength. (float)
Returns :
- loss as single float
- gradient with respect to Weights; an array of sample shape as Weights
"""
# Initialize the loss and gradient to zero
softmax_loss = 0.0
dWeights = np.zeros_like(Weights)
#print(dWeights.shape)
####################################################################################################
# TODO : Compute the softmax loss and its gradient using explicit loops. #
# Store the loss in loss and the gradient in dW. #
# If you are not careful here, it is easy to run into numeric instability. #
# Don't forget the regularization. #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
    num_train = X_data.shape[0]
    dim, num_class = Weights.shape
    for i in range(num_train):
        # scores for one sample: (D,) x (D,C) -> (C,)
        score_i = X_data[i].dot(Weights)
        # normalization trick - shift by the max so the exponentials do not overflow
        score_i = score_i - np.max(score_i)
        prob_i = np.exp(score_i) / np.sum(np.exp(score_i))
        # accumulate the cross-entropy loss
        softmax_loss += -np.log(prob_i[Y_data[i]])
        # gradient of the per-sample loss w.r.t. the scores is (prob - one_hot)
        prob_i[Y_data[i]] -= 1
        dWeights += np.dot(X_data[i].reshape(dim, 1), prob_i.reshape(1, num_class))
    # average over the training samples and add the regularization term
    softmax_loss /= num_train
    softmax_loss += 0.5 * reg * np.sum(Weights*Weights)
    # average the accumulated gradient and add the gradient of the regularization term
    dWeights = (1.0/num_train)*dWeights + reg*Weights
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
return softmax_loss, dWeights
# Generate a random softmax weight matrix and use it to compute the loss. As a rough sanity check, our loss should be something close to -log(0.1).
W = np.random.randn(3073, 10) * 0.0001
print(W.shape)
print(W.shape[0] )
# +
loss, grad = naive_softmax_loss(W, X_tr, Y_tr, 0.0)
print ('loss :', loss)
print ('sanity check : ', -np.log(0.1))
# -
# The next thing is the vectorized softmax loss function.
def vectorized_softmax_loss(Weights, X_data, Y_data, reg):
softmax_loss = 0.0
dWeights = np.zeros_like(Weights)
####################################################################################################
# TODO : Compute the softmax loss and its gradient using no explicit loops. #
# Store the loss in loss and the gradient in dW. #
# If you are not careful here, it is easy to run into numeric instability. #
# Don't forget the regularization. #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
tr_length = X_data.shape[0]
    # scores: (N,D) x (D,C) -> (N,C)
    score = X_data.dot(Weights)
    # normalization trick - subtract the row-wise max so the exponentials do not overflow
    score -= np.max(score,axis=1).reshape(tr_length,1)
    prob = np.exp(score) / np.sum(np.exp(score),axis=1).reshape(tr_length,1)
    # cross-entropy loss, summed over the batch (averaged below)
    softmax_loss = -np.sum(np.log(prob[range(tr_length), Y_data]))
    # gradient w.r.t. the scores is (prob - one_hot); backpropagate it to the weights
    prob[range(tr_length), Y_data] -= 1
    dWeights = X_data.T.dot(prob)
#Regularization
softmax_loss /= tr_length
softmax_loss += 0.5*reg*np.sum(Weights*Weights)
dWeights = (1.0/tr_length)*dWeights + reg*Weights
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
return softmax_loss, dWeights
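# +
# Optional cross-check (illustrative, assuming scikit-learn is available in this environment):
# with reg = 0 the vectorized loss is the average negative log-likelihood, so it should match
# sklearn's log_loss on the softmax probabilities up to tiny numerical differences.
from sklearn.metrics import log_loss
_scores = X_tr[:500].dot(W)
_scores -= _scores.max(axis=1, keepdims=True)
_probs = np.exp(_scores) / np.exp(_scores).sum(axis=1, keepdims=True)
print(log_loss(Y_tr[:500], _probs, labels=np.arange(10)))
print(vectorized_softmax_loss(W, X_tr[:500], Y_tr[:500], 0.0)[0])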
# Compare two versions. The two versions should compute the same results, but the vectorized version should be much faster.
# +
s_time = time.time()
loss_naive, grad_naive = naive_softmax_loss(W, X_tr, Y_tr, 0.00001)
print ('naive loss : %e with %fs' % (loss_naive, time.time()-s_time))
s_time = time.time()
loss_vectorized, grad_vectorized = vectorized_softmax_loss(W, X_tr, Y_tr, 0.00001)
print ('vectorized loss : %e with %fs' % (loss_vectorized, time.time()-s_time))
print ('loss difference : %f' % np.abs(loss_naive - loss_vectorized))
print ('gradient difference : %f' % np.linalg.norm(grad_naive-grad_vectorized, ord='fro'))
# -
# Now implement the softmax classifier by filling in the skeleton below, using the softmax loss function you implemented above.
class Softmax(object):
def __init__(self):
#self.Weights = None
return
def train(self, X_tr_data, Y_tr_data, X_val_data, Y_val_data, lr=1e-3, reg=1e-5, iterations=100, bs=128, verbose=False):
"""
Train this Softmax classifier using stochastic gradient descent.
Inputs have D dimensions, and we operate on N examples.
Inputs :
        - X_tr_data : A numpy array of shape (N,D) containing training data.
        - Y_tr_data : A numpy array of shape (N,) containing training labels;
          Y[i]=c means that X[i] has label 0<=c<C for C classes.
        - X_val_data, Y_val_data : Validation data and labels in the same format.
- lr : (float) Learning rate for optimization.
- reg : (float) Regularization strength.
- iterations : (integer) Number of steps to take when optimizing.
- bs : (integer) Number of training examples to use at each step.
- verbose : (boolean) If true, print progress during optimization.
        Returns :
- A list containing the value of the loss function at each training iteration.
"""
num_train, dim = X_tr_data.shape
num_classes = np.max(Y_tr_data)+1
        self.Weights = 0.001 * np.random.randn(dim, num_classes)
        loss_history = []
        for it in range(iterations):
#X_batch = None
#Y_batch = None
####################################################################################################
# TODO : Sample batch_size elements from the training data and their corresponding labels #
# to use in this round of gradient descent. #
# Store the data in X_batch and their corresponding labels in Y_batch; After sampling #
            #        X_batch should have shape (bs, dim) and Y_batch should have shape (bs,)                 #
# #
            # Hint : Use np.random.choice to generate indices.                                               #
# Sampling with replacement is faster than sampling without replacement. #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
# Evaluate loss and gradient
            tr_loss, tr_grad = self.loss(X_batch, Y_batch, reg)
            loss_history.append(tr_loss)
# Perform parameter update
####################################################################################################
# TODO : Update the weights using the gradient and the learning rate #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
            if verbose and it % 100 == 0:
                print ('Iteration %d / %d : loss %f ' % (it, iterations, tr_loss))
        return loss_history
def predict(self, X_data):
"""
Use the trained weights of this softmax classifier to predict labels for data points.
Inputs :
- X : A numpy array of shape (N,D) containing training data.
Returns :
- Y_pred : Predicted labels for the data in X. Y_pred is a 1-dimensional array of length N,
and each element is an integer giving the predicted class.
"""
Y_pred = np.zeros(X_data.shape[0])
####################################################################################################
# TODO : Implement this method. Store the predicted labels in Y_pred #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
return Y_pred
def get_accuracy(self, X_data, Y_data):
"""
Use X_data and Y_data to get an accuracy of the model.
Inputs :
- X_data : A numpy array of shape (N,D) containing input data.
- Y_data : A numpy array of shape (N,) containing a true label.
Returns :
- Accuracy : Accuracy of input data pair [X_data, Y_data].
"""
####################################################################################################
# TODO : Implement this method. Calculate an accuracy of X_data using Y_data and predict Func #
#---------------------------------------WRITE YOUR CODE--------------------------------------------#
#--------------------------------------END OF YOUR CODE--------------------------------------------#
####################################################################################################
return accuracy
def loss(self, X_batch, Y_batch, reg):
return vectorized_softmax_loss(self.Weights, X_batch, Y_batch, reg)
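# The hint inside `train` suggests `np.random.choice`. As a small, standalone illustration on toy arrays
# (not wired into the class, and not necessarily the intended solution), sampling a minibatch with
# replacement could look like this:
# +
import numpy as np
toy_X = np.arange(20).reshape(10, 2)   # 10 examples with 2 features each
toy_Y = np.arange(10)
batch_idx = np.random.choice(toy_X.shape[0], 4, replace=True)
toy_X_batch, toy_Y_batch = toy_X[batch_idx], toy_Y[batch_idx]
print(toy_X_batch.shape, toy_Y_batch.shape)   # (4, 2) (4,)
# -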
# Use the validation set to tune hyperparameters (regularization strength and learning rate).
# You should experiment with different ranges for the learning rates and regularization strengths;
# if you are careful you should be able to get a classification accuracy of over 0.35 on the validation set.
# +
# results is a dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form (training_accuracy, validation_accuracy).
# The accuracy is simply the fraction of data points that are correctly classified.
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-8, 1e-7, 5e-7, 1e-6]
regularization_strengths = [5e2, 1e3, 1e4, 5e4]
#########################################################################################################
# TODO : Write code that chooses the best hyperparameters by tuning on the validation set.               #
#        For each combination of hyperparameters, train a Softmax on the training set,                   #
#        compute its accuracy on the training and validation sets, and store these numbers in the        #
#        results dictionary. In addition, store the best validation accuracy in best_val                 #
#        and the Softmax object that achieves this accuracy in best_softmax.                             #
#                                                                                                         #
# Hint : You should use a small value for the `iterations` argument as you develop your validation       #
#        code so that the Softmax doesn't take much time to train; once you are confident that your      #
#        validation code works, you should rerun it with a larger value for `iterations`.                #
#------------------------------------------WRITE YOUR CODE----------------------------------------------#
#softmax = Softmax()
#-----------------------------------------END OF YOUR CODE----------------------------------------------#
#########################################################################################################
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print ('lr %e reg %e train accuracy : %f, val accuracy : %f ' % (lr, reg, train_accuracy, val_accuracy))
print ('best validation accuracy achieved during cross-validation :', best_val)
# -
# Evaluate the best softmax on the test set.
# +
Y_te_pred = best_softmax.predict(X_te)
test_accuracy = np.mean(Y_te == Y_te_pred)
print ('softmax on raw pixels final test set accuracy : ', test_accuracy)
# -
# ## Visualize test results
# Visualize (image, predicted label) pairs from the best softmax model. The results may not look great, since we only trained a simple softmax classifier.
# +
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
images_index = np.int32(np.round(np.random.rand(18,)*1000,0))
fig, axes = plt.subplots(3, 6, figsize=(18, 6),
subplot_kw={'xticks': [], 'yticks': []})
fig.subplots_adjust(hspace=0.3, wspace=0.05)
for ax, idx in zip(axes.flat, images_index):
img = (X_te[idx,:3072].reshape(32, 32, 3) + mean_img.reshape(32, 32, 3))/255.
ax.imshow(img)
    ax.set_title(class_names[int(Y_te_pred[idx])])
# -
# ## Visualize the learned weights
# Visualize the learned weights for each class. Depending on your choice of learning rate and regularization strength, these may or may not be nice to look at.
# +
w = best_softmax.Weights[:-1, :]
w = w.reshape(32,32,3,10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2,5,i+1)
wimg=255.0*(w[:,:,:,i].squeeze() - w_min)/(w_max-w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 2: Inference in Graphical Models
#
# ### Machine Learning 2 (2016/2017)
#
# * The lab exercises should be made in groups of two people or individually.
# * The hand-in deadline is Wednesday, May 10, 23:59.
# * Assignment should be sent to [email protected]. The subject line of your email should be "[ML2_2017] lab#_lastname1\_lastname2".
# * Put your and your teammates' names in the body of the email
# * Attach the .IPYNB (IPython Notebook) file containing your code and answers. Naming of the file follows the same rule as the subject line. For example, if the subject line is "[ML2_2017] lab02\_Bongers\_Blom", the attached file should be "lab02\_Bongers\_Blom.ipynb". Only use underscores ("\_") to connect names, otherwise the files cannot be parsed.
#
# Notes on implementation:
#
# * You should write your code and answers in an IPython Notebook: http://ipython.org/notebook.html. If you have problems, please ask or e-mail Philip.
# * For some of the questions, you can write the code directly in the first code cell that provides the class structure.
# * Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
# * NOTE: test your code and make sure we can run your notebook / scripts!
# ### Introduction
# In this assignment, we will implement the sum-product and max-sum algorithms for factor graphs over discrete variables. The relevant theory is covered in chapter 8 of Bishop's PRML book, in particular section 8.4. Read this chapter carefully before continuing!
#
# We will first implement sum-product and max-sum and apply it to a simple poly-tree structured factor graph for medical diagnosis. Then, we will implement a loopy version of the algorithms and use it for image denoising.
#
# For this assignment we recommend you stick to numpy ndarrays (constructed with np.array, np.zeros, np.ones, etc.) as opposed to numpy matrices, because arrays can store n-dimensional arrays whereas matrices only work for 2d arrays. We need n-dimensional arrays in order to store conditional distributions with more than 1 conditioning variable. If you want to perform matrix multiplication on arrays, use the np.dot function; all infix operators including *, +, -, work element-wise on arrays.
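# As a small illustration of that last point (the numbers below are made up), a conditional distribution
# over a binary variable C with two binary conditioning variables A and B can be stored as a 3-dimensional
# array with one axis per variable:
# +
import numpy as np
p_C_given_AB = np.zeros((2, 2, 2))      # axes correspond to A, B, C
p_C_given_AB[1, 1, :] = [0.01, 0.99]    # p(C | A=1, B=1)
p_C_given_AB[1, 0, :] = [0.10, 0.90]    # p(C | A=1, B=0)
p_C_given_AB[0, 1, :] = [0.30, 0.70]    # p(C | A=0, B=1)
p_C_given_AB[0, 0, :] = [0.99, 0.01]    # p(C | A=0, B=0)
print(p_C_given_AB[1, 0, 1])            # p(C=1 | A=1, B=0)
# -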
# ## Part 1: The sum-product algorithm
#
# We will implement a datastructure to store a factor graph and to facilitate computations on this graph. Recall that a factor graph consists of two types of nodes, factors and variables. Below you will find some classes for these node types to get you started. Carefully inspect this code and make sure you understand what it does; you will have to build on it later.
# +
# %pylab inline
class Node(object):
"""
Base-class for Nodes in a factor graph. Only instantiate sub-classes of Node.
"""
def __init__(self, name):
# A name for this Node, for printing purposes
self.name = name
# Neighbours in the graph, identified with their index in this list.
# i.e. self.neighbours contains neighbour 0 through len(self.neighbours) - 1.
self.neighbours = []
# Reset the node-state (not the graph topology)
self.reset()
def reset(self):
# Incoming messages; a dictionary mapping neighbours to messages.
# That is, it maps Node -> np.ndarray.
self.in_msgs = {}
# A set of neighbours for which this node has pending messages.
# We use a python set object so we don't have to worry about duplicates.
self.pending = set([])
def add_neighbour(self, nb):
self.neighbours.append(nb)
def send_sp_msg(self, other):
# To be implemented in subclass.
raise Exception('Method send_sp_msg not implemented in base-class Node')
def send_ms_msg(self, other):
# To be implemented in subclass.
raise Exception('Method send_ms_msg not implemented in base-class Node')
def receive_msg(self, other, msg):
        # Store the incoming message, replacing previous messages from the same node
self.in_msgs[other] = msg
# TODO: add pending messages
# self.pending.update(...)
def __str__(self):
# This is printed when using 'print node_instance'
return self.name
class Variable(Node):
def __init__(self, name, num_states):
"""
Variable node constructor.
Args:
name: a name string for this node. Used for printing.
num_states: the number of states this variable can take.
Allowable states run from 0 through (num_states - 1).
For example, for a binary variable num_states=2,
and the allowable states are 0, 1.
"""
self.num_states = num_states
# Call the base-class constructor
super(Variable, self).__init__(name)
def set_observed(self, observed_state):
"""
Set this variable to an observed state.
Args:
observed_state: an integer value in [0, self.num_states - 1].
"""
# Observed state is represented as a 1-of-N variable
# Could be 0.0 for sum-product, but log(0.0) = -inf so a tiny value is preferable for max-sum
self.observed_state[:] = 0.000001
self.observed_state[observed_state] = 1.0
def set_latent(self):
"""
Erase an observed state for this variable and consider it latent again.
"""
# No state is preferred, so set all entries of observed_state to 1.0
# Using this representation we need not differentiate between observed and latent
# variables when sending messages.
self.observed_state[:] = 1.0
def reset(self):
super(Variable, self).reset()
self.observed_state = np.ones(self.num_states)
def marginal(self, Z=None):
"""
Compute the marginal distribution of this Variable.
It is assumed that message passing has completed when this function is called.
Args:
Z: an optional normalization constant can be passed in. If None is passed, Z is computed.
Returns: marginal, Z. The first is a numpy array containing the normalized marginal distribution.
Z is either equal to the input Z, or computed in this function (if Z=None was passed).
"""
# TODO: compute marginal
return None, Z
def send_sp_msg(self, other):
# TODO: implement Variable -> Factor message for sum-product
pass
def send_ms_msg(self, other):
# TODO: implement Variable -> Factor message for max-sum
pass
class Factor(Node):
def __init__(self, name, f, neighbours):
"""
Factor node constructor.
Args:
name: a name string for this node. Used for printing
f: a numpy.ndarray with N axes, where N is the number of neighbours.
            That is, the axes of f correspond to variables, and the index along each axis corresponds to a value of that variable.
Each axis of the array should have as many entries as the corresponding neighbour variable has states.
neighbours: a list of neighbouring Variables. Bi-directional connections are created.
"""
# Call the base-class constructor
super(Factor, self).__init__(name)
assert len(neighbours) == f.ndim, 'Factor function f should accept as many arguments as this Factor node has neighbours'
for nb_ind in range(len(neighbours)):
nb = neighbours[nb_ind]
assert f.shape[nb_ind] == nb.num_states, 'The range of the factor function f is invalid for input %i %s' % (nb_ind, nb.name)
self.add_neighbour(nb)
nb.add_neighbour(self)
self.f = f
def send_sp_msg(self, other):
# TODO: implement Factor -> Variable message for sum-product
pass
def send_ms_msg(self, other):
# TODO: implement Factor -> Variable message for max-sum
pass
# -
# ### 1.1 Instantiate network (10 points)
# Convert the directed graphical model ("Bayesian Network") shown below to a factor graph. Instantiate this graph by creating Variable and Factor instances and linking them according to the graph structure.
# To instantiate the factor graph, first create the Variable nodes and then create Factor nodes, passing a list of neighbour Variables to each Factor.
# Use the following prior and conditional probabilities.
#
# $$
# p(\verb+Influenza+) = 0.05 \\\\
# p(\verb+Smokes+) = 0.2 \\\\
# $$
#
# $$
# p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 1) = 0.3 \\\\
# p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 0) = 0.001 \\\\
# p(\verb+Fever+ = 1| \verb+Influenza+ = 1) = 0.9 \\\\
# p(\verb+Fever+ = 1| \verb+Influenza+ = 0) = 0.05 \\\\
# p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 1) = 0.99 \\\\
# p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 0) = 0.9 \\\\
# p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 1) = 0.7 \\\\
# p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 0) = 0.0001 \\\\
# p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 1) = 0.8 \\\\
# p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 0) = 0.07 \\\\
# p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 1) = 0.6 \\\\
# p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 0) = 0.001 \\\\
# $$
from IPython.core.display import Image
Image(filename='bn.png')
# YOUR ANSWER HERE
# ### 1.2 Factor to variable messages (20 points)
# Write a method `send_sp_msg(self, other)` for the Factor class, that checks if all the information required to pass a message to Variable `other` is present, computes the message and sends it to `other`. "Sending" here simply means calling the `receive_msg` function of the receiving node (we will implement this later). The message itself should be represented as a numpy array (np.array) whose length is equal to the number of states of the variable.
#
# An elegant and efficient solution can be obtained using the n-way outer product of vectors. This product takes n vectors $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ and computes a $n$-dimensional tensor (ndarray) whose element $i_0,i_1,...,i_n$ is given by $\prod_j \mathbf{x}^{(j)}_{i_j}$. In python, this is realized as `np.multiply.reduce(np.ix_(*vectors))` for a python list `vectors` of 1D numpy arrays. Try to figure out how this statement works -- it contains some useful functional programming techniques. Another function that you may find useful in computing the message is `np.tensordot`.
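# As a small illustration of what that n-way outer product computes (toy message vectors, and using
# functools.reduce, which gives the same result without relying on how np.multiply.reduce handles a
# tuple of differently-shaped arrays):
# +
from functools import reduce
import numpy as np
msgs = [np.array([0.2, 0.8]), np.array([0.5, 0.3, 0.2])]
# np.ix_ reshapes each vector so that they broadcast against each other;
# multiplying them all together yields the outer-product tensor.
outer = reduce(np.multiply, np.ix_(*msgs))
print(outer.shape)                                      # (2, 3)
print(np.allclose(outer, np.outer(msgs[0], msgs[1])))   # True for the two-vector case
# -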
# ### 1.3 Variable to factor messages (10 points)
#
# Write a method `send_sp_msg(self, other)` for the Variable class, that checks if all the information required to pass a message to Factor `other` is present, computes the message and sends it to the factor.
# ### 1.4 Compute marginal (10 points)
# Later in this assignment, we will implement message passing schemes to do inference. Once the message passing has completed, we will want to compute local marginals for each variable.
# Write the method `marginal` for the Variable class, that computes a marginal distribution over that node.
# ### 1.5 Receiving messages (10 points)
# In order to implement the loopy and non-loopy message passing algorithms, we need some way to determine which nodes are ready to send messages to which neighbours. To do this in a way that works for both loopy and non-loopy algorithms, we make use of the concept of "pending messages", which is explained in Bishop (8.4.7):
# "we will say that a (variable or factor)
# node a has a message pending on its link to a node b if node a has received any
# message on any of its other links since the last time it send (sic) a message to b. Thus,
# when a node receives a message on one of its links, this creates pending messages
# on all of its other links."
#
# Keep in mind that for the non-loopy algorithm, nodes may not have received any messages on some or all of their links. Therefore, before we say node a has a pending message for node b, we must check that node a has received all messages needed to compute the message that is to be sent to b.
#
# Modify the function `receive_msg`, so that it updates the self.pending variable as described above. The member self.pending is a set that is to be filled with Nodes to which self has pending messages. Modify the `send_msg` functions to remove pending messages as they are sent.
# ### 1.6 Inference Engine (10 points)
# Write a function `sum_product(node_list)` that runs the sum-product message passing algorithm on a tree-structured factor graph with given nodes. The input parameter `node_list` is a list of all Node instances in the graph, which is assumed to be ordered correctly. That is, the list starts with a leaf node, which can always send a message. Subsequent nodes in `node_list` should be capable of sending a message when the pending messages of preceding nodes in the list have been sent. The sum-product algorithm then proceeds by passing over the list from beginning to end, sending all pending messages at the nodes it encounters. Then, in reverse order, the algorithm traverses the list again and again sends all pending messages at each node as it is encountered. For this to work, you must initialize pending messages for all the leaf nodes, e.g. `influenza_prior.pending.add(influenza)`, where `influenza_prior` is a Factor node corresponding to the prior, `influenza` is a Variable node and the only connection of `influenza_prior` goes to `influenza`.
#
#
#
# +
# YOUR ANSWER HERE
# -
# ### 1.7 Observed variables and probabilistic queries (15 points)
# We will now use the inference engine to answer probabilistic queries. That is, we will set certain variables to observed values, and obtain the marginals over latent variables. We have already provided functions `set_observed` and `set_latent` that manage a member of Variable called `observed_state`. Modify the `Variable.send_msg` and `Variable.marginal` routines that you wrote before, to use `observed_state` so as to get the required marginals when some nodes are observed.
# ### 1.8 Sum-product and MAP states (5 points)
# A maximum a posteriori state (MAP-state) is an assignment of all latent variables that maximizes the probability of latent variables given observed variables:
# $$
# \mathbf{x}_{\verb+MAP+} = \arg\max _{\mathbf{x}} p(\mathbf{x} | \mathbf{y})
# $$
# Could we use the sum-product algorithm to obtain a MAP state? If yes, how? If no, why not?
#
# __YOUR ANSWER HERE__
# ## Part 2: The max-sum algorithm
# Next, we implement the max-sum algorithm as described in section 8.4.5 of Bishop.
# ### 2.1 Factor to variable messages (10 points)
# Implement the function `Factor.send_ms_msg` that sends Factor -> Variable messages for the max-sum algorithm. It is analogous to the `Factor.send_sp_msg` function you implemented before.
# ### 2.2 Variable to factor messages (10 points)
# Implement the `Variable.send_ms_msg` function that sends Variable -> Factor messages for the max-sum algorithm.
# ### 2.3 Find a MAP state (10 points)
#
# Using the same message passing schedule we used for sum-product, implement the max-sum algorithm. For simplicity, we will ignore issues relating to non-unique maxima. So there is no need to implement backtracking; the MAP state is obtained by a per-node maximization (eq. 8.98 in Bishop). Make sure your algorithm works with both latent and observed variables.
# +
# YOUR ANSWER HERE
# -
# ## Part 3: Image Denoising and Loopy BP
#
# Next, we will use a loopy version of max-sum to perform denoising on a binary image. The model itself is discussed in Bishop 8.3.3, but we will use loopy max-sum instead of Iterative Conditional Modes as Bishop does.
#
# The following code creates some toy data: `im` is a quite large binary image and `test_im` is a smaller synthetic binary image. Noisy versions are also provided.
# +
from pylab import imread, gray
# Load the image and binarize
im = np.mean(imread('dalmatian1.png'), axis=2) > 0.5
imshow(im)
gray()
# Add some noise
noise = np.random.rand(*im.shape) > 0.9
noise_im = np.logical_xor(noise, im)
figure()
imshow(noise_im)
test_im = np.zeros((10,10))
#test_im[5:8, 3:8] = 1.0
#test_im[5,5] = 1.0
figure()
imshow(test_im)
# Add some noise
noise = np.random.rand(*test_im.shape) > 0.9
noise_test_im = np.logical_xor(noise, test_im)
figure()
imshow(noise_test_im)
# -
# ### 3.1 Construct factor graph (10 points)
# Convert the Markov Random Field (Bishop, fig. 8.31) to a factor graph and instantiate it.
# +
# YOUR ANSWER HERE
# -
# ### 3.2 Loopy max-sum (10 points)
# Implement the loopy max-sum algorithm, by passing messages from randomly chosen nodes iteratively until no more pending messages are created or a maximum number of iterations is reached.
#
# Think of a good way to initialize the messages in the graph.
# +
# YOUR ANSWER HERE
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression [35pts (+5 bonus)]
#
# ## Introduction
# One of the most widespread regression tools is the simple but powerful linear regression. In this notebook, you will engineer the Pittsburgh bus data into numerical features and use them to predict the number of minutes until the bus reaches the bus stop at Forbes and Morewood.
#
# Notebook restriction: you may not use scikit-learn for this notebook.
#
# ## Q1: Labeling the Dataset [8pts]
#
# You may have noticed that the Pittsburgh bus data has a predictions table with the TrueTime predictions on arrival time, however it does not have the true label: the actual number of minutes until a bus reaches Forbes and Morewood. You will have to generate this yourself.
#
# Using the `all_trips` function that you implemented in homework 2, you can split the dataframe into separate trips. You will first process each trip into a form more natural for the regression setting. For each trip, you will need to locate the point at which a bus passes the bus stop to get the time at which the bus passes the bus stop. From here, you can calculate the true label for all prior datapoints, and throw out the rest.
#
# ### Importing functions from homework 2
#
# Using the menu in Jupyter, you can import code from your notebook as a Python script using the following steps:
# 1. Click File -> Download as -> Python (.py)
# 2. Save file (time_series.py) in the same directory as this notebook
# 3. (optional) Remove all test code (i.e. lines between AUTOLAB_IGNORE macros) from the script for faster loading time
# 4. Import from the notebook with `from time_series import function_name`
#
# ### Specifications
#
# 1. To determine when the bus passes Morewood, we will use the Euclidean distance as a metric to determine how close the bus is to the bus stop.
# 2. We will assume that the row entry with the smallest Euclidean distance to the bus stop is when the bus reaches the bus stop, and that you should truncate all rows that occur **after** this entry. In the case where there are multiple entries with the exact same minimal distance, you should just consider the first one that occurs in the trip (so truncate everything after the first occurrence of minimal distance).
# 3. Assume that the row with the smallest Euclidean distance to the bus stop is also the true time at which the bus passes the bus stop. Using this, create a new column called `eta` that contains for each row, the number of minutes until the bus passes the bus stop (so the last row of every trip will have an `eta` of 0).
# 4. Make sure your `eta` is numerical and not a python timedelta object.
import pandas as pd
import numpy as np
import scipy.linalg as la
from collections import Counter
import datetime
# AUTOLAB_IGNORE_START
from time_series import load_data, split_trips
vdf, _ = load_data('bus_train.db')
all_trips = split_trips(vdf)
# AUTOLAB_IGNORE_STOP
#https://stackoverflow.com/questions/4983258/python-how-to-check-list-monotonicity
pd.options.mode.chained_assignment = None
def label_and_truncate(trip, bus_stop_coordinates):
""" Given a dataframe of a trip following the specification in the previous homework assignment,
generate the labels and throw away irrelevant rows.
Args:
trip (dataframe): a dataframe from the list outputted by split_trips from homework 2
stop_coordinates ((float, float)): a pair of floats indicating the (latitude, longitude)
coordinates of the target bus stop.
Return:
(dataframe): a labeled trip that is truncated at Forbes and Morewood and contains a new column
called `eta` which contains the number of minutes until it reaches the bus stop.
"""
    # Squared Euclidean distance from each row to the bus stop (argmin is the same as for the true distance)
    lat = (trip["lat"] - bus_stop_coordinates[0])**2
    lon = (trip["lon"] - bus_stop_coordinates[1])**2
    dist = np.array(lat + lon)
    # Keep everything up to and including the closest point to the stop
    result_lst = trip[:np.argmin(dist) + 1]
    # The timestamp of the last remaining row is taken as the time the bus passes the stop
    passing_time = np.array(result_lst.tail(1).index)
    ongoing_time = np.array(result_lst.index)
    # Minutes until the bus passes the stop, as plain numbers rather than timedeltas
    etas = pd.to_numeric((passing_time - ongoing_time).astype('timedelta64[m]'))
    result_lst["eta"] = etas
    return result_lst
# AUTOLAB_IGNORE_START
morewood_coordinates = (40.444671114203, -79.94356058465502) # (lat, lon)
labeled_trips = [label_and_truncate(trip, morewood_coordinates) for trip in all_trips]
# print(len(labeled_trips))
labeled_vdf = pd.concat(labeled_trips).reset_index()
# We remove datapoints that make no sense (ETA more than 10 hours)
labeled_vdf = labeled_vdf[labeled_vdf["eta"] < 10*60].reset_index(drop=True)
print(Counter([len(t) for t in labeled_trips]))
print(labeled_vdf.head())
# AUTOLAB_IGNORE_STOP
# For our implementation, this returns the following output
# ```python
# >>> Counter([len(t) for t in labeled_trips])
# Counter({1: 506, 21: 200, 18: 190, 20: 184, 19: 163, 16: 162, 22: 159, 17: 151, 23: 139, 31: 132, 15: 128, 2: 125, 34: 112, 32: 111, 33: 101, 28: 98, 14: 97, 30: 95, 35: 95, 29: 93, 24: 90, 25: 89, 37: 86, 27: 83, 39: 83, 38: 82, 36: 77, 26: 75, 40: 70, 13: 62, 41: 53, 44: 52, 42: 47, 6: 44, 5: 39, 12: 39, 46: 39, 7: 38, 3: 36, 45: 33, 47: 33, 43: 31, 48: 27, 4: 26, 49: 26, 11: 25, 50: 25, 10: 23, 51: 23, 8: 19, 9: 18, 53: 16, 54: 15, 52: 14, 55: 14, 56: 8, 57: 3, 58: 3, 59: 3, 60: 3, 61: 1, 62: 1, 67: 1})
# >>> labeled_vdf.head()
# tmstmp vid lat lon hdg pid rt des \
# 0 2016-08-11 10:56:00 5549 40.439504 -79.996981 114 4521 61A Swissvale
# 1 2016-08-11 10:57:00 5549 40.439504 -79.996981 114 4521 61A Swissvale
# 2 2016-08-11 10:58:00 5549 40.438842 -79.994733 124 4521 61A Swissvale
# 3 2016-08-11 10:59:00 5549 40.437938 -79.991213 94 4521 61A Swissvale
# 4 2016-08-11 10:59:00 5549 40.437938 -79.991213 94 4521 61A Swissvale
#
# pdist spd tablockid tatripid eta
# 0 1106 0 061A-164 6691 16
# 1 1106 0 061A-164 6691 15
# 2 1778 8 061A-164 6691 14
# 3 2934 7 061A-164 6691 13
# 4 2934 7 061A-164 6691 13
# ```
# ## Q2: Generating Basic Features [8pts]
# In order to perform linear regression, we need to have numerical features. However, not everything in the bus database is a number, and not all of the numbers even make sense as numerical features. If you use the data as is, it is highly unlikely that you'll achieve anything meaningful.
#
# Consequently, you will perform some basic feature engineering. Feature engineering is extracting "features" or statistics from your data, hopefully improving the performance of your learning algorithm (in this case, linear regression). Good features can often make up for poor model selection and improve your overall predictive ability on unseen data. In essence, you want to turn your data into something your algorithm understands.
#
# ### Specifications
# 1. The input to your function will be a concatenation of the trip dataframes generated in Q1 with the index dropped (so same structure as the original dataframe, but with an extra column and fewer rows).
# 2. Linear models typically have a constant bias term. We will encode this as a column of 1s in the dataframe. Call this column 'bias'.
# 2. We will keep the following columns as is, since they are already numerical: pdist, spd, lat, lon, and eta
# 3. Time is a cyclic variable. To encode this as a numerical feature, we can use a sine/cosine transformation. Suppose we have a feature of value f that ranges from 0 to N. Then, the sine and cosine transformation would be $\sin\left(2\pi \frac{f}{N}\right)$ and $\cos\left(2\pi \frac{f}{N}\right)$. For example, the sine transformation of 6 hours would be $\sin\left(2\pi \frac{6}{24}\right)$, since there are 24 hours in a cycle. You should create sine/cosine features for the following:
# * day of week (cycles every week, 0=Monday)
# * hour of day (cycles every 24 hours, 0=midnight)
# * time of day represented by total number of minutes elapsed in the day (cycles every 60*24 minutes, 0=midnight).
# 4. Heading is also a cyclic variable, as it is the ordinal direction in degrees (so cycles every 360 degrees).
# 4. Buses run on different schedules on the weekday as opposed to the weekend. Create a binary indicator feature `weekday` that is 1 if the day is a weekday, and 0 otherwise.
# 5. Route and destination are both categorical variables. We can encode these as indicator vectors, where each column represents a possible category and a 1 in the column indicates that the row belongs to that category. This is also known as a one hot encoding. Make a set of indicator features for the route, and another set of indicator features for the destination.
# 6. The names of your indicator columns for your categorical variables should be exactly the value of the categorical variable. The pandas function `pd.get_dummies` will be useful.
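# For example (toy values, just to illustrate the one-hot and cyclic encodings described above):
# +
toy = pd.DataFrame({"rt": ["61A", "61C", "61A"], "hour": [6, 13, 23]})
print(pd.get_dummies(toy["rt"]))              # one indicator column per observed route value
print(np.sin(2 * np.pi * toy["hour"] / 24))   # sine component of the hour-of-day encoding
print(np.cos(2 * np.pi * toy["hour"] / 24))
# -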
# +
def create_features(vdf):
""" Given a dataframe of labeled and truncated bus data, generate features for linear regression.
Args:
df (dataframe) : dataframe of bus data with the eta column and truncated rows
Return:
(dataframe) : dataframe of features for each example
"""
df = pd.DataFrame()
df['pdist'] =vdf["pdist"]
df['spd']= vdf['spd']
df['lat']= vdf['lat']
df['lon']= vdf['lon']
df['eta']= vdf["eta"]
df['sin_hdg']=np.sin(2*np.pi*vdf["hdg"]/360)
df['cos_hdg']=np.cos(2*np.pi*vdf["hdg"]/360)
df['sin_day_of_week']=np.sin((2*np.pi*vdf["tmstmp"].dt.dayofweek)/7)
df['cos_day_of_week']=np.cos((2*np.pi*vdf["tmstmp"].dt.dayofweek)/7)
df['sin_hour_of_day']=np.sin((2*np.pi*vdf["tmstmp"].dt.hour)/24)
df['cos_hour_of_day']=np.cos((2*np.pi*vdf["tmstmp"].dt.hour)/24)
minutes=pd.DataFrame()
mins=[]
for i in vdf["tmstmp"]:
d1 = datetime.datetime.combine(i,datetime.datetime.min.time())
secs=(i-d1).total_seconds()/60
mins.append(secs)
minutes["mins"]=mins
df['sin_time_of_day']=np.sin((2*np.pi*minutes["mins"])/(60*24))
df['cos_time_of_day']=np.cos((2*np.pi*minutes["mins"])/(60*24))
df["weekday"]=[1 if i<4 else 0 for i in vdf["tmstmp"].dt.dayofweek]
df['bias']=1
set_des=set(vdf["des"])
set_rt =set(vdf["rt"])
for i in set_des:
df[i]=[1 if i==j else 0 for j in vdf["des"]]
for i in set_rt:
df[i]=[1 if i==j else 0 for j in vdf["rt"]]
return df
# AUTOLAB_IGNORE_START
#print(labeled_vdf["des"])
vdf_features = create_features(labeled_vdf)
vdf_features
# AUTOLAB_IGNORE_STOP
# -
# AUTOLAB_IGNORE_START
with pd.option_context('display.max_columns', 26):
print(vdf_features.columns)
print(vdf_features.head())
# AUTOLAB_IGNORE_STOP
# Our implementation has the following output. Verify that your code has the following columns (order doesn't matter):
# ```python
# >>> vdf_features.columns
# Index([ u'bias', u'pdist', u'spd',
# u'lat', u'lon', u'eta',
# u'sin_hdg', u'cos_hdg', u'sin_day_of_week',
# u'cos_day_of_week', u'sin_hour_of_day', u'cos_hour_of_day',
# u'sin_time_of_day', u'cos_time_of_day', u'weekday',
# u'Braddock ', u'Downtown', u'Greenfield Only',
# u'McKeesport ', u'Murray-Waterfront', u'Swissvale',
# u'61A', u'61B', u'61C',
# u'61D'],
# dtype='object')
# bias pdist spd lat lon eta sin_hdg cos_hdg \
# 0 1.0 1106 0 40.439504 -79.996981 16 0.913545 -0.406737
# 1 1.0 1106 0 40.439504 -79.996981 15 0.913545 -0.406737
# 2 1.0 1778 8 40.438842 -79.994733 14 0.829038 -0.559193
# 3 1.0 2934 7 40.437938 -79.991213 13 0.997564 -0.069756
# 4 1.0 2934 7 40.437938 -79.991213 13 0.997564 -0.069756
#
# sin_day_of_week cos_day_of_week ... Braddock Downtown \
# 0 0.433884 -0.900969 ... 0.0 0.0
# 1 0.433884 -0.900969 ... 0.0 0.0
# 2 0.433884 -0.900969 ... 0.0 0.0
# 3 0.433884 -0.900969 ... 0.0 0.0
# 4 0.433884 -0.900969 ... 0.0 0.0
#
# Greenfield Only McKeesport Murray-Waterfront Swissvale 61A 61B 61C \
# 0 0.0 0.0 0.0 1.0 1.0 0.0 0.0
# 1 0.0 0.0 0.0 1.0 1.0 0.0 0.0
# 2 0.0 0.0 0.0 1.0 1.0 0.0 0.0
# 3 0.0 0.0 0.0 1.0 1.0 0.0 0.0
# 4 0.0 0.0 0.0 1.0 1.0 0.0 0.0
#
# 61D
# 0 0.0
# 1 0.0
# 2 0.0
# 3 0.0
# 4 0.0
#
# [5 rows x 25 columns]
# ```
# ## Q3 Linear Regression using Ordinary Least Squares [10 + 4pts]
# Now you will finally implement a linear regression. As a reminder, linear regression models the data as
#
# $$\mathbf y = \mathbf X\mathbf \beta + \mathbf \epsilon$$
#
# where $\mathbf y$ is a vector of outputs, $\mathbf X$ is also known as the design matrix, $\mathbf \beta$ is a vector of parameters, and $\mathbf \epsilon$ is noise. We will be estimating $\mathbf \beta$ using Ordinary Least Squares, and we recommend following the matrix notation for this problem (https://en.wikipedia.org/wiki/Ordinary_least_squares).
#
# ### Specification
# 1. We use the numpy term array-like to refer to array like types that numpy can operate on (like Pandas DataFrames).
# 1. Regress the output (eta) on all other features
# 2. Return the predicted output for the inputs in X_test
# 3. Calculating the inverse $(X^TX)^{-1}$ is unstable and prone to numerical inaccuracies. Furthermore, the assumptions of Ordinary Least Squares require it to be positive definite and invertible, which may not be true if you have redundant features. Thus, you should instead use $(X^TX + \lambda*I)^{-1}$ for identity matrix $I$ and $\lambda = 10^{-4}$, which for now acts as a numerical "hack" to ensure this is always invertible. Furthermore, instead of computing the direct inverse, you should utilize the Cholesky decomposition which is much more stable when solving linear systems.
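# As a quick illustration of the Cholesky-based solve described in point 3 (a sketch on a synthetic system,
# not the required implementation), `scipy.linalg.cho_factor` / `cho_solve` (imported above as `la`) can be
# used like this:
# +
A_toy = np.random.randn(50, 5)
b_toy = np.random.randn(50)
lam = 1e-4
lhs = A_toy.T @ A_toy + lam * np.eye(5)      # X^T X + lambda * I  (symmetric positive definite)
rhs = A_toy.T @ b_toy                        # X^T y
beta_toy = la.cho_solve(la.cho_factor(lhs), rhs)
print(np.allclose(lhs @ beta_toy, rhs))      # True
# -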
class LR_model():
""" Perform linear regression and predict the output on unseen examples.
Attributes:
beta (array_like) : vector containing parameters for the features """
def __init__(self, X, y):
""" Initialize the linear regression model by computing the estimate of the weights parameter
Args:
X (array-like) : feature matrix of training data where each row corresponds to an example
y (array like) : vector of training data outputs
"""
self.beta = np.zeros(X.shape[1])
x = np.array(X)
y = np.array(y)
lambdaa = 10**-4
part1 = ((x.T @ x) +(lambdaa * np.identity(X.shape[1])))
part2 =(x.T @ y)
self.beta=np.linalg.solve(part1,part2)
pass
def predict(self, X_p):
""" Predict the output of X_p using this linear model.
Args:
X_p (array_like) feature matrix of predictive data where each row corresponds to an example
Return:
(array_like) vector of predicted outputs for the X_p
"""
x_arr = np.array(X_p)
y_pred = x_arr @ self.beta
return y_pred
pass
# We have provided some validation data for you, which is another scrape of the Pittsburgh bus data (but for a different time span). You will need to do the same processing to generate labels and features to your validation dataset. Calculate the mean squared error of the output of your linear regression on both this dataset and the original training dataset.
#
# How does it perform? One simple baseline is to make sure that it at least predicts as well as predicting the mean of what you have seen so far. Does it do better than predicting the mean? Compare the mean squared error of a predictor that predicts the mean vs your linear regression model.
#
# ### Specifications
# 1. Build your linear model using only the training data
# 2. Compute the mean squared error of the predictions on both the training and validation data.
# 3. Compute the mean squared error of predicting the mean of the **training outputs** for all inputs.
# 4. You will need to process the validation dataset in the same way you processed the training dataset.
# 5. You will need to split your features from your output (eta) prior to calling compute_mse
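# Here, mean squared error is simply $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, where $\hat{y}_i$ is either the linear-regression prediction or the training-set mean, depending on which predictor is being evaluated.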
# +
# Calculate mean squared error on both the training and validation set
def compute_mse(LR, X, y, X_v, y_v):
""" Given a linear regression model, calculate the mean squared error for the
training dataset, the validation dataset, and for a mean prediction
Args:
LR (LR_model) : Linear model
X (array-like) : feature matrix of training data where each row corresponds to an example
y (array like) : vector of training data outputs
X_v (array-like) : feature matrix of validation data where each row corresponds to an example
y_v (array like) : vector of validation data outputs
Return:
(train_mse, train_mean_mse,
valid_mse, valid_mean_mse) : a 4-tuple of mean squared errors
1. MSE of linear regression on the training set
2. MSE of predicting the mean on the training set
3. MSE of linear regression on the validation set
4. MSE of predicting the mean on the validation set
"""
yhat = LR.predict(X)
mse_lr_tr = np.mean((y-yhat)**2)
mse_me_tr = np.mean((y-np.mean(y))**2)
yhat_v = LR.predict(X_v)
mse_lr_v = np.mean((y_v-yhat_v)**2)
mse_me_v = np.mean((y_v-np.mean(y))**2)
return (mse_lr_tr,mse_me_tr,mse_lr_v,mse_me_v)
pass
# +
# AUTOLAB_IGNORE_START
# First you should replicate the same processing pipeline as we did to the training set
vdf_valid, pdf_valid = load_data('bus_valid.db')
all_trips_valid =split_trips(vdf_valid)
labeled_trips_valid = [label_and_truncate(trip, morewood_coordinates) for trip in all_trips_valid]
labeled_vdf_valid = pd.concat(labeled_trips_valid).reset_index()
vdf_features_valid = create_features(labeled_vdf_valid)
# Separate the features from the output and pass it into your linear regression model.
y_df =vdf_features.eta
X_df = vdf_features.drop("eta",axis=1)
y_valid_df = vdf_features_valid.eta
X_valid_df =vdf_features_valid.drop("eta",axis=1)
LR = LR_model(X_df, y_df)
print(compute_mse(LR,
X_df,
y_df,
X_valid_df,
y_valid_df))
# AUTOLAB_IGNORE_STOP
# -
# As a quick check, our training data MSE is approximately 38.99.
# ## Q4 TrueTime Predictions [5pts]
# How do you fare against the Pittsburgh Truetime predictions? In this last problem, you will match predictions to their corresponding vehicles to build a dataset that is labeled by TrueTime. Remember that we only evaluate performance on the validation set (never the training set). How did you do?
#
# ### Specification
# 1. You should use the pd.DataFrame.merge function to combine your vehicle dataframe and predictions dataframe into a single dataframe. You should drop any rows that have no predictions (see the how parameter). (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html)
# 2. You can compute the TrueTime ETA by taking their predicted arrival time and subtracting the timestamp, and converting that into an integer representing the number of minutes.
# 3. Compute the mean squared error for linear regression only on the rows that have predictions (so only the rows that remain after the merge).
# +
def compare_truetime(LR, labeled_vdf, pdf):
""" Compute the mse of the truetime predictions and the linear regression mse on entries that have predictions.
Args:
LR (LR_model) : an already trained linear model
labeled_vdf (pd.DataFrame): a dataframe of the truncated and labeled bus data (same as the input to create_features)
pdf (pd.DataFrame): a dataframe of TrueTime predictions
Return:
(tt_mse, lr_mse): a tuple of the TrueTime MSE, and the linear regression MSE
"""
featured_vdf = create_features(labeled_vdf)
eta =featured_vdf["eta"]
x = featured_vdf.drop("eta",axis=1)
eta_hat = LR.predict(x)
labeled_vdf["eta_lr"]=eta_hat
labeled_vdf.reset_index()
merged_df=pd.merge(labeled_vdf, pdf,how="inner")
mins = np.array(((merged_df["prdtm"]-merged_df["tmstmp"]).dt.seconds)/60)
merged_df["eta_tt"] =mins
mse_lr = np.mean((merged_df["eta_lr"]-merged_df["eta"])**2)
mse_tt = np.mean((merged_df["eta_tt"]-merged_df["eta"])**2)
return (mse_tt,mse_lr)
pass
# AUTOLAB_IGNORE_START
compare_truetime(LR, labeled_vdf_valid, pdf_valid)
# AUTOLAB_IGNORE_STOP
#50.20239900730732, 60.40782041336532
# -
# As a sanity check, your linear regression MSE should be approximately 50.20.
# ## Q5 Feature Engineering contest (bonus)
#
# You may be wondering "why did we pick the above features?" Some of the above features may be entirely useless, or you may have ideas on how to construct better features. Sometimes, choosing good features can be the entirety of a data science problem.
#
# In this question, you are given complete freedom to choose what and how many features you want to generate. Upon submission to Autolab, we will run linear regression on your generated features and maintain a scoreboard of best regression accuracy (measured by mean squared error).
#
# The top scoring students will receive a bonus of 5 points.
#
# ### Tips:
# * Test your features locally by building your model using the training data, and predicting on the validation data. Compute the mean squared error on the **validation dataset** as a metric for how well your features generalize. This helps avoid overfitting to the training dataset, and you'll have faster turnaround time than resubmitting to autolab.
# * The linear regression model will be trained on your chosen features of the same training examples we provide in this notebook.
# * We test your regression on a different dataset from the training and validation set that we provide for you, so the MSE you get locally may not match how your features work on the Autolab dataset.
# * We will solve the linear regression using Ordinary Least Squares with regularization $\lambda=10^{-4}$ and a Cholesky factorization, exactly as done earlier in this notebook.
# * Note that the argument contains **UNlabeled** data: you cannot build features off the output labels (there is no ETA column). This is in contrast to before, where we kept everything inside the same dataframe for convenience. You can produce the sample input by removing the "eta" column, which we provide code for below.
# * Make sure your features are all numeric. Try everything!
# +
def contest_features(vdf, vdf_train):
""" Given a dataframe of UNlabeled and truncated bus data, generate ANY features you'd like for linear regression.
Args:
vdf (dataframe) : dataframe of bus data with truncated rows but unlabeled (no eta column )
for which you should produce features
vdf_train (dataframe) : dataframe of training bus data, truncated and labeled
Return:
(dataframe) : dataframe of features for each example in vdf
"""
# create your own engineered features
pass
# AUTOLAB_IGNORE_START
# contest_cols = list(labeled_vdf.columns)
# contest_cols.remove("eta")
# contest_features(labeled_vdf_valid[contest_cols], labeled_vdf).head()
# AUTOLAB_IGNORE_STOP
# -