Epoch 353/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0455 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0945 - val_sparse_categorical_accuracy: 0.9612
Epoch 354/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0452 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0921 - val_sparse_categorical_accuracy: 0.9598
Epoch 355/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0430 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0903 - val_sparse_categorical_accuracy: 0.9626
Epoch 356/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0471 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1045 - val_sparse_categorical_accuracy: 0.9626
Epoch 357/500
90/90 [==============================] - 0s 5ms/step - loss: 0.0508 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0949 - val_sparse_categorical_accuracy: 0.9653
Epoch 358/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0468 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9639
Epoch 359/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9612
Epoch 360/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0440 - sparse_categorical_accuracy: 0.9899 - val_loss: 0.0988 - val_sparse_categorical_accuracy: 0.9626
Epoch 361/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0448 - sparse_categorical_accuracy: 0.9875 - val_loss: 0.0975 - val_sparse_categorical_accuracy: 0.9667
Epoch 362/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0477 - sparse_categorical_accuracy: 0.9875 - val_loss: 0.0914 - val_sparse_categorical_accuracy: 0.9639
Epoch 363/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0493 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9626
Epoch 364/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0488 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9626
Epoch 365/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0491 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0960 - val_sparse_categorical_accuracy: 0.9626
Epoch 366/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0477 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0891 - val_sparse_categorical_accuracy: 0.9612
Epoch 367/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0470 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.1026 - val_sparse_categorical_accuracy: 0.9626
Epoch 368/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0463 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0909 - val_sparse_categorical_accuracy: 0.9626
Epoch 369/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0459 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0909 - val_sparse_categorical_accuracy: 0.9639
Epoch 370/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0511 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.1036 - val_sparse_categorical_accuracy: 0.9626
Epoch 371/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0479 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0922 - val_sparse_categorical_accuracy: 0.9626
Epoch 372/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0516 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0932 - val_sparse_categorical_accuracy: 0.9653
Epoch 373/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0451 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0928 - val_sparse_categorical_accuracy: 0.9639
Epoch 374/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0461 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0911 - val_sparse_categorical_accuracy: 0.9612
Epoch 375/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0895 - val_sparse_categorical_accuracy: 0.9639
Epoch 376/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0902 - val_sparse_categorical_accuracy: 0.9639
Epoch 377/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0465 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9681
Epoch 378/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0430 - sparse_categorical_accuracy: 0.9882 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9626
Epoch 379/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0524 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0910 - val_sparse_categorical_accuracy: 0.9598
Epoch 380/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0467 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.0947 - val_sparse_categorical_accuracy: 0.9639
Epoch 381/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0464 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0922 - val_sparse_categorical_accuracy: 0.9653
Epoch 382/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0449 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0918 - val_sparse_categorical_accuracy: 0.9639
Epoch 383/500
90/90 [==============================] - 1s 6ms/step - loss: 0.0438 - sparse_categorical_accuracy: 0.9889 - val_loss: 0.0905 - val_sparse_categorical_accuracy: 0.9612
Epoch 00383: early stopping
Evaluate model on test data
model = keras.models.load_model("best_model.h5")
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy", test_acc)
print("Test loss", test_loss)
42/42 [==============================] - 0s 2ms/step - loss: 0.0936 - sparse_categorical_accuracy: 0.9682
Test accuracy 0.9681817889213562
Test loss 0.0935916006565094
Plot the model's training and validation accuracy
metric = "sparse_categorical_accuracy"
plt.figure()
plt.plot(history.history[metric])
plt.plot(history.history["val_" + metric])
plt.title("model " + metric)
plt.ylabel(metric, fontsize="large")
plt.xlabel("epoch", fontsize="large")
plt.legend(["train", "val"], loc="best")
plt.show()
plt.close()
[Plot: training and validation sparse_categorical_accuracy per epoch]
We can see that the training accuracy reaches almost 0.95 after 100 epochs. The validation accuracy shows, however, that the network still benefits from further training: after 200 epochs, both the validation and the training accuracy reach almost 0.97. Beyond the 200th epoch, if we keep training, the validation accuracy starts decreasing while the training accuracy keeps increasing: the model starts overfitting.
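The early stopping at epoch 383 and the reloading of best_model.h5 above imply that training was run with EarlyStopping and ModelCheckpoint callbacks. A minimal sketch of a setup consistent with that log; the monitored quantity, patience, and batch size are assumptions, only epochs=500 is taken from the output:

callbacks = [
    # Keep the weights of the best epoch (lowest validation loss) on disk.
    keras.callbacks.ModelCheckpoint(
        "best_model.h5", save_best_only=True, monitor="val_loss"
    ),
    # verbose=1 prints the "Epoch 00383: early stopping" message seen above.
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
history = model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=500,
    callbacks=callbacks,
    validation_split=0.2,
    verbose=1,
)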
This notebook demonstrates how to do timeseries classification using a Transformer model. |
Introduction |
This is the Transformer architecture from Attention Is All You Need, applied to timeseries instead of natural language. |
This example requires TensorFlow 2.4 or higher. |
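As a preview, a single encoder block of this architecture can be sketched in Keras as follows. This is only an illustrative sketch, not necessarily the exact block built later in the example: the function name and hyperparameters are placeholders, and the feed-forward part is written with kernel-size-1 Conv1D layers so it applies position-wise along the timeseries.

from tensorflow import keras
from tensorflow.keras import layers

def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0.0):
    # Self-attention sub-block, pre-normalized, with a residual connection.
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs

    # Position-wise feed-forward sub-block, also with a residual connection.
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res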
Load the dataset |
We are going to use the same dataset and preprocessing as the TimeSeries Classification from Scratch example. |
import numpy as np |
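# Sketch of the loading step used by the from-scratch example: the FordA
# dataset of the UCR/UEA archive, stored as tab-separated files whose first
# column holds the class label. Helper name and data URL follow that example.
def readucr(filename):
    data = np.loadtxt(filename, delimiter="\t")
    y = data[:, 0]   # first column: label
    x = data[:, 1:]  # remaining columns: the timeseries values
    return x, y.astype(int)

root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"
x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")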