{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Q-Learning en el juego del Dinosaurio en Google Chrome"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Usamos el modulo Pygame y el tutorial de Max Teaches Tech de Youtube para recrear el juego del dinosaurio. El repositorio con los assets se encuentra aqui:\n",
"\n",
"https://github.com/maxontech/chrome-dinosaur/tree/master/Assets\n",
"\n",
"\n",
"En los primeros 300 puntos del juego, solo aparecen cactus; despues, aparecen mas obstaculos, y la velocidad va aumentando entre mas avanza el juego."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Configuraciones Iniciales y Importaciones\n",
"Aquí definimos las dimensiones de la pantalla del juego y establecemos configuraciones iniciales como la velocidad del juego, tamaños de memoria, y parámetros para el entrenamiento de la red neuronal. También importamos las bibliotecas necesarias como Pygame para la interfaz gráfica y PyTorch para la implementación de la red neuronal.\n",
"\n",
"### 1. Tamaños de Memoria de Replay\n",
"- **`INIT_REPLAY_MEM_SIZE = 5_000` y `REPLAY_MEMORY_SIZE = 45_000`:** Una memoria de replay suficientemente grande permite al agente aprender de muchas experiencias pasadas, y asi generalizar.\n",
"- **`MIN_REPLAY_MEMORY_SIZE = 1_000`:** Este es el tamaño mínimo de memoria de replay necesario antes de comenzar el entrenamiento. Garantiza que el modelo tenga suficientes datos para un entrenamiento bueno.\n",
"\n",
"### 2. Tamaño del Minibatch y Factor de Descuento\n",
"- **`MINIBATCH_SIZE = 64`:** Un tamaño de minibatch de 64 es un equilibrio común entre eficiencia computacional y estabilidad del entrenamiento. Permite que la red aprenda de diferentes experiencias en cada actualización.\n",
"- **`DISCOUNT = 0.95`:** El factor de descuento (gamma) de 0.95 reduce el valor presente de las recompensas futuras, lo que ayuda a enfocar el agente en recompensas a corto plazo pero sin ignorar completamente las consecuencias a largo plazo.\n",
"\n",
"### 3. Actualización del Modelo Objetivo\n",
"- **`UPDATE_TARGET_THRESH = 5`:** La actualización de los pesos del modelo objetivo cada 5 episodios ayuda a mantener la estabilidad del aprendizaje. Asegura que el modelo objetivo no cambie demasiado rápido y proporciona estimaciones consistentes de los valores Q objetivo.\n",
"\n",
"### 4. Estrategia Epsilon-Greedy\n",
"- **`EPSILON_INIT = 0.45`, `EPSILON_DECAY = 0.997`, `MIN_EPSILON = 0.05`:** Estos valores controlan la tasa de exploración/explotación del agente. Un `epsilon` inicial de 0.45 se eligió debido a las características específicas del juego (p. ej., la duración de la acción de saltar comparada con correr o agacharse). La tasa de decaimiento de 0.997 reduce gradualmente la exploración a medida que el agente aprende, pero evita que se reduzca a cero, manteniendo siempre una cierta probabilidad de exploración.\n",
"\n",
"### 5. Duración del Entrenamiento\n",
"- **`NUM_EPISODES = 2_000`:** El numero de epochs que queremos que corra.\n",
"\n",
"### Conclusión\n",
"La selección de estos hiperparámetros refleja un enfoque equilibrado que considera las particularidades del juego del dinosaurio, como la diferencia en la duración de las acciones y la necesidad de estabilizar el aprendizaje. Además, la estrategia de aprendizaje y memoria de replay ayuda a mantener un equilibrio entre aprender de experiencias recientes y no olvidar lecciones anteriores, facilitando un aprendizaje eficaz y robusto."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"#%pip install sqlalchemy\n",
"#%pip install pygame\n",
"#Para usar GPU y torch\n",
"# %pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9e601655",
"metadata": {},
"outputs": [],
"source": [
"SCREEN_HEIGHT = 600\n",
"SCREEN_WIDTH = 1100\n",
"\n",
"INIT_GAME_SPEED = 14\n",
"X_POS_BG_INIT = 0\n",
"Y_POS_BG = 380\n",
"\n",
"INIT_REPLAY_MEM_SIZE = 5_000\n",
"REPLAY_MEMORY_SIZE = 45_000\n",
"MODEL_NAME = \"DINO\"\n",
"MIN_REPLAY_MEMORY_SIZE = 1_000\n",
"MINIBATCH_SIZE = 64\n",
"DISCOUNT = 0.95\n",
"UPDATE_TARGET_THRESH = 5\n",
"#EPSILON_INIT = 0.45 epsilon inicial\n",
"EPSILON_INIT = 0.25 #modificamos para que sea menos exploratorio, menor epsilon menos exploratorio\n",
"#EPSILON_DECAY = 0.997 epsilon inicial\n",
"EPSILON_DECAY = 0.75 #modificamos para que sea menos exploratorio, menor epsilon menos exploratorio\n",
"NUM_EPISODES = 100\n",
"MIN_EPSILON = 0.05"
]
},
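{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch (not part of the original training loop) of how fast the epsilon-greedy exploration rate decays under these constants. With `EPSILON_INIT = 0.25` and `EPSILON_DECAY = 0.75`, epsilon reaches `MIN_EPSILON` within a handful of episodes, so the agent turns greedy very quickly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: trace the per-episode epsilon decay schedule used later in play_auto().\n",
"eps = EPSILON_INIT\n",
"for episode in range(1, NUM_EPISODES + 1):\n",
"    eps = max(MIN_EPSILON, eps * EPSILON_DECAY)\n",
"    if eps == MIN_EPSILON:\n",
"        print(f\"epsilon reaches MIN_EPSILON after episode {episode}\")\n",
"        break"
]
},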
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"pygame 2.5.2 (SDL 2.28.3, Python 3.10.9)\n",
"Hello from the pygame community. https://www.pygame.org/contribute.html\n"
]
}
],
"source": [
"import pygame\n",
"import os\n",
"\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.optim as optim\n",
"\n",
"\n",
"import pandas as pd\n",
"import numpy as np\n",
"from collections import deque\n",
"import random\n",
"\n",
"import pygame\n",
"import random\n",
"from typing import List\n",
"\n",
"from argparse import Action\n",
"import random\n",
"import sys\n",
"import pygame\n",
"\n",
"from sqlalchemy import asc\n",
"import math\n",
"import time\n",
"from tqdm import tqdm\n",
"from datetime import datetime"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import threading\n",
"import queue\n",
"\n",
"# Cola para comunicación entre hilos\n",
"action_queue = queue.Queue()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Carga de Assets\n",
"Esta sección carga las imágenes necesarias para el juego, como el dinosaurio, cactus, pájaros, etc.\n",
"Utilizamos pygame.image.load() para cargar cada imagen y las almacenamos en listas o variables para su uso en el juego."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "41eebe05",
"metadata": {},
"outputs": [],
"source": [
"RUNNING = [pygame.image.load(os.path.join(\"Assets/Dino\", \"DinoRun1.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Dino\", \"DinoRun2.png\"))]\n",
"\n",
"DUCKING = [pygame.image.load(os.path.join(\"Assets/Dino\", \"DinoDuck1.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Dino\", \"DinoDuck2.png\"))]\n",
"\n",
"\n",
"JUMPING = pygame.image.load(os.path.join(\"Assets/Dino\", \"DinoJump.png\"))\n",
"\n",
"SMALL_CACTUS = [pygame.image.load(os.path.join(\"Assets/Cactus\", \"SmallCactus1.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Cactus\", \"SmallCactus2.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Cactus\", \"SmallCactus3.png\"))]\n",
"\n",
"\n",
"LARGE_CACTUS = [pygame.image.load(os.path.join(\"Assets/Cactus\", \"LargeCactus1.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Cactus\", \"LargeCactus2.png\")), \n",
" pygame.image.load(os.path.join(\"Assets/Cactus\", \"LargeCactus3.png\"))]\n",
"\n",
"BIRD = [pygame.image.load(os.path.join(\"Assets/Bird\", \"Bird1.png\")), pygame.image.load(os.path.join(\"Assets/Bird\", \"Bird2.png\"))]\n",
"\n",
"CLOUD = pygame.image.load(os.path.join(\"Assets/Other\", \"Cloud.png\"))\n",
"\n",
"BACKGROUND = pygame.image.load(os.path.join(\"Assets/Other\", \"Track.png\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Explicacion de Q-Learning breve\n",
"\n",
"Q-Learning es un algoritmo de aprendizaje por refuerzo que el agente (en este caso, el dinosaurio en el juego) utiliza para aprender qué acciones tomar en diferentes situaciones o estados para maximizar su recompensa total. Es un proceso iterativo donde el agente aprende gradualmente a tomar decisiones óptimas (acciones) para maximizar su recompensa total (puntuación en el juego) mediante la experimentación y el ajuste de las predicciones de los valores Q a través de una red neuronal.\n",
"\n",
"1. **Valores Q (Quality):** Q-Learning se basa en una función de valor Q, que estima la \"calidad\" o utilidad total esperada de tomar una acción específica en un estado dado. Aqui, los valores Q indicarían cuán bueno es realizar acciones como saltar, agacharse o correr en un determinado momento del juego.\n",
"\n",
"2. **Recompensas y Decisiones:** El objetivo del agente es maximizar su recompensa total. Cada vez que el agente toma una acción, recibe una recompensa (o penalización). Las recompensas se basan en la supervivencia del dinosaurio sin chocar con obstáculos.\n",
"\n",
"3. **Actualización de Valores Q:** Los valores Q se actualizan a medida que el agente experimenta nuevas situaciones (estados y recompensas) y aprende de sus errores y éxitos.\n",
"\n",
"### Implementación en el Proyecto\n",
"\n",
"1. **Estados y Acciones:** En cada paso del juego, el agente observa el estado actual del juego (la posición del dinosaurio, los obstáculos próximos, etc.) y decide qué acción tomar (saltar, agacharse, correr). La red neuronal predice los valores Q para cada posible acción en ese estado.\n",
"\n",
"2. **Memoria de Replay:** Cada experiencia (estado, acción, recompensa, nuevo estado) se almacena en una memoria de replay. Esto permite al agente aprender de experiencias pasadas.\n",
"\n",
"3. **Entrenamiento del Modelo:** Se toman muestras de la memoria de replay para entrenar la red neuronal. Durante el entrenamiento, la red ajusta sus pesos para predecir con mayor precisión los valores Q basándose en las recompensas obtenidas y las predicciones del modelo objetivo.\n",
"\n",
"4. **Modelo Objetivo:** Se utiliza un segundo modelo, el modelo objetivo, para calcular los valores Q objetivo durante el entrenamiento. Esto ayuda a estabilizar el aprendizaje, ya que proporciona una estimación los valores Q objetivo.\n",
"\n",
"5. **Exploración vs. Explotación:** Al principio, el agente tiene más probabilidades de tomar acciones aleatorias para explorar el entorno (alta tasa de exploración `epsilon`). A medida que aprende, se vuelve más propenso a confiar en las predicciones de la red neuronal (explotación)."
]
},
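{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the update rule implemented in `DQNAgent.train()` below is the standard one-step Q-learning target, where $\\gamma$ is `DISCOUNT` and $Q_{\\text{target}}$ is the target model:\n",
"\n",
"$$Q_{\\text{new}}(s, a) = \\begin{cases} r + \\gamma \\max_{a'} Q_{\\text{target}}(s', a') & \\text{if } s' \\text{ is not terminal} \\\\ r & \\text{otherwise} \\end{cases}$$"
]
},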
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implementación de Q-Learning y Entrenamiento\n",
" Aquí implementamos el algoritmo Q-Learning, un tipo de aprendizaje por refuerzo. Definimos el estado del juego, las posibles acciones y las recompensas asociadas. La red neuronal se actualiza continuamente en base a la memoria de replay y las recompensas obtenidas, ajustando las decisiones del agente.\n",
" \n",
"\n",
"Los inputs son:\n",
"- Distancia del Dinosaurio del obstaculo\n",
"- Coordenada Y del dinosaurio\n",
"- Coordenada Y del Obstaculo\n",
"- Ancho del obstaculo\n",
"- Velocidad del juego\n",
"- El tipo de obstaculo (Cactus o Pterodactilo)\n",
"\n",
"Los outputs son caminar, saltar o agacharte.\n",
"\n",
"\n",
"### Clase NeuralNetwork\n",
"\n",
"Originalmente queriamos hacer el proyecto con TensorFlow, pero por limitaciones tecnicas nos limitamos al uso de PyTorch; por eso, en esta clase definimos una red neuronal simple para la toma de decisiones dentro del juego, algo que originalmente no era necesario. Se usa para predecir los valores Q, que representan el valor esperado de tomar una determinada acción en un estado dado del juego. \n",
" \n",
"- Constructor __init__(self)\n",
"\n",
" Hereda de nn.Module de PyTorch, que es la base para todas las redes neuronales en PyTorch. Define dos capas lineales (fc1 y fc2). La primera capa (fc1) toma 7 características de entrada y las transforma en 4 características, y la segunda capa (fc2) toma estas 4 características y produce 3 salidas. Estas salidas representarán los valores Q para cada posible acción en el juego.\n",
"\n",
"- forward(self, x)\n",
"\n",
" Define cómo se pasa la entrada a través de la red. La función relu se aplica a la salida de la primera capa para introducir no linealidad, y luego se pasa a la segunda capa. El resultado es el conjunto de valores Q predichos para las acciones dadas el estado actual del juego."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"class NeuralNetwork(nn.Module):\n",
" def __init__(self):\n",
" super(NeuralNetwork, self).__init__()\n",
" self.fc1 = nn.Linear(7, 4) # 7 input features, 4 output features\n",
" self.fc2 = nn.Linear(4, 3) # 4 input features, 3 output features\n",
"\n",
" def forward(self, x):\n",
" x = torch.relu(self.fc1(x))\n",
" x = self.fc2(x)\n",
" return x\n",
"\n"
]
},
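{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick shape sanity check (an illustrative addition, not part of the training pipeline): feeding a dummy 7-feature state through an untrained network should return 3 Q-values, one per action."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: forward one fake state through an untrained network and check shapes.\n",
"_net = NeuralNetwork()\n",
"_dummy_state = torch.rand(1, 7)  # batch of 1, 7 state features\n",
"with torch.no_grad():\n",
"    _qs = _net(_dummy_state)\n",
"print(_qs.shape)  # torch.Size([1, 3]) -> one Q-value per action"
]
},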
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Clase DQNAgent\n",
"\n",
"\n",
"El `DQNAgent` utiliza esta red para aprender y decidir acciones basándose en el método Q-Learning. El agente entrena la red utilizando experiencias de juego anteriores almacenadas en la memoria de replay, ajustando sus estrategias para mejorar el rendimiento en el juego.\n",
"\n",
"#### Constructor `__init__(self)`\n",
"- Creamos dos instancias de `NeuralNetwork`, una como el modelo principal y otra como el modelo objetivo. El modelo objetivo se actualiza ocasionalmente con los pesos del modelo principal.\n",
"- Usamos el optimizador Adam para ajustar los pesos de la red y Mean Squared Error (MSE) como la función de pérdida.\n",
"- Se inicializa dos memorias de replay (`init_replay_memory` y `late_replay_memory`) para almacenar experiencias pasadas y aprender de ellas.\n",
"\n",
"#### update_replay_memory(self, transition)\n",
"- Agrega una nueva experiencia (transición) a la memoria de replay. La memoria de replay se utiliza para entrenar la red neuronal con experiencias pasadas.\n",
"\n",
"#### get_qs(self, state)\n",
"- Calcula y devuelve los valores Q para un estado dado utilizando el modelo principal. Esto se utiliza para decidir qué acción tomar en un estado particular del juego.\n",
"\n",
"#### train(self, terminal_state, step)\n",
"- Entrenamos la red neuronal usando un minibatch de experiencias de la memoria de replay. Utiliza tanto el modelo principal como el modelo objetivo para calcular el valor Q objetivo y ajustar los pesos del modelo principal.\n",
"- Si se alcanza un cierto umbral (indicado por `UPDATE_TARGET_THRESH`), se actualizan los pesos del modelo objetivo con los del modelo principal."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "362f34ac",
"metadata": {},
"outputs": [],
"source": [
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") #Para poder usar GPU\n",
"\n",
"class DQNAgent:\n",
" def __init__(self,learning_rate=0.001):\n",
" self.model = NeuralNetwork().to(device) # Mover el modelo a la GPU si está disponible\n",
" self.target_model = NeuralNetwork().to(device) # Mover el modelo a la GPU si está disponible\n",
" self.target_model.load_state_dict(self.model.state_dict())\n",
" self.optimizer = optim.Adam(self.model.parameters(), lr=0.001)\n",
" self.loss_function = nn.MSELoss()\n",
"\n",
" self.init_replay_memory = deque(maxlen=INIT_REPLAY_MEM_SIZE)\n",
" self.late_replay_memory = deque(maxlen=REPLAY_MEMORY_SIZE)\n",
" self.target_update_counter = 0\n",
" self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)\n",
" # Update the memory store\n",
" def update_replay_memory(self, transition):\n",
" # if len(self.replay_memory) > 50_000:\n",
" # self.replay_memory.clear()\n",
" if len(self.init_replay_memory) < INIT_REPLAY_MEM_SIZE:\n",
" self.init_replay_memory.append(transition)\n",
" else:\n",
" self.late_replay_memory.append(transition)\n",
"\n",
" # Método get_qs dentro de la clase DQNAgent\n",
" def get_qs(self, state):\n",
" state_tensor = torch.Tensor(state).to(device) # Asegúrate de mover el tensor al dispositivo correcto\n",
" with torch.no_grad():\n",
" return self.model(state_tensor).cpu().numpy() # Luego mueve el resultado de vuelta a la CPU si es necesario\n",
" \n",
" def calculate_action(self, state_queue, action_queue):\n",
" while True:\n",
" state = state_queue.get() # Espera y obtiene el estado del juego\n",
" if state is None:\n",
" break # Si recibes None, termina el hilo\n",
"\n",
" # Calcula la acción usando el modelo\n",
" q_values = self.get_qs(state)\n",
" action = np.argmax(q_values) # Elige la acción con el Q-value más alto\n",
"\n",
" action_queue.put(action) # Coloca la acción en la cola\n",
" \n",
" def train(self, terminal_state, step):\n",
" if len(self.init_replay_memory) < MIN_REPLAY_MEMORY_SIZE:\n",
" return\n",
"\n",
" total_mem = list(self.init_replay_memory)\n",
" total_mem.extend(self.late_replay_memory)\n",
" minibatch = random.sample(total_mem, MINIBATCH_SIZE)\n",
"\n",
" # Asegurarse de que los tensores estén en el dispositivo correcto\n",
" current_states = torch.Tensor([transition[0] for transition in minibatch]).to(device)\n",
" current_qs_list = self.model(current_states)\n",
" new_current_states = torch.Tensor([transition[3] for transition in minibatch]).to(device)\n",
" future_qs_list = self.target_model(new_current_states)\n",
"\n",
" X = []\n",
" y = []\n",
"\n",
" for index, (current_state, action, reward, new_current_state, done) in enumerate(minibatch):\n",
" if not done:\n",
" max_future_q = torch.max(future_qs_list[index])\n",
" new_q = reward + DISCOUNT * max_future_q\n",
" else:\n",
" new_q = reward\n",
"\n",
" current_qs = current_qs_list[index]\n",
" current_qs[action] = new_q\n",
"\n",
" X.append(current_state)\n",
" y.append(current_qs)\n",
"\n",
" X = torch.tensor(np.array(X, dtype=np.float32)).to(device) # Mover X a la GPU\n",
" y = torch.tensor(np.array([y_item.detach().cpu().numpy() if isinstance(y_item, torch.Tensor) else y_item for y_item in y], dtype=np.float32)).to(device) # Mover y a la GPU\n",
"\n",
" self.optimizer.zero_grad()\n",
" output = self.model(X) # X ya está en el dispositivo correcto\n",
" loss = self.loss_function(output, y) # y ya está en el dispositivo correcto\n",
" loss.backward()\n",
" self.optimizer.step()\n",
"\n",
" if terminal_state:\n",
" self.target_update_counter += 1\n",
"\n",
" if self.target_update_counter > UPDATE_TARGET_THRESH:\n",
" self.target_model.load_state_dict(self.model.state_dict())\n",
" self.target_update_counter = 0\n",
" # print(self.target_update_counter)"
]
},
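{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal usage sketch with made-up transition values (illustrative only): the agent stores `(state, action, reward, next_state, done)` tuples, and `get_qs` returns one Q-value per action."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: exercise the agent's replay memory and Q-value lookup with fake data.\n",
"_agent = DQNAgent(learning_rate=0.001)\n",
"_s = [0.9, 0.5, 0.8, 0.6, 0.4, 1, 0]    # fake 7-feature state\n",
"_s2 = [0.9, 0.45, 0.8, 0.6, 0.4, 1, 0]  # fake next state\n",
"_agent.update_replay_memory((_s, 2, 3, _s2, False))  # action=2 (run), reward=3, done=False\n",
"print(_agent.get_qs(_s))                 # three Q-values, one per action\n",
"print(len(_agent.init_replay_memory))    # 1"
]
},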
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Definición de Clases del Juego\n",
"En esta parte del código, definimos varias clases para manejar diferentes aspectos del juego:\n",
" - Clase Obstacle: Representa los obstáculos del juego.\n",
" - Clase Dino: Controla el personaje del dinosaurio y su interacción con el juego.\n",
" - Clase Game: Maneja la lógica principal del juego, incluyendo la creación de obstáculos y la actualización de la interfaz"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"class Obstacle:\n",
" def __init__(self, image: List[pygame.Surface], type: int) -> None:\n",
" self.image = image\n",
" self.type = type\n",
" self.rect = self.image[self.type].get_rect()\n",
" self.rect.x = SCREEN_WIDTH\n",
"\n",
" def update(self, obstacles: list, game_speed: int):\n",
" self.rect.x -= game_speed\n",
" if self.rect.x < -self.rect.width:\n",
" obstacles.pop()\n",
" \n",
" def draw(self, SCREEN: pygame.Surface):\n",
" SCREEN.blit(self.image[self.type], self.rect)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "66589bde",
"metadata": {},
"outputs": [],
"source": [
"class Dino(DQNAgent):\n",
" X_POS = 80\n",
" Y_POS = 310\n",
" Y_DUCK_POS = 340\n",
" JUMP_VEL = 8.5\n",
" #code here\n",
" def __init__(self, learning_rate=0.001):\n",
" super().__init__(learning_rate=learning_rate) \n",
" self.duck_img = DUCKING\n",
" self.run_img = RUNNING\n",
" self.jump_img = JUMPING\n",
"\n",
"\n",
" #Initially the dino starts running\n",
" self.dino_duck = False\n",
" self.dino_run = True\n",
" self.dino_jump = False\n",
"\n",
" self.step_index = 0\n",
" self.jump_vel = self.JUMP_VEL\n",
" self.image = self.run_img[0]\n",
" self.dino_rect = self.image.get_rect()\n",
"\n",
" self.dino_rect.x = self.X_POS\n",
" self.dino_rect.y = self.Y_POS\n",
"\n",
" self.score = 0\n",
"\n",
" super().__init__()\n",
" \n",
" \n",
" # Update the Dino's state\n",
" def update(self, move: pygame.key.ScancodeWrapper):\n",
" if self.dino_duck:\n",
" self.duck()\n",
" \n",
" if self.dino_jump:\n",
" self.jump()\n",
" \n",
" if self.dino_run:\n",
" self.run()\n",
"\n",
" if self.step_index >= 20:\n",
" self.step_index = 0\n",
" \n",
"\n",
" if move[pygame.K_UP] and not self.dino_jump:\n",
" self.dino_jump = True\n",
" self.dino_run = False\n",
" self.dino_duck = False\n",
"\n",
" elif move[pygame.K_DOWN] and not self.dino_jump:\n",
" self.dino_duck = True\n",
" self.dino_run = False\n",
" self.dino_jump = False\n",
" \n",
" elif not(self.dino_jump or move[pygame.K_DOWN]):\n",
" self.dino_run = True\n",
" self.dino_jump = False\n",
" self.dino_duck = False\n",
" \n",
" def update_auto(self, move):\n",
" if self.dino_duck == True:\n",
" self.duck()\n",
" \n",
" if self.dino_jump == True:\n",
" self.jump()\n",
" \n",
" if self.dino_run == True:\n",
" self.run()\n",
"\n",
" if self.step_index >= 20:\n",
" self.step_index = 0\n",
" \n",
" if move == 0 and not self.dino_jump:\n",
" self.dino_jump = True\n",
" self.dino_run = False\n",
" self.dino_duck = False\n",
"\n",
" elif move == 1 and not self.dino_jump:\n",
" self.dino_duck = True\n",
" self.dino_run = False\n",
" self.dino_jump = False\n",
" \n",
" elif not(self.dino_jump or move == 1):\n",
" self.dino_run = True\n",
" self.dino_jump = False\n",
" self.dino_duck = False\n",
"\n",
" def duck(self) -> None:\n",
" self.image = self.duck_img[self.step_index // 10]\n",
" self.dino_rect = self.image.get_rect()\n",
" self.dino_rect.x = self.X_POS\n",
" self.dino_rect.y = self.Y_DUCK_POS\n",
" self.step_index += 1\n",
"\n",
" def run(self) -> None:\n",
" self.image = self.run_img[self.step_index // 10]\n",
" self.dino_rect = self.image.get_rect()\n",
" self.dino_rect.x = self.X_POS\n",
" self.dino_rect.y = self.Y_POS\n",
" self.step_index += 1\n",
" \n",
"\n",
" def jump(self) -> None:\n",
" self.image = self.jump_img\n",
" if self.dino_jump:\n",
" self.dino_rect.y -= self.jump_vel * 3\n",
" self.jump_vel -= 0.6\n",
" \n",
" if self.jump_vel < -self.JUMP_VEL:\n",
" self.dino_jump = False\n",
" self.dino_run = True\n",
" self.jump_vel = self.JUMP_VEL\n",
"\n",
" def draw(self, SCREEN: pygame.Surface):\n",
" SCREEN.blit(self.image, (self.dino_rect.x, self.dino_rect.y))"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a1bc8f20",
"metadata": {},
"outputs": [],
"source": [
"class LargeCactus(Obstacle):\n",
" def __init__(self, image: List[pygame.Surface]) -> None:\n",
" self.type = random.randint(0, 2)\n",
" super().__init__(image, self.type)\n",
" self.rect.y = 300\n",
"\n",
"\n",
"class SmallCactus(Obstacle):\n",
" def __init__(self, image: List[pygame.Surface]) -> None:\n",
" self.type = random.randint(0, 2)\n",
" super().__init__(image, self.type)\n",
" self.rect.y = 325\n",
"\n",
"class Bird(Obstacle):\n",
" def __init__(self, image: List[pygame.Surface]) -> None:\n",
" self.type = 0\n",
" super().__init__(image, self.type)\n",
" self.rect.y = SCREEN_HEIGHT - 340\n",
" self.index = 0\n",
" \n",
" def draw(self, SCREEN: pygame.Surface):\n",
" if self.index >= 19:\n",
" self.index = 0\n",
" \n",
" SCREEN.blit(self.image[self.index // 10], self.rect)\n",
" self.index += 1\n",
" \n",
"class Cloud:\n",
" def __init__(self) -> None:\n",
" self.x = SCREEN_WIDTH + random.randint(800, 1000)\n",
" self.y = random.randint(50, 100)\n",
" self.image = CLOUD\n",
" self.width = self.image.get_width()\n",
"\n",
" def update(self, game_speed: int):\n",
" self.x -= game_speed\n",
" if self.x < -self.width:\n",
" self.x = SCREEN_WIDTH + random.randint(800, 1000)\n",
" self.y = random.randint(50, 100)\n",
" \n",
"\n",
" def draw(self, SCREEN: pygame.Surface):\n",
" SCREEN.blit(self.image, (self.x, self.y)) "
]
},
{
"cell_type": "markdown",
"id": "fd31220d",
"metadata": {},
"source": [
"# Clase Game\n",
"\n",
"La clase Game es donde se ejecuta el juego del dinosaurio. En general es casi lo mismo que la clase original, pero con play_auto siendo la parte donde se entrena al agente:\n",
"\n",
"\n",
"#### Inicialización y Configuración del Juego\n",
"- Se establece points_label para rastrear los puntos obtenidos en cada episodio y se inicia un ciclo que recorre un número predefinido de episodios (NUM_EPISODES).\n",
"- Para cada episodio, se inicializa episode_reward a 0 y se obtiene el estado inicial del juego mediante get_state.\n",
"\n",
"#### Ciclo Principal del Juego\n",
"\n",
"- Dentro de cada episodio, se inicia un while que continúa mientras self.run sea True. Este representa el juego continuo hasta que el dinosaurio choca con un obstáculo.\n",
"- Se verifica si es necesario crear un nuevo obstáculo en el juego y se llama a create_obstacle si es así.\n",
"- El agente decide qué acción tomar. Si un valor aleatorio es mayor que epsilon (que representa la probabilidad de exploración), el agente selecciona la acción basada en las predicciones de la red neuronal (usando get_qs). Si el valor es menor, se elige una acción aleatoria.\n",
"\n",
"#### Actualización y Aprendizaje\n",
"\n",
"- Se actualiza el juego con la acción seleccionada llamando a update_game, y se obtiene el nuevo estado del juego.\n",
"- Se calcula la recompensa basada en la interacción del dinosaurio con los obstáculos y se actualiza la memoria de replay del agente con la transición (estado actual, acción, recompensa, nuevo estado).\n",
"- Se entrena el modelo del agente con la nueva experiencia (transición).\n",
"\n",
"#### Fin del Episodio y Guardado del Modelo\n",
"\n",
"- Si el dinosaurio colisiona con un obstáculo, se termina el episodio. Se actualiza episode_reward con la recompensa obtenida, y se reinicia el juego para el próximo episodio.\n",
"- Se añade la reward del episodio a ep_rewards para rastrear el rendimiento a lo largo del tiempo. epsilon se reduce (decay) para disminuir la exploración a medida que el agente aprende.\n",
"- El modelo se guarda a intervalos regulares para conservar el estado de aprendizaje."
]
},
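{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the reward scheme implemented in `play_auto` below is: +3 for passing an obstacle, -1 for jumping while the obstacle is still in the far half of the screen, and -10 for a collision, which also ends the episode."
]
},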
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"#Codigo para guardar los modelos\n",
"# Verifica si el directorio 'models' existe, y si no, créalo\n",
"if not os.path.exists('models/episodes'):\n",
" os.makedirs('models/episodes')\n",
"\n",
"# Verifica si el directorio 'models/highscore' existe, y si no, créalo\n",
"if not os.path.exists('models/highscore'):\n",
" os.makedirs('models/highscore')"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "3665702a",
"metadata": {},
"outputs": [],
"source": [
"class Game:\n",
" def __init__(self, epsilon, learning_rate, num_episodes, load_model=False, model_path=None):\n",
" pygame.init()\n",
" self.SCREEN = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))\n",
"\n",
" self.obstacles = []\n",
"\n",
" self.run = True\n",
"\n",
" self.clock = pygame.time.Clock()\n",
"\n",
" self.cloud = Cloud()\n",
"\n",
" self.game_speed = INIT_GAME_SPEED\n",
"\n",
" self.font = pygame.font.Font(\"freesansbold.ttf\", 20)\n",
"\n",
" self.dino = Dino()\n",
"\n",
" # tu código existente aquí...\n",
" self.epsilon = epsilon\n",
" self.num_episodes = num_episodes\n",
" self.dino = Dino(learning_rate=learning_rate) # Pasa learning_rate aquí\n",
" \n",
" # Cargar el modelo si se solicita\n",
" if load_model and model_path:\n",
" self.dino.model.load_state_dict(torch.load(model_path, map_location=device))\n",
"\n",
" self.x_pos_bg = X_POS_BG_INIT\n",
"\n",
" self.points = 0\n",
" \n",
" self.epsilon = epsilon\n",
"\n",
" self.ep_rewards = [-200]\n",
"\n",
" self.high_score = 0 # Inicializa el high score con 0 o carga el high score existente de un archivo si lo prefieres\n",
"\n",
" self.best_score = 0\n",
"\n",
" def reset(self):\n",
" self.game_speed = INIT_GAME_SPEED\n",
" old_dino = self.dino\n",
" self.dino = Dino()\n",
" self.dino.init_replay_memory = old_dino.init_replay_memory\n",
" self.dino.late_replay_memory = old_dino.late_replay_memory\n",
" self.dino.target_update_counter = old_dino.target_update_counter\n",
"\n",
" self.dino.model.load_state_dict(old_dino.model.state_dict())\n",
" self.dino.target_model.load_state_dict(old_dino.target_model.state_dict())\n",
"\n",
" self.x_pos_bg = X_POS_BG_INIT\n",
" self.points = 0\n",
" self.SCREEN = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))\n",
" self.clock = pygame.time.Clock()\n",
"\n",
" def get_dist(self, pos_a: tuple, pos_b:tuple):\n",
" dx = pos_a[0] - pos_b[0]\n",
" dy = pos_a[1] - pos_b[1]\n",
"\n",
" return math.sqrt(dx**2 + dy**2) \n",
"\n",
" def update_background(self):\n",
" image_width = BACKGROUND.get_width()\n",
"\n",
" self.SCREEN.blit(BACKGROUND, (self.x_pos_bg, Y_POS_BG))\n",
" self.SCREEN.blit(BACKGROUND, (self.x_pos_bg + image_width, Y_POS_BG))\n",
"\n",
" if self.x_pos_bg <= -image_width:\n",
" self.SCREEN.blit(BACKGROUND, (self.x_pos_bg + image_width, Y_POS_BG))\n",
" self.x_pos_bg = 0\n",
" \n",
" self.x_pos_bg -= self.game_speed\n",
" return self.x_pos_bg\n",
" \n",
" def get_state(self):\n",
" state = []\n",
" state.append(self.dino.dino_rect.y / self.dino.Y_DUCK_POS + 10) \n",
" pos_a = (self.dino.dino_rect.x, self.dino.dino_rect.y)\n",
" bird = 0\n",
" cactus = 0\n",
" if len(self.obstacles) == 0:\n",
" dist = self.get_dist(pos_a, tuple([SCREEN_WIDTH + 10, self.dino.Y_POS])) / math.sqrt(SCREEN_HEIGHT**2 + SCREEN_WIDTH**2)\n",
" obs_height = 0\n",
" obj_width = 0\n",
" else:\n",
" dist = self.get_dist(pos_a, (self.obstacles[0].rect.midtop)) / math.sqrt(SCREEN_HEIGHT**2 + SCREEN_WIDTH**2)\n",
" obs_height = self.obstacles[0].rect.midtop[1] / self.dino.Y_DUCK_POS\n",
" obj_width = self.obstacles[0].rect.width / SMALL_CACTUS[2].get_rect().width\n",
" if self.obstacles[0].__class__ == SmallCactus(SMALL_CACTUS).__class__ or \\\n",
" self.obstacles[0].__class__ == LargeCactus(LARGE_CACTUS).__class__:\n",
" cactus = 1\n",
" else:\n",
" bird = 1\n",
" \n",
" state.append(dist)\n",
" state.append(obs_height)\n",
" state.append(self.game_speed / 24)\n",
" state.append(obj_width)\n",
" state.append(cactus)\n",
" state.append(bird)\n",
" \n",
" return state\n",
"\n",
"\n",
" def update_score(self):\n",
" self.points += 1\n",
" if self.points % 200 == 0:\n",
" self.game_speed += 1\n",
"\n",
" if self.points > self.high_score:\n",
" self.high_score = self.points\n",
"\n",
" text = self.font.render(f\"Points: {self.points} Highscore: {self.high_score}\", True, (0, 0, 0))\n",
" textRect = text.get_rect()\n",
" textRect.center = (SCREEN_WIDTH - textRect.width // 2 - 10, 40)\n",
" self.SCREEN.blit(text, textRect)\n",
"\n",
" \n",
" def create_obstacle(self):\n",
" # bird_prob = random.randint(0, 15)\n",
" # cactus_prob = random.randint(0, 10)\n",
" # if bird_prob == 0:\n",
" # self.obstacles.append(Bird(BIRD))\n",
" # elif cactus_prob == 0:\n",
" # self.obstacles.append(SmallCactus(SMALL_CACTUS))\n",
" # elif cactus_prob == 1:\n",
" # self.obstacles.append(LargeCactus(LARGE_CACTUS))\n",
"\n",
" obstacle_prob = random.randint(0, 50)\n",
" if obstacle_prob == 0:\n",
" self.obstacles.append(SmallCactus(SMALL_CACTUS))\n",
" elif obstacle_prob == 1:\n",
" self.obstacles.append(LargeCactus(LARGE_CACTUS))\n",
" elif obstacle_prob == 2 and self.points > 300:\n",
" self.obstacles.append(Bird(BIRD))\n",
" \n",
" def update_game(self, moves, user_input=None):\n",
" self.dino.draw(self.SCREEN)\n",
" if user_input is not None:\n",
" self.dino.update(user_input)\n",
" else:\n",
" self.dino.update_auto(moves)\n",
"\n",
" self.update_background()\n",
"\n",
" self.cloud.draw(self.SCREEN)\n",
"\n",
" self.cloud.update(self.game_speed)\n",
"\n",
" self.update_score() \n",
"\n",
" self.clock.tick(30)\n",
"\n",
" # pygame.display.update()\n",
"\n",
" def play_manual(self):\n",
" \n",
" while self.run is True:\n",
" for event in pygame.event.get():\n",
" if event.type == pygame.QUIT:\n",
" sys.exit()\n",
" \n",
" self.SCREEN.fill((255, 255, 255))\n",
" user_input = pygame.key.get_pressed()\n",
" # moves = []\n",
"\n",
" if len(self.obstacles) == 0:\n",
" self.create_obstacle()\n",
"\n",
" for obstacle in self.obstacles:\n",
" obstacle.draw(SCREEN=self.SCREEN)\n",
" obstacle.update(self.obstacles, self.game_speed)\n",
" if self.dino.dino_rect.colliderect(obstacle.rect):\n",
" self.dino.score = self.points\n",
" pygame.quit()\n",
" self.obstacles.pop()\n",
" print(\"Game over!\")\n",
" return\n",
"\n",
" self.update_game(user_input=user_input, moves=2)\n",
" pygame.display.update()\n",
"\n",
"\n",
" def play_auto(self):\n",
" state_queue = queue.Queue()\n",
" # Crea y comienza el hilo de cálculo\n",
" calculation_thread = threading.Thread(target=self.dino.calculate_action, args=(state_queue, action_queue))\n",
" calculation_thread.start()\n",
" \n",
" try:\n",
" points_label = 0\n",
" for episode in tqdm(range(1, NUM_EPISODES + 1), ascii=True, unit='episodes'):\n",
" episode_reward = 0\n",
" step = 1\n",
" current_state = self.get_state()\n",
" self.run = True\n",
" while self.run is True:\n",
"\n",
" for event in pygame.event.get():\n",
" if event.type == pygame.QUIT:\n",
" sys.exit()\n",
" \n",
" self.SCREEN.fill((255, 255, 255))\n",
"\n",
" if len(self.obstacles) == 0:\n",
" self.create_obstacle()\n",
"\n",
" # if self.run == False:\n",
" # print(current_state)\n",
" # time.sleep(2)\n",
" # continue\n",
"\n",
" if np.random.random() > self.epsilon:\n",
" action = self.dino.get_qs(torch.Tensor(current_state))\n",
" # print(action)\n",
" action = np.argmax(action)\n",
" # print(action)\n",
" else:\n",
" num = np.random.randint(0, 10)\n",
" if num == 0:\n",
" # print(\"yes\")\n",
" action = num\n",
" elif num <= 3:\n",
" action = 1\n",
" else:\n",
" action = 2\n",
"\n",
" self.update_game(moves=action)\n",
" # print(self.game_speed)\n",
" next_state = self.get_state()\n",
" reward = 0\n",
"\n",
" for obstacle in self.obstacles:\n",
" obstacle.draw(SCREEN=self.SCREEN)\n",
" obstacle.update(self.obstacles, self.game_speed)\n",
" next_state = self.get_state()\n",
" if self.dino.dino_rect.x > obstacle.rect.x + obstacle.rect.width:\n",
" reward = 3\n",
" \n",
" if action == 0 and obstacle.rect.x > SCREEN_WIDTH // 2:\n",
" reward = -1\n",
" \n",
" if self.dino.dino_rect.colliderect(obstacle.rect):\n",
" self.dino.score = self.points\n",
" # pygame.quit()\n",
" self.obstacles.pop()\n",
" points_label = self.points\n",
" self.reset()\n",
" reward = -10\n",
" # print(\"Game over!\")\n",
" self.run = False\n",
" break\n",
" current_state = self.get_state()\n",
" state_queue.put(current_state) # Envía el estado actual al hilo de cálculo\n",
"\n",
" if not action_queue.empty():\n",
" action = action_queue.get() # Recibe la acción del hilo de cálculo\n",
" self.update_game(moves=action)\n",
"\n",
" # if reward != 0:\n",
" # print(reward > 0)\n",
" \n",
" episode_reward += reward\n",
" \n",
" self.dino.update_replay_memory(tuple([current_state, action, reward, next_state, self.run]))\n",
"\n",
" self.dino.train( not self.run, step=step)\n",
"\n",
" current_state = next_state\n",
"\n",
" step += 1\n",
"\n",
" # self.clock.tick(60)\n",
"\n",
" #print(self.points)\n",
" #print(self.high_score)\n",
"\n",
" # Al final de cada episodio, verifica si hay un nuevo mejor puntaje\n",
" if self.points > self.best_score:\n",
" self.best_score = self.points\n",
" # Este archivo se sobrescribirá con el último mejor modelo\n",
" self.best_model_filename = 'models/highscore/BestScore_model.pth'\n",
" torch.save(self.dino.model.state_dict(), self.best_model_filename)\n",
"\n",
" pygame.display.update()\n",
" \n",
"\n",
" self.ep_rewards.append(episode_reward)\n",
"\n",
" # Obtenemos la fecha y hora actual\n",
" current_time = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')\n",
"\n",
" # Guardar el modelo cada 50 escenarios\n",
" if episode % 50 == 0:\n",
" filename = f'models/episodes/{points_label}_Points,Episode_{episode}_Date_{current_time}_model.pth'\n",
" torch.save(self.dino.model.state_dict(), filename)\n",
"\n",
"\n",
" if self.epsilon > MIN_EPSILON:\n",
" self.epsilon *= EPSILON_DECAY\n",
" if self.epsilon < MIN_EPSILON:\n",
" self.epsilon = 0\n",
" # print(self.epsilon)\n",
" else:\n",
" self.epsilon = max(MIN_EPSILON, self.epsilon)\n",
" # print(self.epsilon)\n",
" # print((self.dino.replay_memory))\n",
" # Al salir del bucle del juego, envía None para detener el hilo de cálculo\n",
" state_queue.put(None)\n",
" calculation_thread.join()\n",
" finally:\n",
" # Este bloque se ejecutará incluso si se interrumpe el juego.\n",
" # Aquí duplicas el archivo del mejor puntaje alcanzado hasta ahora.\n",
" if hasattr(self, 'best_model_filename'):\n",
" current_time = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')\n",
" final_model_filename = f'models/highscore/{self.best_score}_BestScore_Final_{current_time}_model.pth'\n",
" import shutil\n",
" shutil.copy(self.best_model_filename, final_model_filename)\n",
" print(f\"Modelo duplicado guardado como: {final_model_filename}\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"El código tiene dos modos de juego:\n",
" - Modo Manual: El usuario juega controlando al dinosaurio con el teclado.\n",
" - Modo Automático: El agente controla al dinosaurio basándose en las decisiones tomadas por el modelo de Q-Learning.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "2c04f7ce",
"metadata": {},
"outputs": [],
"source": [
"def run_game(epsilon, learning_rate, num_episodes):\n",
" model_path = 'models/highscore/4245_BestScore_Final_2023-12-10_18-43-53_model.pth'\n",
" game = Game(epsilon, learning_rate=learning_rate, num_episodes=num_episodes, load_model=True, model_path=model_path)\n",
" game.play_auto()\n"
]
},
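{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below runs a small grid search over epsilon, the learning rate, and the number of episodes. Each configuration creates a fresh `Game` that starts from the saved best-score model and continues training it with `play_auto`."
]
},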
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
" 0%| | 0/100 [00:00<?, ?episodes/s]"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [22:16<00:00, 13.37s/episodes] \n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/3836_BestScore_Final_2023-12-10_19-53-01_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [26:16<00:00, 15.77s/episodes]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/3813_BestScore_Final_2023-12-10_20-19-18_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [27:43<00:00, 16.64s/episodes]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/3683_BestScore_Final_2023-12-10_20-47-01_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [30:23<00:00, 18.23s/episodes]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/3629_BestScore_Final_2023-12-10_21-17-25_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [28:44<00:00, 17.24s/episodes] \n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/3783_BestScore_Final_2023-12-10_21-46-09_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|##########| 100/100 [21:19<00:00, 12.79s/episodes]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Modelo duplicado guardado como: models/highscore/4157_BestScore_Final_2023-12-10_22-07-29_model.pth\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" 52%|#####2 | 52/100 [15:01<31:07, 38.90s/episodes] "
]
}
],
"source": [
"epsilons = [0.1, 0.2, 0.3] # diferentes valores de epsilon que quieres probar\n",
"learning_rates = [0.001, 0.005, 0.01] # diferentes tasas de aprendizaje\n",
"num_episodes_list = [100, 200, 300] # diferentes números de episodios\n",
"\n",
"for epsilon in epsilons:\n",
" for lr in learning_rates:\n",
" for num_episodes in num_episodes_list:\n",
" run_game(epsilon, lr, num_episodes)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}