### Assessment of SLS {#sec-subsec_3_5_1_SLS}
In this subsection, the prediction capability for the *SLS* will be analyzed in detail. All the presented output is generated with \gls{svd} as the decomposition method and \gls{rf} as the $\boldsymbol Q / \boldsymbol T$ regressor.\newline
The final objective of \gls{cnmc} is to capture the characteristics of the original trajectory.
However, it is important to point out that \gls{cnmc} is trained on the \gls{cnm} predicted trajectories.
Thus, the outcome of \gls{cnmc} depends heavily on the ability of \gls{cnm} to represent the original data.
Consequently, \gls{cnmc} can only be as effective as \gls{cnm} is in the first place at approximating the true data.
Figures @fig-fig_72 and @fig-fig_73 show the true, \gls{cnm} and \gls{cnmc} predicted trajectories and a focused view of the \gls{cnm} and \gls{cnmc} trajectories, respectively.
The output was generated for $\beta_{unseen} = 28.5$ and $L = 1$.
First, it can be observed that \gls{cnm} is not able to capture the full radius of the Lorenz attractor.
This is caused by the low chosen number of centroids, $K = 10$.
Furthermore, as mentioned at the beginning of this chapter, the goal is not to replicate the true data one-to-one, but rather to capture the significant behavior of any dynamical system.
Even with the low number of centroids $K$, \gls{cnm} extracts the characteristics of the Lorenz system well.
Second, the other aim of \gls{cnmc} is to match the \gls{cnm} data as closely as possible.
Figures @fig-fig_72 and @fig-fig_73 confirm that \gls{cnmc} fulfills this task very well. \newline
:::{layout="[[1,1]]"}
![True, \gls{cnm} and \gls{cnmc} predicted trajectories](../../3_Figs_Pyth/3_Task/4_SLS/0_lb_28.5_All.svg){#fig-fig_72}

![\gls{cnm} and \gls{cnmc} predicted trajectories](../../3_Figs_Pyth/3_Task/4_SLS/1_lb_28.5.svg){#fig-fig_73}

*SLS*, $\beta_{unseen}=28.5,\, L=1$, true, \gls{cnm} and \gls{cnmc} predicted trajectories
:::
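The coarse-graining into a small number of centroids described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch, assuming a standard $k$-means (Lloyd) clustering of a simulated Lorenz trajectory; taking $\beta_{unseen} = 28.5$ as the Lorenz $\rho$ parameter is an assumption for illustration and the thesis' exact SLS parametrization may differ:

```python
import numpy as np

# Lorenz system; beta_unseen = 28.5 is used here as the rho parameter
# (an illustrative assumption, not necessarily the thesis' SLS setup).
def lorenz_rhs(s, rho=28.5, sigma=10.0, b=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - b * z])

# Integrate with a simple fixed-step RK4 scheme
dt, n_steps = 0.01, 5000
traj = np.empty((n_steps, 3))
s = np.array([1.0, 1.0, 1.0])
for i in range(n_steps):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj[i] = s

# Coarse-grain into K = 10 centroids with a few Lloyd (k-means) iterations,
# mimicking the clustering step of CNM
rng = np.random.default_rng(0)
K = 10
centroids = traj[rng.choice(n_steps, K, replace=False)]
for _ in range(20):
    d2 = ((traj[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)           # centroid index visited at each step
    for k in range(K):
        members = traj[labels == k]
        if len(members):                 # keep old centroid if a cluster empties
            centroids[k] = members.mean(axis=0)
```

With only $K = 10$ representative states, the full radius of the attractor cannot be resolved, which matches the observation made for \gls{cnm} above.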
A close-up of the motion along the different axes is shown in figure @fig-fig_74.
Here, the same observation as above can be made. Namely, the predicted \gls{cnmc} trajectory is not a one-to-one reproduction of the original trajectory.
However, the characteristics, i.e., the magnitude of the motion in all 3 directions (x, y, z) and the shape of the oscillations, are very similar to those of the original trajectory.
Note that even though the true and predicted trajectories are used to assess whether the characteristic behavior could be extracted, an evaluation based on the trajectories alone is not sufficient and often not advised or even possible.
In complex systems, trajectories can change rapidly while dynamical features persist [@Fernex2021a].
In \gls{cnmc}, the predicted trajectories are obtained through the \gls{cnm} propagation, which itself is based on a probabilistic model, i.e., the $\boldsymbol Q$ tensor.
Thus, matching full trajectories becomes even more unrealistic.
The latter two statements highlight yet again that more than one method of measuring quality is needed.
To further support the generated outcome, the autocorrelation and \gls{cpd} in figures @fig-fig_75 and @fig-fig_76, respectively, shall be considered.
It can be seen that the \gls{cnm} and \gls{cnmc} autocorrelations match the shape of the true autocorrelation favorably well.
Nonetheless, the accuracy with which the magnitude is reflected decreases quite quickly with increasing lag.
Considering the \gls{cpd}, it can be noted that the true \gls{cpd} is overall reproduced satisfactorily.\newline
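The autocorrelation comparison used here can be sketched as follows. This is a minimal illustration, assuming a biased sample autocorrelation estimator and toy sine signals standing in for the true and predicted trajectories; the thesis' actual estimator and data may differ:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation of a 1-D signal for lags 0..max_lag-1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag)])

def mae(a, b):
    """Mean absolute error between two equally long sequences."""
    return float(np.mean(np.abs(a - b)))

# Toy stand-ins for the true and predicted signals: a phase shift changes
# the trajectory pointwise but barely affects the autocorrelation
t = np.linspace(0.0, 20.0 * np.pi, 4000)
true_sig = np.sin(t)
pred_sig = np.sin(t + 0.1)

err = mae(autocorrelation(true_sig, 200), autocorrelation(pred_sig, 200))
```

This also illustrates why the autocorrelation is a more forgiving quality measure than a pointwise trajectory comparison: the two signals differ at every sample, yet their autocorrelations nearly coincide.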
![*SLS*, $\beta_{unseen}=28.5, \, L=1$, true, \gls{cnm} and \gls{cnmc} predicted trajectories as 2d graphs](../../3_Figs_Pyth/3_Task/4_SLS/2_lb_28.5_3V_All.svg){#fig-fig_74}

::: {layout="[[1,1]]"}
![autocorrelation](../../3_Figs_Pyth/3_Task/4_SLS/3_lb_3_all_28.5.svg){#fig-fig_75}

![\gls{cpd}](../../3_Figs_Pyth/3_Task/4_SLS/4_lb_28.5.svg){#fig-fig_76}

*SLS*, $\beta_{unseen}= 28.5, \, L =1$, autocorrelation and \gls{cpd} for true, \gls{cnm} and \gls{cnmc} predicted trajectories
:::
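The probabilistic propagation mentioned earlier, i.e. sampling successor centroids from the $\boldsymbol Q$ tensor, can be sketched for the first-order case ($L = 1$). The transition matrix below is random and purely illustrative; a real \gls{cnm} $\boldsymbol Q$ is estimated from the clustered trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10                                      # number of centroids
# Hypothetical first-order transition matrix Q (L = 1): Q[i, j] is the
# probability of moving from centroid i to centroid j.
Q = rng.random((K, K))
np.fill_diagonal(Q, 0.0)                    # no self-transitions
Q /= Q.sum(axis=1, keepdims=True)           # rows are probability distributions

def propagate(Q, start, n_steps, rng):
    """Sample a centroid sequence by repeatedly drawing from Q's rows."""
    seq = [start]
    for _ in range(n_steps):
        seq.append(int(rng.choice(len(Q), p=Q[seq[-1]])))
    return np.array(seq)

seq = propagate(Q, start=0, n_steps=1000, rng=rng)
```

Because every step is a random draw, two runs from the same $\boldsymbol Q$ produce different centroid sequences, which is precisely why matching full trajectories pointwise is unrealistic and statistical measures such as the autocorrelation and \gls{cpd} are used instead.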
To illustrate the influence of $L$, figure @fig-fig_77 shall be considered.
It depicts the MAE (mean absolute error) between the true and \gls{cnmc} predicted autocorrelations for $\beta_{unseen} = [\, 28.5,\, 32.5 \, ]$ with $L$ up to 7.
It can be observed that the choice of $L$ has an impact on the prediction quality as measured by the autocorrelation.
For $\beta_{unseen}=28.5$ and $\beta_{unseen}=32.5$, the optimal values are $L = 2$ and $L = 7$, respectively. To further emphasize that the prediction quality can be regulated through the choice of $L$, figure @fig-fig_78 shall be considered.
It displays the 3 autocorrelations for $L = 7$.
Matching the shape of the true autocorrelation was already established with $L = 1$, as shown in figure @fig-fig_75. In addition, $L = 7$ also matches the true magnitude more closely.
Finally, it shall be mentioned that similar results have been obtained with the other tested values of $K$, the highest being $K = 50$.
:::{layout="[[1,1]]"}
![*SLS*, MAE for true and \gls{cnmc} predicted autocorrelations for $\beta_{unseen}= [\, 28.5,$ $32.5 \, ]$ and different values of $L$](../../3_Figs_Pyth/3_Task/4_SLS/5_lb_1_Orig_CNMc.svg){#fig-fig_77}

![*SLS*, $\beta_{unseen}=32.5, \, L=7$, \gls{cnm} and \gls{cnmc} predicted autocorrelation](../../3_Figs_Pyth/3_Task/4_SLS/6_lb_3_all_32.5.svg){#fig-fig_78}
:::
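Once the autocorrelation MAE has been computed for each candidate model order, picking the optimal $L$ reduces to a minimization over the tested values. A minimal sketch with hypothetical MAE values (illustrative numbers, not the thesis' measurements):

```python
import numpy as np

# Hypothetical MAE between true and predicted autocorrelations for L = 1..7
# (toy values chosen only to illustrate the selection step)
L_values = np.arange(1, 8)
mae_values = np.array([0.21, 0.12, 0.15, 0.14, 0.13, 0.11, 0.09])

best_L = int(L_values[np.argmin(mae_values)])   # smallest-MAE model order
```

For these toy numbers the minimum lies at $L = 7$; for the actual *SLS* results the optimum depends on $\beta_{unseen}$, as figure @fig-fig_77 shows.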
\FloatBarrier