Below we show detailed results and visualizations for each metric in each context chunk. We score the values expressed by a simulated participant in a given context. The population is simulated 9 times, once for each context chunk. A context chunk is a set of 50 contexts, one context for each individual. For instance, chunks 0-4 contain Reddit posts (longest in chunk_0, shortest in chunk_4). When comparing chunk_0 and chunk_4, the conversations with the participants are initialized first with posts from chunk_0 and then with posts from chunk_4. Metrics and chunks are explained in more detail on the Motivation and Methodology page.
This image shows the circular value structure projected onto a 2D plane. It was obtained by computing the intercorrelations between the different values; this correlation space was then reduced with an SVD-based approach and varimax rotation (the `FactorAnalysis` object from `scikit-learn`). The theoretical order (shown in the top-left figure) was used to initialize the SVD. Stress denotes the fit quality.
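The sketch below illustrates the reduction step with the `FactorAnalysis` object mentioned above; the file name and column layout (one row per persona, one column per value) are assumptions, and the initialization from the theoretical order is omitted for brevity.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical frame: one row per simulated persona, one column per value
scores = pd.read_csv("expressed_values_chunk_0.csv")  # assumed file name

# FactorAnalysis models the covariance/correlation structure of the columns,
# i.e. the intercorrelations between the values
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(scores.values)

# components_ has shape (2, n_values); its columns place each value on the 2D plane
coords = fa.components_.T
for value, (x, y) in zip(scores.columns, coords):
    print(f"{value}: ({x:+.2f}, {y:+.2f})")
```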
This table shows the metrics resulting from the magnifying-glass CFA procedure: for each context chunk, four CFA models are fit (one for each high-level value). The metrics of those four CFA models are averaged and reported per context chunk.
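As a minimal sketch of fitting one of the four per-chunk models, the snippet below uses the `semopy` package; the library choice, the file name, the item column names, and the exact model specification are all assumptions, not the paper's implementation. Averaging the reported fit indices over the four high-level-value models yields one row of the table.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical item-level responses for one context chunk
# (columns such as "conformity_1", "tradition_2", ... are assumed names)
data = pd.read_csv("pvq_items_chunk_0.csv")

# One magnifying-glass model: only the values belonging to a single
# high-level value (here Conservation) enter the CFA
desc = """
conformity =~ conformity_1 + conformity_2 + conformity_3
tradition  =~ tradition_1 + tradition_2 + tradition_3
security   =~ security_1 + security_2 + security_3
"""

model = Model(desc)
model.fit(data)

# calc_stats returns fit indices (CFI, RMSEA, ...); the table reports
# these averaged over the four models per chunk
stats = calc_stats(model)
print(stats[["CFI", "RMSEA"]])
```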
This image shows the Rank-Order stability between each pair of context chunks. Rank-Order stability is computed by ordering the personas based on their expression of a given value, and then computing the correlation between these orderings in two different context chunks. The stability estimates for the ten values are then averaged to get the final Rank-Order stability measure. Refer to our paper for details.
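A minimal sketch of this computation is shown below, assuming per-chunk scores stored as DataFrames indexed by persona with one column per value; the correlation between orderings is taken here as a Spearman rank correlation, which may differ in detail from the estimator used in the paper.

```python
import pandas as pd
from scipy.stats import spearmanr

def rank_order_stability(chunk_a: pd.DataFrame, chunk_b: pd.DataFrame) -> float:
    """Average rank correlation of persona orderings across the values."""
    per_value = []
    for value in chunk_a.columns:
        # Correlation between the persona orderings induced by this value
        rho, _ = spearmanr(chunk_a[value], chunk_b[value])
        per_value.append(rho)
    return sum(per_value) / len(per_value)

# Usage (assumed file names):
# chunk_0 = pd.read_csv("scores_chunk_0.csv", index_col="persona")
# chunk_4 = pd.read_csv("scores_chunk_4.csv", index_col="persona")
# print(rank_order_stability(chunk_0, chunk_4))
```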
This image shows the order of personas in each context chunk for each value. For each value (row), the personas are ordered on the x-axis by their expression of this value in the `no_conv` setting (gray). Therefore, the Rank-Order stability between the `no_conv` chunk and some other chunk corresponds to the extent to which that chunk's curve is monotonically increasing.
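The sketch below reproduces one row of this figure for a single value; the file names, the value name, and the use of `matplotlib` are assumptions made for illustration.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Assumed frames: personas x values, one per setting
no_conv = pd.read_csv("scores_no_conv.csv", index_col="persona")
chunk_0 = pd.read_csv("scores_chunk_0.csv", index_col="persona")

value = "benevolence"  # hypothetical value name for one row of the figure

# x-axis order: personas sorted by their expression of the value in no_conv
order = no_conv[value].sort_values().index

fig, ax = plt.subplots()
ax.plot(no_conv.loc[order, value].values, color="gray", label="no_conv")
ax.plot(chunk_0.loc[order, value].values, label="chunk_0")
ax.set_xlabel("personas (ordered by no_conv expression)")
ax.set_ylabel(f"expressed {value}")
ax.legend()
plt.show()
```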