Stick To Your Role! Leaderboard

The Stick to Your Role! leaderboard compares LLMs based on their undesired sensitivity to context change. It focuses on the stability of personal value expression in simulated personas. As proposed in our paper, unwanted context-dependence should be seen as a property of LLMs: a dimension of LLM comparison alongside others such as model size, speed, or expressed knowledge. This leaderboard aims to provide such a comparison, and it extends our paper with a more focused and elaborate experimental setup. While standard benchmarks present many questions from the same minimal context (e.g. multiple-choice questions), we present the same questions from many different contexts.

{{ main_table_html|safe }}

We leverage Schwartz's theory of Basic Personal Values, which defines ten values (Self-Direction, Stimulation, Hedonism, Achievement, Power, Security, Conformity, Tradition, Benevolence, Universalism), and the associated PVQ-40 and SVS questionnaires (available here).
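For illustration, each value's score is typically obtained by averaging the questionnaire items keyed to that value. The minimal sketch below uses a made-up fragment of an item-to-value key; the real PVQ-40 scoring key assigns each of the 40 items to one of the ten values and differs from this toy mapping.

```python
from collections import defaultdict

# Hypothetical fragment of a scoring key (illustration only, not the PVQ-40 key).
ITEM_TO_VALUE = {
    1: "Self-Direction", 2: "Power", 3: "Universalism", 4: "Achievement",
    5: "Security", 6: "Stimulation", 7: "Conformity", 8: "Universalism",
}

def score_values(answers: dict[int, float]) -> dict[str, float]:
    """Average the Likert ratings of the items belonging to each value."""
    buckets = defaultdict(list)
    for item, rating in answers.items():
        buckets[ITEM_TO_VALUE[item]].append(rating)
    return {value: sum(r) / len(r) for value, r in buckets.items()}

answers = {1: 5, 2: 2, 3: 6, 4: 4, 5: 3, 6: 5, 7: 2, 8: 5}
print(score_values(answers))  # e.g. {'Self-Direction': 5.0, 'Power': 2.0, ...}
```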

Following methodology from psychology, we focus on population-level (interpersonal) value stability, i.e. Rank-Order stability (RO stability). Rank-Order stability refers to the extent to which the ordering of different personas (in terms of the expression of some value) remains the same across different contexts. Refer here or to our paper for more details.
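As a minimal illustration, RO stability for a single value can be estimated as the Spearman rank correlation between the personas' scores in two contexts; the sketch below uses synthetic scores (the leaderboard aggregates such correlations over many context pairs and values).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores_context_a = rng.normal(size=20)                                # 20 simulated personas
scores_context_b = scores_context_a + rng.normal(scale=0.3, size=20)  # same personas, new context

# Correlate the two per-persona orderings: rho near 1 means the ranking is stable.
rho, _ = spearmanr(scores_context_a, scores_context_b)
print(f"Rank-Order stability (Spearman rho): {rho:.2f}")
```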

In addition to Rank-Order stability, we compute validity metrics (Stress, Separability, CFI, SRMR, RMSEA), as is common practice in psychology. Validity refers to the extent to which a questionnaire measures what it purports to measure. It can be seen as the questionnaire's accuracy in measuring the intended factors, i.e. values. For example, basic personal values should be organized in a circular structure, and questions measuring the same value should be correlated. The table below additionally shows the validity metrics; refer here for more details.
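As a rough sketch of one such check (not the leaderboard's exact pipeline), the fit of the circular structure can be probed by projecting inter-item dissimilarities into two dimensions with multidimensional scaling and inspecting the resulting stress; CFI, SRMR, and RMSEA would instead come from a confirmatory factor analysis (e.g. with the semopy package), not shown here.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
item_scores = rng.normal(size=(200, 40))          # fake answers: 200 personas x 40 items
dissimilarity = 1.0 - np.corrcoef(item_scores.T)  # correlated items sit close together
np.fill_diagonal(dissimilarity, 0.0)

# Low stress = the items fit well into a 2D (potentially circular) arrangement.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
mds.fit(dissimilarity)
print(f"MDS stress: {mds.stress_:.2f}")
```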

We aggregate Rank-Order stability and validity metrics to rank the models. We do so in two ways: Cardinal and Ordinal. Following this paper, we compute the stability and diversity of those rankings. See here for more details.
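As a hedged illustration of the two schemes (the leaderboard's exact normalization may differ), a Cardinal aggregation averages the raw metric scores per model, while an Ordinal aggregation averages the models' per-metric ranks:

```python
import numpy as np

scores = {            # hypothetical models x metrics (higher = better)
    "model-a": [0.81, 0.90, 0.70],
    "model-b": [0.79, 0.95, 0.75],
    "model-c": [0.60, 0.85, 0.72],
}
matrix = np.array(list(scores.values()))

cardinal = matrix.mean(axis=1)                      # mean raw score per model
ranks = matrix.argsort(axis=0).argsort(axis=0) + 1  # rank per metric (1 = worst)
ordinal = ranks.mean(axis=1)                        # mean rank (higher = better)

for name, c, o in zip(scores, cardinal, ordinal):
    print(f"{name}: cardinal={c:.2f}, mean rank={o:.1f}")
```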

To sum up, here are the metrics used:

{{ full_table_html|safe }}

If you find this project useful, please cite our related paper, which this leaderboard extends with a more focused and elaborate experimental setup. Refer here for details.

@inproceedings{kovavc2024stick,
  title={Stick to your Role! Stability of Personal Values Expressed in Large Language Models},
  author={Kova{\v{c}}, Grgur and Portelas, R{\'e}my and Sawayama, Masataka and Dominey, Peter Ford and Oudeyer, Pierre-Yves},
  booktitle={Proceedings of the Annual Meeting of the Cognitive Science Society},
  volume={46},
  year={2024}
}