KTO: Model Alignment as Prospect Theoretic Optimization
Abstract
Kahneman & Tversky's prospect theory tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them being human-aware loss functions (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach Kahneman-Tversky Optimization (KTO), and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B. Crucially, KTO does not need preferences -- only a binary signal of whether an output is desirable or undesirable for a given input. This makes it far easier to use in the real world, where preference data is scarce and expensive.
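The binary desirable/undesirable objective described in the abstract can be sketched in a few lines. The following is a minimal, illustrative KTO-style loss over per-example policy/reference log-ratios, assuming a value function of the form λ·σ(β·(log-ratio − reference point)) as in the paper; the function name, argument names, and default hyperparameters here are hypothetical choices for the sketch, not the authors' implementation:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def kto_loss(log_ratios, desirable, beta=0.1,
             lambda_d=1.0, lambda_u=1.0, ref_point=0.0):
    """Sketch of a KTO-style loss.

    log_ratios: per-example log pi_theta(y|x) - log pi_ref(y|x)
    desirable:  per-example booleans (True = desirable output)
    ref_point:  reference point (in the paper, a KL-based estimate;
                fixed to a constant here for illustration)
    """
    losses = []
    for r, d in zip(log_ratios, desirable):
        if d:
            # value of a desirable output rises as the policy upweights it
            v = lambda_d * sigmoid(beta * (r - ref_point))
            losses.append(lambda_d - v)
        else:
            # value of an undesirable output rises as the policy downweights it
            v = lambda_u * sigmoid(beta * (ref_point - r))
            losses.append(lambda_u - v)
    return sum(losses) / len(losses)
```

Note that, unlike DPO, each example contributes on its own: no paired preferred/dispreferred completion is needed, only the binary label. At a log-ratio of zero the loss sits at half the example's weight, and it decays toward zero as the policy moves in the labeled direction.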
Community
Take a look at this collection of datasets for KTO: https://huggingface.co/collections/argilla/preference-datasets-for-kto-65f98314d7c1b04ab54d41a7
Hey, amazing work :)
We've summarised this and a few other papers on our blog. Hope you like it:
- KTO: The infamous alignment algorithm
- OLMoE: Open Data, Weights, Code Mixture of Experts models
- Mamba in the LlaMA: Distilling from Transformers to Mamba
- PlanSearch: Improving Code Generation via Planning
https://datta0.substack.com/p/ai-unplugged-19-kto-for-model-alignment