Game-theoretic LLM: Agent Workflow for Negotiation Games
Abstract
This paper investigates the rationality of large language models (LLMs) in strategic decision-making contexts, specifically within the framework of game theory. We evaluate several state-of-the-art LLMs across a spectrum of complete-information and incomplete-information games. Our findings reveal that LLMs frequently deviate from rational strategies, particularly as the complexity of the game increases with larger payoff matrices or deeper sequential trees. To address these limitations, we design multiple game-theoretic workflows that guide the reasoning and decision-making processes of LLMs. These workflows aim to enhance the models' ability to compute Nash Equilibria and make rational choices, even under conditions of uncertainty and incomplete information. Experimental results demonstrate that the adoption of these workflows significantly improves the rationality and robustness of LLMs in game-theoretic tasks. Specifically, with the workflow, LLMs exhibit marked improvements in identifying optimal strategies, achieving near-optimal allocations in negotiation scenarios, and reducing susceptibility to exploitation during negotiations. Furthermore, we explore the meta-strategic considerations of whether it is rational for agents to adopt such workflows, recognizing that the decision to use or forgo the workflow constitutes a game-theoretic issue in itself. Our research contributes to a deeper understanding of LLMs' decision-making capabilities in strategic contexts and provides insights into enhancing their rationality through structured workflows. The findings have implications for the development of more robust and strategically sound AI agents capable of navigating complex interactive environments. Code and data supporting this study are available at https://github.com/Wenyueh/game_theory.
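To make the abstract's rationality criterion concrete: in the sequential games it evaluates, the rational benchmark is the subgame-perfect equilibrium obtained by backward induction over the game tree. The sketch below is illustrative only and is not the paper's workflow; the `Node` structure, the `backward_induction` function, and the toy ultimatum-style payoffs are all assumptions introduced here for exposition.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    player: int = -1                  # whose turn it is; -1 marks a leaf
    payoffs: tuple = ()               # leaf payoffs, one entry per player
    children: dict = field(default_factory=dict)  # action label -> Node

def backward_induction(node):
    """Return (payoffs, action) reached by subgame-perfect play at this node."""
    if not node.children:             # leaf: payoffs are fixed
        return node.payoffs, None
    best_action, best_payoffs = None, None
    for action, child in node.children.items():
        payoffs, _ = backward_induction(child)
        # The player to move picks the action maximizing their own payoff.
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs = action, payoffs
    return best_payoffs, best_action

# Toy ultimatum-style tree: player 0 proposes a split, player 1 accepts or rejects.
tree = Node(player=0, children={
    "fair":   Node(player=1, children={"accept": Node(payoffs=(5, 5)),
                                       "reject": Node(payoffs=(0, 0))}),
    "greedy": Node(player=1, children={"accept": Node(payoffs=(9, 1)),
                                       "reject": Node(payoffs=(0, 0))}),
})
print(backward_induction(tree))  # ((9, 1), 'greedy') under pure payoff maximization
```

Deviations from this benchmark, e.g. rejecting the greedy offer out of fairness intuitions, are exactly the kind of departure from rational strategy the paper measures as tree depth grows.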
Community
This paper explores the inherent limitations of state-of-the-art LLMs like o1 and Claude-3.5 Sonnet when faced with game-theoretic challenges.
It finds that, without guidance, these models often stray from rational strategies when (1) game complexity increases, (2) noise or perturbations occur, or (3) multi-round negotiation is allowed.
To enhance the rational decision-making abilities of LLMs, this paper introduces innovative, game-theory-inspired workflows that act as a compass for the models. By integrating classic game-theoretic principles directly into their reasoning processes, these workflows enable the models to compute Nash Equilibria and make rational choices, even under uncertainty and incomplete information.
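As a concrete reference point for what "computing a Nash Equilibrium" means in the complete-information setting, here is a minimal sketch that enumerates the pure-strategy equilibria of a bimatrix game. It is not the paper's workflow; the function name and the Prisoner's Dilemma payoff matrices are illustrative assumptions.

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """Return all pure-strategy Nash equilibria of a bimatrix game.

    A[i, j] is the row player's payoff and B[i, j] the column player's.
    A cell (i, j) is an equilibrium when neither player can gain by
    unilaterally deviating from it.
    """
    A, B = np.asarray(A), np.asarray(B)
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()   # row player cannot improve in column j
            col_best = B[i, j] >= B[i, :].max()   # column player cannot improve in row i
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: (Defect, Defect) is the unique pure equilibrium.
A = [[-1, -3], [0, -2]]   # row player's payoffs
B = [[-1, 0], [-3, -2]]   # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]
```

The exhaustive check over the payoff matrix also hints at why unguided LLMs degrade as matrices grow: the equilibrium condition must hold jointly over every row and column, which is hard to verify through free-form reasoning alone.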
The following papers were recommended by the Semantic Scholar API:
- TMGBench: A Systematic Game Benchmark for Evaluating Strategic Reasoning Abilities of LLMs (2024)
- A Fairness-Driven Method for Learning Human-Compatible Negotiation Strategies (2024)
- Autoformalization of Game Descriptions using Large Language Models (2024)
- Game Theory with Simulation in the Presence of Unpredictable Randomisation (2024)
- Integrated Decision Making and Trajectory Planning for Autonomous Driving Under Multimodal Uncertainties: A Bayesian Game Approach (2024)