dkga
8 hours ago
Hi, HN! Co-author here (I'm not sure whether one's own papers count as Show HN; happy to adjust if so). We explored LLM strategic choices in a simple but intriguing game-theoretic setting: the ultimatum game.
In this game, a Proposer offers to split an amount at stake with a Responder. If the Responder accepts, each player receives their share of the proposed split; if the Responder rejects, neither gets anything. This setting shines a light on how these models behave when their payoffs depend on the other player's choices too.
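To make the payoff rule concrete, here is a minimal Python sketch (the pie size and offer below are made-up numbers for illustration, not values from the paper):

    # Ultimatum game payoffs: rejection wipes out both players' payoffs.
    def ultimatum_payoffs(pie, offer, responder_accepts):
        """Return (proposer_payoff, responder_payoff) for one round."""
        if responder_accepts:
            return pie - offer, offer
        return 0, 0  # rejection: neither player gets anything

    print(ultimatum_payoffs(100, 40, True))   # (60, 40)
    print(ultimatum_payoffs(100, 40, False))  # (0, 0)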
We document three main findings.
First, LLM behavior is heterogeneous but predictable when conditioning on stake size and player types.
Second, some models approximate the rational benchmark while others mimic human social preferences, but we also observe a distinct "altruistic" mode in which LLMs propose hyper-fair splits (offering the Responder more than 50%).
Third, when we compute expected payoffs, LLMs leave a lot of money on the table: they forgo a large share of the total payoff, and an even larger share when the Responder is human.
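For intuition on "leaving money on the table": the Proposer's expected payoff is (pie - offer) * P(accept | offer), so hyper-fair offers sacrifice expected payoff even when acceptance is nearly certain. A quick sketch with made-up acceptance probabilities (not estimates from the paper):

    # Hypothetical acceptance probabilities, just to illustrate the arithmetic.
    pie = 100
    accept_prob = {10: 0.4, 30: 0.8, 50: 0.95, 70: 0.99}  # P(accept | offer)

    for offer, p in accept_prob.items():
        expected = (pie - offer) * p
        print(f"offer {offer}: proposer's expected payoff = {expected:.1f}")

Under these toy numbers, offering 70 yields roughly 29.7 in expectation versus roughly 66.5 for offering 30, which is the sense in which hyper-fair proposals forgo payoff.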