People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction

📄 arXiv: 2507.18639v1

Authors: Paweł Niszczota, Tomasz Grzegorczyk, Alexander Pastukhov

Categories: cs.HC, cs.CL, cs.CY, econ.GN

Published: 2025-05-10


💡 One-Sentence Takeaway

Examines how humans cooperate with large language models and which factors shape that cooperation.

🎯 Matched Area: Pillar 9: Embodied Foundation Models

Keywords: large language models, cooperative behavior, Prisoner's Dilemma, human-machine interaction, communication effects, experimental study, business applications

📋 Key Points

  1. Existing approaches struggle to foster cooperation between humans and machines, especially when effective communication is unavailable.
  2. Using the Prisoner's Dilemma game (see the sketch after this list), the paper examines how human cooperative behavior shifts when people interact with LLMs, particularly when communication is possible.
  3. The experiments show that cooperation rates with LLMs, while lower than with humans, rose substantially when communication was allowed, and that prior interaction with humans strengthened the tendency to cooperate.
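For readers new to the paradigm, the sketch below shows the payoff structure of a one-round Prisoner's Dilemma in Python. The stakes are the textbook values (T=5 > R=3 > P=1 > S=0), assumed for illustration only; the digest does not report the payoffs actually used in the experiments.

```python
# Minimal Prisoner's Dilemma sketch. Payoffs are the textbook values
# (T=5 > R=3 > P=1 > S=0), assumed for illustration; the paper's actual
# stakes are not reported in this digest.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both earn the reward R
    ("C", "D"): (0, 5),  # cooperator gets the sucker's payoff S; defector gets T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both earn the punishment P
}

def play_round(action_a: str, action_b: str) -> tuple[int, int]:
    """Return (payoff_a, payoff_b) for one round; actions are 'C' or 'D'."""
    return PAYOFFS[(action_a, action_b)]

print(play_round("C", "D"))  # (0, 5): defecting against a cooperator pays most
```

Because T > R and P > S, defection dominates in a single round, which is what makes the observed high cooperation rates informative.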

📝 Abstract (Summary)

This study examines how interacting with large language models (LLMs) affects human cooperative behavior, particularly when communication is possible or when interaction with a human came first. Using the Prisoner's Dilemma game, the study found that cooperation rates with LLMs, although 10-15 percentage points lower than with humans, remained high. In the experiment that permitted communication, cooperation rates rose significantly, and the increase was the same for human and LLM opponents, indicating the potential value of LLMs in cooperative settings. The findings validate the careful use of LLMs by businesses in environments with a cooperative component.

🔬 Method Details

Problem definition: This study addresses how human cooperative behavior changes when people interact with large language models (LLMs). Existing work has not fully explored the dynamics of human-machine cooperation, particularly in the absence of communication.

Core idea: The study simulates human-LLM interaction with the Prisoner's Dilemma game and analyzes how communication affects cooperative behavior, aiming to reveal the psychological mechanisms at play when humans interact with machines.

Technical framework: The study comprises two experiments. In Experiment 1, participants played a thirty-round repeated game against a human, a classic bot, and an LLM (GPT, queried in real time). In Experiment 2, participants played a one-shot game against a human or an LLM, with half of them allowed to communicate with their opponent. A minimal sketch of the repeated-game loop appears below.
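The sketch below illustrates the shape of the Experiment 1 loop, assuming a hypothetical `query_llm` helper in place of the paper's real-time GPT call and random play in place of a human participant; the actual prompt, model configuration, and payoffs are not given in the digest.

```python
import random

def query_llm(history: list[tuple[str, str]]) -> str:
    """Hypothetical stand-in for the real-time GPT call used in the paper.

    Here it simply mirrors the participant's previous move (tit-for-tat);
    the actual prompt and model behavior are not described in this digest.
    """
    if not history:
        return "C"
    return history[-1][0]

def repeated_game(n_rounds: int = 30) -> list[tuple[str, str]]:
    """Thirty-round repeated game, as in Experiment 1. The participant is
    simulated with random play purely to keep the sketch self-contained."""
    history: list[tuple[str, str]] = []
    for _ in range(n_rounds):
        participant = random.choice(["C", "D"])  # stand-in for a human choice
        llm = query_llm(history)
        history.append((participant, llm))
    return history

history = repeated_game()
coop_rate = sum(move == "C" for move, _ in history) / len(history)
print(f"participant cooperation rate: {coop_rate:.2f}")
```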

Key innovation: The study reveals the positive effect of communication on human-LLM cooperation and documents a spillover effect: people cooperated with LLMs more readily after first interacting with humans.

Key design: Participants' choices and the communication condition are the key experimental parameters, with the Prisoner's Dilemma game serving as the experimental framework, which supports the validity and reliability of the results.

📊 Experimental Highlights

Cooperation rates with LLMs were roughly 10-15 percentage points lower than with humans, but allowing communication increased the likelihood of cooperation by 88%, and did so equally for human and LLM opponents. Moreover, prior interaction with humans significantly raised the tendency to cooperate with LLMs, confirming the potential value of LLMs in cooperative settings. A back-of-envelope reading of the 88% figure appears below.
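If the 88% figure is an odds ratio from a logistic model (an assumption; the digest does not state which statistic underlies it), its effect on the cooperation probability depends on the baseline rate, as this minimal sketch shows:

```python
# Reading the "88% increase in likelihood" as an odds ratio of 1.88 -- an
# assumption; the digest does not state which statistic underlies the figure.
def prob_after_odds_ratio(baseline_p: float, odds_ratio: float) -> float:
    """Push a baseline probability through an odds-ratio multiplier."""
    odds = baseline_p / (1 - baseline_p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

for p in (0.4, 0.5, 0.6):
    print(f"baseline {p:.0%} -> with communication {prob_after_odds_ratio(p, 1.88):.0%}")
```

Under this reading, a 50% baseline cooperation rate would rise to about 65%, not to 94%; the gain shrinks as the baseline approaches 100%.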

🎯 Application Scenarios

The study offers theoretical support for businesses deploying large language models in settings that require cooperation, particularly business scenarios where effective communication and trust are essential. Looking ahead, LLMs could be applied to customer service, team collaboration, and decision support, improving efficiency and the quality of cooperation.

📄 Abstract (Original)

Machines driven by large language models (LLMs) have the potential to augment humans across various tasks, a development with profound implications for business settings where effective communication, collaboration, and stakeholder trust are paramount. To explore how interacting with an LLM instead of a human might shift cooperative behavior in such settings, we used the Prisoner's Dilemma game -- a surrogate of several real-world managerial and economic scenarios. In Experiment 1 (N=100), participants engaged in a thirty-round repeated game against a human, a classic bot, and an LLM (GPT, in real-time). In Experiment 2 (N=192), participants played a one-shot game against a human or an LLM, with half of them allowed to communicate with their opponent, enabling LLMs to leverage a key advantage over older-generation machines. Cooperation rates with LLMs -- while lower by approximately 10-15 percentage points compared to interactions with human opponents -- were nonetheless high. This finding was particularly notable in Experiment 2, where the psychological cost of selfish behavior was reduced. Although allowing communication about cooperation did not close the human-machine behavioral gap, it increased the likelihood of cooperation with both humans and LLMs equally (by 88%), which is particularly surprising for LLMs given their non-human nature and the assumption that people might be less receptive to cooperating with machines compared to human counterparts. Additionally, cooperation with LLMs was higher following prior interaction with humans, suggesting a spillover effect in cooperative behavior. Our findings validate the (careful) use of LLMs by businesses in settings that have a cooperative component.