EbookBell.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader, or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.
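(If you want to convert between formats yourself, Calibre also ships with a command-line converter, ebook-convert, which infers the source and target formats from the file extensions; for example: ebook-convert book.azw3 book.epub. The file names here are just placeholders.)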

Please read the tutorial at this link: https://ebookbell.com/faq


We offer free conversion to the popular formats you request; however, this may take some time, so please email us right after payment and we will provide the service as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute; email us first, and we will assist within a maximum of 6 hours.

EbookBell Team

Are Large Language Models Sensitive to the Motives Behind Communication? by Addison J. Wu, Ryan Liu, Kerem Oktar, Theodore R. Sumers, and Thomas L. Griffiths

  • SKU: BELL-239948798
$35.00 (list price $45.00, -22%)

Rating: 5.0 (110 reviews)

Instant download after payment.

Publisher: x
File Extension: PDF
File size: 2.56 MB
Authors: Addison J. Wu, Ryan Liu, Kerem Oktar, Theodore R. Sumers, Thomas L. Griffiths
Language: English
Year: 2025

Product description

Are Large Language Models Sensitive to the Motives Behind Communication? by Addison J. Wu, Ryan Liu, Kerem Oktar, Theodore R. Sumers, and Thomas L. Griffiths is available for instant download after payment.

arXiv:2510.19687v1 [cs.CL] 22 Oct 2025

1 Department of Computer Science, Princeton University
2 Department of Psychology, Princeton University
3 Anthropic

Abstract

Human communication is motivated: people speak, write, and create content with a particular communicative intent in mind. As a result, information that large language models (LLMs) and AI agents process is inherently framed by humans' intentions and incentives. People are adept at navigating such nuanced information: we routinely identify benevolent or self-serving motives in order to decide what statements to trust. For LLMs to be effective in the real world, they too must critically evaluate content by factoring in the motivations of the source, for instance by weighing the credibility of claims made in a sales pitch. In this paper, we undertake a comprehensive study of whether LLMs have this capacity for motivational vigilance. We first employ controlled experiments from cognitive science to verify that LLMs' behavior is consistent with rational models of learning from motivated testimony, and find they successfully discount information from biased sources in a human-like manner. We then extend our evaluation to sponsored online adverts, a more naturalistic reflection of LLM agents' information ecosystems. In these settings, we find that LLMs' inferences do not track the rational models' predictions nearly as closely, partly due to additional information that distracts them from vigilance-relevant considerations. However, a simple steering intervention that boosts the salience of intentions and incentives substantially increases the correspondence between LLMs and the rational model. These results suggest that LLMs possess a basic sensitivity to the motivations of others, but generalizing to novel real-world settings will require further improvements to these models.

1 Introduction

Much of the information available online, and hence a large fraction of the data large language models (LLMs) are tasked with processing, is the product
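Aside: as a rough illustration of the "rational models of learning from motivated testimony" mentioned in the abstract, the following is a minimal sketch of Bayesian source discounting. It is a toy formulation under assumed likelihoods, not the paper's actual model, and the function name and parameters are hypothetical.

# A minimal sketch (assumed toy formulation, NOT the paper's model):
# a Bayesian "vigilant listener" hears a speaker praise a product and
# updates its belief that the praise is true, marginalizing over whether
# the speaker is unbiased (praises only when the product is good) or
# incentivized (praises regardless of quality, e.g. a sales pitch).

def posterior_claim_true(prior_true: float, prior_biased: float) -> float:
    """P(product is good | speaker said 'good').

    prior_true   -- P(product is good) before hearing the speaker
    prior_biased -- P(speaker is incentivized to praise no matter what)
    """
    # P(speaker says 'good' | product good/bad, speaker unbiased)
    p_good_given_true_unbiased = 1.0
    p_good_given_false_unbiased = 0.0
    # P(speaker says 'good' | either state, speaker incentivized)
    p_good_given_biased = 1.0

    joint_true = prior_true * (
        (1 - prior_biased) * p_good_given_true_unbiased
        + prior_biased * p_good_given_biased
    )
    joint_false = (1 - prior_true) * (
        (1 - prior_biased) * p_good_given_false_unbiased
        + prior_biased * p_good_given_biased
    )
    return joint_true / (joint_true + joint_false)

# The more likely the speaker is incentivized, the more the praise is
# discounted back toward the prior:
for p_biased in (0.0, 0.5, 0.9):
    print(p_biased, round(posterior_claim_true(0.5, p_biased), 3))

Under these assumptions the posterior falls from 1.0 (a surely unbiased speaker) back toward the 0.5 prior as the probability of bias rises, which is the kind of discounting behavior the controlled experiments described in the abstract test for.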