
EbookBell.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebooks are released by publishers in other formats such as .azw, .mobi, .epub, or .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link:  https://ebookbell.com/faq 


We offer FREE conversion to the popular format you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist you within six hours at most.

EbookBell Team

How Large Language Models Encode Theory-of-Mind: A Study on Sparse Parameter Patterns by Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, Denghui Zhang

  • SKU: BELL-239164658
How Large Language Models Encode Theory-of-Mind: A Study on Sparse Parameter Patterns by Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, Denghui Zhang
$ 35.00 $ 45.00 (-22%)

5.0

40 reviews

How Large Language Models Encode Theory-of-Mind: A Study on Sparse Parameter Patterns is available for instant download after payment.

Publisher: x
File Extension: PDF
File size: 1.23 MB
Author: Yuheng Wu & Wentao Guo & Zirui Liu & Heng Ji & Zhaozhuo Xu & Denghui Zhang
Language: English
Year: 2025

Product description

How Large Language Models Encode Theory-of-Mind: A Study on Sparse Parameter Patterns by Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, and Denghui Zhang is available for instant download after payment.

npj Artificial Intelligence, doi:10.1038/s44387-025-00031-9

This paper investigates the emergence of Theory-of-Mind (ToM) capabilities in large language models (LLMs) from a mechanistic perspective, focusing on the role of extremely sparse parameter patterns. We introduce a novel method to identify ToM-sensitive parameters and reveal that perturbing as little as 0.001% of these parameters significantly degrades ToM performance while also impairing contextual localization and language understanding. To understand this effect, we analyze their interactions with core architectural components of LLMs. Our findings demonstrate that these sensitive parameters are closely linked to the positional encoding module, particularly in models using Rotary Position Embedding (RoPE), where perturbations disrupt dominant frequency activations critical for contextual processing. Furthermore, we show that perturbing ToM-sensitive parameters affects LLMs' attention mechanism by modulating the angle between queries and keys under positional encoding. These insights provide a deeper understanding of how LLMs acquire social reasoning abilities, bridging AI interpretability with cognitive science.
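The kind of ablation the abstract describes, ranking parameters by some sensitivity score and perturbing only a top fraction as small as 0.001%, can be sketched generically. This is a minimal illustration, not the paper's method: the `sensitivity` scores here are random placeholders standing in for whatever ToM-sensitivity measure the authors actually compute.

```python
import random

def perturb_sparse(params, sensitivity, fraction=1e-5):
    """Zero out the top `fraction` of parameters ranked by sensitivity.

    `params` and `sensitivity` are equal-length lists of floats.
    With fraction=1e-5 (i.e. 0.001%), only 10 of a million
    parameters are touched.
    """
    k = max(1, int(len(params) * fraction))
    # indices of the k most sensitive parameters
    top = sorted(range(len(params)), key=lambda i: sensitivity[i])[-k:]
    out = list(params)
    for i in top:
        out[i] = 0.0  # ablate (perturb) the sensitive parameter
    return out, top

rng = random.Random(0)
params = [1.0] * 1_000_000
sens = [rng.random() for _ in range(1_000_000)]  # placeholder scores
new, idx = perturb_sparse(params, sens)  # 0.001% of 1M parameters = 10 values
```

The point of the sketch is only the scale: a perturbation touching ten parameters out of a million is enough, per the paper, to measurably degrade ToM behavior.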

Related Products