What is it about?
Large language models (LLMs) have impressive capabilities, but exactly how their "attention" compares to human attention has remained unclear. In this paper, we propose that AI attention mechanisms are functionally similar to a concept from visual neuroscience called the "priority map". In the human brain, a priority map directs our focus by combining what is naturally eye-catching with what is relevant to our current goals. We suggest that Transformer-based AI models operate on a very similar principle: they combine the basic statistical relationships among words with the overarching task given in a prompt to decide which information is most important. Although the human brain uses biological cells and AI uses computer code, both systems employ layered structures to dynamically prioritize information and manage limited resources. By highlighting this shared strategy, our research aims to provide new computational models for understanding human cognition and to inspire the development of more adaptable, human-like AI systems.
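For readers curious about the mechanism behind this analogy, here is a minimal NumPy sketch of standard scaled dot-product attention (the core Transformer operation the paper compares to a priority map). The variable names and toy data are illustrative, not taken from the paper; the point is only that the softmax turns relevance scores into a normalized, priority-like weighting over a limited set of items.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: exponentiate and normalize.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer attention: each query scores every key,
    and the softmax converts those scores into a priority-like
    weighting over the values (a limited-resource allocation)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # relevance of each key to each query
    weights = softmax(scores, axis=-1)  # normalized "priority" over positions
    return weights @ V, weights

# Toy example: 3 tokens, each with a 4-dimensional representation.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
# Each row of w sums to 1: attention is a fixed budget spread over inputs.
```

Because each row of the weight matrix sums to one, attending more strongly to one token necessarily means attending less to the others, which is the same resource-limited competition that characterizes a biological priority map.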
Why is it important?
Our theoretical framework bridges a critical gap by shifting the comparison between artificial and biological attention from physical structure to functional principles. As large language models (LLMs) grow increasingly sophisticated, understanding their underlying mechanisms is highly timely for both computer science and neuroscience. Two significant implications of our findings are:

Advancing AI agents: The human brain's priority map acts as a pre-action decision hub, not merely a sensory filter. Applying this biological blueprint to AI could help evolve standard LLMs into active agents capable of making efficient, human-like decisions in complex environments.

A testbed for neuroscience: While evidence shows that multiple priority maps exist across human attention networks, a working computational model of their integration is still lacking. Transformer multi-head attention provides a novel, instantiable framework for modeling how distinct brain areas collaborate to produce a single, coherent attentional focus.
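The multi-head idea mentioned above can be sketched concretely. In the toy NumPy code below (illustrative only, with random matrices standing in for learned weights), each head computes its own attention pattern over the same input, loosely analogous to one priority map, and a final linear layer merges the heads into a single output, loosely analogous to one integrated attentional focus.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: exponentiate and normalize.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads, rng):
    """Minimal multi-head attention: several independent attention
    'maps' over the same input, merged by one output projection."""
    n, d = X.shape
    d_head = d // n_heads
    head_outputs = []
    for _ in range(n_heads):
        # Random per-head projections stand in for learned weights.
        Wq, Wk, Wv = (rng.standard_normal((d, d_head)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(d_head)       # this head's own "map"
        head_outputs.append(softmax(scores) @ V)
    Wo = rng.standard_normal((d, d))             # merges heads into one focus
    return np.concatenate(head_outputs, axis=-1) @ Wo

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))  # 5 tokens, model dimension 8
out = multi_head_attention(X, n_heads=2, rng=rng)
# Many head-level maps in, one combined representation out.
```

The design choice worth noticing is that integration happens through a single shared projection (Wo): the heads never see each other's weights directly, yet their outputs are reconciled into one representation, which is precisely the kind of instantiable integration scheme that a neuroscience model of multiple priority maps currently lacks.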
Perspectives
Writing this article was a uniquely rewarding opportunity to synthesize our diverse clinical and research backgrounds. As a team, we combine perspectives from computational modeling and machine learning, visual attention research, and functional neurosurgery. By looking at large language models through this multidisciplinary lens, we realized that the artificial attention mechanisms used in modern machine learning functionally mirror the biological priority maps we study in primate vision. Furthermore, as we work in the clinic to understand and repair broken neural circuits using neuromodulation, this theoretical framework offers a dual promise. Not only can the brain's elegant prioritization strategies inspire smarter, more efficient AI architectures, but these advanced artificial networks can simultaneously serve as tangible computational testbeds to help us decipher, and eventually heal, the complex, distributed attention maps within the human brain.
Koorosh Mirpour
University of Texas Southwestern Medical Center
Read the Original
This page is a summary of: A priority map is all you need: Exploring the roots of neural mechanisms underlying transformer-based large language models., Psychological Review, February 2026, American Psychological Association (APA),
DOI: 10.1037/rev0000616.