What is it about?

When we look out at the world, we both "see things" and "feel things". From the earliest days of psychology and neuroscience, researchers have debated where exactly those feelings come from. (A classic example of this debate is a thought experiment about the "fear response" we feel upon "seeing" a bear in a forest: Is the source of our fear our recognition of the bear as a "bear", or our mind's conscious interpretation of our body's subconscious impulse to run?) Answering these kinds of questions with human behavioral or neuroscientific experiments alone is difficult, because (as humans studying other humans) we can never fully separate the experience of "seeing" from the experience of "feeling" in any scientifically sound (or ethical) way.

With recent advances in artificial intelligence research, however, we now have access to machines that demonstrate many of the behaviors we associate with "seeing" (e.g. recognizing objects), but effectively none of the behaviors we associate with "feeling" (e.g. fluctuating changes in our bodily states not directly linked to changes in our immediate surroundings). This controllable separation of "seeing" and "feeling" in machines allows us to use them as proxies for better understanding the relationship between the two in humans -- and in particular, how much of the "feeling" we feel in response to things we "see" is explained directly by the act of "seeing" itself (visual perception): the most immediate process by which our brains translate the light that reflects off of "things" in the world (e.g. brown fur) into meaning (e.g. the recognition that what we're looking at is a "bear").

The machines we study in this paper suggest to us that a sizable majority of the "feelings" we feel (on average) in response to the things we "see" may come directly from the act of "seeing" -- a far larger proportion than other psychological theories (e.g. ones that focus more on our "conscious thoughts") have suggested in the past.

Why is it important?

In a world that increasingly bombards our senses with information (often at a pace that leaves our slower-moving emotional minds little time to keep up), understanding the relationship between "seeing" and "feeling" is more important than ever. Fortunately, the same technologies (machine intelligences) that are in many ways responsible for accelerating this flood of information (some might say, overload) may in turn offer one of the key tools we need to understand it better: the ability to experimentally separate and actively control the factors we consider the core underpinnings of our experience as humans. While imperfect (since machines will always remain "proxies" of the human phenomena we're ultimately interested in), using machines to decompose hard-to-study, sometimes abstract aspects of human experience into more tractable (science-friendly) case studies could help us get a better hold on the rapidly changing world around (and within) us.

Perspectives

Since I was young, I have wanted (perhaps more than almost anything else) to understand the experience of beauty. One of my earliest "memories" -- though I recognize it may be entirely reconstructed -- is of the overwhelming (perhaps "sublime") feeling I felt watching the rays of the sun at sunset dancing between the interwoven vines (and lingering streamers from a long-forgotten backyard party) hanging off the branch of a silver oak, a light breeze moving every shape I saw in motions that had no discernible pattern, but which seemed to me (at the time, or at least in my mind) like choreography. I've come to see beauty in many things and in many ways since then -- in the partner I share my life with, the cat sleeping on our sofa, the elegance of algorithms, the scientific quest to grapple with uncertainty -- but the core question of "whence comes this feeling that I feel" remains a constant. As I started my own journey in science, I struggled initially to think of how I could study this question in a way that simultaneously did justice to its kaleidoscopic complexity, the idiosyncrasy of similar experiences felt by others, and the scientific rigor I believed in so strongly as a pillar of communal progress. Then came the machines! (Or at least, then came the machine intelligences that so frequently dominate our headlines these days.) Admittedly, I was always intuitively rather fond of them, but as a human interested in understanding not just my own experience but the experiences of other humans, I recognized the potential of these machines both as sources of "fear" and as sources of scientific control in otherwise "hard-to-control" fields of study.
This work, for me at least, is the synthesis of many years (and many attempts) to reconcile these two potentialities -- to start to address my questions about feeling (and beauty, in particular) in ways that make sense to me, that I hope make some sense to others, and of course, that I hope will help keep us human in the age of AI. I tend personally to think of this work as more of "a starting point", and we have a long way to go, but as the saying goes (whether or not it involves a bear), "you cannot run until you learn to walk". And perhaps, if what we find in this work stands the test of time, "you cannot feel until you learn to see".

Colin Conwell
Harvard University

Read the Original

This page is a summary of: The perceptual primacy of feeling: Affectless visual machines explain a majority of variance in human visually evoked affect, Proceedings of the National Academy of Sciences, January 2025.
DOI: 10.1073/pnas.2306025121.