Being You: A New Science of Consciousness
November 16, 2023 12:09 PM

Anil Seth positions consciousness as prediction: Rather than your mind passively receiving sensory information from outside, it is always predicting the world around it, trying to generate a plausible 'controlled hallucination' to keep you alive, always correcting the predictions (sometimes unsuccessfully) based on new information from the senses. Further, he extends this to the sense of self itself--the brain hallucinates a self as a method of internal monitoring through prediction. All along the way, he provides engaging examples from his research lab, and looks at some competing theories.
posted by mittens (10 comments total) 8 users marked this as a favorite
 
I thought I might share this shorter piece that got me to read his book: The Hard Problem of Consciousness Is A Distraction From The Real One.

He's got some interesting material on Integrated Information Theory as well--he's not a believer, but he's sympathetic to its outlook and goals--which seemed well-timed given the recent open letter saying IIT is a pseudoscience.

I do wish there had been a little more on actual brains in the book--there's an awful lot on what the brain is doing, but not much linking these predictive functions to particular pinkish-gray clumps. But all in all an eye-opening read, and now I've got to go around reading more about this prediction stuff. (What is a Bayyyyyyes?)
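(For anyone genuinely wondering: Bayes' rule is the recipe for updating a belief when new evidence arrives, and it's the math underneath all this prediction talk. A toy example of my own, not the book's--the numbers are made up:

    # Toy Bayes update (my own illustration, not from Seth's book):
    # how much should hearing pattering on the window raise my
    # belief that it's raining?
    prior_rain = 0.2               # P(rain) before any evidence
    p_patter_given_rain = 0.9      # P(sound | rain)
    p_patter_given_dry = 0.1       # P(sound | no rain), e.g. wind

    # Bayes' rule: P(rain | sound) = P(sound | rain) * P(rain) / P(sound)
    evidence = (p_patter_given_rain * prior_rain
                + p_patter_given_dry * (1 - prior_rain))
    posterior_rain = p_patter_given_rain * prior_rain / evidence

    print(f"P(rain | pattering) = {posterior_rain:.2f}")  # ~0.69

The posterior then becomes the new prior for the next round of evidence, which is roughly the loop these predictive-brain stories keep running.)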
posted by mittens at 12:16 PM on November 16, 2023 [3 favorites]


This looks great - reserved the book.
posted by whatevernot at 2:31 PM on November 16, 2023


My friend was reading this and because she had a headache I read a chapter or so aloud to her--chewy, interesting stuff.
posted by praemunire at 3:10 PM on November 16, 2023


Sounds like a retread of Jeff Hawkins' On Intelligence from 2004. Except I'm not sure why consciousness needs to be involved in the prediction -> observation -> refinement -> prediction loop. Hawkins posits the cortical column as the functional unit of mental computation, with no need for consciousness to intervene.
posted by Gyan at 9:03 PM on November 16, 2023


Gyan, I'll probably mis-state his argument, but consciousness for Seth arises out of 'interoceptive inference'--that is, predictions aren't just being made for the chairs we might stumble into, or the bears who might be chasing us, but for our internal states as well. Our brains, without direct access to what's going on in our bodies, have to do the same kind of predictions.

But predictions can also exhibit some degree of change-blindness. That is, we aren't taking into account every single change our senses can pick up. Some are too slow, some are too subtle, etc. This can be demonstrated easily in the visual realm with very slowly changing colors. Something similar could be happening to create a sense of a permanent self. From chapter 9:

"We will be better able to maintain our physiological and psychological identity, at every level of selfhood, if we do not (expect to) perceive ourselves as continually changing. [...] we perceive ourselves as stable over time because we perceive ourselves in order to control ourselves, not in order to know ourselves. [...] Our perceptual best guesses need to be experienced as really existing out there in the world, rather than as the brain-based constructions that in truth they are. The same reasoning holds for the self too. Just as it seems as though the chair in the corner really is red, and that a minute really has passed since I started writing this sentence, the predictive machinery of perception when directed inwardly makes it seem as though there really is a stable essence of "me" at the center of everything."
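Here's a toy way to see how that could work--my own sketch, nothing like Seth's actual math: a predictor that quietly absorbs small prediction errors into its running guess and only flags a change when the error clears a threshold. Drift the signal slowly and the change is never flagged, even though it ends up far from where it started; jump it suddenly and it's noticed at once.

    # Toy sketch (mine, not Seth's model): a percept that quietly
    # absorbs small prediction errors and only flags big ones.
    THRESHOLD = 5.0   # how much surprise it takes to notice a change
    LEAK = 0.5        # how fast small errors get folded into the guess

    world, percept = 100.0, 100.0
    for step in range(200):
        world += 0.2                   # slow drift, like a color fading
        error = world - percept        # prediction error
        if abs(error) > THRESHOLD:
            percept = world            # big surprise: change noticed
            print("change noticed at step", step)
        else:
            percept += LEAK * error    # quietly absorb the error

    # Nothing prints: the world drifted from 100 to 140, unnoticed.
    world += 20                        # now a sudden jump
    print("jump noticed:", abs(world - percept) > THRESHOLD)  # True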
posted by mittens at 5:03 AM on November 17, 2023 [4 favorites]


That still doesn't explain why any of this sensory and metacognitive processing needs to be *experienced*. Why can't 'Our perceptual best guesses' be computationally treated 'as really existing out there in the world'? A computer isn't held to feel the contents of its memory or computation, whatever strategy is needed to be a correct & efficient computer.
posted by Gyan at 7:57 PM on November 17, 2023


Well, obviously if I give a satisfactory answer to "why," I'll be the first person to have won a Nobel on the basis of a MetaFilter comment! But one thing I've found interesting about this predictive model, which at first seems wasteful and counterintuitive--why can't brains just, like...see what they see?--is that the predictions turn out to be far more efficient. They're essentially doing the wetware version of a compression algorithm. Rather than creating an entirely new reality in every frame (so to speak), they update only on 'prediction errors'--i.e., surprises coming through the senses. We can process huge amounts of reality--inside and out--extraordinarily quickly, thanks to the compression, and thus have a sense of continuity that seems necessary for the narrative side of our consciousness.
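To make the compression analogy concrete, here's a toy delta encoder--my own sketch, not anything from the book: the 'prediction' is simply that each frame will look like the last one, and all we store are the prediction errors.

    # Toy "prediction error" compression (my sketch, not the book's):
    # predict "each value stays the same as last frame," and store
    # only the surprises where that prediction fails.

    def encode(frames):
        """Store frame 0 whole, then only (index, new_value) surprises."""
        deltas = [list(frames[0])]
        for prev, cur in zip(frames, frames[1:]):
            deltas.append([(i, v) for i, (p, v)
                           in enumerate(zip(prev, cur)) if v != p])
        return deltas

    def decode(deltas):
        """Rebuild every frame by correcting the prediction's errors."""
        frames = [list(deltas[0])]
        for surprises in deltas[1:]:
            frame = list(frames[-1])   # prediction: same as before
            for i, v in surprises:     # correct it where it erred
                frame[i] = v
            frames.append(frame)
        return frames

    frames = [[3, 3, 3, 3], [3, 3, 3, 3], [3, 9, 3, 3], [3, 9, 3, 3]]
    assert decode(encode(frames)) == frames  # lossless, but far smaller

When frames mostly resemble each other--as reality mostly does from moment to moment--almost every delta is empty, which is the efficiency win.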

Suddenly some weirdnesses begin to make sense--to steal an example from another book that I'll be posting here soon, there's the phantom buzz of a smartphone against your leg, if you usually keep your phone in your pocket. I keep my phone on my desk during the day, and sure enough, sometimes I will feel it in my pocket nonetheless, vibrating. A completely made-up sensation that comes from my brain mis-predicting something. Or, another example, my tendency to see my dog where she's not. She's small and blackish-grayish-brown, a very dull dark color, and so sometimes I'll see her and smile and start to say something to her, when suddenly she will resolve into a dark bookbag or pair of boots. My brain didn't look at something and say, aha, a vague dark undifferentiated mass, I'll wait to figure out what it is until I get closer or turn a light on--it went right ahead and predicted: dog.

Now, it does seem absolutely true that there can be intelligence without consciousness. Some animals problem-solve without giving signs of this form of experiential consciousness. The octopus being probably the most contentious of examples, with some researchers certain they're conscious, others certain that they're not...Nicholas Humphrey, whose recent book I also posted, points out that young octopi do not play, unlike baby mammals and birds. (Humphrey: "This seems counter-intuitive—until you take on board the argument that, if an animal shows no signs of going out of its way to grow and enrich its phenomenal self, it probably doesn't have a phenomenal self to grow and enrich.")

So...again, "why" is the crucial and unanswerable question, but we could at least take a few stabs at it. What does the additional expense of "experience" get us? You could think in a few directions: The ability to have a theory of mind is quite attractive in a social animal; if you're looking for a partner, or even a friend, you'd want someone who can correctly portray your inner world in their mind, to understand your wants and needs--and that friend is hoping for the reciprocal from you. A sense of self, and a sense of having one's individual, unique self be understood, can create a beneficial cycle of empathic feedback. Moreover, if I sense that I am a self, I can begin to take extraordinary steps to lengthen and improve my life. Not just running from lions to preserve my basic safety, I can observe the world around me and better judge risks. I can imagine actions I could take, playing them out without consequence, before choosing the best course. Or...maybe all of that doesn't matter, and the predictive model, with its efficiency, just does cause a self, because it costs less to think of a you experiencing your life than to put all these senses and potential actions together without a subjective referent?
posted by mittens at 5:40 AM on November 18, 2023 [5 favorites]


Thanks for linking this book, it was a very interesting read. I had not previously encountered the idea that the brain has to predict inwards as well as outwards. I thought he was very convincing until he got to the other animals/AI stage, and then it all seemed too pat, too much like popular-science explanation.
posted by dhruva at 4:33 AM on November 28, 2023 [1 favorite]


The Philosophical Transactions of the Royal Society journal has a special issue out on predictive processing in relation to art.
posted by dhruva at 3:58 AM on December 19, 2023 [1 favorite]


which at first seems wasteful and counterintuitive

I also wonder why we consider "wastefulness" a powerful objection here, unless we believe in intelligent design or similar. A solution is needed, not the most efficient (or otherwise "the best") solution.
posted by praemunire at 1:02 PM on December 20, 2023 [1 favorite]

