“We do not see things as they are, we see them as we are.”
—from Seduction of the Minotaur, by Anaïs Nin (1961)
When we look at the world, what we experience is the world itself, at least if all goes well. Right? If you suffer a hallucination, your fevered brain might impose its own reality on you, but hallucinations are rare. Normally, the objects we see—that kitchen table, the chairs, the yellow lemon on the table—are parts of the mind-independent world. What else could they be?
In the September 2019 Scientific American, Anil Seth argues that, according to neuroscience, the reality we experience is constructed by the brain, even going so far as to call ordinary perception a “controlled hallucination.” https://www.scientificamerican.com/article/the-neuroscience-of-reality/ “The reality we perceive is not a direct reflection of the external objective world. Instead it is the product of the brain’s predictions about the causes of incoming sensory signals.” Seth may not realize how radical his thesis is. Let’s take a look at some excerpts. He begins—appropriately, as far as BQTA is concerned—with color.
Anil Seth: In 2015, a badly exposed photograph of a dress tore across the Internet, dividing the world into those who saw it as blue and black (me included) and those who saw it as white and gold (half my lab). Those who saw it one way were so convinced they were right—that the dress truly was blue and black or white and gold—that they found it almost impossible to believe that others might perceive it differently.

Seth: We all know that our perceptual systems are easy to fool. The popularity of visual illusions is testament to this phenomenon. Things seem to be one way, and they are revealed to be another: two lines appear to be different lengths, but when measured they are exactly the same; we see movement in an image we know to be still. The story usually told about illusions is that they exploit quirks in the circuitry of perception, so that what we perceive deviates from what is there. Implicit in this story, however, is the assumption that a properly functioning perceptual system will render to our consciousness things precisely as they are.

The deeper truth is that perception is never a direct window onto an objective reality. All our perceptions are active constructions, brain-based best guesses at the nature of a world that is forever obscured behind a sensory veil. Visual illusions are fractures in the Matrix, fleeting glimpses into this deeper truth.
Take, for example, the experience of color—say, the bright red of the coffee mug on my desk. The mug really does seem to be red: its redness seems as real as its roundness and its solidity. These features of my experience seem to be truly existent properties of the world, detected by our senses and revealed to our mind through the complex mechanisms of perception.
Yet we have known since Isaac Newton that colors do not exist out there in the world. Instead they are cooked up by the brain from mixtures of different wavelengths of colorless electromagnetic radiation. Colors are a clever trick that evolution has hit on to help the brain keep track of surfaces under changing lighting conditions. And we humans can sense only a tiny slice of the full electromagnetic spectrum, nestled between the lows of infrared and the highs of ultraviolet. Every color we perceive, every part of the totality of each of our visual worlds, comes from this thin slice of reality.
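GNS: A toy sketch can make Seth’s “thin slice” point concrete. The Python below is mine, not Seth’s, and the Gaussian bumps standing in for the three human cone types are rough stand-ins rather than real sensitivity data. The point is only that the eye collapses an entire spectral power distribution into three numbers, so very different physical spectra get reduced to the same kind of summary, and “color” lives in that compression, not in the light.

```python
import numpy as np

# Visible band, 1 nm steps.
wavelengths = np.arange(400, 701, 1)

def bump(peak_nm, width_nm):
    """Crude Gaussian stand-in for a cone sensitivity curve (illustrative, not real data)."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Rough approximations of the long-, medium-, and short-wavelength cones.
L_cone, M_cone, S_cone = bump(565, 50), bump(540, 45), bump(445, 30)

def cone_response(spectrum):
    """Collapse a full spectral power distribution into just three numbers (L, M, S)."""
    return np.array([np.sum(spectrum * c) for c in (L_cone, M_cone, S_cone)])

# Two physically different lights: a narrow-band "yellow" and a red + green mixture.
narrow_yellow = bump(580, 10)
red_plus_green = 0.6 * bump(630, 15) + 0.8 * bump(535, 15)

print(cone_response(narrow_yellow))   # everything the brain ever receives about this light
print(cone_response(red_plus_green))  # a quite different spectrum, the same kind of 3-number code
```

Everything downstream of those three numbers, including the experienced yellow, is the brain’s construction.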
GNS: Cognitive scientist Steve Palmer backs up Seth’s view.
People universally believe that objects look colored because they are colored, just as we experience them. The sky looks blue because it is blue, grass looks green because it is green, and blood looks red because it is red. As surprising as it may seem, these beliefs are fundamentally mistaken. Neither objects nor lights are actually “colored” in anything like the way we experience them. Rather, color is a psychological property of our visual experiences when we look at objects and lights, not a physical property of those objects or lights. The colors we see are based on physical properties of objects and lights that cause us to see them as colored, to be sure, but these physical properties are different in important ways from the colors we perceive. (Palmer, Vision Science: Photons to Phenomenology, MIT Press, 1999, p. 95)
Notice that in the physical description of light there was no mention of color at all. This is because, as Newton said, “the Rays to speak properly are not coloured.” Color becomes relevant only when light enters the eyes of an observer who is equipped with the proper sort of visual nervous system to experience it. The situation is reminiscent of the old puzzle about whether a tree that falls in the forest makes a sound if nobody is there to hear it. There may be light of different wavelengths independent of an observer, but there is no color independent of an observer, because color is a psychological phenomenon that arises only within an observer. (Palmer, p. 97, emphasis in original)
Anil Seth: Just knowing this is enough to tell us that perceptual experience cannot be a comprehensive representation of an external objective world. It is both less than that and more than that. The reality we experience—the way things seem—is not a direct reflection of what is actually out there. It is a clever construction by the brain, for the brain. And if my brain is different from your brain, my reality may be different from yours, too.
GNS: The way we experience color is enough by itself to tell us that the world as experienced is constructed by our brains, not the mind-independent world itself. Your brain constructs your reality, my brain constructs mine. We each get our own individual reality. We hope that most of the time our realities resemble each other, since our brains resemble each other, but our realities can diverge, as the white/gold or blue/black dress illustrates. Do we ever experience the mind-independent world?
Seth: Immanuel Kant realized that the chaos of unrestricted sensory data would always remain meaningless without being given structure by preexisting conceptions or “beliefs,” which for him included a priori frameworks such as space and time. Kant’s term “noumenon” refers to a “thing in itself”—Ding an sich—an objective reality that will always be inaccessible to human perception.
Today these ideas have gained a new momentum through an influential collection of theories that turn on the idea that the brain is a kind of prediction machine and that perception of the world—and of the self within it—is a process of brain-based prediction about the causes of sensory signals.
These new theories are usually traced to German physicist and physiologist Hermann von Helmholtz, who in the late 19th century proposed that perception is a process of unconscious inference. Toward the end of the 20th century Helmholtz’s notion was taken up by cognitive scientists and artificial-intelligence researchers, who reformulated it in terms of what is now generally known as predictive coding or predictive processing.
The central idea of predictive perception is that the brain is attempting to figure out what is out there in the world (or in here, in the body) by continually making and updating best guesses about the causes of its sensory inputs. It forms these best guesses by combining prior expectations or “beliefs” about the world, together with incoming sensory data, in a way that takes into account how reliable the sensory signals are. Scientists usually conceive of this process as a form of Bayesian inference, a framework that specifies how to update beliefs or best guesses with new data when both are laden with uncertainty.
In theories of predictive perception, the brain approximates this kind of Bayesian inference by continually generating predictions about sensory signals and comparing these predictions with the sensory signals that arrive at the eyes and the ears (and the nose and the fingertips, and all the other sensory surfaces on the outside and inside of the body). The differences between predicted and actual sensory signals give rise to so-called prediction errors, which are used by the brain to update its predictions, readying it for the next round of sensory inputs. By striving to minimize sensory-prediction errors everywhere and all the time, the brain implements approximate Bayesian inference, and the resulting Bayesian best guess is what we perceive.
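GNS: Here is what “combining prior expectations with incoming sensory data, weighted by reliability” looks like in the simplest case: a Gaussian prior multiplied by a Gaussian likelihood. This is only a sketch; the numbers and names are mine, not anything from Seth’s lab.

```python
def bayes_update(prior_mean, prior_var, sensory_obs, sensory_var):
    """Posterior mean and variance for a Gaussian prior combined with a Gaussian likelihood."""
    prior_precision = 1.0 / prior_var        # reliability of the expectation
    sensory_precision = 1.0 / sensory_var    # reliability of the incoming signal
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    posterior_mean = posterior_var * (prior_precision * prior_mean
                                      + sensory_precision * sensory_obs)
    return posterior_mean, posterior_var

# The brain "expects" a surface lightness of 0.7; the noisy signal says 0.3.
print(bayes_update(0.7, 0.2, 0.3, 0.05))  # reliable signal: the percept follows the data (~0.38)
print(bayes_update(0.7, 0.2, 0.3, 1.0))   # unreliable signal: the percept stays near the prior (~0.63)
```

The percept lands between expectation and evidence, pulled toward whichever is more precise. That precision weighting is the whole game in predictive processing.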
Seth: To understand how dramatically this perspective shifts our intuitions about the neurological basis of perception, it is helpful to think in terms of bottom-up and top-down directions of signal flow in the brain. If we assume that perception is a direct window onto an external reality, then it is natural to think that the content of perception is carried by bottom-up signals—those that flow from the sensory surfaces inward. Top-down signals might contextualize or finesse what is perceived, but nothing more. Call this the “how things seem” view because it seems as if the world is revealing itself to us directly through our senses.

The prediction machine scenario is very different. Here the heavy lifting of perception is performed by the top-down signals that convey perceptual predictions, with the bottom-up sensory flow serving only to calibrate these predictions, keeping them yoked, in some appropriate way, to their causes in the world. In this view, our perceptions come from the inside out just as much as, if not more than, from the outside in. Rather than being a passive registration of an external objective reality, perception emerges as a process of active construction—a controlled hallucination, as it has come to be known.
Why controlled hallucination? People tend to think of hallucination as a kind of false perception, in clear contrast to veridical, true-to-reality, normal perception. The prediction machine view suggests instead a continuity between hallucination and normal perception. Both depend on an interaction between top-down, brain-based predictions and bottom-up sensory data, but during hallucinations, sensory signals no longer keep these top-down predictions appropriately tied to their causes in the world. What we call hallucination, then, is just a form of uncontrolled perception, just as normal perception is a controlled form of hallucination.
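GNS: The “controlled” in controlled hallucination can be read as a gain on prediction errors. The toy loop below is my illustration, not Seth’s model: a top-down prediction is repeatedly corrected by bottom-up errors, and turning that correction down while letting the top-down side drift produces the “uncontrolled” case.

```python
import numpy as np

def perceive(sensory_signal, sensory_gain=0.5, drift=0.0, seed=0):
    """Toy predictive-processing loop: a prediction updated by its own prediction errors.

    sensory_gain ~ how much the bottom-up error is allowed to correct the prediction.
    drift        ~ spontaneous top-down change unconstrained by the error.
    """
    rng = np.random.default_rng(seed)
    prediction, trajectory = 0.0, []
    for s in sensory_signal:
        error = s - prediction                 # bottom-up prediction error
        prediction += sensory_gain * error     # top-down guess corrected by the error
        prediction += drift * rng.normal()     # top-down activity running free
        trajectory.append(prediction)
    return np.array(trajectory)

world = np.full(50, 1.0)                                         # a constant feature out there
signal = world + 0.1 * np.random.default_rng(1).normal(size=50)  # what reaches the senses

controlled = perceive(signal, sensory_gain=0.5)                  # tracks the world: perception
uncontrolled = perceive(signal, sensory_gain=0.02, drift=0.3)    # unmoored: hallucination
print(controlled[-5:])
print(uncontrolled[-5:])
```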
GNS: So all perception is hallucination? Some are just better than others? Wild! Is anything real?
Seth: This view of perception does not mean that nothing is real. Writing in the 17th century, English philosopher John Locke made an influential distinction between “primary” and “secondary” qualities. Primary qualities of an object, such as solidity and occupancy of space, exist independently of a perceiver. Secondary qualities, in contrast, exist only in relation to a perceiver—color is a good example. This distinction explains why conceiving of perception as controlled hallucination does not mean it is okay to jump in front of a bus. This bus has primary qualities of solidity and space occupancy that exist independently of our perceptual machinery and that can do us injury. It is the way in which the bus appears to us that is a controlled hallucination, not the bus itself.
GNS: Locke’s primary and secondary quality distinction seems vindicated by 21st-century neuroscience. The primary qualities are the ones described by physics. In the 17th century, those properties were shape, mass, solidity, number, motion, etc. See Tom Nagel’s discussion in https://better-questions-than-answers.blog/2019/08/08/what-is-it-about-lemons/
Nagel: The new science of the 17th century was brought into existence when Galileo and Newton developed a quantitative geometrical understanding of the physical world and the laws governing it, a description that left out the familiar qualitative aspects of things as they appear to the separate human senses: their smell, taste, sound, feel and colour. Colours and smells did not enter into physics, and in spite of the look and aroma of a typical chemistry lab, they didn’t enter into chemistry either, when it subsequently developed into a theory of the true composition of everything around us from a limited number of elements. Stroud quotes Galileo: ‘If the ears, the tongue, and the nostrils were taken away, the figure, the numbers and motions of bodies would indeed remain, but not the odours or the tastes or the sounds, which, without the living animal, I do not believe are anything else than names.’ This view was taken up by Descartes, and then enshrined by Locke as the now familiar distinction between primary and secondary qualities – the primary qualities of size, shape and motion being those that belong to things as they are in themselves, and the secondary qualities of colour, sound, taste, feel and smell being mere appearances, produced by the action of these things on our senses.
This conception of the world, as Barry Stroud says, ‘came to seem like nothing more than scientifically enlightened common sense’. And it has survived changes in physical science that have long since rendered obsolete the original catalogue of primary qualities. A modern Locke has to accommodate charge, spin, superstrings and space-time of many more than three dimensions, but the idea is the same: the physical world as it is in itself is describable in quantitative, spatiotemporal terms; everything else we say about it depends on how it affects us or how we react to it. Objective, mind-independent reality is the now totally unfamiliar world described by a rapidly developing physics; the familiar world that we live in, from colours to values, is subjective and mind-dependent.
GNS: What 21st-century neuroscience takes for granted, philosophy has struggled with for the better part of 400 years. Do we really walk around in our own individual reality bubbles? If we can never experience the mind-independent world, what evidence do we have that it’s there at all? Whatever we use to check our perceptions will always be another brain construction. Nevertheless, the basic science seems inescapable.
Seth: A growing body of evidence supports the idea that perception is controlled hallucination, at least in its broad outlines. A 2015 study by Christoph Teufel of Cardiff University in Wales and his colleagues offers a striking example. In this study, patients with early-stage psychosis who were prone to hallucinations were compared with healthy individuals on their ability to recognize so-called two-tone images.
Take a look at the photograph of a two-tone image below. Probably all you will see is a bunch of black-and-white splotches. Now, after you read the rest of this sentence, look at the perceptual shift image below. Then have another look at the first photo; it ought to look rather different. Where previously there was a splotchy mess, there are now distinct objects, and something is happening.

What I find remarkable about this exercise is that in your second examination of the perceptual shift photo, the sensory signals arriving at your eyes have not changed at all from the first time you saw it. All that has changed are your brain’s predictions about the causes of these sensory signals. You have acquired a new high-level perceptual expectation, and this is what changes what you consciously see.
If you show people many of these two-tone images, each followed by the full picture, they might subsequently be able to identify a good proportion of two-tone images, though not all of them. In Teufel’s study, people with early-stage psychosis were better at recognizing two-tone images after having seen the full image than were healthy control subjects. In other words, being hallucination-prone went along with perceptual priors having a stronger effect on perception. This is exactly what would be expected if hallucinations in psychosis depended on an overweighting of perceptual priors so that they overwhelmed sensory prediction errors, unmooring perceptual best guesses from their causes in the world.

Recent research has revealed more of this story. Phil Corlett of Yale University and his colleagues paired lights and sounds in a simple design to engender expectations among their study subjects of whether or not a light would appear on a given experimental trial. They combined this design with brain imaging to uncover some of the brain regions implicated in predictive perception. When they looked at the data, Corlett and his team were able to identify regions such as the superior temporal sulcus, deep in the temporal lobe of the cortex, that were specifically associated with top-down predictions about auditory sensations. This is an exciting new development in mapping the brain basis of controlled hallucinations.
GNS: The “controlled hallucinations,” which constitute the entire world we experience (when all goes well), have a “basis” in the brain. What is the relationship between our hallucinations and their brain basis? Correlation? These are sometimes called the NCC: Neural Correlates of Consciousness. Seth’s controlled hallucinations are our conscious experience, and the job of neuroscience is to figure out which conscious experience correlates with which neural activity.
Seth: In my lab we have taken a different approach to exploring the nature of perception and hallucination. Rather than looking into the brain directly, we decided to simulate the influence of overactive perceptual priors using a unique virtual-reality setup masterminded by our resident VR guru, Keisuke Suzuki. We call it, with tongue firmly in cheek, the “hallucination machine.”
Using a 360-degree camera, we first recorded panoramic video footage of a busy square in the University of Sussex campus on a Tuesday at lunchtime. We then processed the footage through an algorithm based on Google’s AI program DeepDream to generate a simulated hallucination. What happens is that the algorithm takes a so-called neural network—one of the workhorses of AI—and runs it backward. The network we used had been trained to recognize objects in images, so if you run it backward, updating the network’s input instead of its output, the network effectively projects what it “thinks” is there onto and into the image. Its predictions overwhelm the sensory inputs, tipping the balance of perceptual best guessing toward these predictions. Our particular network was good at classifying different breeds of dogs, so the video became unusually suffused by dog presences.
Many people who have viewed the processed footage through the VR headset have commented that the experience is rather reminiscent not of the hallucinations of psychosis but of the exuberant phenomenology of psychedelic trips.
By implementing the hallucination machine in slightly different ways, we could generate different kinds of conscious experience. For example, running the neural network backward from one of its middle layers, rather than from the output layer, leads to hallucinations of object parts, rather than whole objects. As we look ahead, this method will help us match specific features of the computational architecture of predictive perception to specific aspects of what experiences of hallucinations are like. And by understanding hallucinations better, we will be able to understand normal experience better, too, because predictive perception is at the root of all our perceptual experience.
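GNS: Seth describes the algorithm only as taking a trained network and “running it backward,” updating the input instead of the output. The standard DeepDream recipe that phrase refers to is gradient ascent on the image, nudging it to amplify whatever a chosen layer already responds to. The sketch below is my own minimal version of that recipe; the pretrained VGG16, the layer index, and the step size are assumptions for illustration, not details of the Sussex hallucination machine.

```python
import torch
from torchvision import models

# Any pretrained classifier will do for illustration; VGG16 stands in for
# the Sussex team's dog-heavy network, which is not reproduced here.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def deep_dream_step(image, layer_index=20, step_size=0.01):
    """One DeepDream step: nudge the image toward exciting a chosen layer more strongly."""
    image = image.clone().requires_grad_(True)
    activation = image
    for i, layer in enumerate(vgg):
        activation = layer(activation)
        if i == layer_index:      # earlier layers -> textures and parts; later -> whole objects
            break
    activation.norm().backward()  # how strongly does this layer already "see" its features here?
    with torch.no_grad():
        # Gradient ASCENT on the input: the network's predictions reshape the image.
        return image + step_size * image.grad / (image.grad.abs().mean() + 1e-8)

frame = torch.rand(1, 3, 224, 224)  # stand-in for one frame of the panoramic footage
for _ in range(20):
    frame = deep_dream_step(frame)
```

Running the ascent against an earlier layer amplifies textures and object parts, against a later layer whole objects, which matches the middle-layer effect Seth describes.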
GNS: If the neural inputs to your brain could be simulated convincingly enough, you could experience any reality imaginable. That’s what’s going on in The Matrix and in the Experience Machine. https://better-questions-than-answers.blog/2019/04/01/the-experience-machine/
Seth: Although the hallucination machine is undoubtedly trippy, people who experience it are fully aware that what they are experiencing is not real. Indeed, despite rapid advances in VR technology and computer graphics, no current VR setup delivers an experience that is sufficiently convincing to be indistinguishable from reality…
The basic idea is simple. We again prerecorded some panoramic video footage, this time of the interior of our VR lab rather than of an outside campus scene. People coming to the lab are invited to sit on a stool in the middle of the room and to put on a VR headset that has a camera attached to the front. They are encouraged to look around the room and to see the room as it actually is, via the camera. But at some point, without telling them, we switch the feed so that the headset now displays not the live real-world scene but rather the prerecorded panoramic video. Most people in this situation continue to experience what they are seeing as real even though it is now a fake prerecording.
I find this result fascinating because it shows that it is possible to have people experience an unreal environment as being fully real. This demonstration alone opens new frontiers for VR research: we can test the limits of what people will experience, and believe, to be real.
The idea that the world of our experience might not be real is an enduring trope of philosophy and science fiction, as well as of late-night pub discussions. Neo in The Matrix takes the red pill, and Morpheus shows him how what he thought was real is an elaborate simulation, while the real Neo lies prone in a human body farm, a brain-in-a-vat power source for a dystopian AI.
Philosopher Nick Bostrom of the University of Oxford has famously argued, based largely on statistics, that we are likely to be living inside a computer simulation created in a posthuman age. I disagree with this argument because it assumes that consciousness can be simulated—I do not think this is a safe assumption—but it is thought-provoking nonetheless.
GNS: Is it surprising that Seth disagrees with Bostrom that consciousness can be simulated? Understanding the disagreement would require getting clear about what “simulating consciousness” means. Presumably, Seth would have no problem with the idea that external reality can be simulated in consciousness. That’s what The Matrix and the Experience Machine are doing. Seth’s work in VR is the first step toward the Experience Machine. Bostrom’s crazy idea is that we are characters in a simulation of the universe run by a higher civilization on a computer the size of a planet (I think). Seth is questioning whether those cartoon-like characters in the simulation could be conscious (I think). Maybe consciousness can be simulated, but the simulation would not itself be conscious (whatever that means). Since we are conscious, we are not characters in a simulated universe.
Seth: Although these chunky metaphysical topics are fun to chew on, they are probably impossible to resolve. Instead what we have been exploring throughout this article is the relation between appearance and reality in our conscious perceptions, where part of this appearance is the appearance of being real itself.
The central idea here is that perception is a process of active interpretation geared toward adaptive interaction with the world through the body rather than a recreation of the world within the mind.
GNS: What is the distinction between “active interpretation” and “recreation of the world in the mind” supposed to amount to? What Seth is talking about throughout his article could fairly be described as the recreation of the B&W world in color in the mind. Is this a hint of awareness of the philosophical problems? Surely any kind of hallucination—even a controlled one—is a recreation of the world.
Seth: The contents of our perceptual worlds are controlled hallucinations, brain-based best guesses about the ultimately unknowable causes of sensory signals. And for most of us, most of the time, these controlled hallucinations are experienced as real. As Canadian rapper and science communicator Baba Brinkman suggested to me, when we agree about our hallucinations, maybe that is what we call reality.
But we do not always agree, and we do not always experience things as real. People with dissociative psychiatric conditions such as derealization or depersonalization syndrome report that their perceptual worlds, even their own selves, lack a sense of reality. Some varieties of hallucination, various psychedelic hallucinations among them, combine a sense of unreality with perceptual vividness, as does lucid dreaming. People with synesthesia consistently have additional sensory experiences, such as perceiving colors when viewing black letters, which they recognize as not real. Even with normal perception, if you look directly at the sun you will experience the subsequent retinal afterimage as not being real. There are many such ways in which we experience our perceptions as not fully real.
What this means to me is that the property of realness that attends most of our perceptions should not be taken for granted. It is another aspect of the way our brain settles on its Bayesian best guesses about its sensory causes. One might therefore ask what purpose it serves. Perhaps the answer is that a perceptual best guess that includes the property of being real is usually more fit for purpose—that is, better able to guide behavior—than one that does not. We will behave more appropriately with respect to a coffee cup, an approaching bus or our partner’s mental state when we experience it as really existing.
But there is a trade-off. As illustrated by the dress illusion, when we experience things as being real, we are less able to appreciate that our perceptual worlds may differ from those of others. (The leading explanation for the differing perceptions of the garment holds that people who spend most of their waking hours in daylight see it as white and gold; night owls, who are mainly exposed to artificial light, see it as blue and black.) And even if these differences start out small, they can become entrenched and reinforced as we proceed to harvest information differently, selecting sensory data that are best aligned with our individual emerging models of the world, and then updating our perceptual models based on these biased data.
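GNS: The daylight-versus-artificial-light explanation can be put in one line of arithmetic. The light reaching the eye confounds illuminant and surface, so the surface color you end up seeing depends on which illuminant your visual system discounts. The RGB numbers below are invented for illustration; only the direction of the effect matters.

```python
import numpy as np

# The light reaching the eye is (roughly) illuminant x surface reflectance, channel by channel.
observed = np.array([0.55, 0.50, 0.60])        # an ambiguous bluish-grey pixel (R, G, B)

daylight_prior   = np.array([0.9, 0.9, 1.0])   # assume bluish daylight   -> discount blue
artificial_prior = np.array([1.0, 0.9, 0.7])   # assume warm indoor light -> discount yellow

# "Discounting the illuminant": divide out the assumed light to recover the surface.
print(observed / daylight_prior)     # surface comes out warmer -> the white/gold reading
print(observed / artificial_prior)   # surface comes out bluer  -> the blue/black reading
```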
GNS: BQTA contributor Rick Lenon, M.D. has long maintained “what we experience is not the world, but rather a model of the world”. We will both have much more to say about that soon.
Seth: We are all familiar with this process from the echo chambers of social media and the newspapers we choose to read. I am suggesting that the same principles apply also at a deeper level, underneath our sociopolitical beliefs, right down to the fabric of our perceptual realities. They may even apply to our perception of being a self—the experience of being me or of being you—because the experience of being a self is itself a perception.
This article was originally published with the title “Our Inner Universes” in Scientific American 321, 3, 40-47 (September 2019)
doi:10.1038/scientificamerican0919-40
MORE TO EXPLORE
Shift toward Prior Knowledge Confers a Perceptual Advantage in Early Psychosis and Psychosis-Prone Healthy Individuals. Christoph Teufel et al. in Proceedings of the National Academy of Sciences USA, Vol. 112, No. 43, pages 13,401–13,406; October 27, 2015.
A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology. Keisuke Suzuki et al. in Scientific Reports, Vol. 7, Article No. 15982; November 22, 2017.
Being a Beast Machine: The Somatic Basis of Selfhood. Anil K. Seth and Manos Tsakiris in Trends in Cognitive Sciences, Vol. 22, No. 11, pages 969–981; November 1, 2018.
“If real is what you can feel, smell, taste and see, then ‘real’ is simply electrical signals interpreted by your brain” ~Morpheus
Another “strange but true” factoid: solid matter is composed mostly of empty space.
I think the truth of the matter is even more significant than the neuroscience findings. For if what our brain gives us is just a kind of “controlled hallucination,” at what point does that stop?
Taking this a step further leads to the idea of people’s perceptions of and reactions to others’ behavior. “What this means to me is that the property of realness that attends most of our perceptions should not be taken for granted.” This too can be said of how we interpret others’ behavior: our interpretations are not facts, nor are they “real” in this sense, although we see them as such at times. We must acknowledge this in order to see past our own lens. “When we experience things as being real, we are less able to appreciate that our perceptual worlds may differ from those of others.”
So, if a tree falls in the forest, does it make a sound if nobody is there to hear it?
Perhaps it doesn’t make a sound even IF somebody is there to hear it. The answer lies in our perception of sound. We hear sound via air vibrations traveling through our ear canals and vibrating the eardrum, then amplified by the tiny bones of the middle ear. Simplifying this process, in the end the auditory nerve carries this as an electrical signal to the brain, which turns it into a sound that we recognize and understand.
So the crashing of a falling tree as it hits the ground is, in reality, simply the displaced movement of air captured by our human auditory system. It makes no sound as we know it. The sound we know as a falling tree is assigned by our brain.
This raises the question: if in reality there is no color, no sound, no smell, and reality is a mere fabrication constructed internally by our brain, what is Reality? Simply a concoction of relatable vibrating energy? And is that what we are as well? YIKES!