David Chalmers and Sean Carroll

Are you conscious? Am I? Maybe we are zombies, or simulated characters in a video game played by a more advanced civilization.

What is consciousness, anyway? And what would a scientific explanation of consciousness be like? Does our experience of the world stand outside science, or at least outside current science?

Should we be materialists, or should we consider some kind of dualism? Are we forced into panpsychism (what’s that?)? Or maybe panprotopsychism (what’s that?)?

These and other fun topics are taken up by David Chalmers and Sean Carroll on Sean’s Mindscape podcast series.


Here is a new transcript of their conversation, edited for enhanced readability by the BQTA staff. We hope it gets everyone deeper into the discussion. Enjoy!


Episode 25: David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation

Sean Carroll: Hello, everyone, and welcome to the Mindscape Podcast. I’m your host, Sean Carroll. If any of you have read The Big Picture, my most recent book, you know that one of the things we have to think about if we’re naturalists trying to come to terms with the world of our experience is the phenomenon of consciousness. Actually, most of you probably know that even if you haven’t read that book; it’s a pretty well-known fact. The question, of course, is, “What is demanded of us by the fact of consciousness?” Can we simply hope to explain consciousness using the same tools we explain other things with, atoms and particles moving according to the laws of physics, according to the Standard Model and the core theory? Or do we need something else somehow that helps us explain what consciousness is and how it came about? I’m someone who thinks we don’t need anything else; I think it’s just a matter of understanding the motion and interactions of physical stuff, from which consciousness emerges as a higher-level phenomenon.

0:00:57 SC: Our guest today is David Chalmers, who’s probably the most well known and respected representative of the other side, of the people who think that you need something beyond just the laws of physics as we currently know them to account for consciousness. David is the philosopher who coined the term “the hard problem of consciousness,” the idea being that the easy problems are things like how you look at things, why you react in certain ways, how you do math problems in your head. The hard problem is our personal experience, what it is like to be you or me rather than somebody else, the first-person subjective experience. That’s the hard problem, and someone like me thinks, “Oh, yeah. We’ll get there. It’s just a matter of words and understanding and philosophy.” Someone like David thinks we need a real change in our underlying way of looking at the world. So he describes himself as a naturalist, someone who believes in just the natural world, no supernatural world. He’s not a dualist who thinks we have a disembodied mind or anything like that, but he’s not a physicalist either; he thinks that the natural world has not only physical properties but mental properties as well.

0:02:01 SC: So I would characterize him as convinced of the problem, but not wedded to any particular answer. David Chalmers is a philosopher whom everyone respects even if they don’t agree with him. He’s a delight to talk to because he is very open-minded about considering different things. Like I said, he’s convinced of this problem, but when it comes to solving it, he will propose solutions without taking them too dogmatically; he will change his mind when good arguments come along. So he’s a great person to talk to about this very, very important problem for naturalists when they try to confront how to understand what it means to be a human being and where consciousness comes from. Also, David has developed a recent interest in the simulation hypothesis, the idea that maybe we could all be living in a simulation running on a computer owned by a very, very advanced civilization in a completely different reality.

0:02:51 SC: So we’ll talk about the hard problem of consciousness, we’ll talk about various philosophical issues, and I won’t pin him down on anything. I’m not trying to argue with him. My point here is not to convince David Chalmers in real time that he’s wrong, but rather to let you, the listeners, hear what his perspective is on these issues, and then hear what my perspective is on these issues, and decide for yourself. Maybe you will change your mind either right now or sometime down the road. So, this is a fun conversation, I’m sure you’ll like it, and let’s go.

0:03:35 SC: David Chalmers, welcome to The Mindscape Podcast.

David Chalmers: Thanks. It’s great to be here.





SC: I’ve discovered in my brief history of having philosophers on the podcast that there’s a lot to say, that we have a lot of ground to cover. I know that you especially have all sorts of interests. Let’s just jump right in to the crowd-pleasing things that we can talk about. You’re one of the world’s experts on the philosophy of consciousness, you coined the phrase “the hard problem of consciousness.” So how would you define what the hard problem is?

DC: The hard problem of consciousness is the problem of explaining how physical processes in the brain somehow give rise to subjective experience. So when you think about the mind, there’s a whole lot of things that need to be explained. Some of them involve our sophisticated behavior, all the things we can do: we can get around, we can walk, we can talk, we can communicate with each other, we can solve scientific problems. But a lot of that is at the level of sophisticated behavioral capacities, things we can do. When it comes to explaining behavior, we’ve got a pretty good bead on how to explain it. In principle at least, if you find the circuit in the brain, a complex neural system which performs some computations and produces some outputs that generate the behavior, then you’ve got an explanation. It may take a century or two to work out the details, but that’s roughly the standard model in cognitive science.

SC: And you’ve lumped these together as the easy problems. Slightly tongue in cheek.

DC: Yeah. So this is what 20-odd years ago, I called the easy problems of the mind and of consciousness in particular, roughly referring to these behavioral problems. Nobody thinks they’re easy in the ordinary sense. The sense in which they’re easy is that we’ve got a paradigm for explaining them. Find the neural mechanism or a computational mechanism; that’s the kind of thing that could produce that behavior. In principle, find the right one, tell the right story, you’ll have an explanation. But when it comes to consciousness, to subjective experience, it looks as if that method doesn’t so obviously apply. There are some aspects of consciousness which are, roughly speaking, behavioral or functional, and you could use the word consciousness for the difference between being awake and responsive, for example, versus being asleep or maybe just for the ability to talk about certain things.

I can talk about the fact that, “Hey, there’s Sean Carroll, there are some books over there and I’m hearing my voice.” Explaining those reports might also be an easy problem. But the really distinctive problem of consciousness is posed not by the behavioral parts, but by the subjective experience, by how it feels from the inside to be a conscious being. I’m seeing you right now. I have a visual image of colors and shapes that are sort of present to me as an element in the inner movie of the mind. I’m hearing my voice, I’m feeling my body, I’ve got a stream of thoughts running through my head. And this is what philosophers call consciousness or subjective experience, and I take it to be one of the fundamental facts about ourselves that we have this kind of subjective experience.

But then the question is, how do you explain it? And the reason why we call it the hard problem is it looks like the standard method of just explaining behaviors and explaining the things we do doesn’t quite come to grips with the question of why is there subjective experience. It seems you could explain all of these things we do: the walking, the talking, the reports, the reasoning. Why doesn’t all that go on in the dark? Why do we need subjective experience? That’s the hard problem.

SC: Sometimes I hear it glossed as the question of what it is like to be a subjective agent, to be a person.

DC: That’s a good definition of consciousness, actually first put forward, or at least made famous, by my colleague Tom Nagel here at NYU in an article back in 1974 called “What Is It Like To Be A Bat?” His thought was we don’t know what it’s like to be a bat, we don’t know what a bat’s subjective experience is like. It’s got this weird sonar perceptual capacity which doesn’t really correspond directly to anything that humans have. But presumably, there is something it’s like to be a bat. A bat is conscious. Most people would say, on the other hand, “There’s nothing it’s like to be a glass of water.” So if that’s right, then a glass of water is not conscious. So this “what it’s like” way of speaking serves at least as an initial intuition pump for the basic difference we’re getting at between systems which are conscious and systems which are not.

SC: And the other word that is sometimes invoked in this context is qualia, the experiences that we have. It’s one thing to see the color red, and a separate thing (if I get it right) to have the experience of the redness of red.

DC: My sense is that this word qualia has gone a little bit out of favor over the last, say, 20-odd years. Maybe 20 years ago you had a lot of people speaking of qualia as a word for the sensory qualities that you come across in experience, and the paradigmatic ones would be the experience of red versus the experience of green. You can raise all these familiar questions about this. How do I know that my experience of the things we call “red” is the same as yours? Maybe it’s the same as the experience you have when you are confronted with the things we call “green”. Maybe your internal experiences are swapped with respect to mine. People call that inverted qualia. That would be “your red is my green.” The feeling of pain would be a quale.

I’m not sure that these qualities are all there is, though, to consciousness. Maybe that’s one reason why qualia have gone out of favor. There’s also maybe an experience to thinking, to reasoning, and to feeling. That’s much harder to pin down in terms of sensory qualities, but there’s still something it’s like. You might think there’s something it’s like to think and to reason, even though it’s not the same as what it’s like to sense.

0:10:11 SC: I want to just for a little bit talk about this question of whether or not you and I have the same experience when we see the color red. I’m not sure I know what that could possibly mean for it to be either the same experience or a different experience. I mean, one is going on in my head, one is going on in your head. In what sense could they be the same? But maybe when I say that, it’s just a reflection of the fact that there’s a hard problem.

DC: To pick a much easier case, some people are red-green color-blind. They don’t even make a distinction between red and green. You know, most people have a red-green axis for color vision and a blue-yellow axis, and something like a brightness axis. But some people, due to things going wrong in their retinal mechanisms, don’t even make the distinction between red and green. I’ve got friends who are red-green color-blind. I’m often asking them, “What is it like to be you?”

“Is it, like, you just see everything in shades of blue and yellow and you don’t get the reds and greens? Or, is it something different entirely?” But we know what it’s like to be them can’t be the same as what it’s like to be us, because reds and greens, which are different for us, are the same for them, so there’s got to be some difference between us, as a matter of logic. My red can’t be the same as their red and my green can’t be the same as theirs. If my red was the same as their red, and my green was the same as their green, and their red is the same as their green, then my red couldn’t be different from my green.

SC: But it is.

DC: As a matter of logic, there has to be some difference there.

SC: I guess the question then is, in what sense could they ever be the same? What is the meaningfulness? I can imagine some kind of operational sameness, right? Like you say the word “red” when you see the color red, in that behavioral sense, but that’s exactly what you don’t want to count.

DC: Most people think intuitively that we can at least grasp the idea that my red is the same as your red. Then it’s an empirical open question that they are, in fact, exactly the same. Now, you might say, “I’m a scientist, I want an operational test.” On the other hand, I’m a philosopher and I’m very skeptical of the idea that you can operationalize everything, that a hypothesis has got to be operationalizable to be meaningful.

There was a movement in philosophy in the first part of the 20th century, logical positivism or logical empiricism, where they said the only meaningful hypotheses are the ones we can test. For various reasons that turned out to have a lot of problems, not least because this very philosophical hypothesis of verificationism turned out not to be one that you could test.

0:12:55 SC: There’s a renaissance of logical positivism on Philosophy Twitter these days.

DC: Oh, is that right? Rudolf Carnap, who was one of the great logical positivists, is one of my heroes. I’ve written a whole book called Constructing the World that was partly based around some of his ideas. Nonetheless, verificationism is not one of them.

I think when it comes to consciousness, in particular, we’re dealing with something essentially subjective. I know I’m conscious, not because I measured my behavior or anybody else’s behavior. It’s because it’s something I experience directly from the first person point of view. I think you’re probably conscious, but it’s not as if I can give a straight out operational definition of it, [such as] if you say you’re conscious, then you’re conscious. Most people think that does not absolutely settle the question. Maybe we’d come up with an AI that says it’s conscious. That would be very interesting, but would it settle the question of whether it’s having a subjective experience? Probably not.

0:13:53 SC: Well, so Alan Turing tried, right? The Turing test was supposed to be a way to judge what’s conscious from what’s not. What are your feelings about the success of that program?

DC: I think it’s not a bad approach. Of course, no machine right now is remotely close to passing the Turing test.

0:14:07 SC: You might as well say what the Turing test is.

DC: The Turing test is a test to see whether a machine can behave in a manner indistinguishable from a normal human being, at least in a verbal conversation over, say, text messaging and the like. Turing thought that eventually we’ll have machines that pass this test: they’ll be indistinguishable from a human interlocutor over hours of conversational testing. Turing didn’t say that at that point machines can think. What he said was that at that point the question of whether machines can think becomes basically meaningless, and that he had provided an operational definition to substitute for it. So, once they pass this test, he says, “That’s good enough for me.”

0:14:56 SC: He talked in the paper about the consciousness objection. You might say that it’s just mimicking consciousness, but not really conscious. As I recall, his response is, “Well, who cares? I can’t possibly test that. Therefore, it’s not meaningful.”

DC: But it turns out that consciousness is one of the things that we value. First, it’s one of the central properties of our minds. And second, many of us think it’s what actually gives our lives meaning and value. If we weren’t conscious, if we didn’t have subjective experience, then we would basically just be automata for whom nothing has any meaning or value. When it comes to the question of developing more and more sophisticated Artificial Intelligence, the question of whether they’re conscious is gonna be absolutely central to how we treat them— to whether they have moral status, whether we should care whether they continue to live or die, and whether they get rights. Many people think if they’re not having subjective experiences, then they’re basically machines and we can treat them the way we treat machines. But if they’re having conscious experiences like ours, then it would be horrific to treat them the way we currently treat machines. If you simply operationalize all those questions, then there’s a danger, I think, that you lose the things that we really care about.

0:16:08 SC: And just so we can get our background assumptions on the table here, for the most part, neither you nor I are coming from a strictly dualist perspective. We’re not trying to explain consciousness in terms of a Cartesian, disembodied, immaterial mind that is a separate substance. As a first hypothesis, we want to say that you and I are made of atoms, that we’re obeying the laws of physics, and that consciousness is somehow related to that, but not an entirely separate category interacting with us. Is that right? Is that fair?

DC: Yeah, although there’s different kinds and different degrees of dualism. My background is very much in mathematics and computer science and physics, and all of my instincts are materialist—to try to explain everything in terms of the processes of physics. Explain biology in terms of chemistry, and chemistry in terms of physics. And this is a wonderful, great chain of explanation. But I do think when it comes to consciousness, this is the one place where that great chain of explanation seems to break down. Roughly because, when it comes to biology and chemistry and all these other fields, the things that need explaining are all basically these easy problems of structure and dynamics and ultimately the behaviors of these systems.

When it comes to consciousness, we seem to have something different that needs explaining. The standard kinds of explanation that you get out of the physics-derived sciences—physics, chemistry, biology, neuroscience and so on—just ultimately won’t add up to an explanation of subjective experience, because they always leave open this further question: “Why is all that sophisticated processing accompanied by consciousness, by subjective experience?” That doesn’t mean, though, that we suddenly need to say it’s all properties of a soul or some religious thing which has existed since the beginning of time and will continue after our death. People sometimes call that substance dualism: maybe there’s a whole separate mental substance that somehow connects up with our physical bodies and interacts with them. That view is much harder to connect to a scientific view of the world.

The direction I end up going is what people sometimes call property dualism, the idea that there are some extra properties of things in the universe. This is something we’re used to in physics. Around the time of Maxwell, we had physical theories that took space and time and mass as fundamental. And then Maxwell wanted to explain electromagnetism, and there was a project of trying to explain it in terms of space and time and mass. Turns out, it didn’t quite work. You couldn’t explain it mechanically and eventually, we ended up positing charge as a fundamental property and some new laws governing electromagnetic phenomena. That was just an extra property in our scientific picture of the world.

I’m inclined to think that something analogous to that in some respects is what we have to do with consciousness as well. Basically, explanations in terms of space and time and mass and charge and whatever the fundamentals are in physics these days are not gonna add up to an explanation of consciousness. So, we need another fundamental property in there as well. And one working hypothesis is, “Let’s take consciousness as an irreducible element of the world, and then see if we can come up with a scientific explanation of it.”

0:19:44 SC: Good. I think we should absolutely be open to that. I don’t go down that road myself. I don’t find it very convincing, but maybe in the next 45 minutes, you’ll convince me. So I do want to get there, but let’s lay a little bit more groundwork first. I think one of the things that makes the hard problem hard is just the fact that you can’t even imagine looking at neurons doing something and saying, “A-ha, that explains it.” Is that fair to say?

DC: Yeah. When you appeal to neural activity in explaining phenomena, there’s a paradigmatic way that works: we see how those neurons serve as a mechanism for performing some function, ultimately generating some behavior. That is the paradigmatic appeal to neurobiology in explanation. Any explanation of that form is not going to add up to an explanation of consciousness. It explains the wrong thing. It will explain behavior, but that was the easy problem. Explaining consciousness is something distinct; that’s the hard problem.

0:20:51 SC: So you think that even if neuroscientists got to the point where every time a person was doing something that we would all recognize as having a conscious experience—maybe silently experiencing the redness of red—they could point to exactly the same neural activity going on in the brain, you would say, “Yes, but this still doesn’t explain my subjective experience.”

DC: Yeah. There’s in fact a very important research program going on right now in neuroscience: the program of finding neural correlates of consciousness, the NCC for short. We’re trying to find the NCC—neural systems active precisely when you’re conscious and which correlate perfectly with consciousness. It’s a very, very important research program, but as it stands, it’s a program for correlation, not for explanation. So we could know that when a certain special kind of neuron fires in a certain pattern, that neural pattern always goes along with consciousness. But then the next question is why? Explain that fact. Why is it that this pattern gives you consciousness?

Nothing that we get out of the Neural Correlates of Consciousness program in neuroscience comes close to explaining that. A lot of people, once they start to think about this, think you need some further fundamental principle that connects the neural correlate of consciousness with consciousness itself. Giulio Tononi has developed a theory—integrated information theory—where he says consciousness goes along with a certain mathematical measure of the integration of information that he calls phi. The more phi you have, the more consciousness you have. And phi is mathematically and physically a respectable quantity. It’s very hard to measure, but in principle…

0:22:43 SC: But you can define it.

DC: It could be measured. There are questions about whether it’s actually well-defined in terms of the details of physics and physical systems, but it’s at least halfway to being something definable. But even if he’s right—that phi, this informational property of the brain, correlates perfectly with consciousness—there’s still the question of why. Prima facie, it looks like you could have had a universe with all of this integration of information going on and no consciousness at all. And yet, in our universe, there’s consciousness. How do we explain that fact? Well, what I regard as the scientific thing to do at this point is to say, “Okay. In science, we boil everything down to fundamental principles and fundamental laws. And if we need to postulate a fundamental law to connect phi with consciousness, then maybe that’s going to end up being the best we can do.”

Just as in physics you always end up with some fundamental laws—whether it’s a principle of gravitation or a grand unified theory that unifies all these different forces—you still end up with some fundamental principles and you don’t explain them further. Something has to be taken as basic. Of course, we want to minimize our fundamental properties as far as we can. Occam’s razor says, “Don’t multiply entities without necessity.” But every now and then, there’s necessity. Maxwell had necessity. And if I’m right, there’s necessity in this case, too.
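[BQTA aside: Tononi’s actual phi is defined over a system’s full cause-effect structure and is notoriously expensive to compute, but the flavor of “integration of information” can be illustrated with a much simpler quantity, the mutual information between two halves of a system. The sketch below is a toy illustration only, not IIT’s real measure, and the example distributions are invented for the demonstration.]

```python
import math

def mutual_information(joint):
    """I(A;B) in bits, given a dict {(a, b): probability} describing
    the joint distribution of two discrete variables A and B."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two perfectly correlated bits share one full bit of information:
# the two halves are "integrated".
correlated = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent fair coins share nothing: zero integration.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
```

Here `mutual_information(correlated)` comes out to 1.0 bit while `mutual_information(independent)` is 0.0; real phi, roughly speaking, generalizes this idea by examining how much a system’s causal structure is lost under every possible partition.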

0:24:12 SC: You hinted at an idea that is one of your most famous philosophical thought experiments: you can imagine a system with whatever phi you want, but we wouldn’t call it conscious. You take this idea to the extreme and say there can be something that looks and acts just like a person, but doesn’t have consciousness.

DC: This is the philosopher’s thought experiment of the zombie. The philosopher’s zombie is somewhat different from the ones you find in Hollywood movies or in Haitian voodoo culture. The ones in voodoo culture, as far as I can tell, are mostly people who have been given some kind of poison, and lack autonomy, volition, a certain kind of free will. The ones in Hollywood movies are beings which are a lot like us, but they’re dead and reanimated.

0:25:05 SC: Yeah, and they want brains.

DC: The philosopher’s zombie is a creature which is exactly like us, functionally and maybe physically, but isn’t conscious. Now, it’s very important to say that nobody—certainly not me— is arguing that zombies actually exist, that some human beings around us are zombies. Actually, I did once meet a philosopher in Dublin who was very concerned that quite a lot of philosophers actually were zombies. They weren’t conscious at all. I was a little bit insulted by this. He seemed to be worried about me.

0:25:40 SC: That might explain a lot. Yes.

DC: Yeah, yeah. He took me to lunch, and he asked me a whole lot of questions about consciousness, and at the…

SC: Your inner experiences?

DC: Yeah. Yeah. And at the end, he said, “Okay. You pass. I think you’re conscious.”

SC: Okay, but a zombie could also pass, right? A zombie would be behaviorally the same, but…

DC: Yeah, behaviorally the same, but no conscious experience. There’s nothing it’s like to be a zombie. Maybe a good way to work up to this is by thinking about some sophisticated artificial intelligence system that produces lots of intelligent responses. It talks to you; maybe it’s an extension of Alexa or Siri that carries on a very sophisticated conversation with us. Most of us are not inclined to think that Alexa and Siri as they stand are conscious, that they’re having subjective experiences.

Now put Alexa in a body like Sophia, a robot that’s out there with a very sophisticated conversational system. Make her smarter and smarter. There’s at least an open question: is she going to be conscious? We can make sense of the hypothesis that she’s conscious. We can also make sense of the hypothesis that she’s not. The extreme case is going to be a complete physical and functional duplicate of a human being with all the brain processing intact, all of the behavior. Maybe even a complete physical duplicate of Sean Carroll. I think I can make sense of the hypothesis when I talk to you that there’d be such a being who’s not conscious— Zombie Sean Carroll. Now, I’m very confident that you’re not Zombie Sean Carroll. I think most human beings are enough like me that they’re going to be conscious, but the point is that at least it seems logically possible. There’s no contradiction in the idea of a being physically just like you without consciousness. And that’s just one way of getting at the idea that since somehow you do have consciousness, then something special and extra has to be going on. You can just put the hard problem of consciousness as the problem of why aren’t we zombies, what differentiates us from zombies?

0:27:50 SC: With some trepidation, let me ask: how does the difference between possible and conceivable come into the zombie argument?

DC: Philosophers like to talk about possible worlds, what goes on in different possible worlds. There’s a possible world where Hillary Clinton won the election in 2016, and there are possible worlds where the Second World War never happened. These are not terribly distinct possible worlds. They might, for example, share roughly the same laws of physics as ours, maybe with small differences in the initial conditions. Some of us think we can also make sense of worlds with different laws of physics and different laws of nature. Maybe there are classical possible worlds. Maybe there are possible worlds that are two-dimensional, like Conway’s Game of Life— just bits fluttering on a surface governed by simple rules. So there are very distant possible worlds with very different laws of nature.

The broadest class is the class of logically possible worlds—what we can conceive of, or what we can imagine. There may even be worlds that we can’t imagine, like worlds where two plus two is five. That’s getting a bit too far even for me; things really start to go haywire around that point. But as long as we don’t have contradictions, then we can at least entertain possible worlds. I’m inclined to think the zombie hypothesis is perfectly coherent and perfectly conceivable: a universe which is physically identical to ours, but in which nobody has subjective experience. That’s an entire zombie universe, if you like. Conscious experience never flickers into existence; there’s just a whole bunch of sophisticated behavior. I don’t think our universe is like that, but it seems to make sense, and one way to pose the hard problem is to ask what differentiates our world from that world.
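[BQTA aside: the Game of Life world Chalmers mentions can be written down in a few lines, which is part of why philosophers like it as an example of a simple lawlike universe. A minimal sketch follows; the update rule is Conway’s standard one, while the set-of-cells representation and the starting pattern are just illustrative choices.]

```python
from collections import Counter

def life_step(alive):
    """One tick of Conway's Game of Life.
    `alive` is a set of (x, y) coordinates of live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick if it has exactly 3 live neighbours,
    # or exactly 2 and it was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three cells in a row flip between horizontal and
# vertical forever, governed by nothing but the rule above.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Applying `life_step` to `blinker` twice returns the original pattern: bits fluttering on a surface under simple rules, exactly the kind of stripped-down possible world being described.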

0:29:47 SC: I don’t think that zombies are conceivable, and I’m very happy to be talked out of this, because I talked to you a couple of years ago before I wrote The Big Picture and I was not quite as sharp in my thoughts about this. So we could imagine a literal physical copy of our world. That includes all the people in it and all the atoms that they’re made of.  You do think that, as far as we know, the atoms in my body just obey the laws of physics as we know them, right? So in that world, I would be there, but without consciousness, without experience. I’d be a zombie, but I would be acting and saying exactly the same things that I’m acting and saying now, is that right?

DC: Yup.

SC: Okay. And if you in that world were to ask me if I were conscious, I would say yes.

DC: Yeah.

SC: And presumably there is a sensible way in which I could say yes, because I believe it to be true. Is that fair?

DC: Yeah, it’s a complicated issue whether zombies actually believe anything, but they’ve got zombie analogs of beliefs, at the very least.

SC: Then how can I be sure that I’m not a zombie, if all of the things that I say and do are exactly what a zombie would say and do?

DC: This is a very good argument that I can’t be sure that you’re not a zombie, because all I have access to with respect to you is your behavior and your functioning and so on, and none of that seems to absolutely differentiate you from a zombie. I think the first-person case is different, because in the first-person case I’m conscious, I know that I’m conscious, and I know that more directly than I know anything else. Descartes said way back in the 1640s that this is the one thing I can be certain of: I can doubt everything about the external world; I can doubt there’s a table here; I can doubt I have a body. There’s one thing I can’t doubt, and that’s that I’m thinking, or, put even better, that I’m conscious. I think, therefore I am; I cannot doubt my own existence. I think it’s natural to take consciousness as our primary epistemic datum. Whatever you say about zombies and so on, I know that I’m not one of them, because I know that I’m conscious.

0:32:04 SC: But my worry is that a zombie me would behave in exactly the same way I do. That behavior would include writing all the bad poetry I wrote in high school and crying at movies, at WALL-E and so forth, and petting my cats, like all of these things, the zombie would do in exactly the same way that I do. If you ask that zombie me, “Are you conscious?” It would say, “Yes, and here’s why,” it would give you reasons. I don’t see how I can be sure that I’m not that zombie.

DC: You’ve put your finger on the weakest spot for the zombie hypothesis and for the ideas that come from it. In my first book, The Conscious Mind, I had a whole chapter on this, called the paradox of phenomenal judgment, which stems from the fact that my zombie twin in that universe next door is going around doing exactly the same things that I’m doing and saying the same things that I’m saying, even writing a word-for-word identical book called The Conscious Mind arguing that consciousness is irreducible to physical processes. And I’d say: a lot of strange things go on in possible worlds; we shouldn’t take them too seriously.

In the zombie universe, the right view is what philosophers call eliminativism, that there is no such thing as consciousness. The zombie is, in fact, making a mistake. There is a respectable program about consciousness that says, we’re basically in the situation of the zombie. Just over the last two or three years, there’s been a bit of an upsurge of people really thinking seriously about this view, which has come to be known as illusionism, the idea that consciousness is some kind of internal introspective illusion.

The zombie thinks it has special properties of consciousness, but it doesn’t. All is dark inside. So then the illusionist says, “That’s actually our situation.” It seems to us that we have all these special properties, those qualia, those sensory experiences, but we do not. All is, in a way, dark inside for us as well. There’s just a very strong introspective mechanism that makes us think we have these special properties. That’s illusionism. Most people find it impossible to believe that consciousness is an illusion in that way. But illusionism does have the advantage of predicting that we would find illusionism impossible to believe, if the mechanism is good enough at getting us to focus on this question.

I’ve been thinking about this a lot lately. I wrote an article called “The Meta-Problem of Consciousness”, which has just come out in the Journal of Consciousness Studies. The hard problem of consciousness is: why are we conscious, how do physical processes give rise to consciousness? The meta-problem of consciousness is: why do we think we’re conscious, and why do we think there is a problem of consciousness? When we talk about the hard problem, the idea is that the easy problems are about behavior and the hard problem is about experience. The great thing about the meta-problem is that it is, ultimately, a problem about behavior.

Why do people go around writing books about this? Why do they say, I’m conscious, I’m feeling pain? Why do they say, “I have these properties that are hard to explain in functional terms?” That’s a behavioral problem. That’s an easy problem. Maybe ultimately, there will be a mechanistic explanation of that. And that would, of course, be grist for the illusionist’s mill. Once you have the mechanisms to explain in physical terms why we say all these things, you could then call that solution to the meta-problem an explanation of the illusion of consciousness. Some people will still find it unbelievable. But again, the view predicts that.

0:36:01 SC: And if I wanted to know why I feel puzzled by the hard problem of consciousness, is that the meta meta-problem of consciousness?

DC: Yeah. Why you find consciousness puzzling is certainly one central aspect of the meta-problem. There are all these things that we seem to feel and say: “My red could be your green. I can imagine zombies. Consciousness seems non-physical.” Those are all behaviors. Explain those behaviors, and maybe you’ve explained at least the higher-order judgments about consciousness. Now, my own view is that even that wouldn’t add up to an explanation of consciousness. But I think, at the very least, understanding those mechanisms might tell us something very, very interesting about the basis of consciousness. So I’ve been recommending this as a research program, a neutral research program, for everyone: philosophers, scientists and others…

0:36:52 SC: Neutral in the sense it’s not presuming any conclusion about what the answer will be.

DC: Exactly. You needn’t be a materialist. You needn’t be a dualist. You needn’t be an illusionist. This is an empirical research program. Here are some facts about human behavior; let’s try and explain them. Furthermore, philosophers, psychologists, neuroscientists, AI researchers could all, in principle, get in on this. There’s already going to be a target article and a symposium in the Journal of Consciousness Studies with a whole bunch of people from all those fields getting in on it. So I’m hoping this turns out to be a productive way to come at the question. Of course, it won’t be neutral forever. Eventually, we’ll have some results and some mechanisms, and then the argument will continue to rage between people who think the whole thing is an illusion and those who think the whole thing is real.

0:37:40 SC: We should say, though, that aside from eliminativism and illusionism, which are fairly hardcore on one side, and forms of dualism, which could be on the other side, there is this kind of emergentist position that one can take. This is the one I take in The Big Picture and so forth, which is physicalist and materialist at the bottom, but doesn’t say that therefore things like consciousness and our subjective experiences don’t exist or are illusions. They’re higher-order phenomena, like tables and chairs. They’re categories that we invent to help us organize our experience of the world.

DC: My view is that emergence is sometimes used as kind of a magic word to make us feel good about things that we don’t understand. “How do you get from this to this?” “Oh, it’s emergent.” But what do you really mean by emergent? I wrote an article on emergence where I distinguished weak emergence from strong emergence. Weak emergence is basically the kind you get when low-level structure and dynamics explain the higher-level structure and dynamics of a complex system, such as traffic flows in a city or the dynamics of a hurricane. All kinds of strange and surprising and cool phenomena emerge at the higher level. But ultimately, once you understand the low-level mechanisms well enough, the high-level ones follow transparently. It’s just low-level structure giving you high-level structure through the operation of certain simple low-level rules.
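The traffic example can literally be run. Here is a minimal sketch in the spirit of the Rule 184 cellular automaton (the starting road and sizes are arbitrary choices for illustration): one trivial local rule, and jams, a higher-level phenomenon no rule ever mentions, emerge and drift backward while cars drift forward.

```python
def step(road):
    """One update of a minimal traffic cellular automaton (Rule 184 style):
    a car (1) advances into the cell ahead if that cell is empty (0);
    otherwise it stays put. The road is a ring (periodic boundary)."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:
            ahead = (i + 1) % n
            if road[ahead] == 0:
                new[ahead] = 1  # free road ahead: move forward
            else:
                new[i] = 1      # blocked: stay put, so a jam persists
    return new

# Arbitrary initial road: runs of 1s are jams. The number of cars is
# conserved; only the pattern (the "traffic") has interesting dynamics.
road = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
for _ in range(6):
    road = step(road)
```

This is weak emergence in Chalmers’s sense: once you know the one low-level rule, every high-level fact about the jams follows transparently from simulating it.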

0:39:10 SC: You could put it on a computer and simulate it.

DC: Exactly. When it comes to the easy problems of consciousness, those may well turn out to be emergent in just this way. They may turn out to involve low-level structural and functional mechanisms that produce these reports and these behaviors and lead to systems sometimes being awake. No one would be surprised if these were weakly emergent in that way. But none of that seems to add up to an explanation of subjective experience, which just looks like something fundamentally new. Philosophers sometimes talk about emergence in a different way, as strong emergence, which actually involves something fundamentally new emerging via new fundamental laws.

Maybe there’s a fundamental law that’s saying, “When you get this information being integrated, then you get consciousness.” Consciousness may be emergent in that sense, but that’s not a sense that ought to help the materialist. If you want consciousness to be emergent in a sense that helps the materialist, you have to go through weak emergence. That’s going to require reducing the hard problem to an easy problem. Everyone has to make hard choices here. I don’t want to let you off the hook by just saying, “Oh, it’s all ultimately gonna be the brain and a bunch of emergence.”

There’s a respectable materialist research program here, but it has to involve turning the Hard Problem into an easy problem. All you’re going to get out of physics is more and more structure and dynamics and functioning and so on. So, for that to turn into an explanation of consciousness, you need to find some way to deflate what needs explaining in the case of consciousness to a matter of behavior and functioning. And maybe say, “that extra thing that seems to need explaining, that’s an illusion”. Dan Dennett, whom I respect greatly, has tried to do this for years, for decades. That’s been his research program. At the end of the day, most people look at what Dennett has come up with and they say, “Nope, not good enough. You haven’t explained consciousness.” If you can do better, then great.

0:41:03 SC: You’ve always been very careful not to positively advocate for too much. As you say, it’s a hard problem. We don’t know the answers yet. We don’t need to move forward by insisting that this or that must be the right answer. You’ve been open-minded about property dualism, and one version of that leads us into panpsychism. So can you explain these two concepts?

DC: I’ve explored a number of different positive views on consciousness. What I haven’t done is commit to any of them. I see various different interesting possibilities, each of which has big attractions, but also big problems to overcome. One of the possibilities is panpsychism, the idea that consciousness goes right down to the bottom of the natural order. Panpsychism. “Pan” means all, “psych” means mind, so it’s basically saying everything has a mind. Taken literally, it would imply that people have minds, particles have minds, but also tables and numbers have minds.

0:42:16 SC: Sorry, do we have to say “have minds”? Or can we just get away with saying something like have mental properties as well as physical ones?

DC: Yeah, if that makes you feel better.

0:42:23 SC: It might make me feel a little bit better, yeah.

DC: Have experiences. We can say there’s something it’s like to be them.

0:42:29 SC: Well, I don’t know. I mean, do we want to say an electron has experiences?

DC: Panpsychism, taken literally, has that consequence. By the way, most panpsychists don’t say that tables or rocks or numbers have minds; typically their central commitment is to fundamental physical entities having minds.

There is a weaker view, on which an electron does not have experiences but merely some proto-version of experience, some precursor of experience. Maybe electrons are proto-conscious. That gives you a view called panprotopsychism, which might seem a little bit less insane to you. One of the troubles with panpsychism is that it seems very counter-intuitive, because we don’t naturally think that electrons have consciousness, and there’s not a whole lot of direct evidence in favor of it. On the other hand, you might say, there’s also not a whole lot of direct evidence against it. It’s not like we’ve got any experimental evidence that electrons are not conscious.

0:43:28 SC: Well, rather than harp on that, let me just try to figure out what it would mean for electrons to have minds or experiences or consciousness. It certainly can’t mean another quantum number in the physical sense, right? There can’t be happy electrons and sad electrons; that would change much of particle physics in bad ways. So is it some kind of epiphenomenalism? Does the happiness or sadness, if we want to call it that, just go along with the electron? What determines what the electron is feeling?

DC: The best option for a panpsychist here is to claim you don’t need a whole bunch of extra new laws of physics for consciousness at the basic level. Rather, it’s consciousness that is playing the causal roles described by the physics that we know. It’s a point that’s often been made about physics: the science of physics is fundamentally structural or mathematical. Everything is explained by how it relates to other things. Quantum mechanics gets messy, and everything else in contemporary physics gets even messier, so let’s just start with the way classical physics characterizes particles: positions in space and time, with some mass, with some forces that operate on them. Then what is mass in classical physics? Well, it’s this thing which is subject to the laws of gravitation and the laws of motion, and that is involved in forces in a certain way. Nothing in classical physics tells us what mass is in itself. Rather, it explains mass by the way that particles with mass interact with other particles with mass.

0:45:06 SC: What its role is, yeah.

DC: It’s all a giant structure. And physics does a great job of characterizing this structure. That raises the question… What is the intrinsic nature of mass? One thing someone might say is, “It doesn’t need to have an intrinsic nature, it’s just a giant relational web,” and that’s a respectable view. Some people think it doesn’t make sense, other people think it does. But here’s another possibility…

0:45:31 SC: Structural realism.

DC: Structural realism is what it gets called in the contemporary philosophy of science, and ontological structural realism says, “That’s all there is in the world, a giant web of relations.”

The other possibility people sometimes speak of is epistemological structural realism. What physics tells us about is the structure, but there may be some intrinsic nature underlying the structure. As far as I can tell, that’s a respectable possibility as well: mass does have an intrinsic nature. When two things with mass interact, they’ve got some intrinsic properties that govern that interaction. The panpsychist idea is to say that maybe that intrinsic property is consciousness or experience, or maybe proto-experience.

0:46:16 SC: Or mind enthusiasts, yeah.

DC: Mind lies at this bottom level, serving as the source of the intrinsic properties that underlie physical structure. If that’s the role it plays, we don’t suddenly need to revise physics. The structure of physics can stay exactly as it was. We’re just going to have some intrinsic properties that ground that structure. Then you might say, “How is mind making a difference?” It’s not making a difference by suddenly creating new laws in the picture of physics for minds. It’s making a difference by being the thing that grounds the physical web. Any time one particle interacts with another, say two particles attracting each other by the gravitational force, on this picture it’s going to be their mental properties doing the work.

0:47:01 SC: In this picture, which may or may not be right, we’re not saying that the mental properties affect the physical behavior of the electrons. So a physicist, and I know some personally, might worry that this isn’t saying anything at all. Everything the electrons do is still just governed by the laws of physics, because these mental properties do not affect it. You’re saying that’s just the wrong way to ask the question. The kinds of things that are being explained by this positing of a mental character underlying everything are not the behavior of the electrons, but something deeper and something that kind of flowers once you get complex organisms that we recognize as conscious.

DC: Does the experience affect the behavior? In one sense, yes; in another sense, no. I mean, it’s certainly true that this is not going to be so exciting for a current working physicist. All physics can stay the same. Physics works just as well with the experience underneath it or without it.

0:47:56 SC: We have all the excitement we need.

DC: If we had to revise physics too, that would give rise to all kinds of extra crazy complexities. This is more of an interpretation of current physics and of what’s going on in the world underneath current physics. It’s saying that what is doing the work in physics at the bottom level is the intrinsic property of mind or consciousness. The fundamental laws, which we think of as laws connecting mass and mass, or mass and motion, are going to be laws connecting little bits of experience in this structure. From the outside, all we see is the structure, and we give it a mathematical description. We call that the laws of physics, and it’s great.

We’re used to the idea that what underlies a physical theory may involve more than what actually gets into the experimental results. On this hypothesis, what underlies reality is a whole bunch of minds or experiences pushing and pulling each other. Is this wildly speculative? Of course, it is. But is it ruled out by anything we know? Well, I think not. So in a speculative vein, it’s at least a philosophical view to take seriously.

0:49:07 SC: It must be tempting to look toward quantum mechanics for a place to implement these kinds of ideas.

DC: Quantum mechanics is, of course, a magnet for anyone who wants to find a place for crazy properties of the mind to interact with the physical world, because quantum mechanics is so ill-understood. It seems to have suggestive properties that connect observation and the mind. I would actually not combine quantum mechanics and panpsychism. There are people who do connect them: with the right degree of quantum mechanical holism, you could see how all those individual experiences might add up to a big experience. Lately, though, I’ve actually been thinking about quantum mechanics in the context of a different kind of view, which is more a kind of property dualism, with properties of consciousness distinct from the properties in physics, but somehow interacting with them.

If you’re not going to be a panpsychist and say consciousness is present at the bottom level of physics, then the property of consciousness has to be separate from those other ones, space, time, mass, charge, and that raises the question: how does it interact? Either you say it doesn’t, it’s epiphenomenal, it does nothing, which is kind of weird, that consciousness has no effect at all on the physical world. Or you say it has an effect on the physical world, and then the question is: how on earth do you reconcile that with physics, which doesn’t seem on the face of it to have any room for consciousness to play that role?

There’s a fairly traditional interpretation of quantum mechanics, where minds could play a role via the process of observation, which collapses the quantum wave function. Of course, it’s very controversial, but it is a very traditional picture of quantum mechanics. There’s two kinds of dynamics of the quantum wave function. There’s Schrödinger evolution, the normal thing, and there’s something weird which happens on measurement. Standard quantum mechanics says, “make a measurement, the wave function collapses” and that’s different from Schrödinger evolution.
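In textbook (von Neumann) terms, the two kinds of dynamics just described can be written out explicitly: continuous, deterministic Schrödinger evolution between measurements, and a discontinuous, probabilistic collapse on measurement, where $P_k$ is the projector onto the eigenspace of outcome $k$.

```latex
% Schrödinger evolution (continuous, deterministic, linear):
i\hbar \,\frac{d}{dt}\,\lvert \psi(t) \rangle = H \,\lvert \psi(t) \rangle

% Collapse on measurement: outcome k occurs with probability
% p_k = \lVert P_k \lvert \psi \rangle \rVert^{2}, after which
\lvert \psi \rangle \;\to\; \frac{P_k \lvert \psi \rangle}{\lVert P_k \lvert \psi \rangle \rVert}
```

The measurement problem is precisely that nothing in the formalism says when the second rule, rather than the first, applies.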

This immediately raises a million questions like, “What on earth is measurement, and why should that get any special treatment?” That’s the quantum measurement problem. Many people run a mile at that point saying, “Oh, I don’t want minds to play a role in physics. Let’s try something else.” And they find themselves in Everett-style many worlds quantum mechanics, or Bohm-style hidden-variables quantum mechanics, or GRW-style collapse quantum mechanics, which doesn’t give minds a role. All those programs are great and very interesting, but I’m also interested in a possibility, which may have been overlooked: trying to make rigorous sense of a more face value interpretation of quantum mechanics, where there is something special that takes place upon measurement.

For your average physicist it just seems very strange to treat measurements as fundamental, because that would involve treating the mind as fundamental, and that’s not something that everyone wants to do. If on the other hand there’s already reason to think the mind involves something fundamental, and that consciousness is somehow a fundamental element in nature, then there will not be a good reason to reject the view. The question for me is just, “Can we actually make rigorous mathematical sense of the idea that once consciousness comes into the picture, that the wave function collapses?”

0:53:02 SC: Is it fair to associate this view with something like idealism where you’re putting mind as the first thing that creates reality?

DC: Maybe there’s an idealist version of this, but I would actually think of it as a version of property dualism. The quantum wave function is real; it’s got an existence that has nothing to do with the mind. The universe has an objective wave function just as it might on, say, an Everett-style view. It’s rather that there’s this aspect of the dynamics of the wave function which is affected by the mind. Under certain circumstances, physical systems will produce consciousness. Under certain circumstances, consciousness will collapse the quantum wave function. Descartes thought that the body affects the mind, and the mind affects the body. That was classic interactionist dualism. Think of this as an updated version of Descartes in a property dualist framework. You’ve got the quantum wave function. You’ve got some dynamics by which the wave function affects consciousness. You’ve got some laws.

It might be something like Tononi’s integrated information theory, that says when the wave function has enough integrated information, then you get a bit of consciousness. And then you need some other bit of dynamics by which consciousness can affect the wave function. I was working on this with Kelvin McQueen, a former student of mine who’s now in Philosophy and Physics at Chapman University in California.

The idea we started working with was that there’s something special about consciousness, or maybe about the physical correlates of consciousness, such that it resists quantum superposition. Most properties can evolve into quantum superpositions. But maybe there are some special properties that resist quantum superpositions. They go into superposition for a moment, but then they always collapse back. Or maybe the moment they’re about to superpose, they pick up a determinate state. The thought was that consciousness is like that: consciousness never enters a superposition. The moment brain processes would produce a superposition of consciousness, they somehow collapse into a definite state. You might see that as an effect of consciousness on the physical processes in the brain, which could in principle give consciousness an effect in the physical world. It’s a wild, weird and speculative picture, of course, but anyone’s theory of consciousness is weird and speculative.

0:55:34 SC: It’s picking up old ideas from people like Wigner…

0:55:36 DC: Absolutely.

0:55:36 SC: And they’ve dropped out of favor now, but you want to re-examine them?

DC: Absolutely. Wigner’s 1961 “Remarks on the Mind-Body Question” is probably the locus classicus for this. People think they find the idea, or hints of the idea at least, in von Neumann. Then in the 1970s, this got associated with The Dancing Wu Li Masters and so on, at which point physicists started running away from the idea.

0:56:00 SC: Lost some respectability, yeah.

DC: It has been used in some unfortunate ways. I want to examine this idea, see if we can get it on the table as one of the many alternative interpretations of quantum mechanics, which has upsides and downsides. For me, the question is ultimately, “Can you give it a good coherent mathematical dynamics that works and is consistent with all of our predictions?” If that could be done, then we can take it seriously. Now, I should say that the version Kelvin and I started with does have one rather serious problem with the so-called quantum Zeno effect.

0:56:36 SC: Okay, yeah.

DC: Roughly, the quantum Zeno effect says if you’ve got some quantities that are constantly being measured, then they never enter into superpositions and they never change. So if you constantly measure the position of a particle, it’ll never move.
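The freezing effect shows up in a toy calculation (a sketch, assuming an idealized two-level system flipping at Rabi frequency ω, which is not from the conversation itself): with n projective measurements spread over time T, the probability of still finding the initial state is cos²(ωT/n)ⁿ, which tends to 1 as n grows, so the constantly watched system never changes.

```python
import math

def survival_probability(omega, total_time, n_measurements):
    """Probability the system is still found in its initial state after
    evolving for total_time at Rabi frequency omega, with projective
    measurements at n_measurements equally spaced times. Between
    measurements the stay-amplitude is cos(omega * dt); each measurement
    collapses the state, so the per-interval probabilities multiply."""
    dt = total_time / n_measurements
    p_step = math.cos(omega * dt) ** 2  # stay-probability per interval
    return p_step ** n_measurements

omega, T = 1.0, math.pi / 2  # unwatched, the system would fully flip by T

print(survival_probability(omega, T, 1))     # one measurement: essentially 0
print(survival_probability(omega, T, 1000))  # frequent measurement: close to 1
```

The limit n → ∞ gives survival probability 1, which is exactly the problem Chalmers describes next: a never-superposing consciousness behaves like a constantly measured quantity and so can never change.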

0:56:52 SC: I can see where this would be a problem. Yes.

DC: If consciousness never enters into a superposition, it’s as if consciousness is always being measured, which means that consciousness can never change. So if you start out with an early universe with no consciousness, then consciousness will never get a chance to come into existence. The moment there’s a little glimmer of consciousness, it’s going to snap back. Only in one tiny, little, low-amplitude part of the wave function will there be consciousness. With probability one, it will snap back to no consciousness. So consciousness can never evolve. Furthermore, you can never wake up from a nap. If you’re unconscious, you’ll never get to consciousness. There’ll be little branches that develop consciousness, but they’ll snap back to unconsciousness, or worse.

0:57:38 SC: That sounds like a good world. I like the never waking up from the nap world.

DC: Naps go on forever. It was a small, small problem for the initial, simplest version of the theory, which we’re now writing up as a negative-result paper called “Zeno Goes to Copenhagen”.

So the Zeno effect is a problem for this class of interpretations. Then the question is: is there a version of this you can make work that won’t suffer from the Zeno problem? We’ve been playing around with probabilistic versions, and versions where consciousness superposes for a while and collapses back. We haven’t exactly solved the problem yet, but I think there’s at least an interesting class of interpretations here worth taking seriously, if you are inclined to take consciousness seriously. And after all, quantum mechanics is enough of a mess that…

0:58:29 SC: It’s worth trying, yes.

DC: It’s not like any interpretation is free of problems. If there’s something here that A) gives you a perfectly adequate quantum mechanics, and B) allows a role for consciousness in the physical world, that would at least be reason to take the view seriously.

0:58:44 SC: And if you are a property dualist, if you believe in mental properties as well as physical properties of stuff, does that have implications for questions like artificial intelligence or consciousness on a computer?

DC: I think it does not have immediate implications. Some people think that if you’re a property dualist, you should think that computers won’t be conscious. To me, that’s kind of odd. We’re biological systems with brains, and somehow we’re conscious. Why should silicon be any worse off than brains? That almost seems like a weirdly materialist idea, to privilege things made of DNA over things made of silicon. Why should that make a difference? I think dualism is just neutral on the question. The kind of property dualism I like is a fairly scientific, naturalistic property dualism with fundamental laws of nature. I think it’s going to come down to: are the properties of matter that get connected to consciousness in our theory of consciousness going to be more like specific biological properties, or more like computational or informational properties?

If something like Tononi’s integrated information gives you consciousness, then that could be present just as much in a silicon system as in a biological system. In the work I’ve done, I’ve tried to argue that it’s really the computational properties of matter that are relevant to consciousness. If that’s the case, then an AI system will be able to do the job just as well. In principle, we could even replace the biological neurons one at a time with silicon, prosthetic neurons. If they work well enough, we’ll be left with a functionally identical system. I would argue that the functionally identical system will retain the same consciousness throughout. The alternative would be to say that consciousness…

1:00:39 SC: Fades away.

DC: Fades away or disappears. But I think that gives rise to all kinds of problems.

1:00:43 SC: Right. Before, I was asking if I’m sure I’m not a zombie, this leads us to ask if we are sure that we’re not a computer simulation, right?

DC: This is one of the great problems of philosophy. Descartes asked, “How do we know that there’s an external world? How do you know you’re not being fooled by an evil demon who’s merely producing experiences in you, as of an external reality, when all of this is just being generated by the demon?” The simulation idea is a wonderful 21st-century version of Descartes, as illustrated by movies like The Matrix. I’m still a fan of that movie, which really, I think, got quite a lot of this right. How do you know you’re not living in a computer simulation? Here, the computer simulation is playing the role of the evil demon: running a model of a world, feeding your brain experiences. You think you’re in an ordinary physical reality, but in fact, you’re in this computer-generated reality. The people who wrote movies like The Matrix say, “If this is the case, then you’re living a life of illusion and deception, and none of it is real.”

1:02:00 SC: It’s not the real world.

DC: Which is exactly what Descartes thought about the evil demon hypothesis. I’ve been thinking about this lately, and I do take the simulation idea seriously. There’s nothing we know with certainty that rules out the idea we are in a computer simulation. The philosopher Nick Bostrom has actually given a statistical argument that we should take it very seriously. Roughly, his idea is that any sufficiently intelligent population will have the capacity to create lots and lots of computer simulations of whole populations. So, as long as they go ahead and use their abilities and create computer simulations, then the majority of beings in the universe will be simulated beings and not unsimulated beings.

The thought is: do the statistics. 99.9% of beings in the universe are simulated, including a whole bunch who are just like me. What are the odds that I’m one of the lucky ones at ground zero, the 0.1%? So you might say, “I should be 999 out of 1000, 99.9%, confident that I am a simulated being.” You can raise issues with the reasoning here and there. One question is, “Would a simulated being be conscious?” Some people might say, “No, they are not conscious; they would be zombies.” If so, the fact that I’m conscious shows that I’m not in a simulation.
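The bookkeeping behind that 99.9% credence can be sketched in a few lines (a toy with illustrative counts; Bostrom’s actual argument is more careful about the alternatives): count the conscious observers, and ask what share of them are simulated. A parameter for the chance that simulated beings are conscious captures the objection just raised.

```python
def simulation_credence(n_unsimulated, sims_per_civilization, p_sim_conscious=1.0):
    """Toy Bostrom-style count: if each unsimulated population runs
    sims_per_civilization simulated populations of the same size, and a
    fraction p_sim_conscious of simulated beings are conscious, a conscious
    observer's naive credence in being simulated is the fraction of
    conscious observers who are simulated."""
    simulated = n_unsimulated * sims_per_civilization * p_sim_conscious
    return simulated / (simulated + n_unsimulated)

print(simulation_credence(1, 999))       # 999 sims per base world: 0.999
print(simulation_credence(1, 999, 0.5))  # even at 50% conscious, still ~0.998
```

Halving the chance that simulated beings are conscious barely dents the credence, which is the point Chalmers makes just below.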

1:03:28 SC: You think you’re conscious. Go ahead.

DC: But that won’t help me, because I’m on record as thinking that a simulated system, an AI system, could be just as conscious as a biological system. So all those beings in computer simulations may well be conscious. Even if it’s only 50% likely they’re conscious, then that still should give a big dose of probability to the hypothesis that I’m in a simulation. So I can’t rule out that we’re in a simulation. Where I get off the boat, though, is this idea that simulations are illusions, that simulations aren’t real.

We could be in a world which is a simulation, but if so, that doesn’t mean that there are no tables and chairs in the world around us, that there’s no matter, that it’s all an illusion. I think what we should say instead is, “We’re in a world with tables and chairs and matter.” Then if we discover we’re in a simulation, we’ll have made a surprising discovery about what tables and chairs are made of. They are ultimately made of information and computational processes at the next level down, which may ultimately be realized in processes in the next universe up. Importantly, it’s all still real. It’s not, as Descartes thought, a world where nothing around you exists. Yes, the world around me exists; it just has a surprising nature. This connects nicely to the ideas about structural realism we were talking about before. Physics tells you about the structure of the world, but it doesn’t tell you, ultimately, what that structure is made up of.

If we're in a simulation, the mathematical structure of our reality may be exactly as physics says. It's just that it's all implemented or realized on a computer in the next universe up. So yeah, the structure of physics is real, the electrons are still real. They're just ultimately made of bits, made of whatever is fundamental in the next universe up.

1:05:32 SC: You said, “If we ever find out,” is there any way we would ever find out?

DC: It depends how well the simulation is made, doesn’t it? If it’s like the one in The Matrix, where they gave us some potential ways out, like the red pill…

1:05:45 SC: That’s very buggy code, yeah.

DC: That's a dumb way to build a simulation, if you ask me, unless you want people to escape. If it's a perfect simulation, we may never find out. If we are not in a simulation, we may never be able to prove that we are not in a simulation. In a perfect simulation, any evidence, any proof we could get, could be simulated through the same experiences. We'll never know for sure the negative claim, that we're not in a simulation. If we are in a simulation, though, we could get some very decisive evidence for that. If the simulators suddenly move the Moon around in the sky and write big signals, and we look at our genetic code and find messages written in there, saying, "Hey, losers, you're in a simulation," then we'd take that to be pretty strong evidence.

1:06:37 SC: There is the pre-existing hypothesis of God having done all this. God doing these things isn't that different from our programmers doing them.

DC: Exactly. The question of evidence arises for God as well. We could, in principle, get decisive evidence that there is a God. It’s very hard to get decisive evidence that there’s not a God.

1:06:55 SC: And you think that it's realistic to imagine doing simulations that are so good that a multiplicity of intelligent, conscious creatures could exist in our simulations?

DC: It's just a matter of computer power. Once we know the laws of physics well enough, we could set up allowable boundary conditions for a universe like ours. We could set up the differential equation simulators. Maybe it would need to be a quantum computer, especially to get the quantum mechanics right. Maybe it'd be hard to get a universe as complex as our universe.

1:07:43 SC: Oh, yeah, it would have to be less.

DC: Every universe is finite.

1:07:45 SC: Has to be less, right?

DC: Yeah. If our universe is finite, and it has, say, one billion units of complexity, then we can’t simulate something with one billion units of complexity, but maybe something with one million units of complexity, just not to tax the universe too much. If we are in the enormous universe that we seem to be in now, with enormous resources, probably it’ll have resources to be able to simulate some pretty complicated universes without too much trouble, in principle.
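[BQTA editors' note: the resource argument here can be made concrete with a toy calculation. The factor-of-1000 overhead per level is an illustrative assumption of ours; only the billion-unit example comes from the conversation.]

```python
def nesting_depth(capacity: float, overhead: float = 1000.0) -> int:
    """How many levels of nested simulation a finite universe can host,
    if each level can only afford a universe `overhead` times simpler
    than itself (a billion-unit universe simulates a million-unit one,
    which simulates a thousand-unit one, and so on)."""
    depth = 0
    while capacity >= overhead:
        capacity /= overhead
        depth += 1
    return depth


# One billion units of complexity, per Chalmers's example:
print(nesting_depth(1e9))  # 3 nested levels, each vastly simpler
```

This is the shape of the argument that most simulated universes should be very, very simple: complexity can only shrink as you descend, and it shrinks fast.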

1:08:17 SC: In these kinds of scenarios, whether it's Descartes or simulations or God creating the universe, can we apply some kind of anthropic reasoning here and ask, if this were the case, what would the universe look like? And then ask, does it or does it not look like that? If you want to depend on the argument that the fine-tuning of certain fundamental physics parameters that allow for the existence of life is evidence for the existence of God, then you should be consistent in that argument and point out that there are other things about our universe that look wildly unlike what you would expect if the point of the universe's existence was for our life to exist. Can we say similar things about the purported simulators?

DC: Yeah, you might worry that most simulations are gonna have certain properties and that our universe does or doesn’t have those properties. I mean, one thing about our universe is that it is enormous. It seems to be enormously big.

1:09:19 SC: It’s very big.

DC: It’s so complicated. Why would you waste your time…

1:09:21 SC: Yeah. It does seem…

DC: If you're gonna be making simulations, you might think most simulations are gonna be a whole lot smaller and more local. Why would the simulators be generating a universe quite as big and as complex as ours? Whatever you do to make a universe like ours, it's gonna involve a whole bunch of people making simulations of universes which are simpler in turn, and more and more of those ever simpler universes. So you might think that actually, most universes are gonna be very, very simple.

1:09:55 SC: Exactly, that’s what I would think.

DC: Yeah, I think I might have heard Sean Carroll making it…

1:09:58 SC: I think I made this point. Yes, right.

DC: The very fact that we’re in a complicated universe is gonna be at least some reason to disfavor the simulation hypothesis.

1:10:14 SC: There's a little bit of back and forth here. One could respond to that by saying, "Well, we don't know the universe is big. We see galaxies in the sky, but really, we see photons that have recently reached us. We don't see the galaxies themselves. Maybe there's nothing more than a few million light years away, and it's all just set up to make us think that." But then, we're in some sort of skeptical nightmare and really, do we have to do anything at all?

1:10:36 DC: Yeah, semi-skepticism. Maybe it’s just like, everything out to the Solar System. We’ve actually sent probes out to the…

1:10:42 SC: We think we have.

DC: To other planets and so on. It’s gotta be hard just to simulate New York City, because there’s so many people leaving New York City all the time and coming back and the news from the outside… At the very least, you’re gonna have to have a pretty detailed simulation of the rest of the Earth to keep all of the newspapers…

1:11:05 SC: Right. It’ll be incredible.

DC: And TV and everything else going on. But once you move outside the Earth, it gets at least a bit easier. You’re going to need a fairly detailed simulation of the Moon, because of the role it plays in our lives. Maybe beyond a certain point, you can run a very cheap simulation, maybe beyond Pluto. We’ve just got a very cheap simulation of the rest of the universe. Every now and then, maybe the simulators say, “Hah, they’ve just made a new discovery. They’ve discovered a new form of…”

1:11:36 SC: Planet 9.

DC: "A new way to monitor stuff. They're looking a little bit closer at these exoplanets." And maybe they scramble, and they come up with some new data for us. Maybe that's gonna turn out to be much easier to run in a cheap simulation.

1:11:49 SC: Doesn't even saying these words make you think that maybe this is not the world we live in? These are all arguments against living in a simulation. Our universe does look way bigger than it would need to be. You can imagine things the simulators could do, but why are they going to all this trouble?

DC: I don't know whether our universe is infinite. It's quite possible that the base universe is infinite. And maybe in the next universe up, they have infinite resources. Then it turns out that simulating a large finite universe is no problem at all. In fact, they can simulate infinite universes, because in an infinite universe, you'll have the resources to simulate an infinite number of infinite universes without problems. As long as we don't fall into the trap of thinking that the next universe up has to be just like ours, I think all bets are off.

1:12:37 SC: Are there implications of this idea for how we should think about ethics? Number one, should we think about ethics in our world differently if we're simulated, and number two, should we worry about making simulations with conscious creatures and treating them well?

DC: I think that the ethics of our world needn't be affected drastically by this, any more than it has to be affected drastically by the theistic hypothesis that we are in a universe with a God. We are still conscious beings living our lives. Treat other people well. Make sure they have, by and large, positive conscious experiences rather than negative ones. Maybe we need to think about the impact of our actions on the people in the next universe up. But since we don't really know what that impact is, self-interest comes into this. After all, on religious hypotheses, people modify their behavior greatly so that they can live forever. We might want to make sure the simulators keep us around.

1:13:36 SC: I mean, it does open the possibility of an afterlife if we’re in a simulation.

DC: Yeah. Just as the simulation hypothesis gives us a very naturalistic version of God, it could give us a naturalistic version of an afterlife. We already see in TV shows like Black Mirror that people come to the end of their lives, upload into a simulation, and keep going that way.

1:13:57 SC: Have you read Iain M. Banks's Culture novels?

DC: I haven’t, actually. I should.

1:14:00 SC: You certainly should. It's a small part of most of them, except for one novel where it plays a major role, but there's this idea that they do simulations all the time. There are consciousnesses and agents in the simulations. Therefore, the intergalactic organization has passed laws saying you can't end the simulations, because that would be genocide. Then there are certain very, very bad civilizations that turn the simulations into hells, where they torture the AIs that didn't behave in the right way.

DC: The ethical questions absolutely get a grip once we start thinking about creating our own simulations. And I’m sure any number of people are going to be tempted once we’ve got the capacity to start up a copy of a SIM universe running on our iPhone, and maybe get a thousand copies up and running, to see what happens overnight, to run the entire history of this universe, gather the statistics. Could be useful for scientific purposes, could be useful for marketing purposes, predicting what products are going to do well.

1:15:00 SC: I didn’t even go there, but yes.

DC: You want to test your different products and see which iPhone is gonna sell the best? I think the ethical issues really are enormous. We're going to be creating universes with billions of billions, trillions maybe, infinitely many people, each of whom is living a life as a conscious being. And if they are lives of suffering, then we've done something horrific. If they are lives of pleasure, then maybe we've done something good. But people talk about, "When creating us, did God create the best of all possible worlds? Why is there so much evil?"

1:15:40 SC: Evil and suffering.

DC: Maybe God created many different universes, all the ones that had a net balance of positive experiences over negative experiences, so that somehow there was a net positive in creating them. We're going to face questions like this, too. If you want to create experimental worlds where there's suffering, maybe you can only do that when there's a net balance of positive experiences in your simulations to make up for it. Even then, someone will say you could have created an even better world with a bit less suffering and a bit more pleasure or fulfillment or satisfaction, so you did something immoral by creating this world. We're going to have to face all those questions. They're not going to have easy answers.

1:16:29 SC: Okay, speaking of easy answers. Two last questions for you, David. One is you’re working on a book. I know it’s well in advance, but we can prime our audience to be ready. Do you want to say anything about what the book will be?

DC: Sure, yeah. The focus of the book is very much this set of issues we’ve been talking about for the last few minutes about simulations and about virtual reality. Probably won’t be the final title, but the working title is Reality 2.0: Artificial Worlds and the Great Problems of Philosophy. It’s about exploring philosophical problems like our knowledge of the external world and the nature of reality through the idea of artificial or virtual reality. Are we in a matrix? That’s one of them. Also I really want to develop my own philosophical line, which is that virtual reality, simulated reality is a genuine kind of reality. It’s not a fake or a second-class reality.

It's a perfectly respectable way for a world to be. This is relevant not just for way-out speculative science fiction scenarios, like our living in a simulation, but for very practical scenarios, like the virtual reality technology that's being developed today. Things like the Oculus Rift, where people enter into virtual worlds and start spending more and more of their time there. It's easy to imagine that 50 or 100 years in the future, we're all going to be spending a lot of our time in these virtual worlds. The question will arise: can you actually lead a meaningful life there?

1:17:58 SC: Is this aimed at a popular audience, or professional philosophers, or both?

DC: I would say both, but I’m absolutely trying to make it as accessible as possible so anyone can read this book.

1:18:11 SC: You have tenure so you can do that.

DC: I hope they won't revoke it. But yeah, it's meant to be both introducing a whole lot of philosophical ideas and also putting forward a substantive philosophical view of my own: roughly, that virtual reality is a first-class reality across all of these domains. I think it has bearing on the great philosophical problems: how do we know there's an external world, Descartes' problem. It bears on the question of the relationship between mind and body, and it has a bearing on these ethical questions about what makes a meaningful and valuable existence, a life of that kind. It's a way to come at some of the deepest philosophical problems through this lens. Just as thinking about artificial intelligence turns out to shed light on many questions about the human mind, artificial realities turn out to shed light on all kinds of questions about the actual natural reality we find ourselves in. So that's what I'm trying to do.

1:19:09 SC: And the last question is Tom Stoppard. He’s one of my favorite living playwrights, or playwrights, period. He wrote a play called The Hard Problem. How does it feel to have a phrase you coined become the title of a Tom Stoppard play? [laughter]

1:19:22 DC: Oh, I was very pleased. I think it was my friend Dan Dennett who sent me the email. He'd read about it in an article, and said, "Hey, there's a Tom Stoppard play coming up, called The Hard Problem." I said, "Great. Has this got something to do with consciousness?" It turns out it does, and I've actually gotten to know Tom Stoppard a little as a result. The play had its American premiere a couple of years ago at the Wilma Theater in Philadelphia. I went down there and did an event with Tom, where the two of us talked on stage about the hard problem of consciousness.

The play is very interesting. I’m not convinced it’s actually about consciousness at its root. It’s about a much broader set of questions, some of which involve consciousness, some of which involve God, some of which involve value. In the discussion, it emerged that the problem that was really generating things for Tom was not the problem of consciousness, but the problem of value. How can you have the experience of some things being better than others, of life being meaningful, of sorrow versus happiness? Of course, that’s very deeply connected to consciousness.

I suggested to him that really his hard problem is the problem of value. And he agreed. He said, “Yes, thank you. I think maybe that’s what’s really moving me, the hard problem of value.”

1:20:45 SC: It’s another famously hard problem. Okay, it’s not the hard problem. But they’re all mixed up. I mean, you gotta write the best play, right? When I’m an advisor for Hollywood movies about science, the goal is to make the best movie, not to be the best science documentary.

DC: But the play is about to open, actually, here in New York at the Lincoln Center. So I have another round of all of this coming up. I don’t want to give away any spoilers about the play. But at some point, they mention that the main character goes to work with a professor at NYU, whose ideas are said to be indemonstrable. And various people have asked me whether that’s me. I’m actually fairly confident that it’s not. I think it’s my colleague, Tom Nagel.

1:21:27 SC: You think it’s Tom Nagel? Okay. Right.

DC: Tom Nagel, who wrote "What Is It Like to Be a Bat?"

1:21:29 SC: Yes.

DC: And he’s the professor at NYU.

1:21:33 SC: But simply the label of being a philosopher whose ideas have been called indemonstrable doesn't really narrow things down too much.

DC: Oh, no. I was talking about this with my colleague, Ned Block. There’s three of us at NYU who work on the philosophy of consciousness and we decided that the philosopher in question surely couldn’t be either of us, because our ideas are demonstrable.

1:21:52 SC: Absolutely right. David Chalmers, thanks so much for being on the podcast.

DC: Thanks. It’s been a pleasure.



