Carroll, Chalmers, and the Hard Problem: a Commentary, Part I

What is consciousness anyway?  And how did it get here?  How can 3 pounds of brain tissue create conscious experience?  Can the currently familiar science ever get us to a complete understanding?  Or will the understanding of consciousness require concepts that now seem more like the supernatural? 

Is there something fundamentally different about us vs the world we inhabit?  The witness and the witnessed? 

In this post, we at BQTA engage with Sean Carroll and David Chalmers over the great conversation they had last December. The additional participants are Richard Lenon, M.D. and me (GNS). We hope Sean and David don’t mind us butting in.

The original podcast can be found at https://www.preposterousuniverse.com/podcast/2018/12/03/episode-25-david-chalmers-on-consciousness-the-hard-problem-and-living-in-a-simulation/

The BQTA edited version of the transcript is at https://better-questions-than-answers.blog/2019/02/10/david-chalmers-and-sean-carroll/

0:00:00 Sean Carroll: Hello, everyone, and welcome to the Mindscape Podcast. I’m your host, Sean Carroll. If any of you have read The Big Picture, my most recent book, you know that one of the things we have to think about if we’re naturalists trying to come to terms with the world of our experience, is the phenomenon of consciousness.

GNS: The Big Picture is well worth reading. I’m a big fan. He wants to have a theory of everything in science and philosophy. He is not hostile to philosophy, but he thinks science ultimately has all the answers.

SC: The question, of course, is, “What is demanded of us by the fact of consciousness?” Can we hope simply to explain consciousness using the same tools we explain other things with, atoms and particles moving according to the laws of physics, according to standard model and the core theory, or do we need something else somehow that helps us explain what consciousness is and how it came about?

GNS: This is a good way of putting the problem, but it’s worth mentioning that physics does not explain much of anything outside of physics. History? Psychology? Economics? These fields think they work with solid chunks of reality, entirely consistent with physics, but they cannot do their job with just the  “standard model and the core theory”; they think they “need something else” besides physics. Nothing can be done in evolutionary biology solely with the standard model and the core theory.

RL:  Give physics enough time, you get consciousness.  You don’t need all of physics, just the part that’s relevant to chemistry.  From chemistry, not that far to molecular genetics.  Darwin didn’t know about genes, let alone DNA, but the physics behind the chemistry that gets you to DNA is pretty much all there.  We do have physics sufficient to get us all the way to Darwin. 

GNS: What you mean is that whatever exists must have evolved via natural processes. True. My point is that the concepts of the standard model and the core theory are not sufficient to explain biological processes. Additional concepts are needed. That’s why there are branches of science other than physics. How do the laws of physics “explain” economics? This is not a criticism of physics and it is not a claim that economics does not have a physical basis. Obviously the laws of economics cannot violate the laws of physics.

RL: But we don’t have to explain consciousness in terms of physics, any more than we have to explain architecture in terms of the silicon in the sand that goes into making concrete. 

GNS: Carroll says we can explain consciousness in terms of physics—“using the same tools we explain other things with, atoms and particles moving according to the laws of physics, according to standard model and the core theory.” I’m not sure there is a major disagreement here. It’s good to think about what we mean by “explain” and what an “explanation of consciousness” is supposed to be.

RL: Once you have social animals, with language, you’re going to get history, psychology, economics.  Zombies could conceivably do all of that. (Chalmers explains the concept of the philosopher’s zombie at 0:25:05 in the original transcript) AI certainly could.  Much faster than Darwin did, if you started it off with an imperative to survive. (By “Darwin” I mean evolution.) Trying multiple approaches simultaneously, keeping records of what had worked well and not.  Entities trying different approaches would want models of each other.  Resource allocation would be another necessary discipline. 

Nobody has that much trouble with that idea… Familiar science gets you that far, easily enough.  So Darwin did all of that, from nothing.  None of it was there before he stepped in.  But he couldn’t do consciousness, with the physics on hand?  I part with Carroll when he assumes consciousness is a “higher function.”  From what I’ve seen, judgement is a much more demanding function. Maybe even balance.  I’ve seen lots of people who are having conscious experience when they can’t stand erect, or can’t remember stuff any better than my cats.
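[BQTA note: RL’s “trying multiple approaches simultaneously, keeping records of what had worked well and not” is, written as code, just blind variation plus selective retention. Here is a minimal Python sketch of that loop. The toy fitness function and every name in it are ours, invented for illustration; nothing here is anyone’s actual model of evolution.]

```python
# Blind variation and selective retention: try many candidates at once, keep
# records of what worked, and vary the winners. Nothing in this loop
# understands anything or is conscious of anything; it just keeps score.
import random

def evolve(fitness, mutate, seed, population=50, generations=200):
    pool = [mutate(seed) for _ in range(population)]
    record = []  # the "records of what had worked well and not"
    for gen in range(generations):
        scored = sorted(pool, key=fitness, reverse=True)
        record.append((gen, fitness(scored[0])))
        survivors = scored[: population // 5]        # keep what worked
        pool = [mutate(random.choice(survivors))     # vary the winners
                for _ in range(population)]
    return scored[0], record

# Toy problem: "survive" by matching a target bit string.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
fitness = lambda c: sum(a == b for a, b in zip(c, TARGET))
mutate = lambda c: [bit ^ (random.random() < 0.1) for bit in c]

best, log = evolve(fitness, mutate, seed=[0] * len(TARGET))
print(best, fitness(best))  # usually the target, reached by selection alone
```

Selection alone usually finds the target, and the record is pure bookkeeping. That is the force of RL’s point: the familiar machinery produces adaptive, record-keeping behavior with no consciousness anywhere in it.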

GNS: I thought your view was that consciousness was a higher level function, so I hope you will expand on this point later.

SC: I’m someone who thinks we don’t need anything else. I think it’s just understanding the motion and interactions of physical stuff from which consciousness emerges as a higher level phenomenon.

RL: Alternatively, I would say that consciousness does not require anything that contemporary physics cannot explain.  No new particles or forces.  Evolution is just physics over time.  I assume that there are elementary particles that have been around since shortly after the Big Bang.  Atoms, almost as long.  The first molecules, much newer.  Some last longer than others, and there you have evolution already.  But then you get self-replicating molecules, and then conglomerations of them, and then we get Gregg. None of those steps require more than contemporary physics, operating over a very long period of time. 

That’s not to say that contemporary physics alone is sufficient to describe conscious experience, at least to the satisfaction of human beings. We humans use chemistry to describe the interactions of atoms, not physics.  Biochemistry for organic molecules, not inorganic chemistry, and then molecular genetics.

GNS: Emergence is basically a dualistic concept. There is this physical stuff—described by the standard model and the core theory—and then other stuff somehow “emerges”. If the other stuff is not the same—type identical—as the physical stuff, what is it? Carroll doesn’t see that the idea of emergence, though maybe true in some sense, is not sufficient as an explanation.

RL: This may be at the core of my failure to understand what you mean by dualism.  For me, if I start with clay and straw, pour them into a mold, fire the stuff, and get a brick… The brick is not type identical with the clay/straw/water/firing; but rather emerges.  Pile of bricks, plan, masons… we get a building.  Another emergence.  Is that dualism?

GNS: No, so you have a point. Chalmers addresses it by talking about weak emergence and strong emergence. See the discussion starting at 0:37:40.

0:00:57 SC: Our guest today is David Chalmers, who’s probably the most well known and respectable representative of the other side, of the people who think that you need something beyond just the laws of physics as we currently know them to account for consciousness.

GNS: Much depends on what is meant by “account for consciousness”. What could Carroll’s idea be other than finding the NCC—neural correlates of consciousness? Would that be a sufficient accounting? We would still have two fundamentally different kinds of stuff: brain stuff and mind stuff. The correlation does not explain how mind “emerges” from brain.

RL: Brain stuff and mind stuff?  I assume you’d have no problem with the Zombie doing Gregg well enough to fool Susan, with just brain stuff?  Or stepping back a bit, is there a problem with a human doing mind stuff, with just a brain? [note to BQTA readers: this is one of those RL shots that sounds naive but has hidden depth and is not easy to answer]

GNS: Is there a problem doing mind stuff with just a brain? Maybe, if mind stuff gets eliminated.  Neither Carroll nor Chalmers wants to do that. Do you?

SC: David is the philosopher who coined the term “the hard problem of consciousness,” the idea being that the easy problems are how you look at things, and why you react in certain ways, how you do math problems in your head. The hard problem being our personal experience, what it is like to be you or me rather than somebody else, the first person subjective experience. That’s the hard problem, and someone like me thinks, “Oh, yeah. We’ll get there. It’s just a matter of words and understanding and philosophy.”

GNS: Deep in their heart of hearts, physicists do not believe that words matter much. Only math matters. Reality is mathematical. (e.g., Max Tegmark’s Our Mathematical Universe) The rest is “just a matter of words and understanding and philosophy.” My theory (not fully developed) is that understanding occurs only in words, not via math alone. That is not a criticism of physics. It just means that when physicists give us the explanation of physical reality in words, rather than math, it’s important. Carroll might agree with that.

RL:  You might be talking about what it feels like to understand.  The math starts from defined terms, and tells you what will happen next.  That’s how the Zombie plays tennis.  The math would be much more useful than the same information as text.  But he won’t feel what it’s like to understand. 

GNS: Zombies do not understand or feel anything. They don’t need to. They do not need to do math calculations to play tennis any more than you do. Their brains need to process sensory information and create behavior, same as yours. Is that a sub-personal calculation?

RL: I think of humans playing tennis with math.  They observe a trajectory, and predict subsequent positions, and what the result will be if that trajectory is interrupted. Some people do it better than others, but the Zombie could do it perfectly. 

GNS: Animals need to be able to track moving objects. Are they performing mathematical calculations? Not consciously, no. Do their brains/neurons/biochemistry somehow perform unconscious calculations? Possibly. They certainly get the job done. I think this is some of what Jerry Fodor had in mind by proposing his “Language of Thought”. Lions do not have language and cannot say to themselves “I can catch that antelope”. But in some sense, lions know where the antelope will be a moment in the future given its current speed and trajectory. How do they do that?

The strict concept of the Zombie is that he does what he does with the same sub-personal physiological and neurological processes you do, because he is an exact physical duplicate of you. That Zombie does not play tennis perfectly. He plays it exactly the same as you do. No better, no worse.

RL:  Depends on whether you are focused on Zombies as duplicating us, or on how tasks are accomplished.  My take was that originally the Zombie was intended to separate the easy problems from the Hard Problem.  So I’m imagining a Zombie that can do easy problem stuff way better than we can.  I do think of tennis, or chasing fly balls, as mathematical modeling.  It doesn’t have to be perfect to be math.  A set of observations, a method for predicting what happens next, that’s math.  Balance for upright posture… that’s math.  The process itself doesn’t have to be conscious, just quantitative.

GNS: As far as I know, the idea that there must be a mathematical model for how animals function is both plausible and popular. But that does not mean that animals are doing math. The earth orbits the sun according to mathematical laws, but the earth does not do math, even unconsciously. Maybe “mathematical modeling” just means obeying the mathematical laws of physics. Everything does that.
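[BQTA note: to make this disagreement concrete, here is a hypothetical sketch of what RL’s “mathematical modeling” of a fly ball or tennis shot would look like if it were actually written out: observe a few (time, position) samples, fit constant-acceleration kinematics, extrapolate. Nobody claims brains literally run code like this, and that gap is exactly GNS’s distinction between obeying math and doing math.]

```python
# Fit y = a0 + a1*t + a2*t^2 (constant acceleration) to a few observed
# samples of a ball's height, then predict where it will be later.
import numpy as np

def predict_height(observations, t_future):
    times = np.array([t for t, _ in observations])
    heights = np.array([y for _, y in observations])
    a2, a1, a0 = np.polyfit(times, heights, deg=2)  # least-squares fit
    return a0 + a1 * t_future + a2 * t_future ** 2

# Three sightings of a lobbed ball: (seconds, meters).
sightings = [(0.0, 1.0), (0.2, 2.7), (0.4, 4.0)]
print(predict_height(sightings, t_future=0.8))  # where to swing the racket
```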

There are several variations on the Zombie concept, I believe. The most strict concept is of a Zombie who is an exact physical duplicate of you, down to the last particle. That duplicate would do everything exactly like you. No better, no worse. The debate then centers on whether such a duplicate would necessarily be conscious, just like you, or whether it could be a Zombie lacking consciousness. 

A less strict Zombie concept is of an android or robot who, while not an exact physical duplicate, is a functional duplicate. The robot might look and sound just like us, might be indistinguishable from us in their behavior, yet do it all without consciousness. Think Westworld. This kind of Zombie could indeed do things better than us, play tennis better, while appearing to be one of us. 

RL: If we think of humans as vehicles produced by DNA because they carry it forward in time, then the functionality of Zombies makes perfect sense.  The Zombie models its environment, tests options before proceeding, assuming there’s time. 

GNS: Thinking of humans and all other organisms as DNA vehicles is a powerful idea. (does it threaten the Meaning of Life?) Why can’t zombies be just as good at being DNA vehicles as we are? Are social insects conscious? If they are not, they are zombies. Is the architecture of the termite mound explained by physics? In a way, yes, via evolutionary biology. Can human architecture be explained the same way? Maybe so.

RL: “Understanding” has too many meanings.  Does a self-driving car have understanding? 

GNS: Good point. A self-driving car is not conscious, but it passes the Turing Test for driving. (Turing Test defined and discussed more at 0:13:53) The SD car “understands” driving because it can drive better than we can. So understanding does not require consciousness. At least not for driving. The SD car is a Zombie.

RL: How about a drunk who feels like he understands how to drive?  Not to say consciousness has no function. It does, it’s just that I don’t understand it.  When I look at the neural correlates of walking erect, I won’t get a thorough understanding of walking erect.  When I look at neural correlates of consciousness, why should I expect them to reveal all there is to know about consciousness?

GNS: Walking is behavior separate from the activity of neurons. Is consciousness?

SC: Someone like David thinks we need a real change in our underlying way of looking at the world. He describes himself as a naturalist, someone who believes in just the natural world, no supernatural world. Not a dualist, who thinks it’s our disembodied mind or anything like that. But neither is he a physicalist. He thinks that the natural world has not only natural properties, not only physical properties, but mental properties as well.

GNS: Carroll thinks that eliminating “disembodied minds” or immortal souls is the end of dualism. It’s not. Thinking that the world contains both physical properties and mental properties is a form of dualism.   

RL: A deep mind robot, given enough elements from the periodic table, and available energy, might build the Salesforce tower in furtherance of its ends.  Would that entail dualism?

GNS: No. The robot is a Zombie. Hint of dualism: “in furtherance of its ends”. Where does the robot get its “ends”? The robot does not care about its “ends” or about anything else. Our “ends” are the same as any other organism: to project DNA into the future. That serves no purpose, no “end”. It’s just an accidental consequence of bio-chemistry. So we are Zombies after all.

RL: Robots can be given priorities.  A 1977 sci-fi novel, The Adolescence of P-1 by Thomas J. Ryan, posits a computer program that was designed to modify itself to maximize information acquisition, and to protect itself against attempts to prevent it from doing that.  When the danger it threatened became apparent, and people started trying to stop it, the program crashed their airplanes.  Humans thought they’d won when they shut off all the power on the planet, for about an hour.  Might have worked back then.  But P-1 had planted spores…

Nowadays, we have viruses that are designed to spread, and to protect themselves. But that’s just behavior, not conscious desire. 

GNS: The point is that robots have no “ends” of their own. So no robot can build anything in furtherance of “its ends”, because it has none. Where does the robot get “its ends”? From human programmers. Robots can be programmed to function in ways desired by the programmers and maybe this can get out of control. The “ends” are the programmers’, not the robot’s.

RL:  If the person who writes the virus puts it on a deep learning machine, and tells it to defend itself, it will.  We could give the viruses sex, and they could exchange techniques with each other. The virus will behave as if it had purpose. How is that different from having a purpose? 

GNS: “As if”? It’s the difference between having a purpose and behaving as if you have a purpose. The direction of your thinking is that mechanisms—machines and viruses—can have a purpose, in the sense of ends-focused behavior, just as much as we can. But that does not give the mechanisms a purpose. It eliminates ours.

RL:  The programmer gave the virus its purpose, and Darwin gave you yours.

GNS: The only “purpose” evolution gives to every organism is “make more DNA”. What’s the point of that? There isn’t any. Making more DNA is not a purpose. It’s just something DNA happens to do. Blocking sunlight is not the purpose of clouds; it’s just something they happen to do.

RL:  We’re here because we’re here because we’re here because we’re here….  But doesn’t that seem a worthwhile purpose?  Being here as opposed to not?  I like it that there are stars, better than if there weren’t. 

0:02:01 SC: So I would characterize David as convinced of the problem, but he’s not wedded to any particular answer.

GNS: That’s my position too.

SC: David Chalmers is a philosopher whom everyone respects even if they don’t agree with him.

RL: I respect him for the term “hard problem” and for the philosophical zombie.  But only because they help formulate the problem.  Certainly not because they display some fundamental deficiency in current physics, or require panpsychism.  Problem with panpsychism is that it has no physics.  And it smells of the supernatural.

GNS: A nice way to put it. I share your skepticism of panpsychism. The guys who like it, such as Chalmers and Galen Strawson, do not think it invokes the supernatural. Here would be a challenge for either of us: make the case for panpsychism. Is “emergence” panpsychism in disguise?

SC: David is a delight to talk to because he is very open-minded about considering different things. Like I said, he’s convinced of this problem, but when it comes to solving the problem, he will propose solutions, but he won’t take them too dogmatically. He will change his mind when good arguments come along. So he’s a great person to talk to about this very, very important problem for naturalists when they try to confront how to understand what it means to be a human being and where consciousness comes from. Also, David has developed a recent interest in the simulation hypothesis, the idea that maybe we could all be living in a simulation running on a computer owned by a very, very advanced civilization in a completely different reality.

0:02:51 SC: So we’ll talk about the hard problem of consciousness, we’ll talk about various philosophical issues, and I won’t pin him down on anything. So, this is a fun conversation, I’m sure you’ll like it, and let’s go.

0:03:35 SC: David Chalmers, welcome to The Mindscape Podcast.

David Chalmers: Thanks. It’s great to be here.

SC: You’re one of the world’s experts on the philosophy of consciousness, you coined the phrase “the hard problem of consciousness.” How would you define what the hard problem is?

DC: The hard problem of consciousness is the problem of explaining how physical processes in the brain somehow give rise to subjective experience. When you think about the mind, there’s a whole a lot of things that need to be explained. Some of them involve our sophisticated behavior, all the things we can do. We can get around, we can walk, we can talk, we can communicate with each other, we can solve scientific problems. But a lot of that is at the level of sophisticated behavioral capacities, things we can do. When it comes to explaining behavior, we’ve got a pretty good bead on how to explain it. In principle at least, you find that circuit in the brain, a complex neural system, which maybe performs some computations, produces some outputs, generates the behavior, then in principle, you’ve got an explanation. It may take a century or two to work out the details, but that’s roughly the standard model in cognitive science.

0:05:08 SC: And you’ve wrapped this together as the easy problem. Slightly tongue in cheek.

DC: This is what 20-odd years ago, I called the easy problems of the mind and of consciousness in particular, roughly referring to these behavioral problems. Nobody thinks they’re easy in the ordinary sense. The sense in which they’re easy is that we’ve got a paradigm for explaining them. Find the neural mechanism or a computational mechanism; that’s the kind of thing that could produce that behavior. In principle, find the right one, tell the right story, you’ll have an explanation. But when it comes to consciousness, to subjective experience, it looks as if that method doesn’t so obviously apply. There are some aspects of consciousness which are, roughly speaking, behavioral or functional, and you could use the word consciousness for the difference between being awake and responsive, for example, versus being asleep or maybe just for the ability to talk about certain things.

The really distinctive problem of consciousness is posed not by the behavioral parts, but by the subjective experience, by how it feels from the inside to be a conscious being. I’m seeing you right now. I have a visual image of colors and shapes that are sort of present to me as an element in the inner movie of the mind. I’m hearing my voice, I’m feeling my body, I’ve got a stream of thoughts running through my head and the hub. And this is what philosophers call consciousness or subjective experience, and I take it to be one of the fundamental facts about ourselves that we have this kind of subjective experience.

But then the question is, how do you explain it? And the reason why we call it the hard problem is it looks like the standard method of just explaining behaviors and explaining the things we do doesn’t quite come to grips with the question of why is there subjective experience. It seems you could explain all of these things we do: the walking, the talking, the reports, the reasoning. Why doesn’t all that go on in the dark? Why do we need subjective experience? That’s the hard problem.

RL: Here he is saying that “Why” is the problem… not how meat could make it.  Occurred to me in passing that for minds such as ours, consciousness might be a function we are gradually leaving behind. That whatever it does is more important for creatures in which physical capabilities are more important.  So consciousness might be gradually fading away, like the capability of grasping branches with our feet.

GNS: Chalmers is giving us an excellent introductory explanation of the Hard Problem. The problem is not what is the function of consciousness. It’s that our behavior can be explained by neurological and physiological mechanisms without invoking consciousness. So what the hell is this “consciousness” stuff?

RL: The fundamental problem I have with Chalmers is his certainty that The Hard Problem is somehow different from other problems, to the point that it must require properties not heretofore recognized.

GNS: I agree with Chalmers on that.

RL: I was quite surprised to see him putting forth the “meta-problem of consciousness; why do we think we’re conscious, and why do we think there is a problem of consciousness.”  He goes into this after 32:04 in the transcript. For me, that’s far and away the most promising question.

GNS: I was surprised too. Chalmers has written a paper about it. “The Meta-Problem of Consciousness” Journal of Consciousness Studies, 25, No. 9–10, 2018, pp. 6–61. Chalmers thinks that addressing the question why we think there is a Hard Problem might break the logjam by bringing scientists and philosophers together. But it is not the main question or the most interesting. 

RL: Why is it that The Hard Problem seems to require some supernatural explanation?  Chalmers would protest that, but he is saying that something more is required, beyond the tools we have now, no matter how much time we give them. 

GNS: Chalmers is a science kind of guy. He does not believe in the supernatural. But you’re right that he believes that something more is required to explain consciousness than anything contemplated by current science. Maybe a Thomas Kuhn style scientific revolution will occur one day.

0:07:41 SC: Sometimes I hear it glossed as the question of what it is like to be a subjective agent, to be a person.

DC: That’s a good definition of consciousness, actually first put forward or at least made famous by my colleague Tom Nagel here at NYU in an article back in 1974 called “What Is It Like To Be A Bat?” His thought was we don’t know what it’s like to be a bat, we don’t know what a bat’s subjective experience is like. It’s got this weird sonar perceptual capacity which doesn’t really correspond directly to anything that humans have. But presumably, there is something it’s like to be a bat. A bat is conscious. Most people would say, on the other hand, “There’s nothing it’s like to be a glass of water.” So if that’s right, then a glass of water is not conscious. This “what it’s like” way of speaking is a good way at least of serving as an initial intuition term for what is the basic difference we’re getting at between systems which are conscious, and systems which are not.

SC: And the other word that is sometimes invoked in this context is qualia, the experiences that we have. There’s one thing to see the color red, and a separate thing (if I get it right) to have the experience of the redness of red.

DC: My sense is that this word qualia has gone a little bit out of favor over the last, say, 20-odd years. Maybe 20 years ago you had a lot of people speaking of qualia as a word for the sensory qualities that you come across in experience, and the paradigmatic ones would be the experience of red versus the experience of green. You can raise all these familiar questions about this. How do I know that my experience of the things we call “red” is the same as yours? Maybe it’s the same as the experience you have when you are confronted with the things we call “green”. Maybe your internal experiences are swapped with respect to mine. And people call that inverted qualia. That would be “your red is my green.” The feeling of pain would be a quale.

I’m not sure that these qualities are all there is, though, to consciousness. Maybe that’s one reason why qualia have gone out of favor. There’s also maybe an experience to thinking, to reasoning, and to feeling. That’s much harder to pin down in terms of sensory qualities, but there’s still something it’s like. You might think there’s something it’s like to think and to reason, even though it’s not the same as what it’s like to sense.

0:10:11 SC: I want to talk about this question of whether or not you and I have the same experience when we see the color red. I’m not sure I know what that could possibly mean for it to be either the same experience or a different experience. I mean, one is going on in my head, one is going on in your head. In what sense could they be the same? But maybe when I say that, it’s just a reflection of the fact that there’s a hard problem.

RL: The idea would be that we have the same type of red experience, but different tokens. One in your head and another in mine. My red is the same to me as yours is to you. The what-it’s-like of red is not learned, so it has to be hardwired.  Darwin did it, and it makes no sense that he would have wired yours to be different from mine.

DC: To pick a much easier case, some people are red-green color-blind. Most people have a red-green axis for color vision and a blue-yellow axis. But some people, due to things going wrong in their retinal mechanisms, don’t make the distinction between red and green. I’ve got friends who are red-green color-blind. I’m often asking them, “What is it like to be you?”

“Is it, like, you just see everything in shades of blue and yellow and you don’t get the reds and greens? Or, is it something different entirely?” But we know what it’s like to be them can’t be the same as what it’s like to be us, because reds and greens, which are different for us, are the same for them, so there’s got to be some difference between us, as a matter of logic. My red can’t be the same as their red and my green can’t be the same as theirs. As a matter of logic, there has to be some difference there.

GNS: There are several forms of red-green color blindness. Apparently people experience shades of yellow, beige or brown instead of red or green. Purple can also appear as blue. https://www.aoa.org/patients-and-public/eye-and-vision-problems/glossary-of-eye-and-vision-conditions/color-deficiency

 

[Image: types of color deficiency]

0:11:39 SC: Everybody is different from everybody else, and their experiences are different. I guess the question then is, in what sense could they ever be the same? What is the meaningfulness? I can imagine some kind of operational sameness, right? Like you say the word “red” when you see the color red, in that behavioral sense, but that’s exactly what you don’t want to count.

DC: Most people think intuitively that we can at least grasp the idea that my red is the same as your red. Then it’s an empirical open question that they are, in fact, exactly the same. Now, you might say, “I’m a scientist, I want an operational test.” On the other hand, I’m a philosopher and I’m very skeptical of the idea that you can operationalize everything, that a hypothesis has got to be operationalizable to be meaningful.

GNS: The sorts of physiological defects that cause red-green color blindness might have the capacity to cause red-green inversion, as Chalmers mentioned. The inverted person would experience red when we see green, and green when we see red. See Stephen E. Palmer, “Color, consciousness, and the isomorphism constraint” Behavioral and Brain Sciences (1999) 22, 932-989. Red-green color blindness shows up in behavior and can be tested with color charts. But the red-green inversion does not affect behavior. Red and green can be inverted without trouble because they occupy the same positions along the dimensions of saturation and lightness. The inverted person can see red and green just as well as everyone else, so no test can detect his inversion, but he is having a different conscious experience. Carroll is suggesting that if the alleged inversion cannot be detected behaviorally, then the claim that there is an inverted experience lacks meaning. See also https://plato.stanford.edu/entries/qualia-inverted/
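[BQTA note: a toy sketch of why the inversion is behaviorally silent. If the red-green swap is a symmetry of the quality space, and each subject learns color words for his own qualia from the same public teaching, the two subjects end up saying exactly the same things about every stimulus. The stimuli and labels below are invented for illustration, not a model of vision.]

```python
# Two subjects: one normal, one red-green inverted. Both learn the public
# color vocabulary for whatever private quale each stimulus produces in them.
STIMULI = ["ripe_tomato", "grass", "sky", "banana"]
PUBLIC_WORDS = ["red", "green", "blue", "yellow"]

def quale_normal(stimulus):
    return {"ripe_tomato": "RED", "grass": "GREEN",
            "sky": "BLUE", "banana": "YELLOW"}[stimulus]

def quale_inverted(stimulus):
    swap = {"RED": "GREEN", "GREEN": "RED"}  # the inversion
    q = quale_normal(stimulus)
    return swap.get(q, q)  # blue/yellow untouched

# Each subject's word for each of *his own* qualia, learned ostensively.
words_normal = {quale_normal(s): w for s, w in zip(STIMULI, PUBLIC_WORDS)}
words_inverted = {quale_inverted(s): w for s, w in zip(STIMULI, PUBLIC_WORDS)}

for s in STIMULI:  # identical utterances for every stimulus
    assert words_normal[quale_normal(s)] == words_inverted[quale_inverted(s)]
```

The assertion never fires: the private difference cancels out of all public behavior. That is why Carroll suspects the hypothesis is meaningless, and why Chalmers denies that meaningfulness requires a behavioral test.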

 

[Image: three-dimensional color space]
Three-dimensional color space. Colors are represented as points in a three-dimensional space according to the dimensions of hue, saturation, and lightness. The positions of the six unique colors (or Hering primaries) within this space are shown by small squares. The large square is tilted, with yellow closer to white and blue closer to black. Red and green are at the same height, showing they are the same along the light/dark dimension.

DC: There was a movement in philosophy in the first part of the 20th century, logical positivism or logical empiricism, where they said the only meaningful hypotheses are the ones we can test. For various reasons that turned out to have a lot of problems, not least because this very philosophical hypothesis of verificationism turned out not to be one that you could test.

0:12:55 SC: There’s a renaissance of logical positivism on Philosophy Twitter these days.

DC: Oh, is that right? Rudolf Carnap, who was one of the great logical positivists, is one of my heroes. I’ve written a whole book called Constructing the World that was partly based around some of his ideas. Nonetheless, verificationism is not one of them.

I think when it comes to consciousness, in particular, we’re dealing with something essentially subjective. I know I’m conscious, not because I measured my behavior or anybody else’s behavior. It’s because it’s something I experience directly from the first person point of view. I think you’re probably conscious, but it’s not as if I can give a straight out operational definition of it: if you say you’re conscious, then you’re conscious. Most people think that doesn’t absolutely settle the question. Maybe we’d come up with an AI that says it’s conscious. That would be very interesting, but would it settle the question of whether it’s having a subjective experience? Probably not.

0:13:53 SC: Well, so Alan Turing tried, right? The Turing test was supposed to be a way to judge what’s conscious from what’s not. What are your feelings about the success of that program?

DC: I think it’s not a bad approach. Of course, no machine right now is remotely close to passing the Turing Test.

RL: Not true.  Way out of date.  Unless he means do so indefinitely.  We can fool some of the people some of the time.

GNS: Passing the Turing Test should require more than fooling some of the people some of the time. That’s a very weak test, which ELIZA arguably passed back in the mid-1960s.  What should passing TT mean? Shouldn’t it mean fooling most of the people over a fairly long period of time?  No machine is close to that. “Artificial intelligence” does not mean passing Turing anymore. Now it just means machine learning, plus maybe artificial neural nets. No machine fools anybody anymore, not for more than a moment or two. Today we are more likely to mistake live humans for machines.

0:14:07 SC: You might as well say what the Turing Test is.

DC: The Turing Test is a test to see whether a machine can behave in a manner indistinguishable from a normal human being, at least in a verbal conversation over, say, text messaging and the like. Turing thought that eventually we’ll have machines that pass this test: they are indistinguishable from a human interlocutor over hours of conversational testing. Turing didn’t say that at that point then machines can think. What he said was that at that point the question of whether machines can think becomes basically meaningless, and I’ve provided an operational definition to substitute for it. So, once they pass this test, he says, “That’s good enough for me.”
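[BQTA note: the imitation game is easy to state as a protocol, which is what makes it operational. Here is a bare-bones sketch; the judge and both agents are placeholders, not real conversational AI, and the pass criterion (the judge does no better than chance) is one common reading of Turing’s proposal, not his exact wording.]

```python
# A judge chats with two hidden respondents and guesses which is the machine.
import random

def imitation_game(judge, human, machine, rounds=20):
    correct = 0
    for _ in range(rounds):
        pair = [("human", human), ("machine", machine)]
        random.shuffle(pair)  # hide who is who
        replies = [(label, agent("Tell me about yourself."))
                   for label, agent in pair]
        guess = judge([text for _, text in replies])  # picks index 0 or 1
        correct += (replies[guess][0] == "machine")
    return correct / rounds  # ~0.5: indistinguishable; ~1.0: unmasked

# Placeholder agents; identical canned answers leave the judge at chance.
human = lambda prompt: "I grew up by the sea and I do crosswords."
machine = lambda prompt: "I grew up by the sea and I do crosswords."
judge = lambda texts: random.randrange(2)
print(imitation_game(judge, human, machine))
```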

RL: The Turing Test was about intelligent behavior, not consciousness.  Wikipedia: “…test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human…. In the remainder of the paper, he argued against all the major objections to the proposition that ‘machines can think’.”  Turing, about his test: “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”  So it was really intelligence he was talking about, not consciousness.

0:14:56 SC: He talked in the paper about the consciousness objection. You might say that it’s just mimicking consciousness, but not really conscious. As I recall, his response is, “Well, who cares? I can’t possibly test that. Therefore, it’s not meaningful.”

DC: It turns out that consciousness is one of the things that we value. First, it’s one of the central properties of our minds. And second, many of us think it’s what actually gives our lives meaning and value. If we weren’t conscious, if we didn’t have subjective experience, then we would basically just be automata for whom nothing has any meaning or value. When it comes to the question of developing more and more sophisticated Artificial Intelligence, the question of whether they’re conscious is going to become absolutely central as to how we treat them—  whether they have moral status, whether we should care about whether they continue to live or die, and whether they get rights. Many people think if they’re not having subjective experiences, then they’re basically machines and we can treat them the way we treat machines. But if they’re having conscious experiences like ours, then it would be horrific to treat them the way we currently treat machines. If you simply operationalize all those questions, then there’s a danger, I think, that you lose the things that we really care about.

RL:  Does meaning/value require consciousness?

GNS: Consciousness is our entire experience of the world. Without it, we have nothing. Chalmers is suggesting that any entities with conscious experience, whether made out of hydrocarbons or silicon, have a claim to moral treatment. Your car does not have conscious experience. When it becomes time to junk it, you might feel sad, because you liked it. But you have no moral obligation to the car to treat it one way rather than another. Your car does not care about itself, you or anything else.

0:16:08 SC: Neither you nor I nor the guys who are butting in to this discussion–RL and GNS– are coming from a strictly dualist perspective. We’re not trying to explain consciousness in terms of a Cartesian, disembodied, immaterial mind that is a separate substance. You and I are made of atoms, we’re obeying the laws of physics, and consciousness is somehow related to that, but not an entirely separate category interacting with us.

DC: There are different kinds and different degrees of dualism. My background is very much in mathematics and computer science and physics, and all of my instincts are materialist—to try to explain everything in terms of the processes of physics. Explain biology in terms of chemistry, and chemistry in terms of physics. And this is a wonderful, great chain of explanation. But I do think when it comes to consciousness, this is the one place where that great chain of explanation seems to break down. Roughly because, when it comes to biology and chemistry and all these other fields, the things that need explaining are all basically these easy problems of structure and dynamics and ultimately the behaviors of these systems.

GNS: Science should be able to explain everything.

DC: When it comes to consciousness we seem to have something different that needs explaining. The standard kinds of explanation that you get out of physics derived sciences—physics, chemistry, biology, and neuroscience and so on—just ultimately won’t add up to an explanation of subjective experience, because it always leaves open this further question, “Why is all that sophisticated processing accompanied by consciousness, by a subjective experience?” That doesn’t mean, though, we suddenly need to say it’s all properties of a soul or some religious thing which has existed since the beginning of time and will go on to continue after our death. People sometimes call that substance dualism. Maybe there’s a whole separate substance that’s the mental substance and somehow interacts, connects up with our physical bodies and interacts with it. That view is much harder to connect to a scientific view of the world.

GNS: Mind cannot float free of the body. For any mental change, there must be a physical change.

DC: The direction I end up going is what people sometimes call property dualism, the idea that there are some extra properties of things in the universe. This is something we’re used to in physics. Around the time of Maxwell, we had physical theories that took space and time and mass as fundamental. And then Maxwell wanted to explain electromagnetism, and there was a project of trying to explain it in terms of space and time and mass. Turns out, it didn’t quite work. You couldn’t explain it mechanically and eventually we ended up positing charge as a fundamental property and some new laws governing electromagnetic phenomena. That was just an extra property in our scientific picture of the world.

I’m inclined to think that something analogous to that in some respects is what we have to do with consciousness as well. Basically, explanations in terms of space and time and mass and charge and whatever the fundamentals are in physics these days are not going to add up to an explanation of consciousness. So, we need another fundamental property in there as well. And one working hypothesis is, “Let’s take consciousness as an irreducible element of the world, and then see if we can come up with a scientific explanation of it.”

GNS: Official BQTA doctrine, not necessarily endorsed by all contributors, is that if consciousness is not an irreducible element of the world, then it doesn’t really exist at all. Consciousness would be eliminated, if it turned out that it is nothing but brain function.

0:19:44 SC: I think we should absolutely be open to property dualism. I don’t go down that road myself. I don’t find it very convincing, but maybe in the next 45 minutes, you’ll convince me.

to be continued…

 

[Image: figure6.gif]
Inverted Qualia, Stanford Encyclopedia of Philosophy. The normal view is upper left, and the red-green inverted experience is upper right. The puzzle is that the person experiencing the inversion will say the same thing. Left is normal, right is inverted. This kind of inversion is apparently undetectable. The lower images portray other types of inversions, but those would be detectable because they alter the light-dark dimension.