Riccardo Manzotti and Tim Parks have been doing a nice job of creating a fun and thought-provoking series of dialogues in the New York Review Daily. The latest one, The Pizza Thought Experiment, just came out. Before focusing on that one, let’s take another look at one from a year ago: http://www.nybooks.com/daily/2016/12/30/consciousness-does-information-smell/ Noted commentator and lecturer Rick Lenon and I (GNS) happily inject ourselves into their conversation. Information is a hot topic these days.
Parks: In our first two dialogues, we presented the standard, or “internalist” version of how our conscious experience of the world comes about: very bluntly, it assumes that the brain receives “inputs” from the sense organs—eyes, ears, nose, etc.—and transforms them into the physical phenomenon we know as consciousness, perhaps the single most important phenomenon of our lives.
GNS: The “internalist” version makes a lot of sense, but it runs into severe problems. Many philosophers, therefore, are trying to find another path. Wittgenstein’s “meaning is use” is a step in that direction.
Lenon: Consciousness does seem, and feel, profoundly important to us. A philosophical zombie could insist, “Anything you can do I can do better!”, and be right. Substituted for you, your wife wouldn’t be able to tell, except that it really could do everything (!) better than you.
GNS: Why better? Wouldn’t the zombie be exactly the same as me, by definition?
Lenon: But if you had a choice between being that zombie, without phenomenal consciousness, and being your limited self, you would deprive your wife, and remain the lesser being that you are.
GNS: Without consciousness, there is nothing.
Lenon: Consider Darwin’s perspective, as an engineer; or maybe that of an omniscient alien, who sets evolution in motion, as we might a fireworks display, out of curiosity, or just to amuse himself. Starting from a chemical soup, they want to make something spectacular happen. So where then does phenomenal consciousness rank in importance? What if we could set in motion thousands of self-improving artificial intelligences (DLM’s), that would be dedicated to modeling the universe? They would be in instantaneous communication with each other, sharing improvements in their own architecture, and progress toward their goal. But no phenomenal consciousness… ?
GNS: Zombies are a terrific thought experiment. Why aren’t we zombies? They do all the physical stuff. From your own point of view, turning into a zombie would be the same as dying. To the rest of the world, however, there would be no difference at all. According to Dave Chalmers, zombies are not physically possible, and I think he’s right.
Lenon: If you are convinced that phenomenal consciousness is more than a passive spectator, that it actually creates ideas, composes music, makes art, then it’s profoundly important, from a third person perspective. If not? What if the idea becomes conscious only after it’s already constructed?
GNS: That’s a good way of stating the dilemma. But let’s get back to Parks.
Parks: We also pointed out, particularly with reference to color perception, how difficult it has been for scientists to demonstrate how, or even whether, this really happens. Neuroscientists can correlate activity in the brain with specific kinds of experience, but they cannot say this activity is the experience. In fact, the neural activity relating to one experience often seems nearly indistinguishable from the neural activity relating to another quite different experience. So we remain unsure where or how consciousness happens. All the same, the internalist model remains dominant and continues to be taught in textbooks and broadcast to a wider public in TV documentaries and popular non-fiction books. So our questions today are: Why this apparent consensus in the absence of convincing evidence? And what new ideas are internalists exploring to advance the science?
GNS: Parks is a bit uncertain here. There must be NCCs—neural correlates of consciousness. Big scientific problems persist in trying to nail those down. How to go inside a living brain and track the fabulous complexity down to individual neurons? Maybe it’s necessary to get down to individual molecules. One activity is almost “indistinguishable” from another. Still, that’s what Chalmers calls the easy problem.
Lenon: I thought he has “the easy problem” as explaining how the zombie does it, and “the hard problem,” how phenomenal consciousness comes about.
Parks: Riccardo, I know I should be asking the questions, not answering them. But I’m going to suggest that one reason for this consensus is that we are in thrall to the analogy of the brain as computer. For example, a recent paper I was reading about the neural activity that correlates with the sense of smell begins, “The lateral entorhinal cortex (LEC) computes and transfers olfactory information from the olfactory bulb to the hippocampus.” Words like “input,” “output,” “code,” “encoding,” and “decoding” abound. It all sounds so familiar, as if we knew exactly what was going on.
GNS: Whether animal brains are digital computers is an open question. However, the computer analogy to brains is not trivial. Digital computers can do a lot of tasks that seem a lot like thinking. Playing chess, for example. The current computer world champion is Komodo (per Wikipedia), which could easily beat any human. Isn’t Komodo thinking? What more does it have to do? One point is that Komodo is not conscious. There is nothing it is like to be Komodo. Komodo is a zombie. What needs to be added to take it out of zombiehood?
Riccardo Manzotti: We must distinguish between internalism as an approach to the problem of consciousness (the idea that it is entirely produced in the head) and neuroscience as a discipline. The neuroscientists have made huge progress in mapping out the brain and analyzing the nitty-gritty of what goes on there, which neurons are firing impulses in which rhythms to which others, what chemical exchanges are involved, and so on. But you are right, the way they describe their experiments by way of a computer analogy—in particular of information processing and memory storage—can give the mistaken impression that they’re getting nearer to understanding what consciousness is.
When physiologists address other parts of the body—the immune system, the kidneys, our blood circulation—they don’t feel the need to use anything but the language of biology. Read a paper on, say, the liver, and it will be talking about biochemical mechanisms—metabolites, ion homeostasis, acetaminophen poisoning, sepsis, infection, fibrosis, and the like, all terms that refer to actual physical circumstances. Yet, when dealing with the brain, we suddenly find that neurons are processing “information,” rather than chemicals.
GNS: Not all bio-chemical processes are information, but some are. How to make the distinction is a problem. I don’t see how you can talk about brain function without information.
Parks: Is this because while we know what other organs are doing—I mean, which physical processes in the body each is responsible for—we’re not sure what all this neural activity is for?
Manzotti: On the contrary. We know very well that neural activity controls behavior, the nervous system having evolved to meet complex external circumstances with appropriate reactions. The question is, did it also evolve to orchestrate an internal mental theater for us—David Chalmers’s “movie-in-the-head”? Or to “process information”? Stanislas Dehaene and Jean Pierre Changeux, two leading neuroscientists, recently claimed that to explain consciousness we must show “how an external or internal piece of information goes beyond nonconscious processing and gains access to conscious processing, a transition characterized by the existence of a reportable subjective experience.” There’s barely a word here that refers to anything physical.
GNS: “Conscious processing”? Does that mean conscious thinking?
Parks: But is it really not possible to connect the notion of information with chemical exchanges occurring in the brain? Surely when we use a computer the information input is moved along toward the output through electrical signals. Can’t this also be the case with the brain? Hasn’t the philosopher Luciano Floridi claimed that “information is a physical phenomenon, subject to the laws of thermodynamics”?
Manzotti: Listen, when something physically exists and obeys the laws of thermodynamics, then you can find it, concretely. Electrons were predicted to exist and then found. Likewise the planet Neptune and a host of other things. But information, or data, is not a thing. It’s an idea we stipulated because it served a certain purpose, but it doesn’t exist physically, as an entity in its own right in the causal chain. Brutally, when we look inside a computer, or a brain, we don’t see or even detect information. Or data. We see physical stuff: voltage levels in a computer, chemicals in the brain.
GNS: Here Manzotti gets carried away. The information must be in a pattern of neuron firings. True, the pattern is not a physical object in addition to the neurons. But the patterns do exist physically. Take the simplest possible case: an ordinary ocean wave coming into the beach. Does the wave “exist physically”? There are only water molecules there; no additional physical object arises when the wave comes in. The wave is just a shape the molecules are in, but it certainly “exists physically.” Manzotti is drifting close to a too restrictive idea of “physical.”
Parks: So what you’re saying is that everything that goes on in a computer or in a brain could be fully and properly described without resorting to words like information or data?
Manzotti: Absolutely. Imagine you’re describing a battery; you will have to refer to electricity. It is an indispensable part of the thing. But, when you describe what the brain or even a calculator does, everything can be exhaustively described in terms of causal processes, chemical releases, and voltage changes without ever using the word information.
GNS: No, I think he’s wrong. You might not need the word “information”, but if you are not describing the electron patterns inside the calculator which constitute adding 67 and 58, you are missing something. Those patterns are information.
Parks: But then what is information? How can Floridi make the claims he does? What part can information have in the consciousness debate?
Manzotti: Obviously there is the definition of the word in common use: “facts, data, communicated about something.” The bus leaves at six. Yesterday it rained. The cash machine is out of order. That meaning has been around in English since the fifteenth century.
Manzotti: Then there is the technical IT definition established by the mathematician Claude Shannon in 1949. Shannon was concerned about achieving accurate communication through technological devices and described information as an estimate of the probability that a given channel would successfully transmit words, images, or sound between a source and a receiver.
Lenon: This sounds like he’s talking about a receiver operating characteristic, which is basically described by graphing sensitivity against the probability of a false positive, or a false negative. Maximum sensitivity, lots of false positives. Minimal sensitivity, you miss a lot. Shannon did not define information as a probability. He did talk about channel capacities, and about probabilities of error rising as you approach or exceed capacity.
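Lenon’s correction can be made concrete. What Shannon actually quantified is entropy, the average number of yes/no questions needed to identify a symbol from a source. A minimal sketch (my illustration, not from the dialogue) of the formula H = −Σ p·log₂(p):

```python
import math

def shannon_entropy(probs):
    """Average bits (yes/no questions) per symbol for a discrete source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per toss.
print(shannon_entropy([0.5, 0.5]))    # 1.0
# A heavily biased coin carries far less: the outcome is nearly predictable.
print(shannon_entropy([0.99, 0.01]))
```

Note that the measure depends only on the probabilities, not on any meaning the symbols carry, which is exactly why “semantic content” never enters Shannon’s picture.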
GNS: Signal vs. noise. The noise can act a lot like the signal.
Parks: Sorry, what do you mean exactly by a channel? I’m lost.
Manzotti: A channel is the physical structure or circumstances that allow two separate events to be connected—the air pressure waves that occur when Romeo utters loving words to Juliet, the wire between a switch that you flip and a light that turns on, or everything that happens between your typing some letters on your phone and someone else reading them on theirs. Essentially, Shannon broke down any communication of data into its most basic constituents, namely a multitude of yes/no questions, that he called bits. Eight bits would make a byte. Information, in this new manifestation, is expressed as a number that tells us how many yes/no questions can be asked and answered through a given channel. A megabyte, for example, indicates capacity for around eight million such questions. Your smart phone requires a few million bits—yes/no questions—to put together, point by point, a photo on the screen. But, there is no internal semantic content, no data or image inside the device, no point along the causal chain where you can put your finger and say, Aha, information!
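Manzotti’s photo example is simple arithmetic to check. A sketch (the resolution is my assumption, not his) counting the yes/no questions needed for an uncompressed image:

```python
# Bits needed to specify an uncompressed RGB image, point by point.
width, height = 1280, 720          # assumed phone-screen resolution
bits_per_pixel = 3 * 8             # red, green, blue, 8 bits each
total_bits = width * height * bits_per_pixel
print(total_bits)                  # 22118400 yes/no questions
print(total_bits / 8 / 1_000_000)  # ~2.8 megabytes
```

Tens of millions of yes/no answers, and still, as Manzotti says, no point in the causal chain where an “image” sits apart from the voltage levels.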
Lenon: “Semantic” is the problem here. Is he going to say there is no information inside a book? Two poles, one’s two meters long, the other only one. We could describe them in cubits, inches, grains of rice laid end to end. That one is longer than the other, that’s information. Length exists, whether Manzotti is there to “semantic” it or not. The units are arbitrary. Bits are much more flexible than inches… you can describe just about anything with them.
You probably noticed that Manzotti left out the time dimension when he said “…how many yes/no questions can be asked and answered through a given channel…” A channel capacity is a rate, bits per second.
GNS: “Semantic” is not the problem. I’m not sure what Manzotti has in mind. Inside your phone, there must be a version of real language, if it is able to display real words on the screen. There are no images or words inside your phone, but there are bits representing words and images. The bits are another version of language, like Morse code. This is information in the most ordinary sense. On the other hand, not every property of an object is information. The pole by itself is not information. The properties of the pole are there whether anybody knows anything about it or not. The statement that the pole is longer than another, that’s information. If physicists use “information” to refer to properties, rather than information about properties, they must have their reasons.
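The “bits are another version of language” point is easy to demonstrate. A sketch (my own, hypothetical word) of how a phone ultimately holds a word, each character as eight yes/no answers:

```python
word = "cheese"
# Each character becomes 8 yes/no answers: its ASCII code written in binary.
bits = " ".join(format(b, "08b") for b in word.encode("ascii"))
print(bits)
```

Nothing in those zeros and ones is a picture of cheese, but the mapping back to English letters is fixed and mechanical, which is all a code ever needs to be.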
Parks: Could we say that there is no more information in a cell phone, than there is information in the air between my voice speaking and your ear listening? Or between a radio transmitter and a radio receiver?
GNS: Is there “information in the air between my voice speaking and your ear listening”? I will go out on a limb: that is the paradigm of language, and therefore the paradigm of information. The sounds created by the motions of my mouth are the core of what language is. The black letters on the white background you see now are derived from those sounds.
Manzotti: You could indeed. Information here is simply the capacity of any channel to effect a causal coupling between two events, speaking and hearing, typing letters and reading them. It is not a thing between those events. If there is no one on the receiving end to hear the voice or read the letters then quite simply there is no information because we don’t have our two events.
GNS: They must be getting distracted by the problem of meaning. On one level, the sound of language is just the vibration of the molecules (mostly nitrogen and oxygen, N2 and O2) in the air. How do those vibrations get their meaning? That is a significant problem, one addressed by Wittgenstein. But if any language ever has meaning, it’s in the spoken word.
Lenon: Here he seems to be reading Shannon as restricting his concept of information to what flows from a sender to a receiver. Part of what Shannon did was come up with a method for quantifying information. He knew full well that physicists included information as a concept in thermodynamics, and considered using “negative entropy” as a term for information. But he was at Bell Labs, and if you want to measure information, then doing it during transmission has obvious advantages. Say yes if it’s the Rembrandt, no if it’s the Vermeer. Transmitting the Rembrandt is a lot of bits; “yes” is only one. The sender is looking at the Rembrandt, the receiver is calling it to mind. Does the “yes” have semantic content?
GNS: You may be right that they are thinking about transmission of information. Let’s assure Manzotti that if he goes for a solo walk in the woods and recites poetry out loud, his words have the same meaning and information whether anyone is around to hear him or not.
Parks: So what do neuroscientists mean when they talk of information processing in relation to the brain? For example, a mouse’s brain when the animal smells a piece of cheese.
Manzotti: Honestly, it is a bit like when we say that the sun goes down. Of course, we know it doesn’t literally go down, but it is a nice expression and it saves a lot of explanation. The problem with the concept of “information” comes when we start to take it literally, as Floridi does. We start to imagine there really is a mental, non-physical stuff called information. A subtle dualism creeps in, as if the brain contained organic material on the one hand and this mysterious, immaterial “information” on the other. In fact Floridi speaks of moving from a materialist vision “in which physical objects and processes play a key role, to an informational one,” as if there were some sphere of existence that is not physical.
GNS: I might be on Floridi’s side. Some physical processes carry information, most do not. When I write on my laptop, I set in motion a physical and informational process. I don’t deal with the physics, of course. I’m busy with the typing, which works on a certain level of abstraction. It’s like the difference between hardware and software. Programmers work with code and do not have to worry about the motion of electrons. Would Manzotti mock mathematicians for believing in “some sphere of existence that is not physical”?
Manzotti: However, in its precise scientific usage—and certainly most neuroscientists would see it this way—“information processing” simply means that a physical system—a computer, or the human body, the brain—allows given events to pass along their causal influence to further events. When your mouse recognizes the smell of cheese and moves toward it, the cheese becomes the cause of effects in the olfactory bulb, which themselves cause effects in the lateral entorhinal cortex, which themselves cause effects in the hippocampus, and so on. But there is no immaterial “message” being passed along, no code, or coded representation of “cheese,” existing separately from these organic changes, which are very many and very, very complex. The notion of information and information processing is then built on top of all that causation. It is a kind of shorthand for describing a causal chain so complex as to be beyond any visualization or easy explanation.
GNS: Not all causation is an informational process, but the transmission of information must cause something physical to happen. When you speak, the vibrations in the air strike my ears and cause physical changes inside my body. Similarly with perception. When a molecule of the cheese floats through the air and into the mouse’s nose, it “becomes the cause of effects in the olfactory bulb, which themselves cause effects in the lateral entorhinal cortex, which themselves cause effects in the hippocampus, and so on.” A signal reaches the mouse’s brain about the presence of cheese. The signal is not separate from the bio-chemical chain-reaction, but it is nevertheless just as real as the chemistry. There is information passed along in perception. Manzotti seems to think that we are being unduly anthropocentric, but I don’t see what his alternative is. To talk only about the chemistry without the information is to miss the point of perception. Are you with me on this?
Lenon: Oh yes, there is an “immaterial ‘message’” being passed along, and it is a “coded representation.” The “organic changes” are different for Roquefort vs. blue; and that’s information. What if it were a recipe in a pneumatic tube… would that be information?
Parks: But does the chain end anywhere? Is there a point where we could say, this is where everything arrives, where conscious experience happens?
Manzotti: Alas no. Rather than ending, the causal chain branches and every branch is a constant back and forth with as much feedback as input. In this sense the brain is completely different from IT devices which are always channels leading somewhere, usually to a person who reads off the message that arrives—the second of the two events we talked about.
Lenon: Computers have long utilized feedback and branching.
GNS: Yes, consciousness is a big question. These guys are conflating problems of information, meaning, and consciousness in confusing ways. Here’s a question: if we assume the mouse is conscious, and experiences the smell of the cheese, is that experience the information? We already know that a series of bio-chemical reactions are occurring in the mouse that allow the mouse’s brain to detect the cheese. Viewed at the appropriate level of abstraction, there are patterns that are information. The brain sends signals to the motor neurons so the mouse can go get the cheese. Mission accomplished.
GNS: But what about the smell? That experience is something “over and above” the bio-chemistry and neurons, in the sense that you can describe everything happening in the olfactory bulb, in the lateral entorhinal cortex, in the hippocampus, and so on, including the information, without mentioning the smell. If you built a mechanical mouse that could find cheese, you would need to build it with some equivalent of the olfactory bulb that could detect cheese molecules in the air (wetware) and pass a signal (software) on to the central processing unit (hardware). That would give your mechanical mouse the capability to find the cheese. The mechanical mouse would not need to experience the smell of the cheese. In fact, we have no idea how to give the mechanical mouse the ability to smell or to have any other kind of experience. Mechanical Mouse is a zombie.
So is the smell of the cheese information? Our normal answer would be of course! The problem is to understand how smell can affect the bio-chemical goings-on. The smell cannot be information unless it can get back into the bio-chemical loop and that seems impossible, unless you are willing to reduce phenomenal consciousness to bio-chemistry completely. Doing that would eliminate consciousness, and that seems like a bad move.
Lenon: Let’s use me instead of a mouse. The signal that will result in my consciously experiencing the smell of blue cheese is specific for blue cheese before it becomes conscious; it can probably affect my behavior, or at least my predispositions, before it becomes conscious. Which is not to say that having it become conscious doesn’t matter.
Parks: OK, let me try to sum up so far. The neuroscientists, for the most part internalists, continue to fill us in on the brain’s exceedingly complex chemical and electronic activity. Meantime, the extended computer metaphor that they almost always employ conveys the impression that what is going on is not just organic, but “mental,” that the brain is producing consciousness, storing memories, decoding representations, processing data. So there is a general feeling of promise and expectation, but actually we get no nearer to an explanation of consciousness itself, since we are simply describing, with ever greater precision, what neurons organically do.
Manzotti: I’d agree with that. And perhaps add that maybe people are not unhappy with the situation: we get regular, often melodramatic updates on how marvelously complex we are and how clever scientists have become, while consciousness remains blissfully mysterious. In short, we get to feel very special all round.
GNS: Now they are talking about the hard problem of consciousness, the “explanation of consciousness itself.” Nobody has that. But the neuroscience is no con job. Building the mechanical mouse would be a daunting scientific and engineering challenge, well beyond current technology and well worth achieving. Giving Mechanical Mouse a literal sense of smell is a problem of another order of magnitude. Nobody has any idea how to start on that.
Parks: Let’s stick to substance. Aware of this situation, some internalists have made other suggestions. David Chalmers, if I’m not mistaken, has suggested a sort of second and secret life of information—hopefully you can explain. Giulio Tononi has developed an elaborate theory of “integrated information” and “emergence.”
Manzotti: Both Chalmers and Tononi seem to see information processing as a sort of intermediate step toward the conscious mind. I’m not sure this is very enlightening, because if it is hard to imagine how consciousness might “emerge” from neurons, it is even harder to conceive how it might “emerge” from information, which, as we said, is not a physical thing, not “a thing” at all in fact. To put it another way, information can hardly form the basis for a natural phenomenon like conscious experience, which—and we must always remember this—is a thing, a physical phenomenon that we all experience at every waking moment.
GNS: Usually consciousness is treated as something immaterial. If consciousness is “a thing, a physical phenomenon”, it’s a physical phenomenon invisible to physics. This must be a hint toward his externalism, which they plan to discuss in future installments. The emergence of consciousness from neurons, information, or anything else is problematic, to put it mildly, and saying “consciousness emerges” explains nothing. And yet, in some sense, that’s what has happened or is happening. At one time, the planet had no consciousness anywhere and now it does. Somehow the molecules rearranged themselves and consciousness emerged from those rearrangements. What is the alternative way to understand this? Panpsychism? Eliminativism?
Lenon: It starts with sparks on your retina, and you get the picture of an orange. Information processing is a necessary part of that. Let’s pretend I programmed a computer to know the location and schedule of every fruit tree on an island in the South Pacific, maybe Kauai, where natives had never seen a computer. They could ask it where to get a mango, and it would display a map, with the current location of said native and the closest mango. No way there are any mangos inside that thing. Look inside, just a bunch of little boards with shiny things on them, and some wires, a battery maybe. Tell them there are electrons moving around in there. Then try to tell them how it knows where they are, and where the mango is, and that the fruit is ripe. They just know there is a spirit in there…
Parks: Let’s take the positions one by one. What is this dual aspect of information that Chalmers proposes?
Manzotti: Chalmers agrees that information, as Shannon construes it, lacks any phenomenal character (colors, smells, feelings), or indeed intrinsic meaning, in that a string of zeros and ones in a computer might mean anything. Yet he believes that the brain is basically a computational device crammed with information. So how do all those zeroes and ones, or some neuronal version of the same, become colors, sounds, pains, and pleasures? His solution is that information has a dual aspect—the functional aspect (the zeroes and ones that govern our behavior) and the phenomenal aspect that constitutes conscious experience (colors, sound, itches, whatever). He does not explain why or how this should be and admits himself that his position is basically dualist: information has two sides, one that science can deal with, neurons controlling behavior, and another that is, simply, consciousness.
GNS: In some sense, the brain must be a computational device. But must the brain be a digital computer? As a matter of philosophical truth? I’m not aware of any proof of that and I’d be surprised to hear Chalmers say he has such proof. Dual aspect theory was an idea Bertrand Russell took very seriously. There is only one thing but it has two aspects: the physical and the mental. Unless you dispense with consciousness entirely, it’s very difficult to explain what’s going on without two of something. The guys who think they have overcome dualism are fooling themselves.
Parks: At which point we’re back with Descartes deciding what belongs to science and what doesn’t.
Manzotti: Pretty much. Tononi also distinguishes between two kinds of information. Standard information, of the IT variety, and “integrated information,” which we find in the brain and which, like Chalmers’s second, “phenomenal” aspect of information, gives rise to consciousness.
Parks: But what is “integrated information”?
Manzotti: It’s a model for quantifying how much any system brings together, or integrates within itself, the causal influences of the external world. For instance, in a starfish, none of the separate arms knows what the others are up to or what is happening to them. There is no integration at a neural level. Or consider an image on your computer screen; each pixel is quite independent from the pixels around it; you can change one without altering the others. Human beings are very different. Change one neuron and changes will occur in hundreds if not thousands of others. Read about Joyce’s Stephen Dedalus in Ulysses, or Proust’s Marcel in La Recherche and you’ll see that everything that happens to them is immediately mixed up with everything else. Everything connects. A human being is the ultimate causal Gordian knot. You can’t disentangle it. So Tononi’s integrated information is a formula that expresses quantitatively the extent of such integration in different creatures and systems.
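Tononi’s actual Φ is far more elaborate, but the starfish-versus-brain intuition can be sketched with a toy measure (entirely my own illustration, not Tononi’s formula): perturb one node and count how many others its influence can reach.

```python
def reachable(adjacency, start):
    """Nodes whose state can be affected by perturbing `start` (simple graph search)."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen - {start}

# "Starfish": five arms with no cross-talk; perturbing one arm touches nothing else.
starfish = {i: [] for i in range(5)}
# "Brain": densely recurrent; every node feeds every other.
brain = {i: [j for j in range(5) if j != i] for i in range(5)}

print(len(reachable(starfish, 0)))  # 0
print(len(reachable(brain, 0)))     # 4
```

On this crude picture the starfish decomposes into independent parts while the “brain” does not, which is the Gordian-knot property Manzotti describes; Tononi’s formula quantifies it much more carefully.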
GNS: Nice explanation of integrated information.
Parks: Does that mean it can calculate how those creatures or systems react to a given stimulus?
Manzotti: Perhaps potentially yes, but at the moment no. Tononi’s formula is so complex to compute that even if you posit an unrealistically simple nervous system, it is still beyond the capacity of the most powerful computers to handle. Aside from that, the formula does not explain how or why this super integration might transform itself into the things we experience, for example the color red.
Parks: It seems whichever way internalism turns, however exhilarating its interim discoveries, when it comes to consciousness it reaches an impasse. We have the impression—or simply we’re used to believing—that consciousness is in our heads, that memories are stored in our brains, that there is a world outside and a representation of the world inside, and so on. Yet nothing we have found in the brain warrants this. In our next dialogue, then, I propose that we break out of our skulls and see if there is any other approach to this question that offers more promise.
Manzotti: Very good, but this time I’m going to have the last word! Internalism, like dualism, is, if you’ll allow me the joke, a monster with many heads. We’re going to have to come back to it again and again, to look at dreams, visualizations, hallucinations, and all kinds of other exciting creatures. And some of them will be harder to tackle than the basic premise in itself.
Lenon: Nobody knows how information becomes phenomenal consciousness. Whenever humans are up against something they don’t understand, they pretty much always posit an agent, or some supernatural explanation. But there is a reason why phenomenal consciousness is especially difficult for people to imagine as the product of some machine… I still think it has to do with agency detection, which happens well before anything becomes conscious.
GNS: Nobody here is reaching for “some supernatural explanation.” Worries about agency detection don’t get us far. But you’re right that science eliminates agency from nature. The recognition that God is not pushing the planets around was the beginning of physics. Related to your feelings is a legitimate concern that our conception of how things work is still unduly anthropomorphic. Manzotti sometimes seems to think any mention of information is illicit: “The problem with the concept of ‘information’ comes when we start to take it literally.” We’re not entitled to take it literally? Why not? What would be the non-anthropomorphic discussion of language and perception?
Lenon: https://en.wikipedia.org/wiki/Information has a broad survey of how the term “information” is used, depending on the arena. Manzotti might benefit. It’s hard for me to imagine a conscious experience that doesn’t have content. If I recall Alva Noë well enough, I think it was the content that he said was external, and that the process involved was too interactive to be sufficiently described as internal. But I don’t believe him either. And I don’t think his perspective generates useful questions.
Seems like it might help if we knew what function consciousness fulfills; but even if we did, that might not get us all that much closer to how information plus grey goo creates phenomenal consciousness. Seems like too many of these guys go from there to a conviction that grey goo plus information couldn’t possibly suffice. It does!
I’ve yet another half-baked idea about where consciousness fits in. Maybe we don’t need it for finding cheese, but we do for telling somebody else about it? We need a representation of cheese, and of its location; and we need something to test the adequacy of the communication against. But it doesn’t seem like everything we say gets consciously rehearsed in advance, either. Maybe there is something analogous to what happens when physicists are working together at a whiteboard? Or alone, for that matter.
What I keep struggling with is that I just don’t see phenomenal consciousness doing the hard part. From Romeo and Juliet…
Oh, she doth teach the torches to burn bright!
It seems she hangs upon the cheek of night
Like a rich jewel in an Ethiope’s ear,
Beauty too rich for use, for earth too dear.
So shows a snowy dove trooping with crows
As yonder lady o’er her fellows shows.
The measure done, I’ll watch her place of stand,
And, touching hers, make blessèd my rude hand.
Did my heart love till now? Forswear it, sight!
For I ne’er saw true beauty till this night.
Now he could test that against his own conscious experience of it, but it was all somewhere before that happened; or at least the elements had to be. So is it what it felt like for him, or that he did it?