We are continuing our excursion through a modified version of John Heil’s article “The Mind-Body Problem”. See Part I for our introduction.
Philosophical behaviorism succumbed to pressure from the identity theory and transformed itself into functionalism. The stumbling block for psychological behaviorism came with the advent of the computing machine and the Chomskyan revolution in linguistics. Noam Chomsky (Cartesian Linguistics: A Chapter in the History of Rationalist Thought 1966) argued that behaviorist categories were hopelessly inadequate to account for human linguistic capacities. At the same time, computing machines were coming to be seen as affording explanatorily tractable models of intelligent behavior. Alan Turing (1912-1954), echoing Hobbes, argued that intelligence could be understood as computation. It would be possible in principle to build a mind by programming a machine that would “process symbols” so as to mimic an intelligent human being.
Turing proposed a test for intelligence, the “imitation game”. Start with two people, A and B, a man and a woman, communicating via teletype with a third person, the interrogator. The interrogator queries A and B in an effort to determine which is the woman. The woman must answer truthfully, but the man can prevaricate. A wins the game when he convinces the interrogator that he is B. Now, imagine a cleverly programmed digital computer replacing A. If the machine succeeds in fooling the interrogator about as often as a person would, we should, Turing contends, count it as intelligent. The arrival of conscious computers was thought to be just a few decades away in 2001: A Space Odyssey, Stanley Kubrick’s 1968 film. Today, nobody seems to think that computers will become conscious anytime soon.
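Turing's test is, at bottom, a simple protocol, and it can be sketched as a loop. The sketch below is illustrative only: the function names (`imitation_game`, `respond_a`, `interrogate`) and the scoring convention are my own, not Turing's or Heil's. The machine replacing A "wins" a round whenever the interrogator's guess lands on it.

```python
import random

def imitation_game(respond_a, respond_b, interrogate, rounds=20, seed=0):
    """Play repeated rounds of (a simplified) imitation game.

    respond_a, respond_b: functions from a question to an answer.
    interrogate: given the question and a pair of answers, returns the
    index (0 or 1) of the answer it believes came from B.
    Returns the fraction of rounds in which the interrogator was fooled,
    i.e. picked A's answer as B's.
    """
    rng = random.Random(seed)
    fooled = 0
    for _ in range(rounds):
        question = "Describe your favourite poem."
        labelled = [("A", respond_a(question)), ("B", respond_b(question))]
        rng.shuffle(labelled)  # the interrogator must not know the order
        answers = [ans for _, ans in labelled]
        guess = interrogate(question, answers)
        if labelled[guess][0] == "A":
            fooled += 1
    return fooled / rounds

# A crude "machine" whose answers carry a telltale marker, and an
# interrogator sharp enough to spot it: this machine never fools anyone.
machine = lambda q: "ERROR: POEM NOT FOUND"
human = lambda q: "I have always loved 'The Windhover'."
sharp = lambda q, answers: 0 if "ERROR" in answers[1] else 1
print(imitation_game(machine, human, interrogate=sharp))  # 0.0
```

On Turing's proposal, a machine whose success rate against competent interrogators approaches that of a person would count as intelligent; the interesting cases are, of course, the ones where no `sharp` discriminator exists.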
Still, work in artificial intelligence (AI) has progressed on several, less adventurous fronts. Although attacks on AI (most famously in Berkeley by Hubert Dreyfus and John Searle) have been inconclusive, philosophical enthusiasm for the thesis that the nature of the mind can be captured by a computer program has waned. One question is whether consciousness might supply some needed spark, and this brings us back to the fundamental mind–body problem. [I am mystified by Heil’s “needed spark” function for consciousness, but intrigued enough to leave in the account.]
The advent of the digital computer encouraged philosophers to separate what could be called “hardware” questions from questions about “software”. Perhaps we should view the mind, not as a physical machine, but more abstractly, as a program running on a physical machine, the brain. What is important is not the mind’s physical “implementation”, but networks of internal relationships that mediate inputs and outputs. So long as this pattern is preserved, whatever the nature of the underlying “hardware”, we have a mind. Maybe minds are to brains as computer software is to the hardware.
This is one way of thinking about functionalism (Jerry Fodor, Psychological Explanation: An Introduction to the Philosophy of Psychology 1968). Functionalists note that we are comfortable ascribing states of mind to very different kinds of physical systems. A human being, an octopus, and a Martian could all be said to feel pain, although the physical states that might be thought to “realize” pain in each could be very different. This thought led to the thesis that states of mind are “multiply realizable” [“multiply” here is an adverb pronounced multiplee]. A property – the pain property, for instance – that has different physical realizers cannot be identified with any of those realizers. This sounds like old-fashioned dualism. But realized properties are realized physically. In this regard they are shaped by, and dependent on, physical goings-on. The problem is that while there is no doubt that mental events are dependent—in some sense—on physical events, if the pain property, as a type, cannot be identified with, or reduced to, any physical property, functionalism has not escaped the dualist orbit.
Functionalists focus on structure. What matters to a mind is not the medium in which it is embodied (flesh and blood, silicon and metal, ectoplasm), but its organization. Thus construed, functionalism is sometimes traced to Aristotle, who, at times, seemed to be thinking along these lines (De Anima Book II, 1–3). One difficulty for any such view is that it seems possible to imagine systems that preserve the same patterns of internal relations as minds, but are not minds. Ned Block (“Troubles with Functionalism” 1978) imagines the population of China organized in the way an intelligent system might be organized. Although the Chinese nation is a functional duplicate of a conscious agent, it is hard to think that the nation, as opposed to the individuals who make it up, constitutes a conscious mind. The functions themselves exist on the higher, more abstract level, but seem not to have captured what we mean by consciousness.
The functionalist picture is one of “higher-level” mental properties realized by, but distinct from, “lower-level” physical realizers. The result is “non-reductive physicalism”: minds and their properties are grounded in the physical world, but not reducible to their physical grounds. A similar picture has been inspired by Donald Davidson’s “anomalous monism”. Davidson (1917-2003) describes the mental as “supervening” on the physical. Davidson borrows the notion of supervenience from R.M. Hare, who had borrowed it from G.E. Moore. Both Hare and Moore were concerned with issues in ethics. Both, though for different reasons, held that, although moral assertions could not be translated into non-moral, “natural” assertions, moral differences required non-moral differences. If St. Francis is good, an agent indistinguishable from St. Francis in relevant non-moral respects – a “molecular duplicate” of St. Francis – must be good as well. Davidson applied this idea to the relation between mental and physical descriptions: agents alike physically (agents answering to all the same physical descriptions) must be alike mentally (must answer to the very same mental descriptions). Reduction fails – in both ethics and psychology – because agents could be alike morally or mentally, yet differ physically.
These criteria are necessary to fix the ontology of the mental, but are they sufficient? Consider digestion. Digestion is a higher-level function found in all animals, and it supervenes on the body of the organism. The physical realizers are the various organs found in animals and, at a still lower level, cellular biochemistry. Agents alike physically must be alike digestively. Does reduction fail? Can agents be alike digestively yet differ physically? Maybe. All humans differ physically. Yet most have more or less the same digestive system.
Supervenience fits nicely with multiple realization, so nicely that some philosophers began to think of supervenience as providing an account of the realizing relation. Considerable effort was expended on refining the supervenience concept. The result was a proliferation of kinds and grades of supervenience and much discussion as to which best reflected the relation between mental and physical properties (Jaegwon Kim, “Supervenience as a Philosophical Concept,” 1990). Supervenience, however, is a purely formal, modal notion. If you know that the As supervene on the Bs (moral truths supervene on natural truths, mental truths supervene on physical truths), you know that the Bs in some fashion necessitate the As. But what is responsible for this necessitation? What is it about the Bs that necessitates the As?
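The formal character of the notion can be made explicit. A standard statement of strong supervenience (this formulation follows the literature Kim surveys in the article cited above, not Heil's own wording) says that the A-properties supervene on the B-properties just in case:

```latex
% Strong supervenience of the A-properties on the B-properties:
% necessarily, anything with an A-property F has some B-property G
% such that, necessarily, anything with G has F.
\Box\,\forall x\,\forall F \in A\;\bigl(Fx \rightarrow
  \exists G \in B\,(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy))\bigr)
```

The schema guarantees necessitation (fix the B-facts and the A-facts are fixed) but says nothing about why the Bs necessitate the As, which is exactly the question left open.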
There are a number of possibilities: (1) the As are the Bs; (2) the As are made up of the Bs; (3) the Bs include the As as parts; (4) the As are caused by the Bs; (5) the As and the Bs have a common cause. None of these fits what proponents of supervenience or multiple realizability appear to have in mind, however. Sydney Shoemaker (“Causality and Properties” 1980) has suggested that “causal powers” “bestowed” by mental properties are a subset of powers “bestowed” by a variety of physical realizing properties. When one of these physical properties is on the scene, the mental property is thereby on the scene, option (3) above. Derk Pereboom (“Robust Nonreductive Materialism” 2002), invoking the idea that a statue, although “constituted by” a particular lump of bronze, is not identical with the lump, argues that instances of mental properties are wholly constituted by, but not identifiable with, their physical realizers, option (2).
These accounts of the realization relation locate mental properties within the physical causal nexus. It is hard to see, however, how any such account could preserve the thought that mental properties are really distinct from their realizers while mingling their causal powers with powers of the realizers. Powers comprising a subset of a thing’s physical powers would seem to be physical powers; and powers of a statue are hard to distinguish from powers of the bronze that “constitutes” the statue. Our concepts of “identity” and “supervenience” are hard to pin down.
Non-reductive physicalism has proved popular because it promises to preserve the distinctiveness and autonomy of the mental, while anchoring it firmly in the physical world. However, non-reductive physicalism has come under fire from Jaegwon Kim (Physicalism, or Something Near Enough 2005) and others for failing adequately to accommodate mental causation, the centerpiece of the mind–body problem. If mental properties are distinct, higher-level properties, how are they supposed to figure in causal relations involving lower-level physical goings-on? So long as we embrace the causal closure of the physical – the principle that every physical event has a sufficient physical cause – it appears that physical events, bodily motions, for instance, must have wholly physical causes. The prospect of mental properties making a causal difference in the physical world is evidently inconsistent with mental properties’ being irreducible to physical properties and the physical world’s being causally closed. We must choose, it seems, between epiphenomenalism – mental properties, although real, are physically impotent – and systematic over-determination – some events have mental causes as well as physically sufficient causes. Kim argues that over-determination is a false option. We thus face a choice between epiphenomenalism, on the one hand, and, on the other hand, the abandonment of the non-reductivist hypothesis. Mental properties are either reducible to physical properties or epiphenomenal. Perhaps, Kim suggests, most mental properties are reducible. Those that are not, qualitative properties of conscious experiences, for instance, the qualia, must be epiphenomenal: real, but causally impotent. The heartbreak of epiphenomenalism reappears.
This is close to the line advanced by David Chalmers (The Conscious Mind: In Search of a Fundamental Theory 1996) in a ringing defense of the irreducible nature of qualia. Chalmers divides mental attributes into those characterizable in “information processing” terms and those that are essentially conscious. The former “logically supervene” on fundamental physical features of organisms: a system with the right sort of functional organization will be intelligent and, in general, psychologically explicable. Consciousness, on the other hand, although determined by the physical facts, is not reducible.
To facilitate the distinction he has in mind, Chalmers imagines zombies, creatures resembling us but altogether lacking in conscious experiences. Such creatures are impossible “in our world”, that is, given actual laws of nature. The conceivability of zombies, however, suggests that laws governing the production of conscious qualities are fundamental in the sense that they are additions to laws governing fundamental physical processes. Think of such laws as analogous to Euclidean axioms. Laws governing consciousness resemble the parallel postulate in being independent of the rest. Their presence or absence has no effect on physical goings-on. Outwardly, a zombie world is indistinguishable from ours.
We can extend the zombie concept beyond humans and apply it to any entity that reacts to the world in ways which, in humans, would indicate consciousness. Are insects conscious? They behave in ways that suggest experiences of pain, hunger, lust, color, frustration and so on. But if they are really nothing but little biological machines, without consciousness, then they are zombies, and the concept is not as far-fetched as some argue. Or consider self-driving cars. When you drive around town, you are conscious; you have a conscious experience of driving. Self-driving cars drive around better than you can, and do it without consciousness. They are zombies. The best chess programs, like Stockfish or Shredder, can defeat any human player. Should their chess playing be counted as “thinking” (should it be counted as “playing”)? They would pass a chess Turing Test. If they are “thinking” or “intelligent”, they are zombies, since they do it without consciousness. See Joan Crawford in BQTA “Help! My Self-Driving Car Is a Zombie!” https://better-questions-than-answers.blog/2017/10/28/help-my-self-driving-car-is-a-zombie/
Both Kim and Chalmers render conscious qualities – qualia – epiphenomenal, perfectly real, but physically irrelevant. The result is what Kim calls “modest physicalism” – physicalism plus a “mental residue” – a conception reminiscent of Descartes’s idea that much human behavior is explicable on mechanical principles alone. The difference is that, whereas Descartes embraced interactionism – mental properties are causally potent – Kim and Chalmers regard consciousness as qualitatively remarkable but causally inert.
Next: Qualia. The crux of the problem?