No Two Alike
“The mind,” declared Steven Pinker in How the Mind Works, “is not a single organ but a system of organs, which we can think of as psychological faculties or mental modules.” More than one psychological faculty is needed because a single general-purpose device—a jack of all trades—wouldn’t do any jobs well. “The mind has to be built out of specialized parts,” Pinker explained, “because it has to solve specialized problems.” It was by a process of abduction that evolutionary psychologists came to this conclusion. “No single discovery proves the claim…but many lines of evidence converge on it.” How the Mind Works is an enthralling introduction to those many lines of evidence.5
It has been said that research in psychology produces only two kinds of results. The first elicits the reaction “We already knew that, so why did you bother to do the research?”; the second, “That can’t be true, so you must be wrong.” The research that shows that the mind is a system of organs, not a single organ, is of the second kind. From inside it doesn’t feel like a system of organs. We see one visual world out there—“So various, so beautiful, so new,” as the poet Matthew Arnold described it.6 We decide to do something—say, pick up a cup—and we do it. There is only one “me” picking up this cup. There is only one cup and it is in only one location at a particular point in time. It has a certain color, a certain shape, a certain weight.
But the same visual system that provides us with a picture of the various and beautiful world also provides the most convincing evidence that the mind is made up of a collection of specialized mechanisms or modules. The working parts of the visual system happen to be fairly well localized in the physical brain, which means that a minor brain injury (caused, perhaps, by a stroke) might knock out one module and leave others unharmed. Pinker has described some of the weird effects produced by such injuries:
Selected aspects of visual experience are removed while others are left intact. Some patients see a complete world but pay attention to only half of it…. Other patients lose their sensation of color, but they do not see the world as an arty black-and-white movie. Surfaces look grimy and rat-colored to them…. Still others can see objects change their positions but cannot see them move…. Other patients cannot recognize the objects they see: their world is like handwriting they cannot decipher. They copy a bird faithfully but identify it as a tree stump…. Some patients can recognize inanimate objects but cannot recognize faces. The patient deduces that the visage in the mirror must be his, but does not viscerally recognize himself.7
There are separate modules in the visual system for seeing color, shape, and motion, but the conscious mind is not aware of them. We see the world as a seamless whole—so seamless that a person with a neurologically intact brain has trouble even imagining what it would be like to see an object change position without seeing it move.
The organs of the mind are hierarchically arranged. The visual system includes a mechanism that permits us to see the world in three dimensions. Several different lower-level modules feed into it. There is one that calculates the disparity between the views of the right eye and the left eye, one that uses shading (balls appear round, not flat, even in photographs), one that makes use of perspective (railroad tracks converge in the distance), and one that makes use of cues provided by the motion of your head or body. The result is a unified perception of three-dimensionality. If one of the modules can’t provide the necessary information, the others silently fill in for it. Multiple inputs yield a single output, and all this is going on behind your back, so to speak.
The combined information doesn’t even have to come from the same modality. A perceptual system called “proprioception” makes use of sensory inputs from your joints and muscles to keep you informed about the position of the parts of your body even when your eyes are closed. But when you can see your arms and legs, the system makes use of the more precise (and usually more reliable) information provided by vision to update the perception. My husband demonstrated that in his dissertation research. Charles Harris had his subjects wear goggles that bend light in such a way that objects appear off to one side of where they really are. A subject who tries to reach for something while wearing these goggles misses by several inches at first. But after a brief period of practice she adapts to the displaced vision and is able to reach accurately again. Charlie showed that what changes during practice is the subject’s perception of the position of her arm and hand: she comes to feel that her hand is where she sees it.8
It’s a proprioceptive illusion, and it persists even after the subject can no longer see her hand. If you close your eyes after practicing with my husband’s goggles, you will feel that your unseen hand is somewhere between where you saw it through the goggles and where it really is. Two modalities, vision and the joint and muscle sense, have provided conflicting information, and the proprioceptive system has taken the visual information into account in computing its output. The mental mechanism that performs these computations ordinarily goes unnoticed because, under ordinary conditions, it provides accurate results.
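The computation described here can be pictured as a weighted average of the two sensory estimates. The sketch below is only an illustration of that idea: the 0.7 weight and the 4-inch displacement are invented values standing in for vision’s usually greater reliability, not figures from the experiments described.

```python
def combine_cues(visual_pos, proprioceptive_pos, visual_weight=0.7):
    """Illustrative cue combination: the felt position of the hand is a
    weighted average of the visual estimate and the joint-and-muscle
    (proprioceptive) estimate. The 0.7 weight is an assumed value, not
    one measured in the displaced-vision experiments."""
    return visual_weight * visual_pos + (1 - visual_weight) * proprioceptive_pos

# Suppose the goggles displace the visual image 4 inches to one side
# of the hand's true position (0 inches):
felt_position = combine_cues(visual_pos=4.0, proprioceptive_pos=0.0)
# The felt position falls between where the hand is seen and where it
# really is -- the proprioceptive illusion described above.
```

On this picture, the system “takes the visual information into account” simply by giving it most of the weight; when the eyes are closed, the stored, recalibrated estimate persists.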
The concept of the modular mind is the outcome of a long series of discoveries. Much of the early evidence came from noticing what happens when something goes wrong with a particular mechanism, due either to neurological abnormalities or to experimental manipulations that feed abnormal inputs into a normal system (as in my husband’s experiments with displaced vision). Such observations lead to new questions. “Scientists do not conduct research to find things whose existence they don’t suspect,” the evolutionary psychologists John Tooby and Leda Cosmides pointed out.9 You may never suspect that your automobile has a device for determining the sequence in which the cylinders fire until something goes wrong with it.
Like the human mind—like almost any complicated machine—automobiles are modular. A brand new car will contain some modules “inherited” from earlier models and some that are recent innovations. Many of the mechanisms in the human mind are shared with other mammals or other primates, but some are recent innovations. Our species is a brand new model—less than 200,000 years old, an eyeblink in evolutionary time—and quite an innovative one. “Groundbreaking!” the critics might have raved, had there been any critics back then. “So various, so beautiful, so new!”
Sadly, the modules of the human mind, unlike those of the automobile, are not replaceable when something goes wrong with them—not yet replaceable, I should say. I can offer only my sympathy to the parents of children with the disorder called autism; autism is a tragedy for everyone involved. But it was partly through the study of autistic children that cognitive scientists and evolutionary psychologists like Simon Baron-Cohen came to appreciate some of the capabilities of neurologically normal children—capabilities that had gone unnoticed because they almost always work so reliably.
Autism has devastating effects on virtually every aspect of social behavior, including language; yet most nonsocial functions are spared. Baron-Cohen and his colleagues were the first to propose that the behavioral anomalies seen in children with autism are due to the malfunctioning of an organ of the brain called the “theory of mind” or “mindreading” mechanism. As I mentioned in chapter 1, Baron-Cohen believes that autistic people suffer from a disability he calls mindblindness. “Imagine what your world would be like,” he suggests, “if you were aware of physical things but were blind to the existence of mental things”—things like thoughts, beliefs, knowledge, desires, and intentions. Say, for example, you see John walk into the bedroom, walk around, and then walk out. No doubt you could come up with a number of plausible explanations for John’s behavior, such as: “Maybe John was looking for something he wanted to find, and he thought it was in the bedroom.”10 But if you were unable to guess about John’s thoughts and motives, his behavior would be baffling.
For someone who is blind to the contents of other people’s minds, there is no fundamental difference between a person and an object. Here is a classic description of an autistic child, written in 1943 by the child psychiatrist Leo Kanner:
On a crowded beach he would walk straight toward his goal irrespective of whether this involved walking over newspapers, hands, feet, or torsos, much to the discomfiture of their owners. His mother was careful to point out that he did not intentionally deviate from his course in order to walk on others, but neither did he make the slightest attempt to avoid them. It was as if he did not distinguish people from things, or at least did not concern himself about the distinction.11
Autistic children don’t try to draw another person’s attention to something interesting by pointing at it; they don’t check to see what another person is looking at. They don’t play games of pretense; they don’t recognize when other people are pretending. They don’t engage in deception. They don’t feel pride in meeting or exceeding other people’s expectations; they don’t administer praise. All these things require an awareness that other people have feelings, expectations, beliefs, and intentions.12
Baron-Cohen’s theory is that neurologically normal people have a specialized mental mechanism dedicated to reading other people’s minds, and that it evolved, through natural selection, to solve the particular adaptive problems of an intensely social lifestyle. Like Robin Dunbar, whose theory I described in chapter 1, Baron-Cohen believes that brains got bigger during hominid evolution mainly because of the need to process complex social information. Hominids who were able to understand the behavior of other hominids—to predict what they would do in a given situation, to outmaneuver them, perhaps to influence their behavior—were better able to function successfully in the network of social relationships that make up a primate group. They were better players in the game of “social chess,” a game whose rules are immeasurably more intricate than those of real chess.13 In real chess you know right from the start who your enemies are and what they intend to do, and nobody switches sides.
To show that mindreading depends on a specialized mechanism—that it isn’t simply the result of a larger brain and an increase in general smartness—Baron-Cohen compared autistic children to normally developing children, to children with various types of mental retardation, and to blind children, testing them to see what each could and could not do. The tests showed very specific deficits in those with autism. Autistic children can do things that mentally retarded children cannot, and vice versa.
Baron-Cohen also compared neurologically normal children of different ages. The mindreading mechanism isn’t ready to go at birth; it takes time, and no doubt some input from the environment, for it to develop. At the age of three, children can guess some of the contents of other people’s minds by their facial expressions and where their eyes are pointing, but it isn’t until they are about four that they can solve more difficult kinds of mindreading problems, such as the “Sally-Anne test,” shown in the figure. The child watches Sally (played by a real person or a puppet) put the ball in the basket and leave the room. After she’s gone, Anne takes the ball out of the basket and puts it in a box. Then Sally returns. The child is asked, “Where will Sally look for her ball?”
Normally developing three-year-olds flunk the Sally-Anne test and so do nearly all children with autism—even high-functioning ones, even in their teens.14 They fail to take into account what Sally has seen and what she does and doesn’t know; they say she’ll look in the box because that’s where they know the ball is. But nonautistic four-year-olds realize that Sally is unaware that the ball has been moved and correctly say that she’ll look for it in the basket. The four-year-olds understand, not only that people have beliefs, but also that the beliefs may be untrue. One of the consequences of this advance in cognitive sophistication is that four-year-olds are capable of deliberate deception. How quickly innocence is lost!
The case for mindreading ability in nonhuman animals is still up in the air; cognitive scientists have yet to come to an agreement on whether our closest primate relative, the chimpanzee, has it. The brighter members of that species occasionally do things that are hard to interpret as anything other than a purposeful attempt to deceive.15 On the other hand, chimpanzees flunk some pretty easy tests. For example, a tasty morsel is hidden in one of two opaque containers and then the animal is allowed to try to find it. A chimpanzee, it turns out, is not very good at using cues from a human to solve this problem. If the human looks at the correct container or even points at it, the chimpanzee might nonetheless choose the wrong one. Interestingly, the nonhuman animal that is best at using these cues to solve the two-containers problem is the dog. Wolves can’t do it but dogs can—even kennel-raised dogs that have had little human contact. Evidently a skill at interpreting human social cues had survival value for the ancestors of the dog.16
What someone is looking at is a very useful clue to what they are thinking. The human mindreading mechanism proposed by Baron-Cohen receives information from a lower-level module he calls the “eye-direction detector.” This device picks out eyes or eyelike stimuli from the visual array and ascertains where they are pointing—in particular, whether they are pointing at the self. If you see a pair of eyes and they are directed at you, your eye-direction detector signals (beep!) that someone or something is looking at you.
Perhaps only humans and dogs can tell which of two hiding places someone is looking at and use that information as a cue. But the ability to detect whether someone is looking at you has a long phylogenetic history—it goes way back. Animals that are preyed upon by other animals need to be aware of whether their predators have seen them. Being stared at is scary (hence the beep); it means you may have been marked as a target. A common response—observed in mammals, birds, and even reptiles—is to go still.
The sensitivity to eyes is present in humans from birth. I’ve already said that young infants look at faces; well, the part of the face they look at most is the eyes.17 Eye contact—looking at someone who is looking at you—is physiologically arousing for babies, and they become uncomfortable if it goes on too long. (The same is true of adults, with one sweet exception: people who are madly in love can gaze into each other’s eyes endlessly.) When I was introduced to my infant grandchildren, I knew enough to hold off gazing at them until they had had a chance to look me over without making eye contact.
It is characteristic of mental modules that they respond selectively to stimuli; that’s one of their jobs. The language acquisition device pays close attention to spoken words but ignores sneezes, coughs, and the cat’s meow. The eye-direction detector looks for eyes. In an earlier chapter I described an experiment in which rats made ill by X-rays learned to associate their illness with a taste but not with a light or a sound.18 The responsible mechanism, designed to keep rats (and people) from eating food that previously made them ill, responds selectively to tastes.
But selectivity on the input side is not necessarily matched by selectivity on the output side. Like a subroutine in a computer program, a low-level mental module may serve more than one purpose; its output may be sent to two or three different higher-up mechanisms. Remember the eye-direction detector; I will put it to use again later on.
The evidence that the human mind contains specialized mechanisms, and that these mechanisms are processors of information, opened the door to further advances in cognitive science. “In this new phase of the cognitive revolution,” John Tooby and Leda Cosmides predicted in 1995, “discovering and mapping the various functionally specialized modules of the human brain will be primary activities.”19
In the absence of clear-cut examples of what happens when something goes wrong with one of these mechanisms, how do we go about discovering and mapping them? In How the Mind Works, Steven Pinker suggested one method; he called it “reverse-engineering.”
In forward-engineering, one designs a machine to do something; in reverse-engineering, one figures out what a machine was designed to do…. We all engage in reverse-engineering when we face an interesting new gadget. In rummaging through an antique store, we may find a contraption that is inscrutable until we figure out what it was designed to do. When we realize that it is an olive-pitter, we suddenly understand that the metal ring is designed to hold the olive, and the lever lowers an X-shaped blade through one end, pushing the pit out the other end. The shapes and arrangements of the springs, hinges, blades, levers, and rings all make sense in a satisfying rush of insight.20
But reverse-engineering has a serious drawback: it can be employed, as Pinker admitted, “only when one has a hint of what the device was designed to accomplish.”21 Fortunately, there is an alternative way to find out about mental mechanisms; perhaps we should call it “reverse reverse-engineering.” One starts with the purpose—some job that the human mind might have been called upon to perform repeatedly during its evolutionary history—and then one figures out how a device to serve that purpose might be built. Then one can look for evidence that the human mind contains such a device. As Cosmides, Tooby, and their colleague Jerome Barkow put it,
If one knows what adaptive functions the human mind was designed to accomplish, one can make many educated guesses about what design features it should have, and can then design experiments to test for them. This can allow one to discover new, previously unsuspected, psychological mechanisms.22
What I’m after in this case requires an understanding of the adaptive functions of the immature human mind. What does the child’s mind need to accomplish while the child is growing up? To put it another way, what are the purposes of childhood? As Sherlock Holmes said to Dr. Watson, “I think that it is quite clear that there must be more than one of them.”