Book: Phantoms in the Brain

Notes


Chapter 1: The Phantom Within

1. I am of course talking about style here, not content. Modesty aside, I doubt whether any observation in this book is as important as one of Faraday’s discoveries, but I do think that all experimental scientists should strive to emulate his style.

2. Of course, one doesn’t want to make a fetish out of low-tech science. My point is simply that poverty and crude equipment can sometimes, paradoxically, actually serve as a catalyst rather than a handicap, for they force you to be inventive.

There is no denying, though, that innovative technology drives science just as surely as ideas do. The advent of new imaging techniques like PET, fMRI and MEG is likely to revolutionize brain science in the next millennium by allowing us to watch living brains in action, as people engage in various mental tasks. (See Posner and Raichle, 1997, and Phelps and Mazziotta, 1981.)

Unfortunately, there is currently a lot of gee whiz going on (almost a repeat of nineteenth-century phrenology). But if used intelligently, these toys can be immensely helpful. The best experiments are ones in which imaging is combined with clear, testable hypotheses of how the mind actually works. There are many instances where tracing the flow of events is vital for understanding what is happening in the brain and we will encounter some examples in this book.

3. This question can be answered more easily using insects, which have specific stages, each with a fixed life span. (For instance, the cicada species Magicicada septendecim spends seventeen years as an immature nymph and just a few weeks as an adult!) Using the metamorphosis hormone ecdysone, an antibody to it, or mutant insects that lack the gene for the hormone, one could in principle manipulate the duration of each stage separately to see how it contributes to the total life span. For example, would blocking ecdysone allow the caterpillar to enjoy an indefinitely long life, and, conversely, would changing it into a butterfly allow it to enjoy a longer life as a butterfly?

4. Long before the role of deoxyribonucleic acid (DNA) in heredity was explained by James Watson and Francis Crick, Frederick Griffith showed in 1928 that when heat-killed bacteria of one strain — strain S pneumococcus — were injected into mice along with live bacteria of another strain (strain R), the latter actually became “transformed” into strain S! It was clear that something present in the S bacteria was causing the R form to become S. Then, in the 1940s, Oswald Avery, Colin MacLeod and Maclyn McCarty showed that this transformation is caused by a chemical substance, DNA. The implication — that DNA contains the genetic code — should have sent shock waves through the world of biology but caused only a small stir.

5. Historically there have been many different ways of studying the brain. One method, popular with psychologists, is the so-called black box approach: You systematically vary the input to the system to see how the output changes and construct models of what is going on in between. If you think this sounds boring, it is. Nevertheless, the approach has had some spectacular successes, such as the discovery of trichromacy as the mechanism of color vision. Researchers found that all the colors that you can see could be made by simply combining different proportions of three primary ones — red, green and blue. From this they deduced that we have only three receptors in the eye, each of which responds maximally to one wavelength but also reacts to a lesser extent to other wavelengths.
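
As an aside, the logic of that deduction can be put in toy computational form. The sketch below (in Python) is purely illustrative: the three sensitivity curves are invented Gaussians rather than measured cone data, and the chosen peak wavelengths and widths are assumptions. The point is only that once a light has been reduced to three receptor responses, three adjustable primaries give you three knobs, which is just enough to reproduce any such triplet.

    # Toy sketch of trichromacy: the eye reduces any spectrum to three receptor
    # responses, so matching a test light only requires solving three equations
    # in three unknowns (the intensities of three primaries). The sensitivity
    # curves are invented Gaussians, not real cone data.
    import numpy as np

    wavelengths = np.arange(400, 701)                    # visible range, nm
    peaks = np.array([565.0, 535.0, 445.0])              # "red", "green", "blue"
    sensitivity = np.exp(-((wavelengths[None, :] - peaks[:, None]) ** 2) / (2 * 40.0 ** 2))

    def responses(spectrum):
        """Return the three receptor responses to a spectrum sampled over `wavelengths`."""
        return sensitivity @ spectrum

    # An arbitrary broadband test light.
    test_light = np.random.default_rng(0).uniform(0.0, 1.0, size=wavelengths.size)

    # Three monochromatic primaries, one at each receptor peak (for simplicity).
    primaries = np.stack([(wavelengths == p).astype(float) for p in (565, 535, 445)])

    # Solve for the primary intensities that reproduce the test light's triplet.
    A = np.stack([responses(p) for p in primaries], axis=1)    # 3 x 3 matrix
    weights = np.linalg.solve(A, responses(test_light))

    print("test light triplet:", responses(test_light))
    print("matched mixture   :", responses(primaries.T @ weights))
    # The two triplets agree, so the two physically different lights would look
    # identical ("metamers"). Real color matching sometimes needs a "negative"
    # weight, i.e., adding a primary to the test side; the algebra is the same.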

One problem with the black box approach is that, sooner or later, one ends up with multiple competing models and the only way to discover which one is correct is to open up the black box — that is, do physiological experiments on humans and animals. For example, I doubt very much whether anyone could have figured out how the digestive system works by simply looking at its output. Using this strategy alone, no one could have deduced the existence of mastication, peristalsis, saliva, gastric juices, pancreatic enzymes or bile nor realized that the liver alone has over a dozen functions to help assist the digestive process. Yet a vast majority of psychologists — called functionalists — cling to the view that we can understand mental processes from a strictly computational, behaviorist or “reverse engineering perspective” — without bothering with the messy stuff in the head.

When dealing with biological systems, understanding structure is crucial to understanding function — a view that is completely antithetical to the functionalist or black box approach to brain function. For example, consider how our understanding of the anatomy of the DNA molecule — its double-helical structure — completely transformed our understanding of heredity and genetics, which until then had remained a black box subject. Indeed, once the double helix was discovered, it became obvious that the structural logic of this DNA molecule dictates the functional logic of heredity.

6. For over half a century, modern neuroscience has been on a reductionist path, breaking things down into ever smaller parts with the hope that understanding all the little pieces will eventually explain the whole. Unfortunately, many people think that because reductionism is so often useful in solving problems, it is therefore also sufficient for solving them, and generations of neuroscientists have been raised on this dogma. This misapplication of reductionism leads to the perverse and tenacious belief that somehow reductionism itself will tell us how the brain works, when what is really needed are attempts to bridge different levels of discourse. The Cambridge physiologist Horace Barlow recently pointed out at a scientific meeting that we have spent five decades studying the cerebral cortex in excruciating detail, but we still don’t have the foggiest idea of how it works or what it does. He shocked the audience by suggesting that we are all like asexual Martians visiting earth who spend fifty years examining the detailed cellular mechanisms and biochemistry of the testicles without knowing anything at all about sex.

7. The doctrine of modularity was carried to its most ludicrous extremes by Franz Gall, an eighteenth-century psychologist who founded the fashionable pseudoscience of phrenology. One day while giving a lecture, Gall noted that one particular student, who was very bright, had prominent eyeballs. Gall started thinking, Why does he have prominent eyeballs? Maybe the frontal lobes have something to do with intelligence. Maybe they are especially large in this boy, pushing his eyeballs forward. On the basis of this tenuous reasoning, Gall embarked on a series of experiments that involved measuring the bumps and depressions on people’s skulls. Finding differences, Gall began to correlate the shapes with various mental functions. Phrenologists soon “discovered” bumps for such esoteric traits as veneration, cautiousness, sublimity, acquisitiveness and secretiveness. In an antique shop in Boston, a colleague of mine recently saw a phrenology bust that depicted a bump for the “Republican spirit”! Phrenology was still popular in the late nineteenth and early twentieth centuries.

Phrenologists were also interested in how brain size is related to mental capacity, asserting that heavier brains are more intelligent than lighter ones. They claimed that, on average, the brains of black people are smaller than white people’s and that women’s brains are smaller than men’s and argued that the difference “explained” differences in average intelligence between these groups. The crowning irony is that when Gall died, people actually weighed his brain and found that it was a few grams lighter than the average female brain. (For an eloquent description of the pitfalls of phrenology, see Stephen Jay Gould’s The Mismeasure of Man.)

8. These two examples were great favorites of the Harvard neurologist Norman Geschwind when he gave lectures to lay audiences.

9. Hints about the role of medial temporal lobe structures, including the hippocampus, in memory formation go all the way back to the Russian psychiatrist Sergei Korsakov. Patient H.M. and other amnesics like him have been studied elegantly by Brenda Milner, Larry Weiskrantz, Elizabeth Warrington and Larry Squire.

The actual cellular changes that strengthen connections between neurons have been explored by several researchers, most notably Eric Kandel, Dan Alkon, Gary Lynch and Terry Sejnowski.

10. Our ability to engage in numerical computations (add, subtract, multiply and divide) seems so effortless that it’s easy to jump to the conclusion that it is “hardwired”. But, in fact, it became effortless only after the introduction of two basic concepts — place value and zero — in India during the third century A.D. These two notions and the idea of negative numbers and of decimals (also introduced in India) laid the foundation of modern mathematics.

It has even been claimed that the brain contains a “number line”, a sort of graphical, scalar representation of numbers with each point in the graph being a cluster of neurons signaling a particular numerical value. The abstract mathematical concept of a number line goes all the way back to the Persian poet and mathematician Omar Khayyam, in the eleventh century, but is there any evidence that such a line exists in the brain? When normal people are asked which of two numbers is larger, it takes them longer to make the decision if the numbers are closer together than if they are farther apart. In Bill, the number line seems unaffected because he is okay at making crude quantitative estimates — which number is bigger or smaller, or why it seems inappropriate to say the dinosaur bones are sixty million and three years old. But there is a separate mechanism for numerical computation, for juggling numbers about in your head, and for this you need the angular gyrus in the left hemisphere. For a very readable account of dyscalculias, see Dehaene, 1997.
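
To make the “distance effect” concrete, here is a toy simulation. It is a sketch under an assumed model — a noisy evidence accumulator whose drift rate grows with the numerical distance — not a description of any mechanism proposed in this book; the threshold, noise and rate parameters are arbitrary.

    # Toy model of the distance effect: evidence for "which number is larger"
    # accumulates noisily at a rate proportional to the numerical distance, so
    # nearby numbers take longer to compare. Parameters are arbitrary.
    import random

    def comparison_steps(a, b, threshold=10.0, noise=1.0, rate=0.2):
        evidence, steps = 0.0, 0
        drift = rate * abs(a - b)            # farther apart -> faster accumulation
        while abs(evidence) < threshold:
            evidence += drift + random.gauss(0.0, noise)
            steps += 1
        return steps                         # "reaction time" in arbitrary steps

    def mean_steps(a, b, trials=2000):
        return sum(comparison_steps(a, b) for _ in range(trials)) / trials

    print("close pair (4 vs 5):", mean_steps(4, 5))
    print("far pair   (1 vs 9):", mean_steps(1, 9))
    # The close pair yields the longer mean "reaction time", mirroring the
    # behavioral finding taken as evidence for an analog number line.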

My colleague at UCSD, Dr. Tim Rickard, has shown, using functional magnetic resonance imaging (fMRI), that the “numerical calculation area” actually lies not entirely in the classical left angular gyrus itself but slightly in front of it. This doesn’t affect my main argument, and it’s only a matter of time before someone also demonstrates the “number line” using modern imaging techniques.

Chapter 2: “Knowing Where to Scratch”

1. Throughout this book I use fictitious names for patients. The place, time and circumstances have also been altered substantially, but the clinical details are presented as accurately as possible. For more detailed clinical information, the reader should consult the original scientific articles.

In one or two instances when I describe a classic syndrome (such as the neglect syndrome in Chapter 6) I use several patients to create composites of the kind used in neurology textbooks in order to emphasize salient aspects of the disorder, even though no single patient may display all the symptoms and signs described.

2. Silas Weir Mitchell, 1872; Sunderland, 1972.

3. Aristotle was an astute observer of natural phenomena, but it never occurred to him that you could do experiments; that you could generate conjectures and proceed to test them systematically. For instance, he believed that women had fewer teeth than men; all he needed to do to verify or refute the theory was to ask a number of men and women to open their mouths so he could count their teeth. Modern experimental science really began with Galileo. It astonishes me when I sometimes hear developmental psychologists assert that babies are “born scientists”, because it is perfectly clear to me that even adults are not. If the experimental method is completely natural to the human mind — as they assert — why did we have to wait so many thousands of years for Galileo and the birth of the experimental method? Everyone believed that big, heavy objects fall much faster than light ones, and all it took was a five-minute experiment to disprove it. (In fact, the experimental method is so alien to the human mind that many of Galileo’s colleagues dismissed his experiments on falling bodies even after seeing them with their own eyes!) And even to this day, three hundred years after the scientific revolution began, people have great difficulty in understanding the need for a “control experiment” or “double-blind” studies. (A common fallacy is, I got better after I took pill A, therefore I got better because I took pill A.)

4. Penfield and Rasmussen, 1950.

The reason for this peculiar arrangement is unclear and probably lost in our phylogenetic past. Martha Farah of the University of Pennsylvania has proposed a hypothesis that is consistent with my view (and Merzenich’s) that brain maps are highly malleable. She points out that in the curled-up fetus, the arms are usually bent at the elbow with the hands touching the cheek and the legs are bent with the feet touching the genitals. The repeated coactivation of these body parts and the synchronous firing of corresponding neurons in the fetus may have resulted in their being laid down close to each other in the brain. Her idea is ingenious, but it doesn’t explain why in other brain areas (S2 in the cortex) the foot (not just the hand) lies next to the face as well. My own bias is to think that even though the maps are modifiable by experience, the basic blueprint for them is genetic.

5. The first clear experimental demonstration of “plasticity” in the central nervous system was provided by Patrick Wall of University College London, 1977, and by Mike Merzenich, a distinguished neuroscientist at the University of California, San Francisco, 1984.

The demonstration that sensory input from the hand can activate the “face area” of the cortex in adult monkeys comes from Tim Pons and his colleagues, 1991.

6. When people are pitched from a motorcycle at high speed, one arm is often partially wrenched from the shoulder, producing a kind of naturally occurring rhizotomy. As the arm is pulled, both the sensory (dorsal) and motor (ventral) nerve roots going from the arm into the spine are yanked off the spinal cord so that the arm becomes completely paralyzed and devoid of sensation even though it remains attached to the body. The question is, How much function — if any — can people recover in the arm during rehabilitation? To explore this, physiologists cut the sensory nerves going from the arm into the spinal cord in a group of monkeys. Their goal was to try to reeducate the monkeys to use the arm, and a great deal of valuable information was obtained from studying these animals (Taub et al., 1993). Eleven years after this study was done, these monkeys became a cause célèbre when animal rights activists complained that the experiment was needlessly cruel. The so-called Silver Spring monkeys were soon sent to the equivalent of an old age home for primates and, because they were said to be suffering, scheduled to be killed.

Dr. Pons and his collaborators agreed to the euthanasia but decided first to record from their brains to see whether anything had changed. The monkeys were anesthetized before the recordings were made, so that they would not feel any pain during the procedure.

7. Ramachandran et al., 1992a, b; 1993; 1994; 1996.

Ramachandran, Hirstein and Rogers-Ramachandran, 1998.

8. It had been noticed by many previous researchers (Weir Mitchell, 1871) that stimulating certain trigger points on the stump often elicits sensations from missing fingers. William James (1887) once wrote, “A breeze on the stump is felt as a breeze on the phantom” (see also an important monograph by Cronholm, 1951). Unfortunately, neither Penfield’s map nor the results of Pons and his collaborators were available at the time, and these early observations were therefore open to several interpretations. For example, the severed nerves in the stump would be expected to reinnervate the stump; if they did, that might explain why sensations from this region are referred to the fingers. Even when points remote from the stump elicited referred sensations, the effect was often attributed to diffuse connections in a “neuromatrix” (Melzack, 1990). What was novel about our observations is that we discovered an actual topographically organized map on the face and also found that relatively complex sensations such as “trickling”, “metal” and “rubbing” (as well as warmth, cold and vibration) were referred from the face to the phantom hand in a modality-specific manner. Obviously, this cannot be attributed to accidental stimulation of nerve endings on the stump or to “diffuse” connections. Our observations imply instead that highly precise and organized new connections can be formed in the adult brain with extreme rapidity, at least in some patients.

Furthermore, we have tried to relate our findings in a systematic way to physiological results, especially the “remapping” experiments of Pons et al., 1991. We have suggested, for example, that the reason we often see two clusters of points — one on the lower face region and a second set near or around the amputation line — is that the map of the hand on the sensory homunculus in the cortex and the thalamus is flanked on one side by the face and the other side by the upper arm, shoulder and axilla. If the sensory input from the face and from the upper arm above the stump were to “invade” the cortical territory of the hand, one would expect precisely this sort of clustering of points. This principle allows one to dissociate proximity of points on the body surface from proximity of points in brain maps, an idea that we refer to as the remapping hypothesis of referred sensations. If the hypothesis is correct, then one would also expect to see referral from the genitals to the foot after leg amputation, since these two body parts are adjacent on the Penfield map. (See Ramachandran, 1993b; Aglioti et al., 1994.) But one would never see referral from the face to a phantom foot or from the genitals to a phantom arm. Also see note 10.

9. Recently David Borsook, Hans Breiter and their colleagues at the Massachusetts General Hospital (MGH) have shown that in some patients sensations such as touch, paintbrush, rubbing and pinpricks are referred (in a modality-specific manner) from the face to the phantom just a few hours after amputation (Borsook et al., 1998). This makes it clear that disinhibition or “masking” of preexisting connections must at least contribute to the effect, although some sprouting of new connections probably occurs as well.

10. If the remapping hypothesis is correct, then cutting the trigeminal nerve (supplying half the face) should result in the exact opposite of what we noticed in Tom. In such a patient, touching the hand should cause sensations to emerge in the face (Ramachandran, 1994). Stephanie Clark and her colleagues recently tested this prediction in an elegant and meticulous series of experiments. Their patient had the trigeminal nerve ganglion cut because a tumor had to be removed in its vicinity, and two weeks later they found that when the hand was touched, the patient felt the sensations emerging from the face — even though the nerves from the face were cut. In her brain, the sensory input from the skin of the hand had invaded territory vacated by the sensory input from her face.

Intriguingly, in this patient the sensations were felt only on the face — not on the hand — when the hand was touched. One possibility is that during the initial remapping there is a sort of “overshoot” — the new sensory input from hand skin to the face area of the cortex is actually stronger than the original connections and as a result the sensations are felt predominantly on the face, masking the weaker hand sensations.

11. Cacace et al., 1994.

12. Referred sensations provide an opportunity for studying changing cortical maps in the adult human brain, but the question remains, What is the function of remapping? Is it an epiphenomenon — residual plasticity left over from infancy — or does it continue to have a function in the adult brain? For example, would the larger cortical area devoted to the face after arm amputation lead to improved sensory discrimination — measured by two-point discrimination — or tactile hyperacuity on the face? Would such improvement, if it occurred at all, be seen only after the abnormal referred sensations have disappeared, or would it be seen immediately? Such experiments would settle, once and for all, the question of whether or not remapping is actually useful for the organism.

Chapter 3: Chasing the Phantom

1. Marianne Simmel (1962) originally claimed that very young children do not experience phantoms after amputation and that children born with limbs missing also do not experience phantoms, but this idea has been challenged by others. (A lovely series of studies was conducted recently by Ron Melzack and his colleagues at McGill University; Melzack et al., 1997.)

2. The importance of frontal brain structures in planning and executing movements has been discussed in fascinating detail by Fuster, 1980; G. Goldberg, 1987; Pribram et al., 1967; Shallice, 1988; E. Goldberg et al., 1987; Benson, 1997; and Goldman-Rakic, 1987.

3. Next I asked Philip to move the index finger and thumb of both hands while simultaneously looking in the mirror, but this time the phantom thumb and finger remained paralyzed; they were not revived. This is an important observation, for it rules out the possibility that the previous result was simply a confabulation in response to the peculiar circumstances surrounding our experiment. If it were confabulatory, why was he able to move his whole hand and elbow but not his individual fingers?

Our experiments on the use of mirrors to revive movements in phantom limbs were originally reported in Nature and in Proceedings of the Royal Society of London B (Ramachandran, Rogers-Ramachandran and Cobb, 1995; Ramachandran and Rogers-Ramachandran, 1996a and b).

4. The notion of learned paralysis is provocative and may have implications beyond treating paralyzed phantom limbs.

As an example, take writer’s cramp (focal dystonia). The patient can wiggle his fingers, scratch his nose or tie his necktie with no problem, but all of a sudden his hand is incapable of writing. Theories about what causes the condition range all the way from muscle cramps to a form of “hysterical paralysis”. But could it be another example of learned paralysis? If so, would as simple a trick as using a mirror help these patients as well?

The same argument might also apply to other syndromes that straddle the boundary between overt paralysis and a reluctance to move a limb — a sort of mental block. Ideomotor apraxia — the inability to perform skilled movements on command (the patient can write a letter independently but not pretend to wave good-bye or to stir a cup of tea when asked to do so) — is certainly not “learned” in the sense that a paralyzed phantom might be learned. But could it also be based on some sort of temporary neural inhibition or block? And if so, can visual feedback help overcome the block?

Finally, there is Parkinson’s disease, which causes rigidity, tremor and poverty of movements (akinesia) involving the entire body including the face (a masklike expression). Early in this disease, the rigidity and tremor affect only one hand, so, in principle, one could try the mirror technique, using the reflection of the good hand for feedback. Since it is known that visual feedback can indeed influence Parkinson’s disease (for example, the patient ordinarily can’t walk, but if the floor has alternate black and white tiles, he can), perhaps the mirror technique will help them as well.

5. Another fascinating observation on Mary deserves comment. In the previous ten years she had never felt a phantom elbow or wrist; her phantom fingers were dangling from the stump above the elbow, but upon looking into the mirror, she gasped, exclaiming that she could now actually feel — not merely see — her long-lost elbow and wrist. This raises the fascinating possibility that even for an arm lost a long time ago, a dormant ghost still survives somewhere in the brain and can be resurrected instantly by the visual input. If so, this technique may have application for amputees contemplating the use of a prosthetic arm or leg, since they often feel the need to animate the prosthesis with a phantom and complain that the prosthesis feels “unnatural” once the phantom is gone.

Perhaps transsexual women contemplating becoming men can try out a dress rehearsal and revive a dormant brain image of a penis (assuming something like this even exists in a female brain) using a trick similar to the mirror device used on Mary.

6. Forked phantoms were described by Kallio, 1950. Multiple phantoms in a child were described by La Croix et al., 1992.

7. These are highly speculative explanations, although at least some of them can be tested with the help of imaging procedures such as MEG and functional magnetic resonance imaging (fMRI). These devices allow us to see different parts of the living brain light up as a patient performs different tasks. (In the child with three separate phantom feet, would there be three separate representations in her brain that could be visualized using these techniques?)

8. Our phantom nose effect (Ramachandran and Hirstein, 1997) is quite similar to one reported by Lackner (1988) except that the underlying principle is different. In Lackner’s experiment, the subject sits blindfolded at a table, with his arm flexed at the elbow, holding the tip of his own nose. If the experimenter now applies a vibrator to the tendon of the biceps, the subject feels not only that his arm is extended — because of spurious signals from muscle stretch receptors — but also that his nose has actually lengthened. Lackner invokes Helmholtzian “unconscious inference” as an explanation for this effect (I am holding my nose; my arm is extended; therefore, my nose must be long). The illusion we have described, on the other hand, does not require a vibrator and seems to depend entirely on a Bayesian principle — the sheer statistical improbability of two tactile sequences being identical. (Indeed, our illusion cannot be produced if the subject simply holds the accomplice’s nose.) Not all subjects experience this effect, but that it happens at all — that a lifetime’s evidence concerning your nose can be negated by just a few seconds of intermittent tactile input — is astonishing.
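
A back-of-the-envelope version of that improbability argument: suppose, purely for illustration, that each inter-tap interval can fall into one of ten distinguishable duration bins. The chance that two independent random sequences would agree then shrinks geometrically with every tap.

    # Rough calculation behind the "Bayesian" reading of the illusion: how likely
    # is it that two independent tap sequences would match by sheer chance?
    # The number of distinguishable interval bins (10) is an arbitrary assumption.
    def chance_of_coincidence(taps, bins=10):
        return (1.0 / bins) ** taps

    for taps in (2, 5, 10):
        print(taps, "matching taps: p =", chance_of_coincidence(taps))
    # After a handful of taps, chance agreement is so improbable that the more
    # economical hypothesis is a common cause: the tapped object is my own nose.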

Our GSR experiments are mentioned in Ramachandran and Hirstein, 1997, and Ramachandran, Hirstein and Rogers-Ramachandran, 1998.

9. Botvinick and Cohen, 1998.

Chapter 4: The Zombie in the Brain

1. Milner and Goodale, 1995.

2. For lively introductions to the study of vision, see Gregory, 1966; Hochberg, 1964; Crick, 1993; Marr, 1981; and Rock, 1985.

3. Another line of evidence is the exact converse: Your perception can remain constant even though the image changes. For example, every time you swivel your eyeballs while observing everyday scenes, the image on each retina races across your photoreceptors at tremendous speed — much like the blur you see when you pan your video camera across the room. But when you move your eyes around, you don’t see objects darting all over the place or the world zooming past you at warp speed. The world seems perfectly stable — it doesn’t seem to move around even though the image is moving on your retina. The reason is that your brain’s visual centers have been “tipped off” in advance by motor centers controlling your eye movements. Each time a motor area sends a command to your eyeball muscles, causing them to move, it also sends a command to visual centers saying, “Ignore this motion; it’s not real.” Of course, all this takes place without conscious thought. The computation is built into the visual modules of your brain to prevent you from being distracted by spurious motion signals each time you glance around the room.
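
A minimal sketch of that “tip-off”, often called an efference copy: treat perceived world motion as the motion on the retina minus the motion predicted from the eye-movement command. The numbers and sign conventions below are arbitrary assumptions; the point is only that the two signals cancel for self-generated eye movements.

    # Perceived world motion = retinal image motion minus the motion predicted
    # from the eye-movement command. Units (degrees) and signs are arbitrary.
    def perceived_motion(retinal_slip, eye_command):
        predicted_slip = -eye_command        # moving the eyes right slides the image left
        return retinal_slip - predicted_slip

    # Sweep the eyes 20 degrees right across a stationary scene: the signals cancel.
    print(perceived_motion(retinal_slip=-20, eye_command=20))   # 0 -> world looks stable

    # The scene itself moves while the eyes stay still: motion is perceived.
    print(perceived_motion(retinal_slip=-20, eye_command=0))    # -20 -> motion is seen

    # Classic demonstration: nudge your eyeball gently with a fingertip. The image
    # moves but no command was issued, so the world appears to jump.
    print(perceived_motion(retinal_slip=-5, eye_command=0))     # -5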

4. Ramachandran, 1988a and b, 1989a and b; Kleffner and Ramachandran, 1992. Ask a friend to hold the page (with the pictures of shaded disks) upright while you bend down and look at the page with your head hanging upside down between your legs. The page will then be upside down with respect to your retina. You will find once again that the eggs and cavities have switched places (Ramachandran, 1988a). This is quite astonishing because it implies that in judging shape from shading, the brain now assumes that the sun is shining from below: That is, your brain is making the assumption that the sun is stuck to your head when you rotate your head! Even though the world still looks upright because of correction from the balance organ in the ear, your visual system is unable to use this knowledge to interpret shape from shading (Ramachandran, 1988b).

Why does the visual system incorporate such a foolish assumption? Why not correct for head tilt when interpreting the shaded images? The answer is that as we walk around the world, most of the time we keep our heads upright, not tilted or upside down. So the visual system can take advantage of this to avoid the additional computational burden of sending the vestibular information all the way back to the shape from shading module. You can get away with this “shortcut” because, statistically speaking, your head is usually upright. Evolution doesn’t strive for perfection; your genes will get passed on to your offspring so long as you survive long enough to leave babies.
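
The shading rule itself is simple enough to caricature in a few lines of code. This is only a cartoon of the computation, with brightness values chosen arbitrarily, but it captures why flipping the image (or your head) flips the interpretation.

    # Light-from-above rule for shaded disks: compare brightness at the top and
    # bottom of the disk in retinal coordinates. Brightness values are arbitrary.
    def interpret_disk(top_brightness, bottom_brightness):
        if top_brightness > bottom_brightness:
            return "convex (a bump lit from above)"
        if top_brightness < bottom_brightness:
            return "concave (a cavity lit from above)"
        return "ambiguous"

    disk = {"top": 0.9, "bottom": 0.3}                  # brighter on top
    print(interpret_disk(disk["top"], disk["bottom"]))   # -> convex

    # Viewing the page upside down swaps which edge of the disk lands on the top
    # of the retina, so the very same disk flips its interpretation:
    print(interpret_disk(disk["bottom"], disk["top"]))   # -> concave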

5. The architecture of this brain region has been studied in fascinating detail by David Hubel and Torsten Wiesel at Harvard University; their research culminated in a Nobel Prize. During the two decades 1960-1980 more was learned about the visual pathways as a result of their work than during the preceding two hundred years, and they are rightly regarded as the founding fathers of modern visual science.

6. The evidence that these extrastriate cortical areas are exquisitely specialized for different functions comes mainly from six physiologists — Semir Zeki, John Allman, John Kaas, David Van Essen, Margaret Livingstone and David Hubel. These researchers first mapped out these cortical areas systematically in monkeys and recorded from individual nerve cells; it quickly became clear that the cells had very different properties. For example, any given cell in the area called MT, the middle temporal area, will respond best to targets in the visual field moving in one particular direction but not other directions, but the cell isn’t particularly fussy about what color or shape the target is. Conversely, cells in an area called V4 (in the temporal lobes) are very sensitive to color but don’t care much about direction of motion. These physiological experiments strongly hint that these two areas are specialized for extracting different aspects of visual information — motion and color. But overall, the physiological evidence is still a bit messy, and the most compelling evidence for this division of labor comes, once again, from patients in whom one of these two areas has been selectively damaged.

A description of the celebrated case of the motion blind patient can be found in Zihl, von Cramon and Mai, 1983.

7. For a description of the original blindsight syndrome, see Weiskrantz, 1986. For an up-to-date discussion of the controversies surrounding blindsight see Weiskrantz, 1997.

8. For a very stimulating account of many aspects of cognitive science, see Dennett, 1991. The book also has a brief account of “filling in”.

9. See especially the elegant work of William Newsome, Nikos Logothetis, John Maunsell, Ted DeYoe, and Margaret Livingstone and David Hubel.

10. Aglioti, DeSouza and Goodale, 1995.

11. Here and elsewhere, when I say that the self is an “illusion”, I simply mean that there is probably no single entity corresponding to it in the brain. But in truth we know so little about the brain that it is best to keep an open mind. I see at least two possibilities (see Chapter 12). First, when we achieve a more mature understanding of the different aspects of our mental life and the neural processes that mediate them, the word “self” may disappear from our vocabulary. (For instance, now that we understand DNA, the Krebs cycle and other biochemical mechanisms that characterize living things, people no longer worry about the question “What is life?”) Second, the self may indeed be a useful biological construct based on specific brain mechanisms — a sort of organizing principle that allows us to function more effectively by imposing coherence, continuity and stability on the personality. Indeed many authors, including Oliver Sacks, have spoken eloquently of the remarkable endurance of self — whether in health or disease — amid the vicissitudes of life.

Chapter 5: The Secret Life of James Thurber

1. For an excellent biography of Thurber, see Kinney, 1995. This book also has a bibliography of Thurber’s works.

2. Bonnet, 1760.

3. My blind-spot experiments were originally described in Scientific American (1992). For the claim that genuine completion does not occur in scotomas, see Sergent, 1988. For the demonstration that it does occur, see Ramachandran, 1993b, and Ramachandran and Gregory, 1991.

4. The famous Victorian physicist Sir David Brewster was so impressed by this filling-in phenomenon that he concluded, as Lord Nelson did for phantom limbs, that it was proof for the existence of God. In 1832 he wrote, “We should expect, whether we use one eye or both eyes, to see a black or dark spot on every landscape within fifteen degrees of the point which most particularly attracts our notice. The Divine Artificer, however, has not left his work thus imperfect . . . the spot, in place of being black, has always the same color as the ground.” Curiously, Sir David was apparently not troubled by the question of why the Divine Artificer would have created an imperfect eye to begin with.

5. In modern terminology, “filling in” is a convenient phrase that some scientists use when referring to this completion phenomenon — the tendency to see the same color in the blind region as in the surround or background. But we must be careful not to fall into the trap of assuming that the brain recreates a pixel-by-pixel rendering of the visual image in this region, for that would defeat the whole purpose of vision. There is, after all, no homunculus — that little man inside the brain — watching an internal mental screen who would benefit from such filling in. (For instance, you don’t say the brain “fills in” the tiny spaces between retinal receptors.) I like to use the term simply as a shorthand to indicate that the person quite literally sees something in a region of visual space from which no light or other information is reaching the eye. The advantage of this “theory-neutral” definition is that it keeps open a door to doing experiments, allowing us to search for neural mechanisms of vision and perception.

6. Jerome Lettvin of Rutgers University (1976) performed this clever experiment. The explanation of this effect — as having something to do with stereoscopic vision — is my own (see note 7).

I have also seen the same effect in patients with scotomas of cortical origin: the lining up of horizontally misaligned vertical bars (Ramachandran, 1993b).

7. Since you look at the world from two slightly different vantage points corresponding to the two eyes, there are differences between the retinal images of the two eyes that are proportional to the relative distances of objects in the world. The brain therefore compares the two images, measures the horizontal separations and “fuses” the images so that you see a single unified picture of the world — not two. In other words, you already have in place in your visual pathway a neural mechanism for “lining up” horizontally separated vertical edges. But since your eyes are separated horizontally and not vertically, you have no such mechanism for lining up horizontal edges that are vertically misaligned. In my view you are tapping the very same mechanism when you are trying to deal with edges that are “misaligned” across a blind spot. This would explain why the vertical lines get “fused” into a continuous line, whereas your visual system fails to cope with the horizontal lines. The fact that you are using only one eye in the blind spot experiment doesn’t negate this argument because you may very well be unconsciously deploying the same neural circuits even when you close the other eye.
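
For readers who want the geometry made explicit, the sketch below works through the standard small-angle approximation: the disparity between two points is roughly the interocular separation times their depth difference, divided by the product of their distances. The viewing distances and the 6.5 cm eye separation are merely illustrative numbers.

    # Approximate binocular disparity between two points at different depths:
    # disparity (radians) ~= interocular * depth_difference / (d_near * d_far).
    # Distances in meters; the 0.065 m eye separation is an assumed typical value.
    import math

    def disparity_deg(d_near, d_far, interocular=0.065):
        disparity_rad = interocular * (d_far - d_near) / (d_near * d_far)
        return math.degrees(disparity_rad)

    print(disparity_deg(1.00, 1.05))   # ~0.18 deg for a 5 cm depth step at about 1 m
    print(disparity_deg(5.00, 5.05))   # ~0.007 deg for the same step at 5 m: stereo fades with distance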

8. These exercises are amusing for those of us with normal vision and natural blind spots, but what would life be like with a damaged retina, so that you developed an artificial blind spot? Would the brain compensate by “filling in” the blind regions of the visual field? Or might there be remapping; do adjacent parts of the visual field now map onto the region that’s no longer getting any input?

What would be the consequence of the remapping? Would the patient experience double vision? Imagine I hold up a pencil next to his scotoma. He’s looking straight ahead and obviously sees the original pencil, but since it now also stimulates the patch of cortex corresponding to the scotoma, he should see a second, “ghost” image of the pencil in his scotoma. He should therefore see two pencils instead of one, just as Tom felt sensations on both his face and hand.

To explore this possibility, we tested several patients who had a hole in one retina, but not one person saw double. My immediate conclusion was, Oh, well, who knows, maybe vision is different. And then suddenly I realized that although one eye has a scotoma, the patient has two eyes, and the corresponding patch in the other eye is still sending information to the primary visual cortex. The cells are stimulated by the good eye, so perhaps remapping does not occur. To get the double vision effect, you’d have to remove the good eye.

A few months later I saw a patient who had a scotoma in the lower left quadrant of her left eye and had completely lost her right eye. When I presented spots of light in the normal visual field, she did not see a doubling, but to my amazement if I flickered the spot at about ten hertz (ten cycles per second), she saw two spots — one where it actually was and a ghostlike double inside her scotoma.

I can’t yet explain why Joan only sees double when the stimulus is flickering. She often has the experience while driving, amid sunlight, foliage and constant movement. It may be that a flickering stimulus preferentially activates the magnocellular pathway — a visual system involved in motion perception — and that this pathway is more prone to remapping than others.

9. Ramachandran, 1992.

10. Sergent, 1988.

11. I subsequently verified that this happened every time I tested Josh and also observed the same phenomenon in one of Dr. Hanna Damasio’s patients (Ramachandran, 1993b).

12. An early draft of this chapter, based on my clinical notes, was written in collaboration with Christopher Wills, but the text has been completely rewritten for this work. I have, however, retained one or two of his more colorful metaphors, including this one about the fun house.

13. Kosslyn, 1996; Farah, 1991.

14. Evidence for this comes from the fact that even though most Charles Bonnet patients don’t remember having seen the same images before (perhaps they are from the distant past), in some patients the images are either objects that they saw just a few seconds or minutes ago or things that might be logically associated with objects near the scotoma. For example, Larry often saw multiple copies of his own shoe (one that he had seen a few seconds earlier) and had difficulty reaching out for the “real” one. Others have told me that when they’re driving a car, a vivid scene that they had passed several minutes ago suddenly reemerges in the scotoma.

Thus the Charles Bonnet syndrome blends into another well-known visual syndrome, called palinopsia (which neurologists often encounter in patients whose visual pathways have been damaged by head injury or brain disease), in which patients report that when an object moves, it leaves behind multiple copies of itself. Although ordinarily conceived of as a motion detection problem, palinopsia may have more in common with Charles Bonnet syndrome than ophthalmologists realize. The deeper implication of both syndromes is that we may all be subconsciously rehearsing recently encountered visual images for minutes or even hours after they have been seen, and that this rerunning emerges to the surface, becoming more obviously manifest, when there is no real input coming from the retina (as can happen after injury to the visual pathway).

Humphrey (1992) has also suggested that deafferentation is somehow critical for visual hallucinations and that such hallucinations may be based on back-projections. Any claim to novelty on my part derives from the observation that in both my patients the hallucination was confined entirely to the inside of the scotoma, never spilling across the margins. It was this observation that gave me the clue that the phenomenon can only be explained by back-projections (since back-projections are topographically organized) and that no other hypothesis is viable.

15. If this theory is correct, why don’t we all hallucinate when we close our eyes or walk into a darkroom? After all, there’s no visual input coming in. For one thing, when people are completely deprived of sensory input (as when they float in a sensory isolation tank), they do indeed hallucinate. The more important reason, however, is that even when you close your eyes, the neurons in your retina and the early stages of your visual pathways are continuously sending baseline activity (we call it spontaneous activity) to higher centers, and this might suffice to veto the top-down induced activity. But when the pathways (retina, primary visual cortex and optic nerve) are damaged or lost, producing a scotoma, even this little spontaneous activity is gone, thereby permitting the internal images — the hallucinations — to emerge. Indeed, one could argue that spontaneous activity in the early visual pathways, which has always been a puzzle, evolved mainly to provide such a “null” signal. The strongest evidence for this comes from our two patients in whom the hallucinations were sharply confined within the margins of their scotoma.
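
The note’s suggestion can be caricatured as a gate: top-down imagery is allowed to surface as a percept only where there is no bottom-up signal at all, not even the retina’s spontaneous “null” firing. The numbers below are arbitrary, and the model is merely a restatement of the idea, not evidence for it.

    # Toy gate: baseline (spontaneous) bottom-up activity vetoes top-down imagery;
    # only a total absence of input lets the imagery surface as a percept.
    def percept(bottom_up, top_down):
        return top_down if bottom_up == 0.0 else bottom_up

    print(percept(bottom_up=0.1, top_down=0.9))  # eyes closed, retina intact: baseline wins, no hallucination
    print(percept(bottom_up=0.0, top_down=0.9))  # scotoma, no input at all: the internal image emerges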

16. This somewhat radical view of perception, I suggest, holds mainly for recognizing specific objects in the ventral stream — a shoe, a kettle, a friend’s face — where it makes good computational sense to use your high-level semantic knowledge base to help resolve ambiguity. Indeed, it could hardly be otherwise, given how underconstrained this aspect of perception — object perception — really is.

For the other more “primitive” or “early” visual processes — such as motion, stereopsis and color — such interactions may occur on a more limited scale since you can get away with just using generic knowledge of surfaces, contours, textures, and so on, which can be incorporated into the neural architecture of early vision (as emphasized by David Marr, although Marr did not make the particular distinction that I am making here). Yet even with these low-level visual modules, the evidence suggests that the interactions across modules and with “high-level” knowledge are much greater than generally assumed (see Churchland, Ramachandran and Sejnowski, 1994).

The general rule seems to be that interactions occur whenever it would be useful for them to occur and do not (and cannot) occur when it isn’t. Discovering which interactions are weak and which are strong is one of the goals of visual psychophysics and neuroscience.

Chapter 6: Through the Looking Glass

1. For descriptions of neglect, see Critchley, 1966; Brain, 1941; Halligan and Marshall, 1994.

2. No one has described the selective function of consciousness more eloquently than the eminent psychologist William James (1890) in his famous essay “The Stream of Thought”. He wrote, “We see that the mind is at every stage a theatre of simultaneous possibilities. Consciousness consists in the comparison of these with each other, the selection of some, and the suppression of the rest by the reinforcing and inhibiting agency of attention. The highest and most elaborated mental products are filtered from the data chosen by the faculty next beneath, out of the mass offered by the faculty below that, which mass in turn was sifted from a still larger amount of yet simpler material, and so on. The mind, in short, works on the data it receives very much as a sculptor works on his block of stone. In a sense the statue stood there from eternity. But there were a thousand different ones beside it, and the sculptor alone is to thank for having extricated this one from the rest. We may, if we like, by our reasonings unwind things back to that black and jointless continuity of space and moving clouds of swarming atoms which science calls the only real world. But all the while the world we feel and live in will be that which our ancestors and we, by slowly cumulative strokes of choice, have extricated out of this, like sculptors, by simply rejecting certain portions of the given stuff. Other sculptors, other statues, from the same stone! Other minds, other worlds from the same monotonous and inexpressive chaos! My world is but one in a million alike embedded, alike real to those who may abstract them. How different must be the world in the consciousness of ant, cuttlefish or crab!”

3. This positive feedback loop involved in orienting has been described by Heilman, 1991.

4. Marshall and Halligan, 1988.

5. Sacks, 1985.

6. Gregory, 1997.

7. What would happen if I were to throw a brick at you from the backseat so that you saw the brick coming at you in the mirror? Would you duck forward (as you should), or would you be fooled by the expanding image in the mirror and duck backward? Perhaps the intellectual correction for the mirror reflection, deducing accurately where the real object is located, is performed by the conscious what pathway (object pathway) in the temporal lobes, whereas ducking to avoid a missile is done by the how pathway (spatial stream) in the parietal lobe. If so, you might get confused and duck incorrectly — it’s your zombie that’s ducking!

8. Edoardo Bisiach added a brilliant twist to this line-bisection test that suggests that this interpretation can’t be the whole story, although it’s a reasonable first-pass explanation. Instead of having the patient bisect a predrawn horizontal line, he simply gave him a sheet of paper with a tiny vertical line in the middle and said, “Pretend that this vertical mark is the bisector of a horizontal line and draw the horizontal line.” The patient confidently drew the line, but once again the portion of the line on the right side was about half the size of the portion on the left. This suggests that more than simple inattention is going on. Bisiach argues that the whole representation of space is squished to enlarge the healthy right visual field and shrink the left. So the patient has to make the left side of the line longer than the right to make them appear equal to his own eyes.
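
Bisiach’s idea reduces to one line of arithmetic. If leftward extents are represented at, say, half scale (the factor is an assumption chosen for illustration), the two halves look equal to the patient only when the physical left half is twice the right, which is roughly what the patient drew.

    # "Squished space" arithmetic: with leftward extents represented at half scale,
    # equal-looking halves require the drawn left half to be physically twice the right.
    compression_left = 0.5                        # assumed representational scaling
    right_half = 10.0                             # cm, arbitrary
    left_half = right_half / compression_left     # length needed to *look* equal to the right
    print(left_half, right_half)                  # 20.0 vs 10.0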

9. The good news is that many patients with neglect syndrome — caused by damage to the right parietal lobe — recover spontaneously in a few weeks. This is important, for it implies that many of the neurological syndromes that we’ve come to regard as permanent — involving destroyed neural tissue — may in fact be “functional deficits”, involving a temporary imbalance of transmitters. The popular analogy between brains and digital computers is highly misleading, but in this particular instance I am tempted to use it. A functional deficit is akin to a software malfunction, a bug in a program rather than a problem with the hardware. If so, there may be hope yet for the millions of people who suffer from disorders that have traditionally been deemed “incurable” because up to now we have not known how to debug their brain’s software.

To illustrate this more directly, let me mention another patient, who, as a result of damage to parts of his left hemisphere, had a striking problem called dyscalculia. Like many patients with this syndrome, he was intelligent, articulate and lucid in most respects but when it came to arithmetic he was hopelessly inept. He could discuss the weather, what happened in the hospital that day and who had visited him. And yet if you asked him to subtract 7 from 100, he was stymied. But surprisingly, he didn’t merely fail to solve the arithmetic problem. My student Eric Altschuler and I noticed that every time he even attempted to do so, he confidently produced incomprehensible gibberish — what Lewis Carroll would call jabberwocky — and he seemed unaware that it was gibberish. The “words” were fully formed but devoid of any meaning — the sort of thing you see in language disorders such as Wernicke’s aphasia (indeed even the words were largely neologisms). It was as if the mere confrontation with a math problem caused him to insert a “language floppy disk” with a bug in it.

Why does he produce gibberish instead of remaining silent? We are so used to thinking about autonomous brain modules — one for math, one for language, one for faces — that we forget the complexity and magnitude of the interactions among the modules. His condition, in particular, makes sense only if you assume that the very deployment of a module depends on the current demands placed on the organism. The ability to sequence bits of information rapidly is a vital part of mathematical operations as well as the generation of language. Perhaps his brain has a “sequencing bug”. There may be some special type of sequencing, common to both math and language, that is deranged in him. He can carry on ordinary conversation because he has so many more clues — so many backup options — to go by that he doesn’t need the sequencing mechanism in full gear. But when presented with a math problem, he is forced to rely on it to a much larger extent and is therefore thrown off completely. Needless to say, all this is pure speculation, but it provides food for thought.

10. Some sort of conversation between the what system in the temporal lobe and the how pathway in the parietal lobe must obviously occur in normal people, and this communication is perhaps compromised in patients with the looking glass syndrome. Released from the influence of the what pathway, the zombie reaches straight into the mirror.

11. Some patients with right parietal disease actually deny that their left arm belongs to them — a disorder called somatoparaphrenia; we consider such patients in Chapter 7. If you grab the patient’s lifeless left arm, raise it and move it into the patient’s right visual field, he will insist that the arm belongs to you, the physician, or to his mother, brother or spouse. The first time I saw a patient with this disorder, I remember saying to myself, “This must be the strangest phenomenon in all of neurology — if not in all of science!” How could a perfectly sane, intelligent person assert that his arm belongs to his mother?

Robert Rafal, Eric Altschuler and I recently tested two patients with this disorder and found that when they looked at their left arm in a mirror (placed on the right to elicit the looking glass syndrome), they suddenly started agreeing that it was indeed their arm! Could a mirror “cure” this disorder?

Chapter 7: The Sound of One Hand Clapping

1. This may seem harsh, but it’s frustrating for the physical therapist to begin rehabilitating patients when they’re in denial, so overcoming the delusion is of great practical importance in the clinic.

2. For descriptions of anosognosia see Critchley, 1966; Cutting, 1978; Damasio, 1994; Edelman, 1989; Galin, 1992; Levine, 1990; McGlynn and Schacter, 1989; Feinberg and Farah, 1997.

3. The distinguished evolutionary psychologist Robert Trivers at the University of California in Santa Cruz has proposed a clever explanation for the evolution of self-deception (Trivers, 1985). According to Trivers, there are many occasions in daily life when we need to lie — say, during a tax audit or an adulterous affair or in an effort to protect someone’s feelings. Other research has shown that liars, unless they are very practiced, almost always give the game away by producing an unnatural smile, a slightly flawed expression or a false tone of voice that others can detect (Ekman, 1992). The reason is that the limbic system (involuntary, prone to truth telling) controls spontaneous expressions, whereas the cortex (responsible for voluntary control, also the location where the lies are concocted) controls the facial expressions displayed when we are fibbing. Consequently, when we lie with a smile, it’s a fake smile, and even if we try to keep a straight face, the limbic system invariably leaks traces of deceit.

There is a solution to this problem, argues Trivers. To lie effectively to another person, all you have to do is first lie to yourself. If you believe it’s true, your expressions will be genuine, without a trace of guile. So by adopting this strategy, you can come up with some very convincing lies — and sell a lot of snake oil.

But it seems to me there is an internal contradiction in this scenario. Suppose you’re a chimp who has hidden some bananas under a tree branch. Along comes the alpha male chimp, who knows you have bananas and demands you give them to him. What do you do? You lie to your superior and say that the bananas are across the river, but you also run the risk of his detecting your lie by the expression on your face. So then what do you do? According to Trivers, you adopt the simple device of first convincing yourself that the bananas really are on the other side of the river, and you say that to the alpha male, who is fooled, and you are off the hook. But there’s a problem. What if you later got hungry and went looking for the bananas? Since you now believe that the food is across the river, that’s where you’d go look for it. In other words, the strategy proposed by Trivers defeats the whole purpose of lying, for the very definition of a lie is that you must continue to have access to the truth — otherwise there’d be no point to the evolutionary strategy.

One escape from this dilemma would be to suggest that a “belief” is not necessarily a unitary thing. Perhaps self-deception is mainly a function of the left hemisphere — as it tries to communicate its knowledge to others — whereas the right hemisphere continues to “know” the truth. One way to approach this experimentally would be to obtain galvanic skin responses in anosognosics and, indeed, in normal people (for example, children) when they are confabulating. When a normal person generates a false memory — or when a child confabulates — would he/she nevertheless register a strong galvanic skin response (as he would if he were lying)?

Finally, there is another type of lie for which Trivers’s argument may indeed be valid, and that concerns lying about one’s own abilities — boasting. Of course, a false belief about your abilities can also get you into trouble (“I am a big strong fellow, not puny and weak”) if it leads you to strive for unrealistic goals. But this disadvantage may be outweighed in many instances by the fact that a convincing boaster may get the best dates on Saturday night and may therefore disseminate his genes more widely and more frequently so that the “successful boasting through self-deception” genes quickly become part of the gene pool. One prediction for this would be that men should be more prone to both boasting and self-deception than women. To my knowledge this prediction has never been tested in a systematic way, although various colleagues assure me that it is true. Women, on the other hand, should be better at detecting lies since they have a great deal more at stake — an arduous nine-month pregnancy, a risky labor and a long period of caring for a child whose “maternity” is in no doubt.

4. Kinsbourne, 1989; Bogen, 1975; and Galin, 1976, have all warned us repeatedly of the dangers of “dichotomania”, of ascribing cognitive functions entirely to one hemisphere versus the other. We must bear in mind that the specialization in most instances is likely to be relative rather than absolute and that the brain has a front and a back and a top and a bottom, and not just left and right. To make matters worse, an elaborate pop culture and countless self-help manuals are based on the notion of hemispheric specialization. As Robert Ornstein (1997) has noted, “It’s a cliché in general advice to managers, bankers and artists, it’s in cartoons. It’s an advertisement. United Airlines offers reasons to fly both sides of you coast to coast. The music for one side and the good value for the other. The Saab automobile company offered their Turbo charged sedan as ‘a car for both sides of your brain.’ A friend of mine, unable to remember a name, excused this by describing herself as a ‘right hemisphere sort of person.’ ” But the existence of such a pop culture shouldn’t cloud the main issue — the notion that two hemispheres may indeed be specialized for different functions. The tendency to ascribe mysterious powers to the right hemisphere isn’t new — it goes all the way back to the nineteenth-century French neurologist Charles Brown-Séquard, who started a fashionable right hemisphere aerobics movement.

For an up-to-date review of ideas on hemispheric specialization, see Springer and Deutsch, 1998.

5. Much of our knowledge of hemispheric specialization comes from the groundbreaking work of Gazzaniga, Bogen and Sperry, 1962, whose research on split brain patients is well known. When the corpus callosum bridging the two hemispheres is cut, the cognitive capacities of each hemisphere can be studied separately in the laboratory.

What I’m calling “the general” is not unlike what Gazzaniga, 1992, calls “the interpreter” in the left hemisphere. However, Gazzaniga does not consider the evolutionary origin or biological rationale for having an interpreter (as I attempt to do here), nor does he postulate an antagonistic mechanism in the right hemisphere.

Ideas on hemispheric specialization similar to mine have also been proposed by Kinsbourne, 1989, not to explain anosognosia, but to explain laterality effects seen in depression following stroke. Although he does not discuss Freudian defenses or “paradigm shifts”, he has made the ingenious proposal that the left hemisphere may be needed for maintaining ongoing behavior, whereas right hemisphere activation may be required for interrupting behavior and producing an orienting response.

6. I would like to emphasize that the specific theory of hemispheric specialization that I am proposing certainly doesn’t explain all forms of anosognosia. For example, the anosognosia of Wernicke’s aphasics probably arises because the very part of the brain that would ordinarily represent beliefs about language is itself damaged. Anton’s syndrome (denial of cortical blindness), on the other hand, may require the simultaneous presence of a right hemisphere lesion. (I have seen a single “two-lesion” case like this, with Dr. Leah Levi, but additional research is needed to settle the matter.) Would a Wernicke’s aphasic become more aware of his deficit if his ear were irrigated with cold water?

7. Ramachandran, 1994, 1995a, 1996.

8. We are still a long way from understanding the neural basis of such delusions, but the important recent work of Graziano, Yap and Gross, 1994, may be relevant. They found single neurons in the monkey supplementary motor area that had visual receptive fields “superimposed” on somatosensory fields of the monkey’s hand. Curiously, when the monkey moved its hand, the visual receptive field moved with the hand, but eye movements had no effect on the receptive field. These hand-centered visual receptive fields (“monkey see, monkey do cells”) may provide a neural substrate for the kinds of delusions I see in my patients.

9. The notion that there is a mechanism in the right hemisphere not only for detecting and orienting to discrepancies of body image (as suggested by our virtual reality box and by Ray Dolan and Chris Frith’s experiment) but also for other kinds of anomalies receives support from three other studies that have been reported in the literature. First, it has been known for some time that patients with left hemisphere damage tend to be more depressed and pessimistic than those with right hemisphere strokes (Gainotti, 1972; Robinson et al., 1983), a difference that is usually attributed to the fact that the right hemisphere is more “emotional”. I would argue instead that because of damage to the left hemisphere, the patient does not have even the minimal “defense mechanisms” that you and I would use for coping with the minor discrepancies of day-to-day life, so that every trifling anomaly becomes potentially destabilizing.

Indeed, I have argued (Ramachandran, 1996) that even idiopathic depression seen in a psychiatric setting may arise from a failure by the left hemisphere to deploy Freudian defense mechanisms — perhaps as a result of transmitter imbalances or clinically undetectable damage to the left frontal region of the brain. The old experimental observation that depressed people are actually more sensitive to subtle inconsistencies (such as a briefly presented red ace of spades) than normal people is consistent with this line of speculation. I am currently administering similar tests to patients with anosognosia.

A second set of experiments supporting this idea comes from the important observation (Gardner, 1993) that after damage to the right (but not left) hemisphere patients have trouble recognizing the absurdity of “garden path sentences” in which there is an unexpected twist at the ending that contradicts the beginning. I interpret this finding as a failure of the anomaly detector.

10. Bill’s denials would seem comical if they were not tragic. But his behavior “makes sense” in that he is doing his utmost to protect his “ego” or self. When one is faced with a death sentence, what’s wrong with denial? But even though Bill’s denial may be a healthy response to a hopeless situation, its magnitude is surprising and it raises another interesting question. Do patients like him who have been delusional as a result of ventromedial frontal lobe involvement confabulate mainly to protect the integrity of “self”, or can they be provoked to confabulate about other abstract matters as well? If you were to ask such a patient, “How many hairs does Clinton have on his head?” would he confabulate or would he admit ignorance?

In other words, would the very act of questioning by an authority figure be sufficient to make him confabulate? There have been no systematic studies to address these issues, but unless a patient has dementia (loosely speaking, mental retardation due to diffuse cortical damage) he is usually quite “honest” in admitting ignorance of matters that don’t pose any immediate threat to his well-being.

11. Clearly denial runs very deep. But even though it’s fascinating to watch, it’s also a source of great frustration and practical concern for the patient’s relatives (although by definition not for the patient!). For instance, given that patients tend to deny the immediate consequences of the paralysis (having no inkling that the cocktail tray will surely topple or that they can’t tie laces), do they also deny its remote consequences — what’s going to happen next week, next month, next year? Or are they dimly aware in the back of their minds that something is amiss, that they are disabled? Would the denial stop them from writing a will?

I have not explored this question on a systematic basis, but on the few occasions when I raised it, patients responded as if they were completely unaware of how deeply the paralysis would affect their future lives. For example, the patient may confidently assert that he intends to drive back home from the hospital or that he would like to resume golf or tennis. So it is clear that he is not suffering from a mere sensory/motor distortion — a failure to update his current body image (although that is certainly a major component of this illness). Rather, his whole range of beliefs about himself and his means of survival has been radically altered to accommodate his present denial. Mercifully such delusions can often be a considerable solace and comfort to these patients, even though their attitude comes into direct conflict with one of the goals of rehabilitation — restoring the patient’s insight into his predicament.

Another way to approach the domain specificity and depth of denial would be to flash the word “paralysis” on the screen and obtain a galvanic skin response. Would the patient find the word threatening — and register a big GSR — even though she is unaware of her paralysis? How would she rate the word for unpleasantness on a scale of 1 to 10, if asked to do so? Would her rating be higher (or indeed lower) than that of a normal person?

12. There are even patients with right frontal lobe stroke who manifest symptoms that are halfway between anosognosia and multiple personality disorder. Dr. Riitta Hari and I saw one such patient recently in Helsinki. As a result of two lesions — one in the right frontal region and one in the cingulate — the patient’s brain was apparently unable to “update” her body image in the way that normal brains do. When she sat on a chair for a minute and then got up to start walking, she would experience her body as splitting into two halves — the left half still sitting in the chair and the right half walking. And she would look back in horror to ensure that she hadn’t abandoned the left half of her body.

13. Recall that when we are awake, the left hemisphere processes incoming sensory data, imposing consistency, coherence and temporal ordering on our everyday experiences. In doing so it rationalizes, denies, represses and otherwise censors much of this incoming information.

Now consider what happens during dreams and REM sleep. There are at least two possibilities that are not mutually exclusive. First, REM may have an important “vegetative” function related to wet-ware (for example, maintenance and “uploading” of neurotransmitter supplies), and dreams may just be epiphenomenal — irrelevant by-products. Second, dreams themselves may have an important cognitive/emotional function, and REM may simply be a vehicle for bringing this about. For example, they may enable you to try out various hypothetical scenarios that would be potentially destabilizing if rehearsed during wakefulness. In other words, dreams may allow a sort of “virtual reality” simulation using various forbidden thoughts that are ordinarily eclipsed by the conscious mind; such thoughts might be brought out tentatively to see whether they can be assimilated into the story line. If they can’t be, then they are repressed and once again forgotten.

Why we cannot carry out these rehearsals in our imagination, while fully awake, is not obvious, but two ideas come to mind. First, for the rehearsals to be effective, they must look and feel like the real thing, and this may not be possible when we are awake, since we know that the images are internally generated. As we noted earlier, Shakespeare said, “You cannot cloy the hungry edge of appetite with bare imagination of a feast.” It makes good evolutionary sense that imagery cannot substitute for the real thing.

Second, unmasking disturbing memories when awake would defeat the very purpose of repressing them and could have a profound destabilizing effect on the brain. But unmasking such memories during dreams may permit a realistic and emotionally charged simulation to take place while preventing the penalties that would result if you were to do this when awake.

There are many opinions on the functions of dreams. For stimulating reviews on the subject see Hobson, 1988, and Winson, 1986.

14. This isn’t true for all people. One patient, George, vividly remembered that he had denied his paralysis. “I could see that it wasn’t moving”, he said, “but my mind didn’t want to accept it. It was the strangest thing. I guess I was in denial”. Why one person remembers and the other forgets is not clear, but it could have something to do with residual damage to the right hemisphere. Perhaps George had recovered more fully than Mumtaz or Jean and was therefore able to confront reality squarely. It is clear from my experiments, though, that at least some patients who recover from denial will “deny their denials” even though they are mentally lucid and have no other memory problems.

Our memory experiments also raise another interesting question: What if a person were to have an automobile accident that caused peripheral nerve damage and rendered her left arm paralyzed? Then suppose she suffered a stroke some months later, the kind that leads to left body paralysis and denial syndrome. Would she then suddenly say, “Oh, my God, doctor, my arm that has been paralyzed all along is suddenly moving again.” Going back to my theory that the patient tends to cling to a preexisting worldview, would she cling to her updated worldview and therefore say that the left arm was paralyzed — or would she go back to her earlier body image and assert that her arm was in fact moving again?

15. I emphasize that this is a single case study and we need to repeat the experiment more carefully on additional patients. Indeed, not every patient was as cooperative as Nancy. I vividly recall one patient, Susan, who vigorously denied her left arm paralysis and who agreed to be part of our experiments. When I told her that I was going to inject her left arm with a local anesthetic, she stiffened in her wheelchair, leaned forward to look me straight in the eye and without batting an eyelid, said, “But doctor, is that fair?” It was as though Susan was playing some sort of game with me and I had suddenly changed the rules and this was forbidden. I didn’t continue the experiment.

I wonder, though, whether mock injections may pave the way for an entirely new form of psychotherapy.

16. Another fundamental problem arises when the left hemisphere tries to read and interpret messages from the right hemisphere. You will recall from Chapter 4 that the visual centers of the brain are segregated into two streams, called the how and what pathways (parietal and temporal lobes). Crudely speaking, the right hemisphere tends to use an analogue — rather than digital — medium of representation, emphasizing body image, spatial vision and other functions of the how pathway. The left hemisphere, on the other hand, prefers a more logical style related to language, recognizing and categorizing objects, tagging objects with verbal labels and representing them in logical sequences (done mainly by the what pathway). This creates a profound translation barrier. Every time the left hemisphere tries to interpret information coming from the right — such as trying to put into words the ineffable qualities of music or art — at least some forms of confabulation may arise because the left hemisphere starts spinning a yarn when it doesn’t get the expected information from the right (because the latter is either damaged or disconnected from the left). Can such a failure of translation explain at least some of the more florid confabulations that we see in patients with anosognosia? (See Ramachandran and Hirstein, 1997.)

Chapter 8: “The Unbearable Lightness of Being”

1. J. Capgras and J. Reboul-Lachaux, 1923; H.D. Ellis and A.W. Young, 1990; Hirstein and Ramachandran, 1997.

2. This disorder is called prosopagnosia. See Farah, 1990; Damasio, Damasio and Van Hoesen, 1982.

Cells in the visual cortex (area 17) respond to simple features like bars of light, but in the temporal lobes they often respond to complex features such as faces. These cells may be part of a complex network specialized for recognizing faces. See Gross, 1992; Rolls, 1995; Tovee, Rolls and Ramachandran, 1996.

The functions of the amygdala, which figures prominently in this chapter, have been discussed in detail by LeDoux, 1996, and Damasio, 1994.

3. The clever idea that Capgras’ delusion may be a mirror image of prosopagnosia was first proposed by Young and Ellis (1990), but they postulate a disconnection between the dorsal stream and limbic structures rather than the disconnection between the inferotemporal cortex (IT) and the amygdala that we suggest in this chapter. Also see Hirstein and Ramachandran, 1997.

4. Another question: Why does the mere absence of this emotional arousal lead to such an extraordinarily far-fetched delusion? Why doesn’t the patient just think, I know that this is my father but for some reason I no longer feel the warmth? One answer is that some additional lesion, perhaps in the right frontal cortex, may be required to generate such extreme delusions. Recall the denial patients in the last chapter whose left hemispheres sought to preserve global consistency by explaining away discrepancies and whose right hemispheres kept things in balance by monitoring and responding to inconsistency. To develop full-blown Capgras’ syndrome, one might need a conjunction of two lesions — one that affects the brain’s ability to attach emotional significance to a familiar face and one that disturbs the global “consistency-checking” mechanism in the right hemisphere. Additional brain imaging studies are needed to resolve this.

5. Baron-Cohen, 1995.

Chapter 9: God and the Limbic System

1. At present the device is effective mainly for stimulating parts of the brain near the surface but we may eventually be able to stimulate deeper structures.

2. See Papez, 1937, for the original description and Maclean, 1973, for a comprehensive review full of fascinating speculations.

It’s no coincidence that the rabies virus “chooses” to lodge itself mainly in the limbic structures. When dog A bites dog B, the virus travels from the peripheral nerves near the bite into the spinal cord and then eventually up to the victim’s limbic system, turning Benji into Damien. Snarling and foaming at the mouth, the once-placid pooch bites another victim and the virus is thus passed on, infecting those very brain structures that drive aggressive biting behavior. And as part of this diabolical strategy, the virus initially leaves other brain structures completely unaffected so that the dog can remain alive just long enough to transmit the virus. But how the devil does a virus travel all the way from peripheral nerves near the bite to cells deep inside the brain while sparing all other brain structures along the way? When I was a student I often wondered whether it might be possible to stain the virus with a fluorescent dye in order to “light up” these brain areas — thereby allowing us to discover pathways specifically concerned with biting and aggression, in much the same way one uses PET scans these days. In any event, it is clear that as far as the rabies virus is concerned, a dog is just another way of making a virus — a temporary vehicle for passing on its genome.

3. Useful descriptions of temporal lobe epilepsy can be found in Trimble, 1992, and Bear and Fedio, 1977. Waxman and Geschwind, 1975, have championed the view that there is a constellation of personality traits more frequently found in temporal lobe epilepsy patients than in age-matched controls. Although this notion is not without its critics, several studies have confirmed such an association: Gibbs, 1951; Gastaut, 1956; Bear and Fedio, 1977; Nielsen and Kristensen, 1981; Rodin and Schmaltz, 1984; Adamec, 1989; Wieser, 1983.

The presumed link between “psychiatric disturbances” and epilepsy, of course, goes back to antiquity, and, in the past, there has been an unfortunate stigma attached to the disorder. But as I have stressed repeatedly in this chapter, there is no basis for concluding that any of these traits is “undesirable” or that the patient is worse off because of them. The best way to eliminate the stigma, of course, is to explore the syndrome in greater depth.

Slater and Beard (1963) noted “mystical experiences” in 38 percent of their series of cases, and Bruens (1971) made a similar observation. Frequent religious conversions are also seen in some patients (Dewhurst and Beard, 1970).

It is important to recognize that only a minority of patients display esoteric traits, like religiosity or hypergraphia, but that does not make the association any less real. By way of analogy, consider the fact that kidney or eye changes (complications of diabetes) occur only in a minority of diabetics, but no one would deny that the association exists. As Trimble (1992) has noted, “It is most likely that personality traits such as religiosity and hypergraphia seen in patients with epilepsy represent an all-or-none phenomenon and are seen in a minority of patients. It is not a graded characteristic, for example like obsessionality, and therefore does not emerge as a prominent factor in questionnaire studies unless a sufficiently large number of patients are evaluated.”

4. To complicate matters, it is entirely possible that some clinically undetectable damage in the temporal lobes also underlies schizophrenia and manic-depressive disorders, so the fact that psychiatric patients sometimes experience religious feelings doesn’t negate my argument.

5. Similar views have been put forward by Crick, 1993; Ridley, 1997; and Wright, 1994, although they do not invoke specialized structures in the temporal lobe.

This argument smacks of group selection — a taboo phrase in evolutionary psychology — but it doesn’t have to. After all, most religions, even though they pay lip service to the “brotherhood” of mankind, tend mainly to emphasize loyalty to one’s own clan or tribe (hence to those who probably share many of the same genes).

6. Bear and Fedio (1977) offered the ingenious suggestion that there is hyperconnectivity in the limbic system that makes the patients see cosmic significance in everything. Their idea predicts a heightened GSR to everything the patient looks at, a prediction that held up in some preliminary studies. But other studies showed either no change or a reduction in GSR to most categories. The picture is also complicated by the extent to which the patient is on medication while the GSR is measured.

Our own preliminary studies, on the other hand, suggest that there can be a selective enhancement of GSR responses to some categories and not others — thereby altering the patients’ emotional landscape permanently (Ramachandran, Hirstein, Armel, Tecoma and Iragui, 1997). But this finding, too, should be taken with a generous scoop of salt until it is confirmed on a large number of patients.

7. Moreover, even if the changes in the patient’s brain were originally mediated by the temporal lobes — the actual repository of the changes — “a religious outlook” probably involves many different brain areas.

8. For lucid and lively expositions of Darwin’s ideas, see Dawkins, 1976; Maynard Smith, 1978; Dennett, 1995.

There is an acrimonious debate going on at the high table of evolution over whether every trait (or almost every trait) is a direct result of natural selection or whether there are other laws or principles governing evolution. We will take up this debate in Chapter 10, where I discuss the evolution of humor and laughter.

9. Much of this discussion appears in a book by Loren Eiseley (1958).

10. This idea is clearly described in a delightful book by Christopher Wills (1993). See also Leakey, 1993, and Johanson and Edgar, 1996.

11. The savant who could produce the cube-root is described by Hill, 1978. The idea that savants have learned some simple shortcuts or tricks for discovering primes or for factoring has been around for some time. But it doesn’t work. When a professional mathematician learned the appropriate algorithm, he still took almost a minute to generate all primes between 10,037 and 10,133 — whereas a nonverbal autistic man, naive to this task, took only ten seconds (Hermelin and O’Connor, 1990).

There are algorithms for generating primes at a high frequency — with occasional rare errors. It would be interesting to see whether prime number savants make exactly the same rare errors that these algorithms do; that would tell us whether the savants were tacitly using the same algorithm.
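
To give a concrete sense of the task (this sketch is mine, not part of the original notes), here is the brute-force way a computer could enumerate the primes in the range cited by Hermelin and O’Connor; the savant, presumably, is doing nothing so explicit.

```python
def is_prime(n: int) -> bool:
    """Trial division — entirely adequate for five-digit numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# All primes between 10,037 and 10,133 inclusive — the range the
# professional mathematician reportedly needed almost a minute to produce.
primes = [n for n in range(10_037, 10_134) if is_prime(n)]
print(primes)
```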

12. Another possible explanation of the savant syndrome is based on the notion that the absence of certain abilities may actually make it easier to take advantage of what’s left over and to focus attention on more esoteric skills. For instance, as you encounter events in the external world, you obviously do not record every trivial detail in your mind; that would be maladaptive. Our brains first gauge the significance of events and engage in an elaborate censoring and editing of the information — before storing it. But what if this mechanism goes awry? Then you might start recording at least some events in needless detail like the words in a book that you read ten years ago. This, to you or me, might seem to be an astounding talent. But in truth, it emerges from a damaged brain that cannot censor daily experience. Similarly, an autistic child is locked in a world where others are not welcome, save one or two channels of interest to the outside. The child’s ability to focus all her attention on a single subject to the exclusion of all else can lead to apparently exotic abilities — but, again, her brain is not normal and she remains profoundly retarded.

A related but more ingenious argument is proposed by Snyder and Thomas (1997), who suggest that savants are for some reason less concept-driven because of their retardation and this in turn allows them access to earlier levels of the processing hierarchy, which is not available to most of us (hence the obsessively detailed drawings of Stephen Wiltshire, which contrast sharply with the tadpole figures or the conceptual cartoonlike drawings of normal children).

This idea is not inconsistent with mine. One could argue that the shift in emphasis away from concept-driven perception (or conception), which allows access to earlier levels of processing, may depend on the hypertrophy of the “early” modules in precisely the manner I have suggested. Snyder’s idea could therefore be seen as being halfway between the traditional attention theory and my theory proposed in this chapter.

One problem is that although drawings of some savants seem excessively detailed (for example, Stephen Wiltshire’s, described by Sacks), there are others whose drawings seem genuinely beautiful (for example, the da Vinci-like drawings of horses produced by Nadia). Her sense of perspective, shading and so on seem hypernormal in a manner predicted by my argument.

What all these ideas have in common is that they imply a shift in emphasis from one set of modules to the other. Whether this results simply from the lack of function of one set (with more attention paid to others) or from actual hypertrophy of what’s left remains to be seen.

The attention shift idea also doesn’t appeal to me for two other reasons. First, saying that you automatically become skilled at something by deploying attention doesn’t really tell us very much unless you know what attention is, and we don’t. Second, if this argument is correct, then why don’t adult patients with damage to large portions of their brains suddenly become very skilled at other things — by shifting attention? I have yet to come across a dyscalculic who suddenly became a musical savant or a neglect patient who became a calculating prodigy. In other words, the argument doesn’t explain why savants are born, not made.

The hypertrophy theory can, of course, be readily tested by using magnetic resonance imaging (MRI) on different types of savants.

13. Patients like Nadia also bring us face-to-face with an even deeper question: What is art? Why are some things pretty, while others are not? Is there a universal grammar underlying all visual aesthetics?

An artist is skilled at grasping the essential features (what Hindus call rasa) of an image he is trying to portray and eliminating superfluous detail, and in doing so he is essentially imitating what the brain itself has evolved for. But the real question is: Why should this be aesthetically pleasing?

In my view, all art is “caricature” and hyperbole, so if you understand why caricatures are effective you understand art. If you teach a rat to discriminate a square from, say, a rectangle and reward it for the latter, then it will soon start recognizing the rectangle and show a preference for it. But paradoxically, it will respond even more vigorously to a skinnier “caricature” rectangle (e.g., with an aspect ratio of 3:1 instead of 2:1) than to the original prototype! The paradox is resolved when you realize that what the rat learns is a rule — “rectangularity” — rather than a particular exemplar of that rule. And the way the visual form area in the brain is structured, amplifying the rule (a skinnier rectangle) is especially reinforcing (pleasing) to the rat, providing an incentive for the rat’s visual system to “discover” the rule. In a similar vein, if you subtract a generic average face from Nixon’s face and then amplify the differences, you end up with a caricature that is more Nixon-like than the original. In fact, the visual system is constantly struggling to “discover the rule”. My hunch is that very early in evolution, many of the extrastriate visual areas that are specialized for extracting correlations and rules and binding features along different dimensions (form, motion, shading, color, etc.) are directly linked to limbic structures to produce a pleasant sensation, since this would enhance the animal’s survival. Consequently, amplifying a specific rule and eliminating irrelevant detail makes the picture look even more attractive. I would suggest also that these mechanisms and associated limbic connections are more prominent in the right hemisphere. There are many cases in the literature of patients with left-hemisphere stroke whose drawings actually improved after the stroke — perhaps because the right hemisphere is then free to amplify the rule. A great painting is more evocative than a photograph because the photograph’s details may actually mask the underlying rule — a masking that is eliminated by the artist’s touch (or by a left-hemisphere stroke!).
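
The “amplify the rule” operation can be captured in a single line. The following sketch is purely illustrative (the feature vectors and their values are hypothetical, not from the text): represent a face as a list of measurements, subtract the average face, and scale up the difference.

```python
import numpy as np

def caricature(face: np.ndarray, mean_face: np.ndarray, k: float) -> np.ndarray:
    """Exaggerate whatever distinguishes this face from the average.

    k = 1 reproduces the original; k > 1 amplifies the 'rule' (the
    Nixon-ness of Nixon), just as the 3:1 rectangle amplifies
    'rectangularity' for the rat.
    """
    return mean_face + k * (face - mean_face)

# Hypothetical measurements (nose length, jaw width, brow height ...):
mean_face = np.array([1.0, 1.0, 1.0])
nixon = np.array([1.4, 0.9, 1.2])
print(caricature(nixon, mean_face, k=2.0))  # [1.8 0.8 1.4] — more Nixon than Nixon
```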

This is not a complete explanation of art, but it’s a good start. We still need to explain why artists often use incongruous juxtapositions deliberately (as in humor) and why a nude seen behind a shower curtain or a diaphanous veil is more attractive than a nude photograph. It’s as though the rule discovered after a struggle is even more reinforcing than one that is instantly obvious, a point that has also been made by the art historian Ernst Gombrich. Perhaps natural selection has wired up the visual areas in such a way that the reinforcement is actually stronger if it is obtained after “work” — in order to ensure that the effort itself is pleasant rather than unpleasant. Hence the eternal appeal of puzzle-pictures such as the dalmatian dog on page 239 or “abstract” pictures of faces with strong shadows. A pleasant feeling occurs when the picture finally clicks and the splotches are correctly linked together to form a figure.

Chapter 10: The Woman Who Died Laughing

1. Ruth and Willy (pseudonyms) are reconstructions of patients originally described in an article by Ironside (1955). The clinical details and autopsy reports, however, have not been altered.

2. Fried, Wilson, MacDonald and Behnke, 1998.

3. The discipline of evolutionary psychology was foreshadowed by the early writings of Hamilton (1964), Wilson (1978) and Williams (1966). The modern manifesto of this discipline is by Barkow, Cosmides and Tooby (1992), who are regarded as founders of the field. (Also see Daly and Wilson, 1983, and Symons, 1979.)

The clearest exposition of these ideas can be found in Pinker’s book How the Mind Works, which contains many stimulating ideas. My disagreement with him on specific details of evolutionary theory doesn’t detract from the value of his contributions.

4. This idea is intriguing, but as with all problems in evolutionary psychology it is difficult to test. To emphasize this further, I’ll mention another equally untestable idea. Consider Margie Profet’s clever suggestion that women get morning sickness in the first three months of pregnancy to curtail appetite, thus avoiding the natural poisons in many foods that might lead to abortion (Profet, 1997). My colleague Dr. Anthony Deutsch has proposed an even more ingenious argument. He suggests, tongue-in-cheek, that the odor of vomitus prevents the male from wanting to have sex with a pregnant woman, thereby reducing the likelihood of intercourse, which in turn is known to increase the risk of abortion. It’s instantly obvious this is a silly argument, but why is the argument about toxins any less silly?

5. V.S. Ramachandran, 1997. Here is what they fell for:

Now ask yourself, “Why do gentlemen prefer blondes?” In Western cultures, it is widely believed that men have a distinct sexual and aesthetic preference for blondes over brunettes (Alley and Hildebrandt, 1988). A similar preference for women of lighter than average skin color is also seen in many non-Western cultures. (This has been formally confirmed by “scientific” surveys; Van der Berghe and Frost, 1986.) Indeed, in many countries, there is an almost obsessive preoccupation with “improving one’s complexion” — a mania that the cosmetic industry has been quick to pander to with innumerable useless skin products. (Interestingly, there appears to be no such preference for men of lighter skin, hence the phrase “tall, dark and handsome”.)

The well-known British psychologist Havelock Ellis suggested fifty years ago that men prefer rotund features (which indicate fecundity) in women and that blonde hair emphasizes the roundness by blending in better with the body outline. Another view is that infants’ skin and hair color tend to be lighter than adults’ and the preference for blonde women may simply reflect the fact that in humans, neotenous babylike features in females may be secondary sexual characteristics.

I’d like to propose a third theory, which is not incompatible with these two but has the added advantage of being consistent with more general biological theories of mate selection. But to understand my theory, you have to consider why sex evolved in the first place. Why not reproduce asexually since you could then pass on all your genes to your offspring rather than just half of them? The surprising answer is that sex evolved mainly to avoid parasites (Hamilton and Zuk, 1982)! Parasitic infestation is extremely common in nature and parasites are always trying to fool the host immune system into thinking that they are part of the host body. Sex evolved to help the host species shuffle its genes so that it always stays one step ahead of the parasites. (This is called the Red Queen strategy, a term inspired by the Red Queen in Through the Looking-Glass, who had to keep running just to stay in one place.) Similarly, we can ask why secondary sexual characteristics such as the peacock’s tail or the rooster’s wattles evolved. The answer again is parasites. These displays — a shimmering large tail or blood red wattles — may serve the purpose of “informing” the female that the suitor is healthy and free of skin parasites.

Might being blonde or light skinned serve a similar purpose? Every medical student knows that anemia, usually caused by either intestinal or blood parasites; cyanosis (a sign of heart disease); jaundice (a sick liver) and skin infection are much easier to detect in fair-skinned people than in brunettes. This is true for both skin and eyes. Infestation with intestinal parasites must have been very common in early agricultural settlements, and such infestation can cause severe anemia in the host. There must have been considerable selection pressures for the early detection of anemia in nubile young women since anemia can interfere with fertility, pregnancy and the birth of a healthy child. So the blonde is in effect telling your eyes, “I am pink, healthy and free of parasites. Don’t trust that brunette. She could be concealing her ill health and parasitic infestation.”

A second, related reason for the preference might be that the absence of protection from ultraviolet radiation by melanin causes the skin of blondes to “age” faster than that of brunettes and the dermal signs of aging — age spots and wrinkles — are usually easier to detect. Since fertility in women declines rapidly with age, perhaps aging men prefer very young women as sexual partners (Stuart Anstis, personal communication). So blondes might be preferred not only because the signs of aging occur earlier but also because the signs are easier to detect in them.

Third, certain external signs of sexual interest like social embarrassment and blushing, as well as sexual arousal (the “flush” of orgasm), might be more difficult to detect in dark-skinned women. Thus the likelihood that one’s courtship gestures will be reciprocated and consummated can be predicted with greater confidence when courting blondes.

The reason that the preference is not so marked for light-skinned men might be that anemia and parasites are mainly a risk during pregnancy and men don’t get pregnant. Furthermore, a blonde woman would have greater difficulty than a brunette in lying about an affair she just had since the blush of embarrassment and guilt would give her away. For a man, detecting such a blush in a woman would be especially important because he is terrified of being cuckolded, whereas a woman need not worry about this — her main goals are to find and keep a good provider. (This paranoia in the man is not unreasonable; recent surveys show that as many as 5 to 10 percent of fathers are not genetic fathers. There are probably many more milkman genes in the population than anyone realizes.)

One last reason for preferring blondes concerns the pupils. Pupil dilation — another obvious sign of sexual interest — would be more evident when seen against the blue iris of a blonde than against the dark iris of a brunette. This may also explain why brunettes are often considered “sultry” and mysterious (or why women use belladonna to dilate pupils and why men try to seduce women by candlelight; the drug and dim light dilate the pupils, enhancing the sexual interest display).

Of course, all these arguments would apply equally well to any woman of lighter skin. Why does the blond hair make any difference, if indeed it does? The preference for lighter skin has been established by conducting surveys, but the question of blond hair has not been studied. (The existence of bleached blondes doesn’t negate our argument since evolution couldn’t have anticipated the invention of hydrogen peroxide. Indeed, the fact that there is no such thing as a “fake brunette” but only a “fake blonde” suggests that such a preference does exist; after all, most blondes don’t dye their hair black.) I suggest that the blond hair serves as a “flag” so that even from a great distance it’s obvious to a male that a light-skinned woman is in the neighborhood.

The take-home message: Gentlemen prefer blondes so they can easily detect the early signs of parasitic infection and aging, both of which reduce fertility and offspring viability, and so they can also detect blushing and pupil size, which are indices of sexual interest and fidelity. (That fair skin may itself be an indicator of youth and hormonal status was proposed in 1995 by Don Symons, a distinguished evolutionary psychologist from UCSB, but he did not put forward the specific arguments advocated here concerning the easier detection of parasites, anemia, blushing or pupils in blondes.)

As I said earlier, I concocted this whole ridiculous story as a satire on ad hoc sociobiological theories of human mate selection — the mainstay of evolutionary psychology. I give it less than a 10 percent chance of being true, but even so it’s at least as viable as many other theories of human courtship currently in vogue. If you think my theory is silly, then you should read some of the others.

6. Ramachandran, 1998.

7. The important link between humor and creativity has also been emphasized by the English physician, playwright and polymath Jonathan Miller.

8. The notion that a smile is related to a threat grimace goes all the way back to Darwin and often resurfaces in the literature. But to my knowledge, no one has pointed out that it has the same logical form as laughter: an aborted response to a potential threat when an approaching stranger turns out to be a friend.

9. Any theory that purports to explain humor and laughter has to account for all of the following features — not just one or two: first, the logical structure of jokes and events that elicit laughter — that is, the input; second, the evolutionary reason why the input has to take the particular form that it does, a buildup of a model followed by a sudden paradigm shift that is of trivial consequence; third, the loud explosive sound; fourth, the relation of humor to tickling and why tickling might have evolved (I suggest it has the same logical form as humor but may represent “play” rehearsal for adult humor); fifth, the neurological structures involved and how the functional logic of humor maps onto the “structural logic” of these parts of the brain; sixth, whether humor has any other functions than the one it originally evolved for (for example, we suggest that adult cognitive humor may provide rehearsal for creativity and may also serve internally to “deflate” potentially disturbing thoughts that you can’t do anything about); seventh, why a smile is a “half laugh” and often precedes laughter (the reason I suggest is that it has the same logical form — deflation of potential threat — that humor and laughter have because it evolved in response to approaching strangers).

Laughter may also facilitate a kind of social bonding or “grooming”, especially since it frequently occurs in response to a spurious violation of social contracts or taboos (e.g., when someone is lecturing on the podium with his fly open). Telling jokes or laughing at someone may allow an individual to recalibrate frequently the social mores of the group to which he belongs and help consolidate a shared sense of values. (Hence the popularity of ethnic jokes.)

The psychologist Wallace Chafe (1987) has proposed an ingenious theory of laughter that is in some ways the converse of mine — although he doesn’t consider the neurobiology. The main function of laughter, he says, is to serve as a “disabling” device — the physical act is so exhausting that it literally immobilizes you momentarily and allows you to relax when you realize that the threat isn’t genuine. I find this idea attractive for two reasons. First, when you stimulate the left supplementary motor cortex, not only do you get fits of laughter but the patient is effectively immobilized; he can’t do anything else (Fried et al., 1998). Second, in a strange disorder called cataplexy, listening to a joke causes the patient to become paralyzed and collapse to the ground while remaining fully conscious. It seems plausible that this might be a pathological expression of the “immobilization reflex” that Chafe is alluding to. However, Chafe’s theory doesn’t explain how a laugh is related to a smile or how it is related to tickling; nor why a laugh should take the particular form that it does — the rhythmic, loud explosive sounds.

Why not just stop dead in your tracks like an opossum? This, of course, is a general problem in evolutionary psychology; you can come up with several reasonable-sounding scenarios of how something might have evolved, but it is often difficult to retrace the particular route taken by the trait to get where it is now.

Finally, even if I am correct in asserting that laughter evolved as an “it’s OK” or “all is well” signal for communication, we have to explain the rhythmic head and body movements (in addition to the sounds) that accompany laughter. Can it be a coincidence that so many other pleasurable activities such as dancing, sex and music also involve rhythmic movements? Could it be that they all tap partially into the same circuits? Jacobs (1994) has proposed that both autistic children and normal people may enjoy rhythmic movements because such movements activate the serotonergic raphe system, releasing the “reward transmitter” serotonin. One wonders whether laughter activates the same mechanism. I knew of at least one autistic child who frequently engaged in uncontrollable, socially inappropriate laughter for relief.

10. In saying this, I have no intention of providing ammunition for creationists. These “other factors” should be seen as mechanisms that complement rather than contradict the principle of natural selection. Here are some examples:

a. Contingency — plain old luck — must have played an enormous role in evolution. Imagine two different species that are slightly different genetically — let’s call them hippo A and hippo B — on two different islands, island A and island B. Now if a huge asteroid hits both islands, perhaps hippo B is better adapted to asteroid impacts, survives and passes on its gene via natural selection. But it’s equally possible that the asteroid may not have hit island B and its hippos. Say it hit only island A and wiped out all hippo A’s. Hippo B’s therefore survived and passed on their genes not because they had “asteroid-resistance genes” but simply because they were lucky and the asteroid never hit them.

This idea is so obvious that I find it astonishing that people argue about it. In my view, it encapsulates the whole debate over the Burgess shale creatures. Whether Gould is right or wrong about the particular creatures unearthed there, his general argument about the role of contingency is surely correct. The only sensible counterargument would be the many instances of convergent evolution. My favorite example is the evolution of intelligence and complex types of learning — such as imitation learning — independently in octopuses and higher vertebrates. How does one explain the independent emergence of such complex traits in both vertebrates and invertebrates, if contingency rather than natural selection was playing the major role? Doesn’t it imply that if the tape of evolution were played again, intelligence would evolve yet again? If it evolved twice, why not three times?

Yet such instances of astonishing convergence are not fatal to the notion of contingency; after all, they occur very rarely. Intelligence evolved twice, not dozens of times. Even the apparent convergent evolution of eyes in vertebrates and invertebrates — such as squids — is probably not a true case of convergence, since it has recently been shown that the same genes are involved.

b. When certain neural systems reach a critical level of complexity, they may suddenly acquire unforeseen properties, which again are not a direct result of selection. There is nothing mystical about these properties; one can show mathematically that even completely random interactions can lead to these little eddies of order from complexity. Stuart Kauffman, a theoretical biologist at the Santa Fe Institute, has argued that this might explain the punctuated nature of organic evolution — that is, the sudden emergence of new species in novel phylogenetic lines.

c. The evolution of morphological traits may be driven, to a significant extent, by perceptual mechanisms. If you teach a rat to discriminate a square (1:1 aspect ratio) from a rectangle (of 1:2 aspect ratio) and reward it for the rectangle alone, then the rat is found to respond even more vigorously to a skinnier (1:4 ratio) rectangle than to the original prototype rectangle, which it was trained on. This paradoxical result — called the “peak shift effect” — suggests that the animal is learning a rule — rectangularity — rather than a response to a single stimulus. I suggest that this basic propensity — wired into the visual pathways of all animals — can help explain the emergence of new species and of new phylogenetic trends. Consider the classic problem of how the giraffe got its long neck. Assume first that an ancestral group of giraffes evolved a slightly longer neck as a result of competition for food, that is, through conventional Darwinian selection. Once such a trend had been set up, however, it would be important for long-necked giraffes to mate only with other long-necked giraffes to ensure viability and fertility of the offspring. Once the longer neck became a distinguishing trait for the new species, then this trait must become “wired” into the visual centers of the giraffe’s brain to help locate potential mates. Once this “giraffe = long neck” rule has been wired into a freely interbreeding group of giraffes, given the peak shift principle, any giraffe would tend to prefer mating with the most “giraffelike” individual that it could spot — that is, the most long-necked individual in the herd. The net result would be a progressive increase in “long neck” alleles in the population even in the absence of a specific selection pressure from the environment. The final outcome would be a race of giraffes with almost comically exaggerated necks of the kind we see today.

This process will lead to a positive feedback “gain amplification” of any preexisting evolutionary trends; it will exaggerate morphological and behavioral differences between a given species and its immediate ancestor. This amplification will occur as a direct consequence of a psychological law rather than a result of environmental selection pressures. The theory makes the interesting prediction that there should be many instances in evolution of progressive caricaturization of species. Such trends do occur and can be seen clearly in the evolution of elephants, horses and rhinoceroses. As we trace their evolution, they appear to become more and more “mammothlike” or “horselike” or “rhinolike” with the passage of time.
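
A toy simulation, under assumptions of my own choosing (the numbers and the mating rule are arbitrary illustrations, not the author’s), shows how such a peak-shifted mating preference can ratchet a trait upward with no environmental selection at all: each generation, offspring are pulled toward an “ideal” that is exaggerated relative to the current population mean.

```python
import random

def next_generation(necks, shift=0.3, noise=0.05):
    """Peak-shift mating: the preferred mate is more 'giraffelike' than average,
    so offspring are pulled toward a target beyond the current mean."""
    mean = sum(necks) / len(necks)
    target = mean + shift * (max(necks) - mean)  # exaggerated ideal
    return [0.5 * (n + target) + random.gauss(0, noise) for n in necks]

random.seed(1)
necks = [random.gauss(2.0, 0.1) for _ in range(200)]  # arbitrary starting lengths
for _ in range(50):
    necks = next_generation(necks)
print(round(sum(necks) / len(necks), 2))  # the mean has drifted well above 2.0
```

The point of the sketch is only the direction of the drift: once the preference is wired in, it supplies its own “selection pressure”.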

This idea is quite similar to Darwin’s own explanation for the origin of secondary sexual characteristics — his so-called theory of sexual selection. The progressive enlargement of the male peacock tail, for example, is thought to arise from a female’s preference for mates with larger tails. The key difference between our idea and Darwinian sexual selection is that the latter idea was put forth specifically to explain differences between the sexes, whereas our idea accounts for morphological differences between species as well. Mate selection involves choosing partners that have more salient “sexual markers” (secondary sexual characteristics) and have species “markers” (labels that serve to differentiate one species from another). Consequently, our idea might help explain the evolution of external morphological traits in general and the progressive caricaturization of species, and not just the emergence of flamboyant sexual display signals and ethological “releasers”.

One wonders whether the explosive enlargement of brain (and head) size in hominid evolution is a consequence of the same principle. Perhaps we find infantile, neotenous features, such as a disproportionately large head, appealing because such features are usually diagnostic of a helpless infant, and genes that promote the care of infants would quickly multiply in a population. But once this perceptual mechanism is in place, infants’ heads would become larger and larger (since large-head genes would produce neotenous features and elicit greater care) and a large brain might simply be a bonus!

To this long list we can add others — Lynn Margulis’s idea that symbiotic organisms can “fuse” to evolve into new phylogenetic lines (for example, mitochondria have their own DNA and may have started out as intracellular parasites). A detailed description of her ideas is outside the scope of this book, which, after all, is about the brain, not evolution.

Chapter 11: “You Forgot to Deliver the Twin”

1. This story is a reconstruction based on a case originally described by Silas Weir Mitchell. See Bivin and Klinger, 1937.

2. Christopher Wills told me the story of an eminent professor of obstetrics who was fooled by a patient sufficiently that he actually presented her as a case of normal pregnancy to his residents and medical students during Grand Rounds. The students promptly elicited all the classic symptoms and signs of pregnancy in the unfortunate lady. They even claimed to hear the fetal heartbeat with their gleaming new stethoscopes — until one student remembered the “protruding umbilicus” sign and risked embarrassing her professor by revealing the correct diagnosis.

3. Pseudocyesis is a fossil disease, so rare that one hardly sees it anymore. The condition was first described by Hippocrates around 400 B.C. It afflicted Mary Tudor, queen of England, who was falsely pregnant twice, with one episode lasting thirteen months. Anna O., one of Freud’s most famous patients, suffered through a false pregnancy. And the more recent medical literature even describes two transsexuals who experienced it! For recent work on pseudocyesis, see Brown and Barglow, 1971, and Starkman et al., 1985.

4. Follicle-stimulating hormone (FSH), luteinizing hormone (LH) and prolactin are produced by the anterior pituitary; they regulate the menstrual cycle and ovulation. FSH causes the initial ripening of the ovarian follicle and LH causes ovulation. The combined action of FSH and LH augments the release of estrogen by the ovaries and later of both estrogen and progesterone by the corpus luteum (what remains of the follicle after release of the egg). Last, prolactin also acts on the corpus luteum, causing it to secrete estrogen and progesterone and preventing it from becoming involuted (and therefore preventing subsequent menstruation if the ovum is fertilized).

5. For the effects of suggestion on warts, see Spanos, Stenstrom and Johnston, 1988. For a report on unilateral wart remission, see Sinclair-Gieben and Chalmers, 1959.

6. See Ader, 1981, and Friedman, Klein and Friedman, 1996.

7. Hypnosis is a good example. It’s a topic that’s sometimes taught even in the most conservative medical establishments, and yet every time the word is mentioned at scientific meetings, there is an uncomfortable shuffling of feet. Even though hypnosis has a venerable tradition going all the way back to one of the founding fathers of modern neurology, Jean-Martin Charcot, it seems to enjoy a curious dual reputation, being accepted as real on the one hand and yet also regarded as the orphan child of “fringe medicine”. Charcot claimed that if the right side of a normal person’s body is temporarily paralyzed as the result of a hypnotic suggestion, then that person also has problems with language, suggesting that the trance is actually inhibiting brain mechanisms in the left hemisphere (recall that language is in the left). A similar trance-induced paralysis of the left side of the body does not produce language problems. We have tried replicating this result in our lab, without success.

The key question about hypnosis is whether it is simply an elaborate form of “role playing” (in which you temporarily suspend disbelief as you do while watching a horror movie) or whether it is a fundamentally different mental state.

Richard Brown, Eric Altschuler, Chris Foster and I have begun to try to answer this question using a technique called Stroop interference. The words “red” and “green” are printed either in the correct color (red ink for the word “red”, green for “green”) or with the colors reversed (the word “green” in red ink). If a normal subject is asked just to name the color of the ink and ignore the word, he is slowed down considerably if the word and color don’t match. He’s apparently unable voluntarily to ignore the word, and so the word interferes with color naming (Stroop interference). Now the question arises, What would happen if you implanted the hypnotic suggestion in the subject’s mind that he’s a native Chinese who can’t read the English alphabet but can still name colors? Would this suddenly eliminate Stroop interference? This test would prove once and for all that hypnosis is real — not playacting — for there is no way a subject can voluntarily ignore the word. (As a “control” one could simply offer him a large cash reward for voluntarily overcoming the interference.)
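
For concreteness, here is a minimal sketch (my own, not taken from the study described above) of how the congruent and incongruent trials of such a Stroop task could be laid out; a real experiment would of course also record naming latencies under hypnosis and in the cash-reward control.

```python
import random

WORDS = ["red", "green"]
INKS = ["red", "green"]

def make_trials(n_per_condition: int):
    """Congruent trials: the word matches the ink colour; incongruent: it does not."""
    trials = []
    for word in WORDS:
        for ink in INKS:
            condition = "congruent" if word == ink else "incongruent"
            trials.extend([(word, ink, condition)] * n_per_condition)
    random.shuffle(trials)
    return trials

for word, ink, condition in make_trials(1):
    # The subject's task is to name the ink colour and ignore the word.
    print(f"word={word:<6} ink={ink:<6} {condition}")
```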

8. The placebo response is a much maligned but poorly understood phenomenon. Indeed, the phrase has come to acquire a pejorative connotation in clinical medicine. Imagine that you are testing a new painkilling drug for back pain. Assume also that no one gets better spontaneously. To determine the efficacy of the drug, you give the pills to one hundred patients and find that, say, ninety patients get better. In a controlled clinical trial, it is customary for the comparison group of one hundred patients to receive a dummy pill — a placebo — (of course, the patient doesn’t know this) to see what proportion of them, if any, get better simply as a result of the belief in the drug. If only 50 percent get better (instead of 90 percent), we are justified in concluding that the drug is indeed an effective painkiller.
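
As a back-of-the-envelope restatement of the arithmetic above (the numbers simply echo the hypothetical example, and the subtraction is my illustration; the text implies no particular statistical test), the drug’s genuine effect is the improvement rate in the drug arm over and above the rate in the placebo arm.

```python
drug_improved, drug_total = 90, 100        # hypothetical drug arm
placebo_improved, placebo_total = 50, 100  # hypothetical placebo arm

drug_rate = drug_improved / drug_total           # 0.90
placebo_rate = placebo_improved / placebo_total  # 0.50

# The placebo-corrected benefit attributable to the drug itself:
print(f"drug {drug_rate:.0%}, placebo {placebo_rate:.0%}, "
      f"difference {drug_rate - placebo_rate:.0%}")  # difference 40%
```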

But now let us turn to the mysterious 50 percent who got better as a result of the “placebo”. Why did they get better? It was shown about a decade ago that these patients actually release painkilling chemicals, called endorphins, in their brains (indeed, in some cases the effect of the placebo can be counteracted by naloxone, a drug that blocks endorphins).

A fascinating but largely unexplored question concerns the specificity of the placebo response, and our laboratory has recently become very interested in this issue. Recall that only 50 percent got better from taking the placebo. Is this because there is something special about this group? What if the same one hundred patients (treated with a placebo for pain) went on to develop depression a few months later and you were to give a “new” placebo — telling them that it was a powerful new antidepressant? Would the same fifty patients get better, or would a new set of patients show improvement, overlapping only partially with the first set? In other words, is there such a thing as a “placebo responder”? Is the response specific to the malady, the pill, the person or all three? Indeed, consider what would happen if the same one hundred patients once again developed pain a year later and again you gave them the original placebo “painkiller”. Would the same fifty get better or would it be a new group of patients? Dr. Eric Altschuler and I are presently conducting such a study.

Other aspects of placebo specificity also remain to be investigated. Imagine that a patient simultaneously develops a migraine and an ulcer — and you give him a placebo that you tell him is a new “antiulcer drug”. Then would only the ulcer pain go away (assuming that he is a “placebo responder”), or would his brain become so flooded with endorphins that the migraine pain would also disappear as a bonus? This sounds unlikely, but if antipain neurotransmitters, such as endorphins, are released diffusely in his brain, then he may also get relief from his other aches and pains even though his belief pertains only to the ulcer. The question of how sophisticated beliefs are translated and understood by primitive brain mechanisms concerned with pain is a fascinating one.

9. For a review of multiple personality disorders, see Birnbaum and Thompson, 1996.

For ocular changes, see Miller, 1989.

Chapter 12: Do Martians See Red?

1. For clear introductions to the problem of consciousness, see Humphrey, 1992; Searle, 1992; Dennett, 1991; P. Churchland, 1986; P.M. Churchland, 1993; Galin, 1992; Baars, 1997; Block, Ramachandran and Hirstein, 1997; Penrose, 1989.

The idea that consciousness — especially introspection — may have evolved mainly to allow you to simulate other minds (which inspired the currently popular notion of a “theory of other minds” module) was first proposed by Nick Humphrey at a conference that I had organized in Cambridge over twenty years ago.

2. Another very different type of translation problem also arises between the code or language of the left hemisphere and that of the right (see note 16, Chapter 7).

3. Some philosophers are utterly baffled by this possibility, but it’s no more mysterious than striking your ulnar nerve at the elbow with a hammer to generate a totally novel electrical “tingling” quale, even though you may never have experienced anything quite like it before (or, for that matter, the very first time a boy or girl experiences an orgasm).

4. Thus an ancient philosophical riddle going back to David Hume and William Molyneux can now be answered scientifically. Researchers at NIH have used magnets to stimulate the visual cortex of blind people to see whether visual pathways have degenerated or become reorganized, and we have also begun some experiments here at UCSD. But to my knowledge, the specific question of whether a person can experience a quale or subjective sensation totally novel to him or her has never been explored empirically.

5. The pioneering experiments in this field were performed by Singer, 1993, and Gray and Singer, 1989.

6. It is sometimes asserted — on grounds of parsimony — that one does not need qualia for a complete description of the way the brain works, but I disagree with this view. Occam’s razor — the idea that the simplest of competing theories is preferable to more complex explanations of unknown phenomena — is a useful rule of thumb, but it can sometimes be an actual impediment to scientific discovery. Most science begins with a bold conjecture of what might be true. The discovery of relativity, for example, was not the product of applying Occam’s razor to our knowledge of the universe at that time. The discovery resulted from rejecting Occam’s razor and asking what if some deeper generalization were true, which was not required by the available data, but which made unexpected predictions (which later turned out to be parsimonious, after all). It’s ironic that most scientific discoveries result not from brandishing or sharpening Occam’s razor — despite the view to the contrary held by the great majority of scientists and philosophers — but from generating seemingly ad hoc and ontologically promiscuous conjectures that are not called for by the current data.

7. Please note that I am using the phrase “filling in” in a strictly metaphorical sense — simply for lack of a better one. I don’t want to leave you with the impression that there is a pixel-by-pixel rendering of the visual image on some internal neural screen. But I disagree with Dennett’s specific claim that there is no “neural machinery” corresponding to the blind spot. There is, in fact, a patch of cortex corresponding to each eye’s blind spot that receives input from the other eye as well as the region surrounding the blind spot in the same eye. What we mean by “filling in” is simply this: that one quite literally sees visual stimuli (such as patterns and colors) as arising from a region of the visual field where there is actually no visual input. This is a purely descriptive, theory-neutral definition of filling in, and one does not have to invoke — or debunk — homunculi watching screens to accept it. We would argue that the visual system fills in not to benefit a homunculus but to make some aspects of the information explicit for the next level of processing.

8. Tovee, Rolls and Ramachandran, 1996. Kathleen Armel, Chris Foster and I have recently shown that if two completely different “views” of this dog are presented in rapid succession, naive subjects can see only chaotic, incoherent motion of the splotches, but once they see the dog, it is seen to jump or turn in the appropriate manner — emphasizing the role of the “top-down” object knowledge in motion perception (see Chapter 5).

9. Sometimes qualia become deranged, leading to a fascinating condition called synesthesia, in which a person quite literally tastes a shape or sees color in a sound. For example, one patient, a synesthete, claimed that chicken has a distinctly “pointy” flavor and told his physician, Dr. Richard Cytowic, “I wanted the taste of this chicken to be pointed, but it came out all round . . . well, I mean it’s nearly spherical; I can’t serve this if it doesn’t have points.” Another patient claimed to see the letter “U” as being yellow to light brown in color, whereas the letter “N” was a shiny varnished ebony hue. Some synesthetes see this union of the senses as a gift to inspire their art, not as brain pathology.

Some cases of synesthesia tend to be a bit dubious. A person claims to see a sound or taste a color, but it turns out that she is merely being metaphorical — much the same way that you might speak of a sharp taste, a bitter memory or a dull sound (bear in mind, though, that the distinction between the metaphorical and the literal is extremely blurred in this curious condition). However, many other cases are quite genuine. A graduate student, Kathleen Armel, and I recently examined a patient named John Hamilton who had relatively normal vision up until the age of five, then suffered progressive deterioration in his sight as a result of retinitis pigmentosa, until finally at the age of forty he was completely blind. After about two or three years, John began to notice that whenever he touched objects or simply read Braille, his mind would conjure up vivid visual images, including flashes of light, pulsating hallucinations or sometimes the actual shape of the object he was touching. These images were highly intrusive and actually interfered with his Braille reading and ability to recognize objects through touch. Of course, if you or I close our eyes and touch a ruler, we don’t hallucinate one, even though we may visualize it in our mind’s eye. The difference, again, is that your visualization of the ruler is usually helpful to your brain since it is tentative and revocable — you have control over it — whereas John’s hallucinations are often irrelevant and always irrevocable and intrusive. He can’t do anything about them, and to him they are a spurious and distracting nuisance. It seems that the tactile signals evoked in John’s somatosensory areas — his Penfield map — are being sent all the way back to his deprived visual areas, which are hungry for input. This is a radical idea, but it can be tested by using modern imaging techniques.

Interestingly, synesthesia is sometimes seen in temporal lobe epilepsy, suggesting that the merging of sense modalities occurs not only in the angular gyrus (as is often asserted) but also in certain limbic structures.

10. This question arose in a conversation I had with Mark Hauser.

11. Searle, 1992.

12. Jackendoff, 1987.

13. The patient may also say, “This is it; I finally see the truth. I have no doubts anymore.” It seems ironic that our convictions about the absolute truth or falsehood of a thought should depend not so much on the propositional language system, which takes great pride in being logical and infallible, but on much more primitive limbic structures, which add a form of emotional qualia to thoughts, giving them a “ring of truth”. (This might explain why the more dogmatic assertions of priests as well as scientists are so notoriously resistant to correction through intellectual reasoning!)

14. Damasio, 1994.

15. I am, of course, simply being metaphorical here. At some stage in science, one has to abandon or refine metaphors and get to the actual mechanism — the nitty-gritty of it. But in a science that is still in its infancy, metaphors are often useful pointers. (For example, seventeenth-century scientists often spoke of light as being made of waves or particles, and both metaphors were useful up to a point, until they became assimilated into the more mature physics of quantum theory. Even the gene — the independent particle of beanbag genetics — continues to be a useful word, although its actual meaning has changed radically over the years.)

16. For an insightful discussion of akinetic mutism, see Bogen, 1995, and Plum, 1982.

17. Dennett, 1991.

18. Trivers, 1985.
