Phantoms in the Brain
Preface
In any field, find the strangest thing and then explore it.
This book has been incubating in my head for many years, but I never quite got around to writing it. Then, about three years ago, I gave the Decade of the Brain lecture at the annual meeting of the Society for Neuroscience to an audience of over four thousand scientists, discussing many of my findings, including my studies on phantom limbs, body image and the illusory nature of the self. Soon after the lecture, I was barraged with questions from the audience: How does the mind influence the body in health and sickness?
How can I stimulate my right brain to be more creative? Can your mental attitude really help cure asthma and cancer? Is hypnosis a real phenomenon? Does your work suggest new ways to treat paralysis after strokes? I also got a number of requests from students, colleagues and even a few publishers to undertake writing a textbook. Textbook writing is not my cup of tea, but I thought a popular book on the brain dealing mainly with my own experiences working with neurological patients might be fun to write. During the last decade or so, I have gleaned many new insights into the workings of the human brain by studying such cases, and the urge to communicate these ideas is strong. When you are involved in an enterprise as exciting as this, it’s a natural human tendency to want to share your ideas with others. Moreover, I feel that I owe it to taxpayers, who ultimately support my work through grants from the National Institutes of Health.
Popular science books have a rich, venerable tradition going as far back as Galileo in the seventeenth century.
Indeed, this was Galileo’s main method of disseminating his ideas, and in his books he often aimed barbs at an imaginary protagonist, Simplicio — an amalgam of his professors. Almost all of Charles Darwin’s famous books, including The Origin of Species, The Descent of Man, The Expression of the Emotions in Man and Animals, Insectivorous Plants — but not his two-volume monograph on barnacles! — were written for the lay reader at the request of his publisher, John Murray. The same can be said of the many works of Thomas Huxley, Michael Faraday, Humphry Davy and many other Victorian scientists. Faraday’s Chemical History of a Candle, based on Christmas lectures that he gave to children, remains a classic to this day.
I must confess that I haven’t read all these books, but I do owe a heavy intellectual debt to popular science books, a sentiment that is echoed by many of my colleagues. Dr. Francis Crick of the Salk Institute tells me that Erwin Schrödinger’s popular book What Is Life? contained a few speculative remarks on how heredity might be based on a chemical and that this had a profound impact on his intellectual development, culminating in his unraveling the genetic code together with James Watson. Many a Nobel Prize-winning physician embarked on a research career after reading Paul de Kruif’s The Microbe Hunters, which was published in 1926. My own interest in scientific research dates back to my early teens, when I read books by George Gamow, Lewis Thomas, and Peter Medawar, and the flame is being kept alive by a new generation of writers — Oliver Sacks, Stephen Jay Gould, Carl Sagan, Dan Dennett, Richard Gregory, Richard Dawkins, Paul Davies, Colin Blakemore and Steven Pinker.
About six years ago I received a phone call from Francis Crick, the codiscoverer of the structure of deoxyribonucleic acid (DNA), in which he said that he was writing a popular book on the brain called The Astonishing Hypothesis. In his crisp British accent, Crick said that he had completed a first draft and had sent it to his editor, who felt that it was extremely well written but that the manuscript still contained jargon that would be intelligible only to a specialist. She suggested that he pass it around to some lay people. “I say, Rama”, Crick said with exasperation, “the trouble is, I don’t know any lay people. Do you know any lay people I could show the book to?” At first I thought he was joking, but then realized he was perfectly serious.
I can’t personally claim not to know any lay people, but I could nevertheless sympathize with Crick’s plight.
When writing a popular book, professional scientists always have to walk a tightrope between making the book intelligible to the general reader, on the one hand, and avoiding oversimplification, on the other, so that experts are not annoyed. My solution has been to make elaborate use of end notes, which serve three distinct functions: First, whenever it was necessary to simplify an idea, my cowriter, Sandra Blakeslee, and I resorted to notes to qualify these remarks, to point out exceptions and to make it clear that in some cases the results are preliminary or controversial. Second, we have used notes to amplify a point that is made only briefly in the main text — so that the reader can explore a topic in greater depth. The notes also point the reader to original references and credit those who have worked on similar topics. I apologize to those whose works are not cited; my only excuse is that such omission is inevitable in a book such as this (for a while the notes threatened to exceed the main text in length). But I’ve tried to include as many pertinent references as possible in the bibliography at the end, even though not all of them are specifically mentioned in the text.
This book is based on the true-life stories of many neurological patients. To protect their identity, I have followed the usual tradition of changing names, circumstances and defining characteristics throughout each chapter. Some of the “cases” I describe are really composites of several patients, including classics in the medical literature, as my purpose has been to illustrate salient aspects of the disorder, such as the neglect syndrome or temporal lobe epilepsy. When I describe classic cases (like the man with amnesia known as H.M.), I refer the reader to original sources for details. Other stories are based on what are called single-case studies, which involve individuals who manifest a rare or unusual syndrome.
A tension exists in neurology between those who believe that the most valuable lessons about the brain can be learned from statistical analyses involving large numbers of patients and those who believe that doing the right kind of experiments on the right patients — even a single patient — can yield much more useful information. This is really a silly debate since its resolution is obvious: It’s a good idea to begin with experiments on single cases and then to confirm the findings through studies of additional patients. By way of analogy, imagine that I cart a pig into your living room and tell you that it can talk. You might say, “Oh, really? Show me.” I then wave my wand and the pig starts talking. You might respond, “My God! That’s amazing!” You are not likely to say, “Ah, but that’s just one pig. Show me a few more and then I might believe you.” Yet this is precisely the attitude of many people in my field.
I think it’s fair to say that, in neurology, most of the major discoveries that have withstood the test of time were, in fact, based initially on single-case studies and demonstrations. More was learned about memory from a few days of studying a patient called H.M. than was gleaned from previous decades of research averaging data on many subjects. The same can be said about hemispheric specialization (the organization of the brain into a left brain and a right brain, which are specialized for different functions) and the experiments carried out on two patients with so-called split brains (in whom the left and right hemispheres were disconnected by cutting the fibers between them). More was learned from these two individuals than from the previous fifty years of studies on normal people.
In a science still in its infancy (like neuroscience and psychology), demonstration-style experiments play an especially important role. A classic example is Galileo’s use of early telescopes. People often assume that Galileo invented the telescope, but he did not. Around 1607, a Dutch spectacle maker, Hans Lipperhey, placed two lenses in a cardboard tube and found that this arrangement made distant objects appear closer. The device was widely used as a child’s toy and soon found its way into country fairs throughout Europe, including France. In 1609, when Galileo heard about this gadget, he immediately recognized its potential. Instead of spying on people and other terrestrial objects, he simply raised the tube to the sky — something that nobody else had done. First he aimed it at the moon and found that it was covered with craters, gullies and mountains — which told him that the so-called heavenly bodies are, contrary to conventional wisdom, not so perfect after all: They are full of flaws and imperfections, open to scrutiny by mortal eyes just like objects on earth. Next he directed the telescope at the Milky Way and noticed instantly that far from being a homogeneous cloud (as people believed), it was composed of millions of stars. But his most startling discovery occurred when he peered at Jupiter, which was known to be a planet or wandering star. Imagine his astonishment when he saw three tiny dots near Jupiter (which he initially assumed were new stars) and witnessed that after a few days one disappeared. He then waited for a few more days and gazed once again at Jupiter, only to find that not only had the missing dot reappeared, but there was now an extra dot — a total of four dots instead of three. He understood in a flash that the four dots were Jovian satellites — moons just like ours — that orbited the planet. The implications were immense. In one stroke, Galileo had proved that not all celestial bodies orbit the earth, for here were four that orbited another planet, Jupiter. He thereby dethroned the geocentric theory of the universe, replacing it with the Copernican view that the sun, not the earth, was at the center of the known universe. The clinching evidence came when he directed his telescope at Venus and found that it looked like a crescent moon going through all the phases, just like our moon, except that it took a year rather than a month to do so. Again, Galileo deduced from this that all the planets were orbiting the sun and that Venus was interposed between the earth and the sun. All this from a simple cardboard tube with two lenses. No equations, no graphs, no quantitative measurements: “just” a demonstration.
When I relate this example to medical students, the usual reaction is, Well, that was easy during Galileo’s time, but surely now in the twentieth century all the major discoveries have already been made and we can’t do any new research without expensive equipment and detailed quantitative methods. Rubbish! Even now amazing discoveries are staring at you all the time, right under your nose. The difficulty lies in realizing this.
For example, in recent decades all medical students were taught that ulcers are caused by stress, which leads to excessive acid production that erodes the mucosal lining of the stomach and duodenum, producing the characteristic craters or wounds that we call ulcers. And for decades the treatment was either antacids, histamine receptor blockers, vagotomy (cutting the acid-secreting nerve that innervates the stomach) or even gastrectomy (removal of part of the stomach). But then a young resident physician in Australia, Dr. Barry Marshall, looked at a stained section of a human ulcer under a microscope and noticed that it was teeming with Helicobacter pylori — a common bacterium that is found in a certain proportion of healthy individuals.
Since he regularly saw these bacteria in ulcers, he started wondering whether perhaps they actually caused ulcers. When he mentioned this idea to his professors, he was told, “No way. That can’t be true. We all know ulcers are caused by stress. What you are seeing is just a secondary infection of an ulcer that was already in place.”
But Dr. Marshall was not dissuaded and proceeded to challenge the conventional wisdom. First he carried out an epidemiological study, which showed a strong correlation between the distribution of Helicobacter species in patients and the incidence of duodenal ulcers. But this finding did not convince his colleagues, so out of sheer desperation, Marshall swallowed a culture of the bacteria, did an endoscopy on himself a few weeks later and demonstrated that his gastrointestinal tract was studded with ulcers! He then conducted a formal clinical trial and showed that ulcer patients who were treated with a combination of antibiotics, bismuth and metronidazole (Flagyl, a bactericide) recovered at a much higher rate — and had fewer relapses — than did a control group given acid-blocking agents alone.
I mention this episode to emphasize that a single medical student or resident whose mind is open to new ideas and who works without sophisticated equipment can revolutionize the practice of medicine. It is in this spirit that we should all undertake our work, because one never knows what nature is hiding.
I’d also like to say a word about speculation, a term that has acquired a pejorative connotation among some scientists. Describing someone’s idea as “mere speculation” is often considered insulting. This is unfortunate.
As the English biologist Peter Medawar has noted, “An imaginative conception of what might be true is the starting point of all great discoveries in science.” Ironically, this is sometimes true even when the speculation turns out to be wrong. Listen to Charles Darwin: “False facts are highly injurious to the progress of science for they often endure long; but false hypotheses do little harm, as everyone takes a salutary pleasure in proving their falseness; and when this is done, one path toward error is closed and the road to truth is often at the same time opened”.
Every scientist knows that the best research emerges from a dialectic between speculation and healthy skepticism. Ideally the two should coexist in the same brain, but they don’t have to. Since there are people who represent both extremes, all ideas eventually get tested ruthlessly. Many are rejected (like cold fusion) and others promise to turn our views topsy-turvy (like the view that ulcers are caused by bacteria).
Several of the findings you are going to read about began as hunches and were later confirmed by other groups (the chapters on phantom limbs, neglect syndrome, blindsight and Capgras’ syndrome). Other chapters describe work at an earlier stage, much of which is frankly speculative (the chapters on denial and temporal lobe epilepsy). Indeed, I will take you at times to the very limits of scientific inquiry.
I strongly believe, however, that it is always the writer’s responsibility to spell out clearly when he is speculating and when his conclusions are clearly warranted by his observations. I’ve made every effort to preserve this distinction throughout the book, often adding qualifications, disclaimers and caveats in the text and especially in the notes. In striking this balance between fact and fancy, I hope to stimulate your intellectual curiosity and to widen your horizons, rather than to provide you with hard and fast answers to the questions raised.
The famous saying “May you live in interesting times” has a special meaning now for those of us who study the brain and human behavior. On the one hand, despite two hundred years of research, the most basic questions about the human mind — How do we recognize faces? Why do we cry? Why do we laugh? Why do we dream? and Why do we enjoy music and art? — remain unanswered, as does the really big question: What is consciousness? On the other hand, the advent of novel experimental approaches and imaging techniques is sure to transform our understanding of the human brain. What a unique privilege it will be for our generation — and our children’s — to witness what I believe will be the greatest revolution in the history of the human race: understanding ourselves. The prospect of doing so is at once both exhilarating and disquieting.
There is something distinctly odd about a hairless neotenous primate that has evolved into a species that can look back over its own shoulder and ask questions about its origins. And odder still, the brain can not only discover how other brains work but also ask questions about its own existence: Who am I? What happens after death? Does my mind arise exclusively from neurons in my brain? And if so, what scope is there for free will?
It is the peculiar recursive quality of these questions — as the brain struggles to understand itself — that makes neurology fascinating.