David Eagleman's Blog

The science of de- and re-humanization (January 25, 2013)

Why do groups of people inflict violence on unarmed neighbors (Germany, Rwanda, Darfur, Nanking...)? Here's the neuroscience point of view.



Why do groups of people inflict violence on unarmed neighbors (think Germany, Rwanda, Darfur, Nanking...)? How do ingroups and outgroups form, neurally speaking? How do social context and obedience to authority shape mass behavior?


Here's my take on these questions, through the lens of neuroscience.







Videos (YouTube IDs in parentheses):

1. Introducing social neuroscience (KXeJQ2YSLMo)
2. Syndrome E - violence and group contagion (SiXhZdO7AHM)
3. What makes us empathetic? (TDjWryXdVd0)
4. Peer pressure and obedience to authority (7KsAFALdp2w)
5. Re-humanisation and how we can curb violence (M-8PB6ZJgOM)
6. Q&A: David Eagleman on the science of de- (and re-) humanisation (kO9n_99Fw9A)



posted by David Eagleman
After Sandy Hook: Why mental illness matters (December 15, 2012)

The shootings at Sandy Hook sparked debate ranging from gun control to bulletproof windows. But the most fruitful approach may be to prioritize our discussion of mental illness.



The tragic shootings at Sandy Hook have sparked debate ranging from gun control to bulletproof windows at elementary schools.


I suggest the more important issue is to prioritize our national discussion of mental illness.


There seem to be two problems with the discussion at present. The first is a lack of understanding about the terms and their meanings. To illustrate, here’s the complete text from a two-sentence article from Fox News:




Ryan Lanza, 24, brother of gunman Adam Lanza, 20, tells authorities that his younger brother is autistic, or has Asperger syndrome and a “personality disorder.” Neighbors described the younger man to ABC as “odd” and displaying characteristics associated with obsessive-compulsive disorder.




One might consider it impressive to embed so many problems into such a concise article.


First, the phrasing of the first sentence appears to suggest that autism is something like a sum of Asperger’s and a personality disorder. That’s incorrect. Asperger’s is simply a milder form of autism: both are different degrees on a single autism spectrum, a broad-ranging developmental disorder characterized by problems in relationships and communication. Importantly, there is no known link between autism and premeditated violence.


Let’s turn to the item at the end of the first sentence. A personality disorder is a pervasive and inflexible pattern of behavior that causes distress or limits social functioning. Because the term personality disorder is standard clinical nomenclature, there is no reason for a reporter to put that term in quotation marks. Perhaps he was simply quoting Ryan? But we wouldn’t expect a reporter to write that Ryan’s “younger brother” is “autistic”. The quotation marks around personality disorder could suggest to the uninitiated reader that the concept is simply a colloquialism, which could—in the worst case—promote dismissal of the important issues around it.


Finally, let’s consider obsessive-compulsive disorder, in which people suffer anxiety from recurrent thoughts and compulsions toward repetitive behaviors. Is its mention in the article an important clue? Probably not: as with autism, there is no known relationship between obsessive-compulsive disorder and violence.


In sum, the two-sentence report uses four mental health terms, two of which are different degrees of the same disorder, one of which is wrapped in quotation marks, and most of which have no plausible bearing on the Sandy Hook shootings.


Did these problems arise from the brother’s misunderstanding, or the neighbors’, or the reporter’s? Whatever the case, the readership is given poor information (and potentially misinformation) about important mental health issues.


Does a public understanding of mental illness matter? Very much. A deeper understanding can fuel early detection, resources, prevention, rehabilitation, and cures. Adam Lanza is dead, and we may never know what pathologies were lurking in the patterns of his neural circuits. But he’s not the point anymore. It’s the next Adam Lanza, growing up now, lurking in the wings of the future.


I suggest we take this tragedy as a wake-up call about how we want to address mental illness in our society. Research and care programs for those with mental problems are continuously under-funded. And as we head for the fiscal cliff, scientists funded by the National Institutes of Health are bracing themselves for the plunge from an already tight budget.


* * * *


But here’s a question: how do we know whether the public has a misunderstanding of mental illness? As a clue to this question, let’s return to the two-sentence Fox News story. Beyond the description of Lanza’s presumptive mental health problems, something else caught my eye: the comments section. Here are a few examples:






jacsonhole

Is this supposed to be an excuse?? I do not buy it! Have you noticed how things, at all levels, have gone to hell in a hand basket since we removed God from schools and communities?? There is a lesson of life here.










coachdidi

It is time to go back to the old ways of raising kids. There are winners and losers, not everyone gets a trophy for showing up. Also parents need to be in charge again, not be their buddy. And school systems need to teach and SHUT-UP.










LadyMustang

I really don't care what was wrong with him, there is NO EXCUSE for this. I believe that if parents/schools/communities would never have started this whole "let's not grade them" "let's not let them lose" "everyone is equal" bull. Parents & schools actually discipline them and if parents actually would spend quality time with their kids, they wouldn't be running wild now a days. Come on parents, bring your kids up to be responsible members of society, not hand them $20 to get out of your hair.






These viewpoints represent a potentially disastrous misunderstanding about mental illness. At the time of this writing, it is far from clear what was wrong with Adam Lanza—but his behavior alone is sufficient evidence that something was abnormal about his brain. Millions of 20-year-olds on this planet play video games, have divorced parents, are eccentric, have access to guns, and so on—but Lanza tops the news because his actions are so exceptionally rare. Such abnormal decision-making unmasks abnormalities in brain function. To assume that prayer in schools and tough-love parenting is a meaningful solution to brain abnormalities is to miss the boat entirely. It represents unfamiliarity with the long history of brain science. I’ve written about this issue before in Incognito, so I’ll take an excerpt here:




The study of brains and behaviors finds itself in the middle of a conceptual shift. As recently as a century ago, the prevailing attitude was to get psychiatric patients to “toughen up,” either by deprivation, pleading, or torture. The same attitude applied to many disorders; for example, some hundreds of years ago, epileptics were often abhorred because their seizures were understood as demonic possessions—perhaps in direct retribution for earlier behavior. Not surprisingly, this proved an unsuccessful approach.




Indeed, the past century has witnessed a shift from blame to biology. But why? Continuing the excerpt:




Perhaps the largest driving force is the effectiveness of the pharmaceutical treatments. No amount of beating will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but can be controlled by risperidone. Mania responds not to talking or to ostracism, but to lithium.




I’m not a great fan of the purely pharmaceutical approach, but its successes underscore the idea that mental problems can be approached in the same clear-eyed manner with which we might approach diabetes, cancer, or an inflammation. More from Incognito:




The more we discover about the circuitry of the brain, the more the answers tip away from accusations of indulgence, lack of motivation, and poor discipline—and move toward the details of the biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are controlled by inaccessible subroutines that can be easily perturbed.




In this light, consider this series of provocative questions from my neuroscience colleague Robert Sapolsky:




Is a loved one, sunk in a depression so severe that she cannot function, a case of a disease whose biochemical basis is as “real” as is the biochemistry of, say, diabetes, or is she merely indulging herself? Is a child doing poorly at school because he is unmotivated and slow, or because there is a neurobiologically based learning disability? Is a friend, edging towards a serious problem with substance abuse, displaying a simple lack of discipline, or suffering from problems with the neurochemistry of reward?





To the newsreaders who feel that mental illness is best viewed as an excuse, let me suggest instead that we might more effectively recognize it as a national priority for social policy. If we care to prevent the next mass shooting, we should concentrate our efforts on getting meaningful diagnoses and resources to the next Adam Lanza. There is no advantage in imagining that all brains are the same on the inside, because they’re not. There is no point in concluding that your own child did not perpetrate a school shooting solely because of your terrific parenting. This is not meant to diminish the importance of excellence in parenting—but mental illness is real, and online tips about parental discipline and school prayer will remain insufficient solutions.



----------------------------------------------------------------------------


David Eagleman directs the



posted by David Eagleman
TNT's new drama: Perception (September 22, 2012)

I am the scientific advisor for the TNT television drama, Perception, starring Eric McCormack and Rachael Leigh Cook. Learn more about the show.



For the past year I have been the scientific advisor for the TNT television drama, Perception, starring Eric McCormack and Rachael Leigh Cook.


Here's the description from TNT:



In Perception, Eric McCormack plays Dr. Daniel Pierce, an eccentric neuroscience professor with paranoid schizophrenia who is recruited by the FBI to help solve complex cases. Pierce has an intimate knowledge of human behavior and a masterful understanding of the way the mind works. He also has an uncanny ability to see patterns and look past people's conscious emotions to see what lies beneath.


Pierce's mind may be brilliant, but it's also damaged. He struggles with hallucinations and paranoid delusions brought on by his schizophrenia. Oddly, Daniel considers some of his hallucinations to be a gift. They occasionally allow him to make connections that his conscious mind can’t yet process. At other times, the hallucinations become Daniel's greatest curse, leading him to behave in irrational, potentially dangerous ways.



My job is to brainstorm about possible scenarios with the talented stable of writers, and then to read the scripts in detail for accuracy.


For each episode, I also spend a minute or two speaking to the underlying neuroscientific issues:



For more installments of Inside the Mind of Perception, see the .



posted by David Eagleman
Schwarzenegger on Incognito (September 1, 2012)

What a wonderful shot of caffeine it was to find my childhood hero lauding my book in the New York Times.



What a wonderful shot of caffeine it was to find my childhood hero lauding my book in the New York Times:



Schwarzenegger: By the Book
Published: December 27, 2012



Illustration by Jillian Tamaki

Schwarzenegger






What book is on your night stand now?


Right now I’m reading a book called Incognito by David Eagleman, about the human brain. I’ve always been interested in psychology, so learning about the things that influence our thinking is really important for me. In bodybuilding, I was known for “psyching” out my opponents with mind tricks. I wish I had this book then because the stuff I was doing was Mickey Mouse compared with what’s in this book.







posted by David Eagleman
Remembering a trail blazer - Francis Crick (July 23, 2012)

Francis Crick, one of the premier biologists of the 20th century, passed away July 28, 2004, in San Diego. On his 88th birthday last June, I brought him chocolates and spent the day with him in his home in La Jolla.



To commemorate the death of Francis Crick on this day in 2004, here’s a reprint of the article I published in the Houston Chronicle a few days after his passing.



Francis Crick, one of the premier biologists of the 20th century, passed away July 28, 2004, in San Diego. On his 88th birthday last June, I brought him chocolates and spent the day with him in his home in La Jolla.


As with all our meetings, he jumped straight into a discussion of theories about brain function. He was increasingly frail, his hair had thinned from chemotherapy, and he wobbled unsteadily on his cane. But intellectually, he was still the dominating leviathan of biology.



From the obituaries, most people know that Francis Crick, with his colleague James Watson, uncovered the structure of what sits in the middle of every cell of every animal on the planet: DNA. The double helix they deduced led quickly to an unraveling of all the secrets of the genetic code.


It had long been known that you inherit traits from your parents—but no one had any good idea how your father’s nose shape and your mother’s eye color were encoded in invisibly small molecules. By the 1960s, thanks largely to the work of Francis Crick and his circle of friends, the molecular basis of inheritance was worked out.


For the DNA work, he and Watson won the Nobel Prize in 1962. As the biologist Jacques Monod said of him, “one man dominates intellectually the whole field [of molecular biology], because he knows the most and understands the most.”


The popular media offered depictions they assumed the public would appreciate, declaring, for example, that the work of Dr. Crick laid the groundwork for genetically engineered tomatoes. While such tomatoes can plausibly trace distant roots to Crick’s discoveries, the journalists were digging in the wrong place: Crick cared about the deeper questions, the questions about life itself. In molecular biology, he blazed trails and laid the groundwork for everything that would happen over the next half-century. Having pretty well answered what he set out to answer, he turned his voracious intellectual appetite to his second scientific goal: an understanding of the brain. In 1977, he moved to the Salk Institute in La Jolla, California.


James Watson famously commenced his book The Double Helix with the line, “I have never seen Francis Crick in a modest mood.” I have yet to find a more incorrect opening line. Francis Crick was always in a modest mood. He was one of the few people always willing to criticize his own ideas. He never filtered beliefs through his own ego and never hesitated to applaud other people’s theories. He laughed freely and often. When asked about Watson’s meaning in the opening line, Crick smiled and said it must have reflected that he (Crick) always wanted to get to the bottom of things.

Specifically, he wanted to know how the brain produces consciousness. In the field of neuroscience, consciousness was forbidden territory. It took someone with the gravitas of Francis Crick to establish consciousness as a real scientific problem. It feels like something to have pain. It feels like something to see the color indigo. Somehow, these conscious perceptions are underpinned by neural activity—but how, where, what? By asking penetrating questions, rallying others to perform experiments, and inspiring thousands, he opened up new directions in brain research. He even published on dream sleep and the origin of life on Earth. Nothing was outside his intellectual ken. He once remarked to me that the dangerous man is the one with only one theory, because he’ll fight to the death for it.



I cannot escape the feeling that those who discover life’s secrets should be immune to life’s fatality. But in the end, Francis Crick was made only of the molecules he illuminated. He was the victim of uncontrolled cell division. He was consumed by the microscopic scales of which he was composed. The molecules he discovered were the sewn-in seeds of his own destruction.


This description would appeal to Francis. His crusade was to teach that we are a vastly sophisticated network of trillions of cells; a tour de force of biological sophistication with no other magic in the machine. Some people worry that scientific understanding somehow diminishes the beauty of nature. To this Francis once answered, “It seems to me that what you lose in mystery you gain in awe.” What we have lost in Francis we gain in inspiration.


I first met Francis when I moved to the Salk Institute in 1999. He was quite a bit taller than I had expected. Beneath a head of silver hair he had sparkling eyes and an impish smile and the most impressively winged eyebrows I have ever seen. The first time I saw him in the auditorium during a talk, he sat alone in the front row. As the talk went on, his head began to sink and his eyes began to close. I felt the sad intuition that senescence was taking its toll on a great mind. But then the speaker made some seemingly innocuous interpretation of his results, and a small smile grew on the corner of Francis’s lip. He leisurely raised his hand, and in a rapid-fire Cambridge-accented karate-chop analysis, the speaker was re-educated. I came to recognize this as a regular occurrence. Francis was never mean-spirited, just incisive. He detected microscopic flaws in logic. In a room full of smart scientists, Francis continually re-earned his position as the heavyweight champ.



Francis Crick and David Eagleman, California 2004


One of the finest things in my life has been his friendship and tutelage. Francis Crick influenced me in the way that only a young person near the beginning of his career can be influenced by someone near the end of theirs. I was born 18 years (to the day) after Watson and Crick published the double-helix structure on the pages of Nature, a journal where I would come to publish my own work 51 years later. Arriving on the planet so long afterward, I was inestimably fortunate to have shared orbits with him for the past six years. His influence on me was deep, and his loss marks the passing of an era for many people in the field.


He was an inspiration to all who knew him, a brainstorming intellectual powerhouse with a mischievous smile. He listened carefully, engaged ideas, sought robust debates, and hunted for the tough problems. At the age of 88, he continued to work every day on important unsolved problems. He continued to publish major papers and read all the journals in the field at an age when most people are playing bridge and intellectually melting away. He was working on a manuscript the day he died.


As a scientist, thinker, author, mentor, friend, and colleague, one would be hard pressed to find someone who could outshine the twinkly-eyed Francis Crick. It will be some time before the world sees another like him.



posted by David Eagleman
Brain Time (July 3, 2012)

The days of thinking of time as a river—evenly flowing, always advancing—are over. Time perception, just like vision, is a construction of the brain.



At some point, the Mongol military leader Kublai Khan (1215–94) realized that his empire had grown so vast that he would never be able to see what it contained. To remedy this, he commissioned emissaries to travel to the empire's distant reaches and convey back news of what he owned. Since his messengers returned with information from different distances and traveled at different rates (depending on weather, conflicts, and their fitness), the messages arrived at different times. Although no historians have addressed this issue, I imagine that the Great Khan was constantly forced to solve the same problem a human brain has to solve: what events in the empire occurred in which order?


Your brain, after all, is encased in darkness and silence in the vault of the skull. Its only contact with the outside world is via the electrical signals exiting and entering along the super-highways of nerve bundles. Because different types of sensory information (hearing, seeing, touch, and so on) are processed at different speeds by different neural architectures, your brain faces an enormous challenge: what is the best story that can be constructed about the outside world?


The days of thinking of time as a river—evenly flowing, always advancing—are over. Time perception, just like vision, is a construction of the brain and is shockingly easy to manipulate experimentally. We all know about optical illusions, in which things appear different from how they really are; less well known is the world of temporal illusions. When you begin to look for temporal illusions, they appear everywhere. In the movie theater, you perceive a series of static images as a smoothly flowing scene. Or perhaps you've noticed when glancing at a clock that the second hand sometimes appears to take longer than normal to move to its next position—as though the clock were momentarily frozen.


More subtle illusions can be teased out in the laboratory. Perceived durations are distorted during rapid eye movements, after watching a flickering light, or simply when an "oddball" is seen in a stream of repeated images. If we inject a slight delay between your motor acts and their sensory feedback, we can later make the temporal order of your actions and sensations appear to reverse. Simultaneity judgments can be shifted by repeated exposure to nonsimultaneous stimuli. And in the laboratory of the natural world, distortions in timing are induced by narcotics such as cocaine and marijuana or by such disorders as Parkinson's disease, Alzheimer's disease, and schizophrenia.


Try this exercise: Put this book down and go look in a mirror. Now move your eyes back and forth, so that you're looking at your left eye, then at your right eye, then at your left eye again. When your eyes shift from one position to the other, they take time to move and land on the other location. But here's the kicker: you never see your eyes move. What is happening to the time gaps during which your eyes are moving? Why do you feel as though there is no break in time while you're changing your eye position? (Remember that it's easy to detect someone else's eyes moving, so the answer cannot be that eye movements are too fast to see.)


All these illusions and distortions are consequences of the way your brain builds a representation of time. When we examine the problem closely, we find that "time" is not the unitary phenomenon we may have supposed it to be. This can be illustrated with some simple experiments: for example, when a stream of images is shown over and over in succession, an oddball image thrown into the series appears to last for a longer period, although presented for the same physical duration. In the neuroscientific literature, this effect was originally termed a subjective "expansion of time," but that description begs an important question of time representation: when durations dilate or contract, does time in general slow down or speed up during that moment? If a friend, say, spoke to you during the oddball presentation, would her voice seem lower in pitch, like a slowed-down record?


If our perception works like a movie camera, then when one aspect of a scene slows down, everything should slow down. In the movies, if a police car launching off a ramp is filmed in slow motion, not only will it stay in the air longer but its siren will blare at a lower pitch and its lights will flash at a lower frequency. An alternative hypothesis suggests that different temporal judgments are generated by different neural mechanisms—and while they often agree, they are not required to. The police car may seem suspended longer, while the frequencies of its siren and its flashing lights remain unchanged.


Available data support the second hypothesis. Duration distortions are not the same as a unified time slowing down, as it does in movies. Like vision, time perception is underpinned by a collaboration of separate neural mechanisms that usually work in concert but can be teased apart under the right circumstances.


This is what we find in the lab, but might something different happen during real-life events, as in the common anecdotal report that time "slows down" during brief, dangerous events such as car accidents and robberies? My graduate student Chess Stetson and I decided to turn this claim into a real scientific question, reasoning that if time as a single unified entity slows down during fear, then this slow motion should confer a higher temporal resolution—just as watching a hummingbird in slow-motion video allows finer temporal discrimination upon replay at normal speed, because more snapshots are taken of the rapidly beating wings.


We designed an experiment in which participants could see a particular image only if they were experiencing such enhanced temporal resolution. We leveraged the fact that the visual brain integrates stimuli over a small window of time: if two or more images arrive within a single window of integration (usually under one hundred milliseconds), they are perceived as a single image. For example, the toy known as a thaumatrope may have a picture of a bird on one side of its disc and a picture of a tree branch on the other; when the toy is wound up and spins so that both sides of the disc are seen in rapid alternation, the bird appears to be resting on the branch. We decided to use stimuli that rapidly alternated between images and their negatives. Participants had no trouble identifying the image when the rate of alternation was slow, but at faster rates the images perceptually overlapped, just like the bird and the branch, with the result that they fused into an unidentifiable background.


To accomplish this, we engineered a device (the perceptual chronometer) that alternated randomized digital numbers and their negative images at adjustable rates. Using this, we measured participants' threshold frequencies under normal, relaxed circumstances. Next, we harnessed participants to a platform that was then winched fifteen stories above the ground. The perceptual chronometer, strapped to the participant's forearm like a wristwatch, displayed random numbers and their negative images alternating just a bit faster than the participant's determined threshold. Participants were released and experienced free fall for three seconds before landing (safely!) in a net. During the fall, they attempted to read the digits. If higher temporal resolution were experienced during the free fall, the alternation rate should appear slowed, allowing for the accurate reporting of numbers that would otherwise be unreadable.
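As a rough illustration of why the digits become unreadable above threshold, here is a minimal sketch (not the experimental code; the 100 ms integration window and the toy image are assumptions for illustration): when a frame and its photographic negative both fall inside one integration window, averaging them cancels all contrast.

```python
# Minimal sketch (illustrative only): why alternating a digit with its
# photographic negative becomes unreadable once the alternation period falls
# inside the visual integration window (assumed here to be ~100 ms).
import numpy as np

INTEGRATION_WINDOW_MS = 100          # assumed window of visual integration

def percept(image, alternation_period_ms):
    """Return the effective percept for an image alternating with its negative."""
    negative = 1.0 - image           # photographic negative (pixel values in [0, 1])
    if alternation_period_ms < INTEGRATION_WINDOW_MS:
        # Both frames land inside one integration window: the visual system
        # averages them, and image + negative cancel to a uniform gray field.
        return (image + negative) / 2.0
    # Slow alternation: each frame is seen on its own and stays readable.
    return image

digit = np.zeros((5, 5))
digit[1:4, 2] = 1.0                  # a crude "1" on a dark background

slow = percept(digit, alternation_period_ms=250)
fast = percept(digit, alternation_period_ms=40)
print("slow alternation, contrast:", slow.max() - slow.min())   # > 0: readable
print("fast alternation, contrast:", fast.max() - fast.min())   # 0.0: uniform gray
```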


The result? Participants weren't able to read the numbers in free fall any better than in the laboratory. This was not because they closed their eyes or didn't pay attention (we monitored for that) but because they could not, after all, see time in slow motion (or in "bullet time," like Neo in The Matrix). Nonetheless, their perception of the elapsed duration itself was greatly affected. We asked them to retrospectively reproduce the duration of their fall using a stopwatch. ("Re-create your free fall in your mind. Press the stopwatch when you are released, then press it again when you feel yourself hit the net.") Here, consistent with the anecdotal reports, their duration estimates of their own fall were a third greater, on average, than their recreations of the fall of others.


How do we make sense of the fact that participants in free fall reported a duration expansion yet gained no increased discrimination capacities in the time domain during the fall? The answer is that time and memory are tightly linked. In a critical situation, a walnut-size area of the brain called the amygdala kicks into high gear, commandeering the resources of the rest of the brain and forcing everything to attend to the situation at hand. When the amygdala gets involved, memories are laid down by a secondary memory system, providing the later flashbulb memories of post-traumatic stress disorder. So in a dire situation, your brain may lay down memories in a way that makes them "stick" better. Upon replay, the higher density of data would make the event appear to last longer. This may be why time seems to speed up as you age: you develop more compressed representations of events, and the memories to be read out are correspondingly impoverished. When you are a child, and everything is novel, the richness of the memory gives the impression of increased time passage—for example, when looking back at the end of a childhood summer.


To further appreciate how the brain builds its perception of time, we have to understand where signals are in the brain, and when. It has long been recognized that the nervous system faces the challenge of feature-binding—that is, keeping an object's features perceptually united, so that, say, the redness and the squareness do not bleed off a moving red square. That feature-binding is usually performed correctly would not come as a surprise were it not for our modern picture of the mammalian brain, in which different kinds of information are processed in different neural streams. Binding requires coordination—not only among different senses (vision, hearing, touch, and so on) but also among different features within a sensory modality (within vision, for example: color, motion, edges, angles, and so on).


But there is a deeper challenge the brain must tackle, without which feature-binding would rarely be possible. This is the problem of temporal binding: the assignment of the correct timing of events in the world. The challenge is that different stimulus features move through different processing streams and are processed at different speeds. The brain must account for speed disparities between and within its various sensory channels if it is to determine the timing relationships of features in the world.


What is mysterious about the wide temporal spread of neural signals is the fact that humans have quite good resolution when making temporal judgments. Two visual stimuli can be accurately deemed simultaneous down to five milliseconds, and their order can be assessed down to twenty-millisecond resolutions. How is the resolution so precise, given that the signals are so smeared out in space and time?


To answer this question, we have to look at the tasks and resources of the visual system. As one of its tasks, the visual system—couched in blackness, at the back of the skull—has to get the timing of outside events correct. But it has to deal with the peculiarities of the equipment that supplies it: the eyes and parts of the thalamus. These structures feeding into the visual cortex have their own evolutionary histories and idiosyncratic circuitry. As a consequence, signals become spread out in time from the first stages of the visual system (for example, based on how bright or dim the object is).


So if the visual brain wants to get events correct timewise, it may have only one choice: wait for the slowest information to arrive. To accomplish this, it must wait about a tenth of a second. In the early days of television broadcasting, engineers worried about the problem of keeping audio and video signals synchronized. Then they accidentally discovered that they had around a hundred milliseconds of slop: As long as the signals arrived within this window, viewers' brains would automatically resynchronize the signals; outside that tenth-of-a-second window, it suddenly looked like a badly dubbed movie.


This brief waiting period allows the visual system to discount the various delays imposed by the early stages; however, it has the disadvantage of pushing perception into the past. There is a distinct survival advantage to operating as close to the present as possible; an animal does not want to live too far in the past. Therefore, the tenth-of-a-second window may be the smallest delay that allows higher areas of the brain to account for the delays created in the first stages of the system while still operating near the border of the present. This window of delay means that awareness is postdictive, incorporating data from a window of time after an event and delivering a retrospective interpretation of what happened.
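A toy sketch of this buffering strategy may help. The processing lags below are illustrative, not measured values: arrivals are collected for roughly a tenth of a second, each channel's typical lag is subtracted, and only then is an ordering committed to.

```python
# Toy sketch of postdictive binding (illustrative delays, not measured values):
# buffer arrivals for ~100 ms, subtract each channel's typical processing lag,
# then commit to a retrospective ordering of what happened in the world.
WINDOW_MS = 100
TYPICAL_LAG_MS = {"vision_bright": 40, "vision_dim": 90, "audition": 20}

def bind_events(arrivals):
    """arrivals: list of (channel, arrival_time_ms). Returns (report time, inferred world order)."""
    inferred = [(arrival - TYPICAL_LAG_MS[channel], channel) for channel, arrival in arrivals]
    # Awareness is delivered only after the slowest plausible signal could have
    # arrived, i.e. roughly WINDOW_MS after the earliest arrival.
    report_time = min(t for _, t in arrivals) + WINDOW_MS
    return report_time, sorted(inferred)

# A dim and a bright part of the same object, plus a sound, all simultaneous in
# the world but smeared out by the time they reach cortex:
report_at, order = bind_events([("vision_dim", 95), ("vision_bright", 45), ("audition", 25)])
print(f"percept delivered at ~{report_at} ms; inferred event times: {order}")
```

Here all three inferred event times come out equal, so the smeared arrivals are reconstructed as the single simultaneous event they actually were.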


Among other things, this strategy of waiting for the slowest information has the great advantage of allowing object recognition to be independent of lighting conditions. Imagine a striped tiger coming toward you under the forest canopy, passing through successive patches of sunlight. Imagine how difficult recognition would be if the bright and dim parts of the tiger caused incoming signals to be perceived at different times. You would perceive the tiger breaking into different space-time fragments just before you became aware that you were the tiger's lunch. Somehow the visual system has evolved to reconcile different speeds of incoming information; after all, it is advantageous to recognize tigers regardless of the lighting.


This hypothesis—that the system waits to collect information over the window of time during which it streams in—applies not only to vision but more generally to all the other senses. Whereas we have measured a tenth-of-a-second window of postdiction in vision, the breadth of this window may be different for hearing or touch. If I touch your toe and your nose at the same time, you will feel those touches as simultaneous. This is surprising, because the signal from your nose reaches your brain well before the signal from your toe. Why didn't you feel the nose-touch when it first arrived? Did your brain wait to see what else might be coming up in the pipeline of the spinal cord until it was sure it had waited long enough for the slower signal from the toe? Strange as that sounds, it may be correct.


It may be that a unified polysensory perception of the world has to wait for the slowest overall information. Given conduction times along limbs, this leads to the bizarre but testable suggestion that tall people may live further in the past than short people. The consequence of waiting for temporally spread signals is that perception becomes something like the airing of a live television show. Such shows are not truly live but are delayed by a small window of time, in case editing becomes necessary.
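To get a feel for the numbers, here is a back-of-envelope calculation with assumed values (a tactile conduction velocity of about 50 m/s and rough path lengths); none of these figures come from the essay, but they show why the wait must span tens of milliseconds and why height would matter.

```python
# Back-of-envelope conduction delays (assumed ballpark values, for illustration only).
CONDUCTION_SPEED_M_PER_S = 50.0      # assumed tactile conduction velocity

def delay_ms(path_length_m):
    return 1000.0 * path_length_m / CONDUCTION_SPEED_M_PER_S

for label, nose_path_m, toe_path_m in [("short person", 0.15, 1.3),
                                       ("tall person", 0.15, 1.9)]:
    nose, toe = delay_ms(nose_path_m), delay_ms(toe_path_m)
    # The toe signal lags the nose signal; a unified "simultaneous" percept
    # has to wait at least this long for the slowest signal.
    print(f"{label}: nose {nose:.0f} ms, toe {toe:.0f} ms, wait >= {toe - nose:.0f} ms")
```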


Waiting to collect all the information solves part of the temporal- binding problem, but not all of it. A second problem is this: if the brain collects information from different senses in different areas and at different speeds, how does it determine how the signals are supposed to line up with one another? To illustrate the problem, snap your fingers in front of your face. The sight of your fingers and the sound of the snap appear simultaneous. But it turns out that impression is laboriously constructed by your brain. After all, your hearing and your vision process information at different speeds. A gun is used to start sprinters, instead of a flash, because you can react faster to a bang than to a flash. This behavioral fact has been known since the 1880s and in recent decades has been corroborated by physiology: the cells in your auditory cortex can change their firing rate more quickly in response to a bang than your visual cortex cells can in response to a flash.


The story seems as though it should be wrapped up here. Yet when we go outside the realm of motor reactions and into the realm of perception (what you report you saw and heard), the plot thickens. When it comes to awareness, your brain goes through a good deal of trouble to perceptually synchronize incoming signals that were synchronized in the outside world. So a firing gun will seem to you to have banged and flashed at the same time. (At least when the gun is within thirty meters; past that, the different speeds of light and sound cause the signals to arrive too far apart to be synchronized.)
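A quick calculation, assuming the textbook speed of sound of 343 m/s and treating light as effectively instantaneous, shows why roughly thirty meters is where the physical lag of the bang starts to exceed a tenth-of-a-second resynchronization window.

```python
# Why ~30 m is roughly the limit for perceiving a flash and a bang as one event:
# past that distance the acoustic lag alone exceeds the ~100 ms window within
# which the brain resynchronizes signals. Speed of sound assumed 343 m/s; the
# light's travel time is negligible and ignored.
SPEED_OF_SOUND_M_PER_S = 343.0
SYNC_WINDOW_MS = 100.0

for distance_m in (10, 30, 50, 100):
    sound_lag_ms = 1000.0 * distance_m / SPEED_OF_SOUND_M_PER_S
    fused = sound_lag_ms <= SYNC_WINDOW_MS
    print(f"{distance_m:>3} m: sound lags by {sound_lag_ms:5.1f} ms -> "
          f"{'perceived as simultaneous' if fused else 'audibly delayed'}")
```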


But given that the brain received the signals at different times, how can it know what was supposed to be simultaneous in the outside world? How does it know that a bang didn't really happen before a flash? It has been shown that the brain constantly recalibrates its expectations about arrival times. And it does so by starting with a single, simple assumption: if it sends out a motor act (such as a clap of the hands), all the feedback should be assumed to be simultaneous and any delays should be adjusted until simultaneity is perceived. In other words, the best way to predict the expected relative timing of incoming signals is to interact with the world: each time you kick or touch or knock on something, your brain makes the assumption that the sound, sight, and touch are simultaneous.


While this is a normally adaptive mechanism, we have discovered a strange consequence of it: Imagine that every time you press a key, you cause a brief flash of light. Now imagine we sneakily inject a tiny delay (say, two hundred milliseconds) between your key-press and the subsequent flash. You may not even be aware of the small, extra delay. However, if we suddenly remove the delay, you will now believe that the flash occurred before your key-press, an illusory reversal of action and sensation. Your brain tells you this, of course, because it has adjusted to the timing of the delay.
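Here is a minimal sketch of this recalibration logic, using an invented learning rate and the 200 ms injected lag from the example above: the expected delay drifts toward whatever lag follows the key-press, so once the injected lag is removed, a zero-lag flash is judged to have come before the press.

```python
# Minimal sketch of sensorimotor recalibration (illustrative parameters): the
# brain assumes feedback from its own actions is simultaneous, so it nudges its
# expected delay toward whatever lag it actually observes. After adapting to an
# injected 200 ms lag, a zero-lag flash is judged to precede the key-press.
LEARNING_RATE = 0.1
expected_lag_ms = 0.0

def perceived_flash_time(actual_lag_ms):
    """Judged flash time relative to the key-press, after subtracting expectations."""
    return actual_lag_ms - expected_lag_ms

# Adaptation phase: many key-presses, each followed by a flash 200 ms later.
for _ in range(100):
    expected_lag_ms += LEARNING_RATE * (200.0 - expected_lag_ms)

# Test phase: the injected delay is suddenly removed.
judged = perceived_flash_time(actual_lag_ms=0.0)
print(f"expected lag after adaptation: {expected_lag_ms:.0f} ms")
print(f"judged flash time: {judged:.0f} ms "   # negative value: flash seems to precede the press
      f"({'flash seems to come BEFORE the key-press' if judged < 0 else 'normal order'})")
```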


Note that the recalibration of subjective timing is not a party trick of the brain; it is critical to solving the problem of causality. At bottom, causality requires a temporal order judgment: did my motor act come before or after that sensory signal? The only way this problem can be accurately solved in a multisensory brain is by keeping the expected time of signals well calibrated, so that "before" and "after" can be accurately determined even in the face of different sensory pathways of different speeds.


It must be emphasized that everything I've been discussing is in regard to conscious awareness. It seems clear from preconscious reactions that the motor system does not wait for all the information to arrive before making its decisions but instead acts as quickly as possible, before the participation of awareness, by way of fast subcortical routes. This raises a question: what is the use of perception, especially since it lags behind reality, is retrospectively attributed, and is generally outstripped by automatic (unconscious) systems? The most likely answer is that perceptions are representations of information that cognitive systems can work with later. Thus it is important for the brain to take sufficient time to settle on its best interpretation of what just happened rather than stick with its initial, rapid interpretation. Its carefully refined picture of what just happened is all it will have to work with later, so it had better invest the time.


Neurologists can diagnose the variety of ways in which brains can be damaged, shattering the fragile mirror of perception into unexpected fragments. But one question has gone mostly unasked in modern neuroscience: what do disorders of time look like? We can roughly imagine what it is like to lose color vision, or hearing, or the ability to name things. But what would it feel like to sustain damage to your time-construction systems?


Recently, a few neuroscientists have begun to consider certain disorders—for example, in language production or reading—as potential problems of timing rather than disorders of language as such. For example, stroke patients with language disorders are worse at distinguishing different durations, and reading difficulties in dyslexia may be problems with getting the timing right between the auditory and visual representations.


We have recently discovered that a deficit in temporal order judgments may underlie some of the hallmark symptoms of schizophrenia, such as misattributions of credit ("My hand moved, but I didn't move it") and auditory hallucinations, which may be an order reversal of the generation and hearing of normal internal monolog.


As the study of time in the brain moves forward, it will likely uncover many contact points with clinical neurology. At present, most imaginable disorders of time would be lumped into a classification of dementia or disorientation, catch-all diagnoses that miss the important clinical details we hope to discern in coming years.


Finally, the more distant future of time research may change our views of other fields, such as physics. Most of our current theoretical frameworks include the variable t in a Newtonian, river-flowing sense. But as we begin to understand time as a construction of the brain, as subject to illusion as the sense of color is, we may eventually be able to remove our perceptual biases from the equation. Our physical theories are mostly built on top of our filters for perceiving the world, and time may be the most stubborn filter of all to budge out of the way.



[This essay was originally published as Eagleman, DM (2009), in What's Next? Dispatches on the Future of Science, ed. M. Brockman. New York: Vintage.]



1. V. Pariyadath and D. M. Eagleman, "The Effect of Predictability on Subjective Duration," PLoS ONE (2007).


2. A critical point is that the speed at which one can discriminate alternating patterns is not limited by the eyes themselves, since retinal ganglion cells have extremely high temporal resolution. For more details on this study, see C. Stetson et al., "Does Time Really Slow Down During a Frightening Event?" PLoS ONE (2007).


3. We introduced the term postdiction in 2000 to describe the brain's act of collecting information well after an event and then settling on a perception (D. M. Eagleman and T. J. Sejnowski, "Motion Integration and Postdiction in Visual Awareness," Science 287 (2000): 2036–8).


4. R. Efron, "Temporal Perception, Aphasia, and Deja Vu," Brain 86 (1963): 403–24; M. M. Merzenich et al., "Temporal Processing Deficits of Language-Learning Impaired Children Ameliorated by Training," Science 271, no. 5245 (1996): 77–81.



posted by David Eagleman
Silicon Immortality: Downloading Consciousness into Computers (April 1, 2012)

Well before we understand how brains work, we may find ourselves able to digitally copy the brain's structure and able to download the conscious mind into a computer. What are the possibilities and challenges?



While medicine will advance in the next half century, we are not on a crash-course for achieving immortality by curing all disease. Bodies simply wear down with use. We are on a crash-course, however, with technologies that let us store unthinkable amounts of data and run gargantuan simulations. Therefore, well before we understand how brains work, we will find ourselves able to digitally copy the brain's structure and able to download the conscious mind into a computer.


Silicon Immortality


If the computational hypothesis of brain function is correct, it suggests that an exact replica of your brain will hold your memories, will act and think and feel the way you do, and will experience your consciousness—irrespective of whether it's built out of biological cells, Tinkertoys, or zeros and ones. The important part about brains, the theory goes, is not the structure itself but the algorithms that ride on top of the structure. So if the scaffolding that supports the algorithms is replicated—even in a different medium—then the resultant mind should be identical. If this proves correct, it is almost certain we will soon have technologies that allow us to copy and download our brains and live forever in silica. We will not have to die anymore. We will instead live in virtual worlds like the Matrix. I assume there will be markets for purchasing different kinds of afterlives, and sharing them with different people—this is the future of social networking. And once you are downloaded, you may even be able to watch the death of your outside, real-world body, in the manner that we would view an interesting movie.


Of course, this hypothesized future embeds many assumptions, the speciousness of any one of which could spill the house of cards. The main problem is that we don't know exactly which variables are critical to capture in our hypothetical brain scan. Presumably the important data will include the detailed connectivity of the hundreds of billions of neurons. But knowing the point-to-point circuit diagram of the brain may not be sufficient to specify its function. The exact three-dimensional arrangement of the neurons and glia is likely to matter as well (for example, because of three-dimensional diffusion of extracellular signals). We may further need to probe and record the strength of each of the trillions of synaptic connections. In a still more challenging scenario, the states of individual proteins (phosphorylation states, exact spatial distribution, articulation with neighboring proteins, and so on) will need to be scanned and stored. It should also be noted that a simulation of the central nervous system by itself may not be sufficient for a good simulation of experience: other aspects of the body may require inclusion, such as the endocrine system, which sends and receives signals from the brain. These considerations potentially lead to billions of trillions of variables that need to be stored and emulated.
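For a sense of scale, here is a rough tally of how the variable count climbs toward "billions of trillions"; every number below is an assumed order of magnitude chosen for illustration, not a figure taken from the essay.

```python
# Back-of-envelope tally of the variables a brain scan might need to capture.
# All of these counts are assumed orders of magnitude, for illustration only.
NEURONS              = 1e11   # assumed ~10^11 neurons
SYNAPSES_PER_NEURON  = 1e4    # assumed ~10^4 synapses per neuron
PROTEINS_PER_SYNAPSE = 1e4    # assumed protein molecules per synapse whose states matter
STATES_PER_PROTEIN   = 1e2    # phosphorylation, position, binding partners, ... (assumed)

synapses = NEURONS * SYNAPSES_PER_NEURON          # ~10^15 connections
connectivity_scan = synapses                      # one strength value per synapse
protein_level_scan = synapses * PROTEINS_PER_SYNAPSE * STATES_PER_PROTEIN

print(f"synaptic connections:        ~{synapses:.0e}")
print(f"connectivity-only variables: ~{connectivity_scan:.0e}")
print(f"protein-level variables:     ~{protein_level_scan:.0e}")  # ~1e+21: billions of trillions
```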


The other major technical hurdle is that the simulated brain must be able to modify itself. We need not only the pieces and parts, but also the physics of their ongoing interactions—for example, the activity of transcription factors that travel to the nucleus and cause gene expression, the dynamic changes in location and strength of the synapses, and so on. Unless your simulated experiences change the structure of your simulated brain, you will be unable to form new memories and will have no sense of the passage of time. Under those circumstances, is there any point in immortality?



The good news is that computing power is blossoming sufficiently quickly that we are likely to make it within a half century. And note that a simulation does not need to be run in real time in order for the simulated brain to believe it is operating in real time. There's no doubt that whole brain emulation is an exceptionally challenging problem. As of this moment, we have no neuroscience technologies geared toward ultra-high-resolution scanning of the sort required—and even if we did, it would take several of the world's most powerful computers to represent a few cubic millimeters of brain tissue in real time. It's a large problem. But assuming we haven't missed anything important in our theoretical frameworks, then we have the problem cornered, and I expect to see the downloading of consciousness come to fruition in my lifetime.



(I originally published this essay in John Brockman's This Will Change Everything, a collection of answers to the Edge.org annual question.)



posted by David Eagleman
Time to End the War on Drugs? (March 8, 2012)

To liberalise or prohibit? I recently joined Eliot Spitzer, Julian Assange, Vicente Fox, Russell Brand, Richard Branson and several others for an online debate.



To liberalise or prohibit? I recently joined Eliot Spitzer, Julian Assange, Vicente Fox, Russell Brand, Richard Branson and several others for an online live debate hosted by Google, YouTube, and Intelligence Squared.


For those who missed the debate, it's now online (my contribution occurs at 1:17):



For the short version, here's my position on the War on Drugs: Attacking the drug supply will never work. In the United States we spend over 20 billion dollars a year on the War on Drugs, and it's wasted money. This is because the drug supply is like a water balloon: if you push it down in one location, it comes up somewhere else. The better strategy is not to address supply, but demand. Drug demand is rooted in the brain of the addict. We know quite a bit about the circuitry and pharmacology of drug addiction, and there are many fruitful new approaches to addressing the ills of drug addiction in a cooperative, evidence-based, neurally-compatible manner. Dealing with drug addiction through rehabilitation is a more humane and cost effective idea than mass incarceration of the addicted.


For the fleshed-out version of this argument, please see my paper: .


Also, here's an interesting summary article of the problems with the current War on Drugs:.


As people sometimes say, just because using drugs is a stupid idea, that doesn't automatically make the War on Drugs a smart idea.



posted by David Eagleman
The Mystery of Expertise (January 23, 2012)

To the extent that consciousness is useful, it is useful in small quantities, and for very particular kinds of tasks. It's easy to understand why you would not want to be consciously aware of the intricacies of your muscle movement, but this can be less intuitive when applied to your perceptions, thoughts, and beliefs, which are also final products of the activity of billions of nerve cells.



Unconscious perception


The mystery of expertise


There is a chasm between what the brain knows and what our minds can fathom


David Eagleman, PhD


[This article originally appeared in The Week magazine, Dec. 26, 2011]




CONSIDER THE SIMPLE act of changing lanes while driving a car. Try this: Close your eyes, grip an imaginary steering wheel, and go through the motions of a lane change. Imagine that you are driving in the left lane and you would like to move over to the right lane. Before reading on, actually try it.


It's a fairly easy task, right? I'm guessing that you held the steering wheel straight, then banked it over to the right for a moment, and then straightened it out again. No problem.


Like almost everyone else, you got it completely wrong.


The motion of turning the wheel rightward for a bit, then straightening it out again would steer you off the road: You just piloted a course from the left lane onto the sidewalk. The correct motion for changing lanes is banking the wheel to the right, then back through the center, and continuing to turn the wheel just as far to the left side, and only then straightening out. Don't believe it? Verify it for yourself when you're next in the car. It's such a simple task that you have no problem accomplishing it in your daily driving. But when forced to access it consciously, you're flummoxed.
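A small kinematic sketch can make the point concrete. This uses a simple bicycle-model approximation with invented speed, wheelbase, and steering angles (none of this is from the article): steering right and then merely straightening leaves the car still angled toward the roadside and drifting off the road, while the symmetric right-center-left profile ends the maneuver parallel to the lane again.

```python
# Tiny kinematic sketch (simple bicycle-model approximation; all parameters
# invented for illustration) of the two steering profiles described above.
# Negative angles and negative lateral offsets mean "toward the right."
import math

V, WHEELBASE, DT = 15.0, 2.7, 0.01   # speed (m/s), wheelbase (m), time step (s)

def drive(steering_profile):
    """steering_profile: list of (duration_s, wheel_angle_rad). Returns final heading and lateral offset."""
    heading, lateral = 0.0, 0.0
    for duration, angle in steering_profile:
        for _ in range(round(duration / DT)):
            heading += (V / WHEELBASE) * math.tan(angle) * DT
            lateral += V * math.sin(heading) * DT
    return heading, lateral

RIGHT, LEFT, STRAIGHT = -0.05, 0.05, 0.0

# "Bank right, then straighten": the car ends up still angled toward the roadside.
naive = drive([(1.0, RIGHT), (2.0, STRAIGHT)])
# Right, back through center, equally far left, then straighten: ends parallel.
correct = drive([(1.0, RIGHT), (1.0, LEFT), (1.0, STRAIGHT)])

for name, (heading, lateral) in [("naive", naive), ("correct", correct)]:
    print(f"{name:>7}: final heading {math.degrees(heading):+6.1f} deg, lateral shift {lateral:+5.1f} m")
```

The naive profile ends with a large rightward heading and a lateral drift that keeps growing; the symmetric profile ends with zero heading and a lane-width-sized shift to the right.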


The lane-changing example is one of a thousand. You are not consciously aware of the vast majority of your brain's ongoing activities, nor would you want to be � it would interfere with the brain's well-oiled processes. The best way to mess up your piano piece is to concentrate on your fingers; the best way to get out of breath is to think about your breathing; the best way to miss the golf ball is to analyze your swing.


Remembering motor acts like changing lanes is a type of implicit memory � which means that your brain holds knowledge of something that your mind cannot explicitly access. Riding a bike, tying your shoes, typing on a keyboard, and steering your car into a parking space while speaking on your cellphone are examples of this. You execute these actions easily but without knowing the details of how you do it. You would be totally unable to describe the perfectly timed choreography with which your muscles contract and relax as you navigate around other people in a cafeteria while holding a tray, yet you have no trouble doing it. This is the gap between what your brain can do and what you can tap into consciously.


To the extent that consciousness is useful, it is useful in small quantities, and for very particular kinds of tasks. It's easy to understand why you would not want to be consciously aware of the intricacies of your muscle movement, but this can be less intuitive when applied to your perceptions, thoughts, and beliefs, which are also final products of the activity of billions of nerve cells.


WHEN CHICKEN HATCHLINGS are born, large commercial hatcheries usually set about dividing them into males and females, and the practice of distinguishing gender is known as chick sexing. Sexing is necessary because the two genders receive different feeding programs: one for the females, which will eventually produce eggs, and another for the males, which are typically destined to be disposed of (only a few males are kept and fattened for meat). So the job of the chick sexer is to pick up each hatchling and quickly determine its sex in order to choose the correct bin to put it in. The problem is that the task is famously difficult: Male and female chicks look exactly alike.


Well, almost exactly. The Japanese invented a method of sexing chicks known as vent sexing, by which experts could rapidly ascertain the sex of one-day-old hatchlings. Beginning in the 1930s, poultry breeders from around the world traveled to the Zen-Nippon Chick Sexing School in Japan to learn the technique.


The mystery was that no one could explain exactly how it was done. It was somehow based on very subtle visual cues, but the professional sexers could not say what those cues were. They would look at the chick's rear (where the vent is) and simply seem to know the correct bin to throw it in.


And this is how the professionals taught the student sexers. The master would stand over the apprentice and watch. The student would pick up a chick, examine its rear, and toss it into one bin or the other. The master would give feedback: yes or no. After weeks of this activity, the student's brain was trained to a masterful—albeit unconscious—level.


Meanwhile, a similar story was unfolding oceans away. During World War II, under constant threat of bombings, the British had a great need to distinguish incoming aircraft quickly and accurately. Which aircraft were British planes coming home and which were German planes coming to bomb? Several airplane enthusiasts had proved to be excellent "spotters," so the military eagerly employed their services. These spotters were so valuable that the government quickly tried to enlist more spotters—but they turned out to be rare and difficult to find. The government therefore asked the spotters to train up some others.


It was a grim attempt. The spotters tried to explain their strategies but failed. No one got it, not even the spotters themselves. Like the chicken sexers, the spotters had little idea how they did what they did—they simply saw the right answer.


With a little ingenuity, the British finally figured out how to successfully train new spotters: by trial-and-error feedback. A novice would hazard a guess and an expert would say yes or no. Eventually the novices became, like their mentors, vessels of the mysterious, ineffable expertise.
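That training loop, nothing but a guess followed by a yes or a no, is enough for a simple statistical learner as well. Below is a minimal perceptron-style sketch on synthetic data (everything here is invented for illustration): the learner is never told which cue matters, yet the feedback alone pushes its weights onto the right one.

```python
# Minimal sketch of learning from nothing but yes/no feedback (synthetic data,
# illustrative only): a perceptron-style learner is never told *which* cue
# matters, yet converges on it purely from "correct"/"incorrect" signals.
import random

random.seed(0)
weights = [0.0, 0.0, 0.0]            # one weight per candidate visual cue

def guess(cues):
    return 1 if sum(w * c for w, c in zip(weights, cues)) > 0 else 0

for _ in range(2000):
    # Each example carries three cues; only cue 0 actually predicts the label.
    cues = [random.choice([-1, 1]) for _ in range(3)]
    label = 1 if cues[0] > 0 else 0
    feedback = label - guess(cues)   # the master's "yes"/"no", nothing more
    for i in range(3):
        weights[i] += 0.1 * feedback * cues[i]

print("learned weights:", [round(w, 2) for w in weights])  # largest on cue 0, near 0 elsewhere
```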





THERE CAN BE a large gap between knowledge and awareness. When we examine skills that are not amenable to introspection, the first surprise is that implicit memory is completely separable from explicit memory: You can damage one without hurting the other.


Consider patients with anterograde amnesia, who cannot consciously recall new experiences in their lives. If you spend an afternoon trying to teach them the video game Tetris, they will tell you the next day that they have no recollection of the experience, that they have never seen this game before � and, most likely, that they have no idea who you are, either. But if you look at their performance on the game the next day, you'll find that they have improved exactly as much as nonamnesiacs. Implicitly their brains have learned the game: The knowledge is simply not accessible to their consciousness.


Of course, it's not just sexers and spotters and amnesiacs who enjoy unconscious learning. Essentially everything about your interaction with the world rests on this process. You may have a difficult time putting into words the characteristics of your father's walk, or the shape of his nose, or the way he laughs—but when you see someone who walks, looks, or laughs the way he does, you know it immediately.


One of the most impressive features of brains—and especially human brains—is the flexibility to learn almost any kind of task. Give an apprentice the desire to impress his master in a chicken-sexing task and his brain devotes its massive resources to distinguishing males from females. Give an unemployed aviation enthusiast a chance to be a national hero and his brain learns to distinguish enemy aircraft from local flyboys. This flexibility of learning accounts for a large part of what we consider human intelligence. While many animals are properly called intelligent, humans distinguish themselves in that they are so flexibly intelligent, fashioning their neural circuits to match the task at hand. It is for this reason that we can colonize every region on the planet, learn the local language we're born into, and master skills as diverse as playing the violin, high-jumping, and operating the space shuttle.


ON DEC. 31, 1974, Supreme Court Justice William O. Douglas was debilitated by a stroke that paralyzed his left side and confined him to a wheelchair. But Justice Douglas demanded to be checked out of the hospital on the grounds that he was fine. He declared that reports of his paralysis were "a myth." When reporters expressed skepticism, he invited them to join him for a hike, a move interpreted as absurd. He even claimed to be kicking football field goals with his paralyzed leg. As a result of this apparently delusional behavior, Douglas lost his seat on the Supreme Court.


What Douglas experienced is called anosognosia. This term describes a total lack of awareness about an impairment. It's not that Justice Douglas was lying � his brain actually believed that he could move just fine. But shouldn't the contradicting evidence alert those with anosognosia to a problem? It turns out that alerting the system to contradictions relies on particular brain regions, especially one called the anterior cingulate cortex. Because of these conflict-monitoring regions, incompatible ideas will result in one side or another's winning: The brain either constructs a story that makes them compatible or ignores one side of the debate. In special circumstances of brain damage, this arbitration system can be damaged, and then conflict can cause no trouble to the conscious mind.


ON AUG. 20, 1974, in a game between the California Angels and the Detroit Tigers, Guinness World Records clocked Nolan Ryan's fastball at 100.9 miles per hour. If you work the numbers, you'll see that Ryan's pitch departs the mound and crosses home plate � 60 feet, 6 inches away � in 0.4 seconds. This gives just enough time for light signals from the baseball to hit the batter's eye, work through the circuitry of the retina, activate successions of cells along the loopy superhighways of the visual system at the back of the head, cross vast territories to the motor areas, and modify the contraction of the muscles swinging the bat.


Amazingly, this entire sequence is possible in less than 0.4 seconds; otherwise no one would ever hit a fastball. But even more surprising is that conscious awareness takes longer than that: about half a second. So the ball travels too rapidly for batters to be consciously aware of it.
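Working those numbers explicitly, using only the figures quoted above (100.9 mph over 60 feet 6 inches, and the roughly half-second lag of awareness):

```python
# Working the numbers from the two paragraphs above: the ball's travel time
# versus the ~0.5 s the article cites for conscious awareness to arise.
MOUND_TO_PLATE_FT = 60 + 6 / 12          # 60 feet 6 inches
PITCH_SPEED_MPH   = 100.9
AWARENESS_LAG_S   = 0.5                  # approximate figure cited above

speed_ft_per_s = PITCH_SPEED_MPH * 5280 / 3600
travel_time_s  = MOUND_TO_PLATE_FT / speed_ft_per_s

print(f"pitch speed: {speed_ft_per_s:.1f} ft/s")
print(f"travel time: {travel_time_s:.2f} s")          # ~0.41 s
print(f"ball arrives {AWARENESS_LAG_S - travel_time_s:.2f} s before conscious awareness catches up")
```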


One does not need to be consciously aware to perform sophisticated motor acts. You can notice this when you begin to duck from a snapping tree branch before you are aware that it's coming toward you, or when you're already jumping up when you first become aware of a phone's ring. The conscious mind is not at the center of the action in the brain; instead, it is far out on a distant edge, hearing but whispers of the activity. As Carl Jung put it, "In each of us there is another whom we do not know." As Pink Floyd put it, "There's someone in my head, but it's not me."





posted by David Eagleman