
Saturday, July 27, 2013

How the brain filters out noise and pays attention to what's important

Perhaps one of the most important parts of paying attention is filtering out the "noise" that isn't important. In a new study published in Nature, researchers at Rush University in Chicago have worked out how the brain does this at the cellular level. The scientists watched the communication between neurons as monkeys completed a visual spatial attention task. The monkeys were able to pay attention by placing more importance on input from some neurons relative to others, effectively turning up the volume on the important stimuli. The scientists now want to test this in individuals who have attention problems and determine whether, or how, this process is altered in these people.

Read more: www.geiselmed.dartmouth.edu/news/2013/06/27_briggs/
Journal article: Attention enhances synaptic efficacy and the signal-to-noise ratio in neural circuits. Nature, 2013. doi:10.1038/nature12276.

For more information about this study, please visit http://www.nature.com/nature/journal/vaop/ncurrent/full/nature12276.htm
Image credit: AGrinberg/Flickr

Friday, July 26, 2013

False memories implanted into the mouse brain!

Scientists can implant false memories into mice


Optical fibres implanted in a mouse's brain activated memory-forming cells


False memories have been implanted into mice, scientists say.
A team was able to make the mice wrongly associate a benign environment with a previous unpleasant experience from different surroundings.
The researchers conditioned a network of neurons to respond to light, making the mice recall the unpleasant environment.
Reporting in Science, they say it could one day shed light on how false memories occur in humans.
The brains of genetically engineered mice were implanted with optic fibres in order to deliver pulses of light to their brain. Known as optogenetics, this technique is able to make individual neurons respond to light.
Unreliable memory

Just like in mice, our memories are stored in collections of cells, and when events are recalled we reconstruct parts of these cells - almost like re-assembling small pieces of a puzzle.
It has been well documented that human memory is highly unreliable, first highlighted by a study on eyewitness testimonies in the 70s. Simple changes in how a question was asked could influence the memory a witness had of an event such as a car crash.
When this was brought to public attention, eyewitness testimonies alone were no longer used as evidence in court. Many people wrongly convicted on memory statements were later exonerated by DNA evidence.
Xu Liu of the RIKEN-MIT Center for Neural Circuit Genetics and one of the lead authors of the study, said that when mice recalled a false memory, it was indistinguishable from the real memory in the way it drove a fear response in the memory-forming cells of a mouse's brain.

How a memory was implanted in a mouse

This cartoon explains how Dr Tonegawa's team created a false memory in the brain of mice
  • A mouse was put in one environment (blue box) and the brain cells encoding memory were labelled in this environment (white circles)
  • These cells were then made responsive to light
  • The animal was placed in a different environment (the red box) and light was delivered into the brain to activate these labelled cells
  • This induced the recall of the first environment - the blue box. While the animal was recalling the first environment, they also received mild foot shocks
  • Later when the mouse was put back into the first environment, it showed behavioural signs of fear, indicating it had formed a false fear memory for the first environment, where it was never shocked in reality
The mouse is the closest animal scientists can easily use to analyse the brain: though simpler, its structure and basic circuitry are very similar to those of the human brain.
Studying neurons in a mouse's brain could therefore help scientists further understand how similar structures in the human brain work.
"In the English language there are only 26 letters, but the combinations of letters make unlimited words and sentences, this is also true for memories," Dr Liu told BBC News.
Evolving memories
"There are so many brain cells and for each individual memory, different combinations of small populations of cells are activated."
These differing combinations of cells could partly explain why memories are not static like a photograph, but constantly evolving, he added.

Erasing memories?

Brain artwork
Mice have previously been trained to believe they were somewhere else, "a bit like the feeling of deja-vu we sometimes get", said Rosamund Langston from Dundee University.
A possibility in the future is erasing memories, she told BBC News.
"Episodic memories - such as those for traumatic experiences - are distributed in neurons throughout the brain, and in order to make memory erasure a safe and useful tool, we must understand how the different components of each memory are put together.
"You may want to erase someone's memory for a traumatic event that happened in their home, but you certainly do not want to erase their memory for how to find their way around their home."
"If you want to grab a specific memory you have to get down into the cell level. Every time we think we remember something, we could also be making changes to that memory - sometimes we realise sometimes we don't," Dr Liu explained.
"Our memory changes every single time it's being 'recorded'. That's why we can incorporate new information into old memories and this is how a false memory can form without us realising it."
Susumu Tonegawa, also from RIKEN-MIT, said his team's work provided the first animal model in which false and genuine memories could be investigated in the cells which store memories, called engram-bearing cells.
"Humans are highly imaginative animals. Just like our mice, an aversive or appetitive event could be associated with a past experience one may happen to have in mind at that moment, hence a false memory is formed."
Silencing fear
Neil Burgess from University College London, who was not involved with the work, told BBC News the study was an "impressive example" of creating a fearful response in an environment where nothing fearful happened.
"One day this type of knowledge may help scientists to understand how to remove or reduce the fearful associations experienced by people with conditions like post traumatic stress disorder."
But he added that it's only an advance in "basic neuroscience" and that these methods could not be directly applied to humans for many years.
"But basic science always helps in the end, and it may be possible, one day, to use similar techniques to silence neurons causing the association to fear."
'Diseases of thought'
Mark Mayford of the Scripps Research Institute in San Diego, US, said: "The question is, how does the brain change with experience? That's the heart of everything the brain does."
He explained that work like this could one day further help us to understand the structure of our thoughts and the cells involved.
"Then one can begin to look at those brain circuits, see how they change, and hopefully find the areas or mechanisms that change with learning."
"The implications are potentially interventions for diseases of thought such as schizophrenia. You cannot approach schizophrenia unless you know how a perception is put together."

Thursday, July 25, 2013

False But True - WHY MOST PUBLISHED NEUROSCIENCE FINDINGS ARE FALSE


Stanford Professor Dr. John Ioannidis has made some waves over the last few years.
His best-known work is a 2005 paper titled “Why most published research findings are false.”(1) It turns out that Ioannidis is not one to mince words.
In the May 2013 issue of Nature Reviews Neuroscience, Ioannidis and colleagues specifically tackle the validity of neuroscience studies (2). This recent paper was more graciously titled “Power failure: why small sample size undermines the reliability of neuroscience,” but it very easily could have been called “Why most published neuroscience findings are false.”
Since these papers outline a pretty damning analysis of statistical reliability in neuroscience (and biomedical research more generally) I thought they were worth a mention here on the Neuroblog.
Ioannidis and colleagues rely on a measure called Positive Predictive Value or PPV, a metric most commonly used to describe medical tests. PPV is the likelihood that, if a test comes back positive, the result in the real world is actually positive. Let’s take the case of a throat swab for a strep infection. The doctors take a swipe from the patient’s throat, culture it, and the next day come back with results. There are four possibilities.
  1. The test comes back negative, and the patient is negative (does not have strep). This is known as a “correct rejection”.
  2. The test comes back negative, even though the patient is positive (a “miss” or a “false negative”).
  3. The test comes back positive, even when the patient is negative (a “false alarm” or a “false positive”).
  4. The test correctly detects that a patient has strep throat (a “hit”).
In neuroscience research, we hope that every published “positive” finding reflects an actual relationship in the real world (there are no “false alarms”). We know that this is not completely the case. Not every single study ever published will turn out to be true. But Ioannidis makes the argument that these “false alarms” come up much more frequently than we would like to think.
To calculate PPV, you need three other values:
  1. the threshold of significance, or α, usually set at 0.05.
  2. the power of the statistical test. If β is the "false negative" rate of a statistical test, power is 1 − β. To give some intuition: if the power of a test is 0.7 and 10 studies are done that all test non-null effects, the test will, on average, uncover only 7 of them. The main result in Ioannidis's paper is an analysis of neuroscience meta-analyses published in 2011; he finds the median statistical power of the included studies to be 0.2. More on that later.
  3. the pre-study odds, or R. R is the prior on any given relationship tested in the field being non-null. In other words, if you had a hat full of little slips of paper, one for every single experiment conducted in the field, and you drew one out, R is the odds that that experiment is looking for a relationship that exists in the real world.
For those who enjoy bar-napkin calculations–those values fit together like this:
PPV = ((1 − β) × R) / ((1 − β) × R + α)
Let’s get back to our medical test example for a moment. Say you’re working in a population where 1 in 5 people actually has strep, so the pre-study odds are R = 0.2/0.8 = 0.25. The power of your medical test (1 − β) is 0.8, and you want your threshold for significance to be 0.05. The test’s PPV is then (0.8 × 0.25) / (0.8 × 0.25 + 0.05) = 0.8. This means that 80% of the time the test claims the patient has strep, the claim will actually be true. If, however, the power of the test were only 0.2, as Ioannidis claims it is broadly across neuroscience, then the PPV drops to 50%: fully half of the positive results the test returns are false positives.
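If you'd like to play with these numbers yourself, here is a minimal Python sketch of the PPV formula above; the values are simply the ones quoted in this post.

```python
def ppv(power: float, prestudy_odds: float, alpha: float = 0.05) -> float:
    """Positive predictive value: power*R / (power*R + alpha)."""
    return (power * prestudy_odds) / (power * prestudy_odds + alpha)

# Strep-test example: 1-in-5 prevalence corresponds to pre-study odds R = 0.25
print(ppv(power=0.8, prestudy_odds=0.25))  # 0.8  -> 80% of positives are true
print(ppv(power=0.2, prestudy_odds=0.25))  # 0.5  -> only half of positives are true
```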
In a clinical setting, epidemiological results frequently give us a reasonable estimate for R. In neuroscience research, this quantity might be wholly unknowable. But, let’s start with the intuition of most graduate students in the trenches (ahem…at the benches?)…which is that 90% of experiments we try don’t work. And some days, even that feels optimistic. If this intuition is accurate, then only 10% of relationships tested in neuroscience are non-null in the real world.
Using that value, and Ioannidis’s finding that the average power in neuroscience is only 20%, we learn that the PPV of neuroscience research, as a whole, is (drumroll……..) 30%.
If our intuitions about our research are true, fellow graduate students, then fully 70% of published positive findings are “false positives”. This result furthermore assumes no bias, perfect use of statistics, and a complete lack of “many groups” effect. (The “many groups” effect means that many groups might work on the same question. 19 out of 20 find nothing, and the 1 “lucky” group that finds something actually publishes). Meaning—this estimate is likely to be hugely optimistic.
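To build intuition for the "many groups" effect, here is a small, purely illustrative Monte Carlo sketch: 20 hypothetical groups each run the same two-group experiment on a truly null effect, and we ask how often at least one of them lands a publishable p < 0.05. The group count, sample size, and α are assumptions for illustration, not numbers from Ioannidis's paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def at_least_one_false_positive(n_groups=20, n_per_group=12, alpha=0.05):
    """Simulate n_groups labs testing a truly null effect; True if any p < alpha."""
    for _ in range(n_groups):
        a = rng.normal(size=n_per_group)  # group A, true effect = 0
        b = rng.normal(size=n_per_group)  # group B, true effect = 0
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True
    return False

n_sims = 2000
hits = sum(at_least_one_false_positive() for _ in range(n_sims))
print(f"P(at least one group 'finds' the null effect): {hits / n_sims:.2f}")
# ~0.64, i.e. 1 - 0.95**20: someone usually gets "lucky" enough to publish
```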
If we keep 20% power in our studies, but want a 50/50 shot of published positive findings actually holding true, the pre-study odds R would have to be 0.25, meaning 1 in 5 relationships tested would have to be non-null.
To move PPV up to 75%, R would have to rise to 0.75: roughly 3 non-null relationships for every 4 null ones, or about 43% of everything tested.
1 in 10 might be pervasive grad-student pessimism, but odds anywhere near that high are absolutely not the case.
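Inverting the PPV formula gives the pre-study odds needed to hit a target PPV at a given power. A quick sketch using this post's numbers:

```python
def required_odds(target_ppv: float, power: float, alpha: float = 0.05) -> float:
    """Pre-study odds R such that power*R / (power*R + alpha) == target_ppv."""
    return alpha * target_ppv / (power * (1 - target_ppv))

for target in (0.5, 0.75):
    r = required_odds(target, power=0.2)
    prob = r / (1 + r)  # convert odds to the fraction of non-null relationships
    print(f"PPV {target:.0%}: R = {r:.2f} (~{prob:.0%} of tested relationships non-null)")
# PPV 50%: R = 0.25 (~20% non-null); PPV 75%: R = 0.75 (~43% non-null)
```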
So—how can we, the researchers, make this better? Well, the power of our analyses depends on the test we use, the effect size we measure, and our sample size. Since the tests and the effect sizes are unlikely to change, the most direct answer is to increase our sample sizes. I did some coffee-shop-napkin calculations from Ioannidis’s data to find that the median effect size in the studies included in his analysis is 0.51 (Cohen’s d). For those unfamiliar with Cohen’s d—standard intuition is that 0.2 is a “small” effect, 0.5 is a “medium” effect, and 0.8 constitutes a “large” effect. For those who are familiar with Cohen’s d…I apologize for saying that.
Assuming that the average effect size in neuroscience studies remains unchanged at 0.51, let’s do some intuition building about sample sizes. For demonstration’s sake, we’ll use the power tables for a 2-tailed t-test.
To get a power of 0.2, with an effect size of 0.51, the sample size needs to be 12 per group. This fits well with my intuition of sample sizes in (behavioral) neuroscience, and might actually be a little generous.
To bump our power up to 0.5, we would need an n of 31 per group.
A power of 0.8 would require 60 per group.
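These per-group numbers can be reproduced, at least approximately, with an off-the-shelf power calculator. Here is a sketch using the TTestIndPower class from the statsmodels Python package, assuming a two-sided independent-samples t-test at α = 0.05:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.51  # median effect size (Cohen's d) estimated above

# Power achieved with the "typical" n of 12 per group
print(analysis.power(effect_size=d, nobs1=12, alpha=0.05))  # roughly 0.2

# Per-group n needed to reach 50% and 80% power
for target in (0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=target)
    print(f"power {target:.0%}: n ≈ {n:.0f} per group")  # roughly 31 and 61
```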
My immediate reaction to these numbers is that they seem huge—especially when every additional data point means an additional animal utilized in research. Ioannidis makes the very clear argument, though, that continuing to conduct low-powered research with little positive predictive value is an even bigger waste. I am happy to take all comers in the comments section, at the Oasis, and/or in a later blog post, but I will not be solving this particular dilemma here.
For those actively in the game, you should know that Nature Publishing Group is working to improve this situation (3). Starting next month, all submitting authors will have to go through a checklist, stating how their sample size was chosen, whether power calculations were done given the estimated effect sizes, and whether the data fit the assumptions of the statistics that are used. On their end, in an effort to increase replicability, NPG will be removing all limits on the length of methods sections. Perhaps other prominent publications would do well to follow suit.
Footnotes
  1. Ioannidis JPA. Why most published research findings are false. PLoS Medicine 2(8): e124 (2005).
  2. Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, Munafò MR. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 14: 365-376 (2013).
  3. Announcement: Reducing our irreproducibility. Nature 496: 398 (2013).

Wednesday, July 17, 2013

Wireless Brain-Computer Interface

Brown University creates first wireless, implanted brain-computer interface. 


Wireless BCI inventors, Arto Nurmikko and Ming Yin, look thoroughly amazed by their device

We’ve covered BCIs extensively here on ExtremeTech, but historically they’ve been bulky and tethered to a computer. A tether limits the mobility of the patient, and also the real-world testing that can be performed by the researchers. Brown’s wireless BCI allows the subject to move freely, dramatically increasing the quantity and quality of data that can be gathered — instead of watching what happens when a monkey moves its arm, scientists can now analyze its brain activity during complex activity, such as foraging or social interaction. Obviously, once the wireless implant is approved for human testing, being able to move freely — rather than strapped to a chair in the lab — would be rather empowering.
Wireless BCI, installed in a monkey and a pig
Brown’s wireless BCI, fashioned out of hermetically sealed titanium, looks a lot like a pacemaker. (See: Brain pacemaker helps treat Alzheimer’s disease.) Inside there’s a li-ion battery, an inductive (wireless) charging loop, a chip that digitizes the signals from your brain, and an antenna for transmitting those neural spikes to a nearby computer. The BCI is connected to a small chip with 100 electrodes protruding from it, which, in this study, was embedded in the somatosensory cortex or motor cortex. These 100 electrodes produce a lot of data, which the BCI transmits at 24Mbps over the 3.2 and 3.8GHz bands to a receiver that is one meter away. The BCI’s battery takes two hours to charge via wireless inductive charging, and then has enough juice to last for six hours of use.
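As a rough sanity check on those numbers, a back-of-the-envelope split of the 24 Mbps link across the 100 electrodes gives the per-channel budget. The bits-per-sample figure below is an assumption for illustration, not a value from the paper.

```python
# Back-of-the-envelope budget for the reported 24 Mbps link shared by 100 electrodes.
TOTAL_RATE_BPS = 24e6    # reported aggregate radio rate
N_ELECTRODES = 100       # electrodes on the implanted array
BITS_PER_SAMPLE = 12     # assumed ADC resolution (hypothetical)

per_channel_bps = TOTAL_RATE_BPS / N_ELECTRODES
samples_per_sec = per_channel_bps / BITS_PER_SAMPLE

print(f"Per-electrode bandwidth: {per_channel_bps / 1e3:.0f} kbps")  # ~240 kbps
print(f"Implied sampling rate:   {samples_per_sec / 1e3:.0f} kS/s")  # ~20 kS/s per channel
```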
Brown's wireless BCI, exploded view
One of the features that the Brown researchers seem most excited about is the device’s power consumption, which is just 100 milliwatts. For a device that might eventually find its way into humans, frugal power consumption is a key factor that will enable all-day, highly mobile usage. Amusingly, though, the research paper notes that the wireless charging does cause significant warming of the device, which was “mitigated by liquid cooling the area with chilled water during the recharge process and did not notably affect the animal’s comfort.” Another important factor is that the researchers were able to extract high-quality, “rich” neural signals from the wireless implant — a good indicator that it will also help human neuroscience, if and when the device is approved.
Moving forward, the wireless BCI is very much a part of BrainGate — the Brown University research group that’s tasked with bringing these neurological technologies to humans. So far, the pinnacle of BrainGate’s work is a robotic arm controlled by a tethered BCI, which paralyzed patients can use to feed themselves (video embedded below). While the wireless BCI isn’t approved for human use (and there’s no indication that they’re seeking approval yet), it was designed specifically so that it should be safe for human use.
The Brown researchers now intend to develop a different version of the device to help them study the motor cortex of an animal with Parkinson’s disease. They are also working on reducing the device’s size, improving its safety and reliability, and increasing the amount of data it can transmit — for the eventual goal of equipping those with movement disabilities, or elective transhumanists, with a wireless brain-computer interface.
Research paper: doi:10.1088/1741-2560/10/2/026010 – “An implantable wireless neural interface for recording cortical circuit dynamics in moving primates”

Brain Computer Interface – a timeline

How the Human/Computer Interface Works (Infographics)


Infographic: How the Human/Computer Interface Works.
The long history of user interfaces spans the decades from the primitive punched-card days of the 1950s, through the typed command lines of the 1960s, to the familiar windows and icons of today and beyond.
 
Three factors work to both limit and enable human/computer interface development:
  • Computing Power: Increasingly powerful computer hardware enables more sophisticated software interactions.
  • The Imagination of Inventors: Software designers envision new interactions that take advantage of increasing computer power.
  • The Market: Driven by both large corporate customers and super-popular consumer gadgets like the iPad.
A timeline of computer interface milestones:
 
1822: The Babbage Analytical Engine. A Victorian-era concept envisioned more than a century before its time, this mechanical computer would have been programmed by physically manipulating cams, clutches, cranks and gears.
 
1950s: Punched cards were first used in the 18th century to control automatic textile looms. By the late 19th century the cards were used for entering data into simple tabulating machines. The advent of electronic computers in the 1950s led to IBM’s punched cards becoming the primary means of entering data and commands into computers.
 
1960s: The Command Line Interface (CLI). Teletype keyboards were connected to early computers to allow users to input their commands. Later, cathode ray tubes (CRTs) were used as display devices, but the interaction with the computer remained a text-only one.
 
1951: The Light Pen. Created at MIT, the pen is a light-sensitive stylus developed for use with glass-face vacuum tube CRT monitors. The pen senses changes in brightness on the screen.
 
1952: The Trackball. Originally developed for air traffic control and military systems, the trackball was adapted for computer use by MIT scientists in 1964. As a small ball is rotated by the user, sensors detect the changes in orientation of the ball, which are then translated into movements in the position of a cursor on the computer screen.
 
1963: The Mouse. Douglas Engelbart and Bill English developed the first computer mouse at the Stanford Research Institute in Menlo Park, Calif. The device was a block of wood with a single button and two gear-wheels positioned perpendicularly to each other.
 
In 1972, while working at Xerox PARC, Bill English and Jack Hawley replaced the two roller wheels with a metal ball bearing to track movement. The ball enabled the mouse to move in any direction, not just on one axis like the original mouse.
 
In 1980, the optical mouse was developed simultaneously by two different researchers. Both required a special mouse pad, and utilized special sensors to detect light and dark. Today’s optical mice can work on any surface and use an LED or laser as a light source.
 
1980s: The Graphical User Interface. The Xerox Star 8010 was the first commercial computer system to come with a mouse, as well as a bitmapped, window-based graphical user interface (GUI) featuring icons and folders. These technologies were originally developed for an experimental system called Alto, which was invented at the Xerox Palo Alto Research Center (PARC).
 
The Xerox workstation systems were intended for business use and had price tags in the tens of thousands of dollars. The Apple Macintosh was the first consumer-level computer to include the advanced black-and-white graphical interface and a mouse for positioning the cursor on the screen.
 
1984: Multitouch. The first transparent multitouch screen overlay was developed by Bob Boie at Bell Labs. His device used a conductive surface with voltage applied across it and an array of touch sensors laid on top of a CRT display (cathode ray tube). The human body’s natural ability to hold an electrical charge causes a local build-up of charge when the surface is touched, and the position of the disturbance of the field can be determined, enabling a user to manipulate graphical objects with their fingers.
 
2000s: Natural User Interface. The natural user interface, or NUI, senses the user’s body movements and voice commands rather than requiring the use of input devices such as a keyboard or touch screen. Microsoft introduced its Project Natal, later named Kinect, in 2009. Kinect controls the Xbox 360 video game system.
 
The future: Direct Brain-Computer Interface. The ultimate computer interface would be thought control. Research into controlling a computer with the brain was begun in the 1970s. Invasive BCI requires that sensors be implanted in the brain to detect thought impulses. Non-invasive BCI reads electromagnetic waves through the skull without the need for implants.

Forgetting Is Harder for Older Brains


Adults hang on to useless information, which impedes learning
Kids are wildly better than adults at most types of learning—most famously, new languages. One reason may be that adults' brains are “full,” in a way. Creating memories relies in part on the destruction of old memories, and recent research finds that adults have high levels of a protein that prevents such forgetting.
Whenever we learn something, brain cells become wired together with new synapses, the connections between neurons that enable communication. When a memory fades, those synapses weaken. Researchers led by Joe Tsien, a neuroscientist at the Medical College of Georgia, genetically engineered mice to have high levels of NR2A, part of a receptor on the surface of some neurons that regulates the flow of chemicals such as magnesium and calcium in and out of a cell. NR2A is known to be more prevalent in the brains of mammals as they age. The engineered mice, though young, had adult levels of NR2A, and they showed some difficulty forming long-term memories. More dramatically, their brains could barely weaken their synapses, a process that allows the loss of useless information in favor of more recent data.
A similar process may govern short-term memories as well. When you hear a friend ask for coffee, the details of her order don't just slip away in your mind—your brain must produce a protein that actively destroys the synapses encoding that short-term memory, according to a 2010 paper in Cell.
Much psychological research supports the idea that forgetting is essential to memory and emotional health [see “Trying to Forget,” by Ingrid Wickelgren; Scientific American Mind, January/February 2012]. Tsien's new work, published January 8 in Scientific Reports, suggests that older brains hold on to their connections more dearly—which helps to explain why learning is more laborious as we age and why memory trouble later in life so often involves the accidental recall of outdated information.
HOW TO INTENTIONALLY FORGET A MEMORY
Direct Suppression
Try to block out all thoughts of a certain memory.
  • Increases activity in the right dorsolateral prefrontal cortex, which mediates working memory and cognitive control.
  • Reduces activity in the hippocampus, an area important for conscious recollection.
Thought Substitution
Try to forget by substituting the unwanted memory with a more desired one.
  • Increases activity in the left caudal prefrontal cortex, thought to decrease saliency of intrusive memories, and the midventrolateral prefrontal cortex, which helps to retrieve a specific memory.

This article was originally published with the title Forgetting Is Harder for Older Brains.

Tuesday, July 16, 2013

INNER SPEECH SPEAKS VOLUMES ABOUT THE BRAIN

Inner Speech Speaks Volumes About the Brain 

http://www.sciencenewsline.com/articles/2013071611170001.html

Whether you're reading the paper or thinking through your schedule for the day, chances are that you're hearing yourself speak even if you're not saying words out loud. This internal speech — the monologue you "hear" inside your head — is a ubiquitous but largely unexamined phenomenon. A new study looks at a possible brain mechanism that could explain how we hear this inner voice in the absence of actual sound.

In two experiments, researcher Mark Scott of the University of British Columbia found evidence that a brain signal called corollary discharge — a signal that helps us distinguish the sensory experiences we produce ourselves from those produced by external stimuli — plays an important role in our experiences of internal speech.
The findings from the two experiments are published in Psychological Science, a journal of the Association for Psychological Science. Corollary discharge is a kind of predictive signal generated by the brain that helps to explain, for example, why other people can tickle us but we can't tickle ourselves. The signal predicts our own movements and effectively cancels out the tickle sensation.
And the same mechanism plays a role in how our auditory system processes speech. When we speak, an internal copy of the sound of our voice is generated in parallel with the external sound we hear.
"We spend a lot of time speaking and that can swamp our auditory system, making it difficult for us to hear other sounds when we are speaking," Scott explains. "By attenuating the impact our own voice has on our hearing — using the 'corollary discharge' prediction — our hearing can remain sensitive to other sounds."
Scott speculated that the internal copy of our voice produced by corollary discharge can be generated even when there isn't any external sound, meaning that the sound we hear when we talk inside our heads is actually the internal prediction of the sound of our own voice.
If corollary discharge does in fact underlie our experiences of inner speech, he hypothesized, then the sensory information coming from the outside world should be cancelled out by the internal copy produced by our brains if the two sets of information match, just like when we try to tickle ourselves.
And this is precisely what the data showed. The impact of an external sound was significantly reduced when participants said a syllable in their heads that matched the external sound. Their performance was not significantly affected, however, when the syllable they said in their head didn't match the one they heard.
These findings provide evidence that internal speech makes use of a system that is primarily involved in processing external speech, and may help shed light on certain pathological conditions.


"This work is important because this theory of internal speech is closely related to theories of the auditory hallucinations associated with schizophrenia," Scott concludes.

Clinicians should pay attention to stroke patients - WALKING

Clinicians Should Pay Attention to Stroke Patients Who Cannot Walk at 3-6 Months After Onset



Gait dysfunction is one of the most serious disabling sequelae of stroke, and regaining the ability to walk is a primary goal of neurorehabilitation. Gait is also a less demanding motor function than hand function: stroke patients can walk once motor function in the proximal joints (hip and knee) has recovered at least to the point of being able to oppose gravity. In general, most motor recovery after stroke occurs within 3-6 months of onset, and gait function usually recovers within 3 months. Clinicians therefore need to look for the cause of the inability to walk, and to provide intensive rehabilitation, in stroke patients who still cannot walk 3-6 months after the insult.

Sung Ho Jang and his team from the College of Medicine, Yeungnam University (Daegu, Republic of Korea) reported on a stroke patient who showed delayed gait recovery between 8 and 11 months after the onset of intracerebral hemorrhage; the case appears in Neural Regeneration Research (Vol. 8, No. 16, 2013). This 32-year-old female patient underwent craniotomy and drainage for a right intracerebral hemorrhage caused by rupture of an arteriovenous malformation. Brain MRI revealed a large leukomalactic lesion in the right fronto-parietal cortex, and diffusion tensor tractography at 8 months after onset showed that the right corticospinal tract was severely injured. At that time the patient could not stand or walk, despite undergoing rehabilitation from 2 months after onset. Severe spasticity of the left leg and right ankle was thought to be largely responsible, so antispastic drugs, antispastic procedures (alcohol neurolysis of the motor branch of the tibial nerve and an intramuscular alcohol wash of both tibialis posterior muscles) and physical therapy were used to control the spasticity. These measures relieved the severe spasticity, and the patient was able to stand about 3 months later. Improvements in sensorimotor function, visuospatial function, and cognition also seemed to contribute to her recovery, and she gained the ability to walk independently on level ground with a left ankle-foot orthosis at 11 months after onset.

This case illustrates that clinicians should attempt to find the cause of the inability to walk, and should initiate intensive rehabilitation, in stroke patients who cannot walk at 3-6 months after onset.