
Thursday, March 28, 2013

Concussion leads to brain damage


Single Concussion May Cause Lasting Brain Damage

 A single concussion may cause lasting structural damage to the brain, according to a new study published online in the journal Radiology.
"This is the first study that shows brain areas undergo measurable volume loss after concussion," said Yvonne W. Lui, M.D., Neuroradiology section chief and assistant professor of radiology at NYU Langone School of Medicine. "In some patients, there are structural changes to the brain after a single concussive episode."
According to the Centers for Disease Control and Prevention, each year in the U.S., 1.7 million people sustain traumatic brain injuries, resulting from sudden trauma to the brain. Mild traumatic brain injury (MTBI), or concussion, accounts for at least 75 percent of all traumatic brain injuries.
Following a concussion, some patients experience a brief loss of consciousness. Other symptoms include headache, dizziness, memory loss, attention deficit, depression and anxiety. Some of these conditions may persist for months or even years.
Studies show that 10 to 20 percent of MTBI patients continue to experience neurological and psychological symptoms more than one year following trauma. Brain atrophy has long been known to occur after moderate and severe head trauma, but less is known about the lasting effects of a single concussion.
Dr. Lui and colleagues set out to investigate changes in global and regional brain volume in patients one year after MTBI. Twenty-eight MTBI patients (with 19 followed at one year) with post-traumatic symptoms after injury and 22 matched controls (with 12 followed at one year) were enrolled in the study. The researchers used three-dimensional magnetic resonance imaging (MRI) to determine regional gray matter and white matter volumes and correlated these findings with other clinical and cognitive measurements.
The researchers found that at one year after concussion, there was measurable global and regional brain atrophy in the MTBI patients. These findings show that brain atrophy is not exclusive to more severe brain injuries but can occur after a single concussion.
"This study confirms what we have long suspected," Dr. Lui said. "After MTBI, there is true structural injury to the brain, even though we don't see much on routine clinical imaging. This means that patients who are symptomatic in the long-term after a concussion may have a biologic underpinning of their symptoms."
Certain brain regions showed a significant decrease in regional volume in patients with MTBI over the first year after injury, compared to controls. These volume changes correlated with cognitive changes in memory, attention and anxiety.
"Two of the brain regions affected were the anterior cingulate and the precuneal region," Dr. Lui said. "The anterior cingulate has been implicated in mood disorders including depression, and the precuneal region has a lot of different connections to areas of the brain responsible for executive function or higher order thinking."
According to Dr. Lui, researchers are still investigating the long-term effects of concussion, and she advises caution in generalizing the results of this study to any particular individual.
"It is important for patients who have had a concussion to be evaluated by a physician," she said. "If patients continue to have symptoms after concussion, they should follow-up with their physician before engaging in high-risk activities such as contact sports."

Journal Reference:
  1. Yvonne W. Lui, Yongxia Zhou, Andrea Kierans, Damon Kenul, Yulin Ge, Joseph Rath, Joseph Reaume, Robert I. Grossman. Mild Traumatic Brain Injury: Longitudinal Regional Brain Volume Changes. Radiology, 2013. DOI: 10.1148/radiol.13122542

Wednesday, March 27, 2013

Become a Bartender or Bouncer - Inject Hypocretin


WAKEFULNESS - become NOCTURNAL 

Certain groups of neurons determine whether light keeps us awake or not, says a new study.
Just a typical day for a hypocretin-deficient mouse. Okay, I'll wait for you to finish making that squinchy "Awww!" face, and then we'll move on with the article.
In the hypothalamus – a brain structure responsible for regulating hormone levels – specific kinds of neurons release a hormone called hypocretin (also known as hcrt or orexin). Hypocretin lets light-sensitive cells in other parts of the brain – such as the visual pathway – know that they should respond to incoming light by passing along signals for us to stay awake.
Scientists have understood for centuries that most animals and plants go through regular cycles of wakefulness and sleep - they call these patterns circadian rhythms or circadian cycles. More recently, researchers have begun unraveling the various chemical messaging systems our bodies use to time and control these cycles - proteins like PER and JARID1a, which help give us an intuitive sense of how long we've been awake or asleep.
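For the curious, the core logic of these clock systems is a delayed negative-feedback loop: a gene product accumulates and eventually represses its own production. The toy "Goodwin-style" sketch below illustrates that logic only - the variable names, unit rate constants, and Hill coefficient are generic textbook choices, not a model of the actual PER pathway:

```python
import numpy as np

# Toy Goodwin-style oscillator: the end product (z) feeds back to repress
# its own transcription. With a steep enough repression curve (n > 8 for
# these equal unit rates), the delay around the loop produces sustained
# rhythmic cycling - the basic logic behind circadian clock proteins.
def goodwin(t_max=200.0, dt=0.01, n=10):
    steps = int(t_max / dt)
    x = y = z = 0.0                      # mRNA, protein, repressor levels
    z_trace = np.empty(steps)
    for i in range(steps):
        dx = 1.0 / (1.0 + z**n) - x      # transcription, repressed by z
        dy = x - y                       # translation
        dz = y - z                       # accumulation of the repressor
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        z_trace[i] = z
    return z_trace

z = goodwin()
# Repeated local maxima in z mean the loop keeps cycling instead of
# settling to a steady level.
peaks = int(np.sum((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])))
```

Lowering the Hill coefficient n below the critical steepness makes the same loop relax to a steady level instead of cycling - a quick way to see why the sharpness of the repression step matters.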
But now, as the Journal of Neuroscience reports, a team led by UCLA’s Jerome Siegel has isolated a neurochemical messaging system that dictates whether or not we can stay awake during the day at all. The team bred a special strain of mice whose brains were unable to produce hypocretin, and found that these mice acted like students in first-period algebra - even under bright lights, they just kept dozing off. However, they did jump awake when they received a mild electric shock:
This is the first demonstration of such specificity of arousal system function and has implications for understanding the motivational and circadian consequences of arousal system dysfunction.
What’s even more interesting, though, is that there’s a second half to this story – the dozy mice were perfectly perky in the dark:
We found that Hcrt knock-out mice were unable to work for food or water reward during the light phase. However, they were unimpaired relative to wild-type (WT) mice when working for reward during the dark phase or when working to avoid shock in the light or dark phase.
In other words, the mice without hypocretin stayed awake and worked for food just fine when the lights were out. So they probably have promising futures as bartenders or bouncers.
The takeaway here is that hypocretin isn’t so much responsible for enabling knee-jerk reactions as it is for helping mice (and us) stay alert and motivated to complete reward-based tasks when the lights are on. Without this hormone, we might act normally at night, but we just wouldn’t feel like staying awake when the sun was out.
And that’s exactly what Siegel’s team had found in several of their earlier studies, which linked human hypocretin deficiency with narcolepsy – a disease that causes excessive sleepiness and frequent daytime “sleep attacks.” These new results suggest that narcoleptic patients might have more success getting work done during the night, when their symptoms might be less severe.
Siegel also thinks clinically administered hypocretin might help block many effects of depression, and allow depressed patients to feel more motivated to get up and about during the day. If so, this could be a promising new form of treatment for that disease as well.
Finally, and perhaps most intriguingly of all, it’s likely that similar hormonal response “gateways” play crucial roles in other neurochemical arousal systems – like those involved in fear, anger, and sexual excitement. If so, discoveries along those lines could provide us with some staggering new insights into the ways our brains regulate their own behavior.
So, I know what you’re probably wondering: am I really advocating the use of electric shocks to keep bored math students awake? Of course not – I think releasing wild badgers into the classroom would be much more effective.

"GEN"I("E")ous - DNA of motivation


A wake up call!! 


How failing a PhD led to a strategy for a successful scientific career.
-Bruce Alberts
A wake-up call
Bruce Alberts: 'failure' was a blessing in disguise.
One of my most important formative experiences as a scientist was very traumatic at the time. In the spring of 1965, I had finished writing my PhD thesis at Harvard University, in Cambridge, Massachusetts, and had purchased aeroplane tickets to take my wife Betty and our one-year-old daughter with me for a postdoctoral year in Geneva, Switzerland. Only one step remained — a meeting of my thesis committee to approve the granting of my PhD degree in biophysics. No one in recent memory had failed at this late stage. But to my great surprise, the committee failed me, specifying the need for more experiments that eventually required six more months of research.
This was, of course, a great embarrassment and a shock to my ego. There were the practical problems of having to remain at Harvard — our apartment had already been rented to the next tenant and my small family had nowhere to live. But most importantly, I was to spend the next few months struggling to answer two questions that would be critical for my future. What had gone wrong, and did I really have what it takes to be a scientist?
As an undergraduate working with Jacques Fresco in Paul Doty's laboratory at Harvard, I was handed a research project that proved to be very successful. My undergraduate thesis was quickly converted into two important papers in 1960. This largely undeserved success gave me a false image of how easy it would be to do science. It also enabled me to persuade Paul Doty to allow me to test my own theoretical model for the initiation of chromosome replication as the centrepiece of my PhD research.
According to my model, the sites at which DNA replication begins (now called replication origins) should be located at the two ends of each DNA helix in a chromosome. If this model was correct, the enzyme DNA polymerase should create a transient covalent linkage between the two complementary DNA strands at the tip of a chromosome (a 'DNA crosslink'). I began an extensive search in DNA genomes for crosslinks that were located near the sites where replication begins. None of the tests supported my particular model, but I did find other crosslinks in all of the chromosomes that I tested. I spent several years characterizing these mysterious and unexpected 'naturally occurring crosslinks', but even 40 years later, their structure and origin are still not understood (J. Mol. Biol. 32, 405–421; 1968).
In retrospect, the shock of having my PhD thesis rejected in 1965 proved to be a critical step in shaping me as a scientist, because it forced me to recognize the central importance of the strategy that underlies any major scientific quest.
I had witnessed the frustration of scientists who were pursuing obvious experiments that were simultaneously being carried out in other laboratories. These scientists were constantly in a race. It had always seemed to me that, even if they were able to publish their results six months before a competing laboratory, they were unlikely to make truly unique contributions.
I had used a different strategy. My approach had been that of predicting how a particular biological process might work and then taking years to test whether my guess might be right. This was enormously risky. The good news was that I was carrying out experiments that were different from those being done by everyone else. The problem was that these tests could produce only a 'yes' or 'no' answer. If 'yes', I might be able to add something unique to the world's store of scientific knowledge. But if 'no', I would learn nothing of real value — in this case, I could eliminate just one of the many possible ways in which DNA replication might begin.
I wanted to continue to focus on how DNA is replicated for my postdoctoral work in Geneva. But what strategy should I choose? The months of analysis triggered by the wake-up call of my PhD failure finally produced an answer. I would look for a unique experimental approach, but one that would have a high probability of increasing our knowledge of the natural world, regardless of the experimental results obtained.
After a great deal of soul-searching, I decided that I would begin by developing a new method — one that would allow me to isolate proteins required for DNA replication that had thus far escaped detection. I knew that the enzyme RNA polymerase, which reads out the genetic information in DNA, binds weakly to any DNA sequence — even though this protein's biologically relevant binding sites are specific DNA sequences. If the proteins that cause DNA to replicate have a similar weak affinity for any DNA molecule, I would be able to isolate them by passing crude cell extracts through a column matrix containing immobilized DNA molecules.
Arriving in Geneva in late 1965 with my PhD degree finally in hand, I found that by drying an aqueous solution of DNA onto plain cellulose powder, I could construct a durable and effective 'DNA cellulose' matrix. A large number of different proteins in a crude, DNA-depleted extract of the bacterium Escherichia coli bound to a column containing this matrix. Moreover, these DNA-binding proteins could be readily purified by elution with an aqueous salt solution. Using this new biochemical tool and a large library of mutant T4 bacteriophages obtained from Dick Epstein in Geneva, I discovered the T4 gene 32 protein after moving to Princeton a year later as an assistant professor. This proved to be the first example of a single-strand DNA-binding (SSB) protein, a structural protein that plays an important role in DNA processes in all organisms (see Nature 227, 1313–1318; 1970).
The strategy of investing in method development and then using this new method for a major series of experiments would be employed over and over again during the next 25 years of my career as a research scientist. As a result, my laboratory almost never felt that it was in a race with other laboratories, and our successes were sufficient to satisfy both me and many of the graduate students and postdoctoral fellows who would join my laboratory. It seems strange to recall that we may owe it all to one very unhappy PhD thesis committee at Harvard, nearly 40 years ago.

(See the same article on the following website: https://c250ztasdtqme0dhc.sec.amc.nl/nature/journal/v431/n7012/full/4311041a.html)

A Vision about the Vision (-Visual cortex)


TAKING VISION APART

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.
At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.
If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.
Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – comprised the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.
See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
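To make the "analog" point concrete, here's a minimal leaky integrate-and-fire neuron - a deliberately crude stand-in (far simpler than Hodgkin-Huxley), with parameters invented purely for illustration. The membrane potential is a continuously graded quantity, and what stronger input changes is a firing rate, not a single on/off bit:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: a hypothetical, much-simplified
# sketch of graded neural integration. Units and parameters are invented.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Return (membrane trace, spike times) for an input current array."""
    v = v_rest
    trace, spikes = [], []
    for i, drive in enumerate(current):
        # Graded integration: dv/dt = (-(v - v_rest) + drive) / tau
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset                # reset, then keep integrating
        trace.append(v)
    return np.array(trace), spikes

# Stronger input -> higher firing rate: the signal lives in a continuous
# rate and in the graded voltage, not in one binary switch.
weak = simulate_lif(np.full(2000, 16.0))[1]
strong = simulate_lif(np.full(2000, 30.0))[1]
```

Between spikes the voltage drifts smoothly toward an input-dependent level, which is exactly the kind of continuous, gradient-style computation the paragraph above contrasts with digital bits.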
So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.
But the better we understand brains, the better we get at understanding them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports.
The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and how strongly – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:
Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.
In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.
Now, other teams have already developed technologies that can record memories and dreams right out of the human brain – so what’s so amazing about this particular study? The level of detail:
Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.
Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
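For readers wondering how "orientation selective" and "direction selective" are actually quantified, a common approach is a simple contrast index over a neuron's tuning curve. The firing rates below are made-up numbers for illustration, not data from the Marshel et al. study:

```python
import numpy as np

# Hypothetical tuning curve: mean firing rates (spikes/s) of one neuron
# for gratings drifting in eight directions, 45 degrees apart.
directions = np.arange(0, 360, 45)            # degrees
rates = np.array([2.0, 5.0, 22.0, 6.0, 3.0, 4.0, 9.0, 5.0])

pref = rates.argmax()                          # preferred direction
orth = (pref + 2) % len(rates)                 # 90 degrees away
null = (pref + 4) % len(rates)                 # 180 degrees away

# Standard contrast-style indices: 1 = perfectly selective, 0 = untuned.
osi = (rates[pref] - rates[orth]) / (rates[pref] + rates[orth])
dsi = (rates[pref] - rates[null]) / (rates[pref] + rates[null])
```

An index near 1 means the neuron responds almost exclusively to its preferred orientation or direction; an index near 0 means it barely distinguishes them. Saying area AL is "more direction selective" than V1 amounts to saying its neurons' DSI values run higher.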
The gap between mind and machine is getting narrower every day. How does that make you feel?

Sleep deprivation - JUNK food (Reward ?)


SLEEP, STRESS AND SNACKS

A lack of sleep makes our brains go nuts for unhealthy food, says a new study.
When sleep-deprived people are shown images of junk food, fMRI scans show that their brains’ reward centers light up with far more intense anticipation than those of people who’ve slept a full night. The Fourthmeal marketing team, I assume, is grinning knowingly.
The relationship between cravings and conscious control is a complex one, and studies have found that the balance between the two can easily be tilted. Under stress, our brains are much more likely to return to old junk-food and drug addictions, as well as other bad habits like nail-biting.
We know this not just from psychological research, but from years of fMRI studies showing that stress weakens activity in areas like the medial prefrontal cortex (mPFC) and the orbitofrontal cortex (OFC) – parts of the brain that are crucial for self-reflection, regulation of emotions, and conscious decision-making. Meanwhile, studies show that stress changes the connectivity of our brain’s more primitive “reward” areas, such as the nucleus accumbens and the ventral tegmental area.
The overall effect of these changes is that when you’re feeling stressed – whether it’s about money, lack of sleep, or just a feeling of hunger – your normal self control begins to lose the struggle against your desires for quick gratification.
As a matter of fact, the brains of people exposed to chronic stress often lose (or fail to develop) healthy connectivity between the OFC and those ancient reward centers – a trait that’s found in the brains of many psychopaths. So when you’re operating on an empty stomach or four hours of sleep, you could say that your brain has quite literally gone a little bit psycho.
"Hold all my calls, Peggy - I've got to finish this cost/benefit analysis."
This new study, led by Marie-Pierre St-Onge at Columbia University, focused on another type of stress-related change in brain activity: reward centers’ responsiveness to images of healthy vs. unhealthy foods in the brains of sleep-deprived people:
The same brain regions activated when unhealthy foods were presented were not involved when we presented healthy foods. The unhealthy food response was a neuronal pattern specific to restricted sleep.
In other words, even healthy people with strong self control can turn into sugar addicts when they’re running on less than a full night’s sleep. These results also confirm a fact discovered in some earlier studies: that sleep-deprived people eat more, in general, than their well-rested counterparts do.
Indeed, food intake data from this same study showed that participants ate more overall and consumed more fat after a period of sleep restriction compared to regular sleep. The brain imaging data provided the neurocognitive basis for those results.
In short, it’s clear that stress, sleep and eating disorders are all intimately linked – not just psychologically, but also in terms of the brain’s physical structure and functionality.
So if you’re trying to break a bad habit, make sure you’re getting plenty of healthy food and a solid eight hours every night. And if you’re running on less than a full tank today, give yourself the benefit of some extra patience and compassion – the rewards are worth it.

Tuesday, March 26, 2013

Maybe top 5 neuroscience research breakthroughs of 2012



THE TOP 5 NEUROSCIENCE BREAKTHROUGHS OF 2012

More than any year before, 2012 was the year neuroscience exploded into pop culture. From mind-controlled robot hands to cyborg animals to TV specials to triumphant books, brain breakthroughs were tearing up the airwaves and the internets. From all the thrilling neurological adventures we covered over the past year, we’ve collected five stories we want to make absolutely sure you didn’t miss.
Now, no matter how scientific our topic is, any Top 5 list is going to turn out somewhat subjective. For one thing, we certainly didn’t cover every neuroscience paper published in 2012 – we like to pick and choose the stories that seem most interesting to us, and leave the whole “100 percent daily coverage” free-for-all to excellent sites like ScienceDaily.
As you may’ve also noticed, we tend to steer clear of headlines like “Brain Region Responsible for [X] Discovered!” because – as Ben talks about with Matt Wall in this interview – those kinds of discoveries are usually as vague and misleading as they are overblown by the press.
Instead, we chose to focus on five discoveries that carry some of the most profound implications of any research published this past year – both for brain science, and for our struggle to understand our own consciousness.
So on that note, here – in countdown order – are the five discoveries that got us the most pumped up in 2012!

5. A Roadmap of Brain Wiring
A grid of fibers, bein' all interwoven and stuff.
Neuroscientists like to compare the task of unraveling the brain’s connections to the frustration of untangling the cords beneath your computer desk – except that in the brain, there are hundreds of millions of cords, and at least one hundred trillion plugs. Even with our most advanced computers, some researchers were despairing of ever seeing a complete connectivity map of the human brain in our lifetimes. But thanks to a team led by Van Wedeen at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, 2012 gave us an unexpectedly clear glimpse of our brains’ large-scale wiring patterns. As it turns out, the overall pattern isn’t so much a tangle as a fabric – an intricate, multi-layered grid of cross-hatched neural highways. What’s more, it looks like our brains share this grid pattern with many other species. We’re still a long way from decoding how most of this wiring functions, but this is a big step in the right direction.

4. Laser-Controlled Desire
Scientists have been stimulating rats’ pleasure centers since the 1950s – but 2012 saw the widespread adoption of a new brain-stimulation method that makes all those wires and incisions look positively crude. Researchers in the blossoming field of optogenetics develop delicate devices that control the firing of targeted groups of neurons – using only light itself. By hooking rats up to a tiny fiber-optic cable and firing lasers directly into their brains, a team led by Garret D. Stuber at the University of North Carolina at Chapel Hill School of Medicine was able to isolate specific neurochemical shifts that cause rats to feel pleasure or anxiety – and switch between them at will. This method isn’t only more precise than electrical stimulation – it’s also much less damaging to the animals.
(Thanks again, Mike Robinson, for sharing this image of your team’s laser-controlled brain!)

3. Programmable Brain Cells
Pluripotent stem cell research took off like a rocket in 2012. After discovering that skin cells can be genetically reprogrammed into stem cells, which can in turn be reprogrammed into just about any cell in the human body, a team led by Sheng Ding at UCSF managed to engineer a working network of newborn neurons from a harvest of old skin cells. In other words, the team didn’t just convert skin cells into stem cells, then into neurons – they actually kept the batch of neurons alive and functional long enough to self-organize into a primitive neural network. In the near future, it’s likely that we’ll be treating many kinds of brain injuries by growing brand-new neurons from other kinds of cells in a patient’s own body. This is already close on the horizon for liver and heart cells – but the thought of being able to technologically shape the re-growth of a damaged brain is even more exciting.

2. Memories on Disc
We’ve talked a lot about how easily our brains can modify and rewrite our long-term memories of facts and scenarios. In 2012, though, researchers went Full Mad Scientist with the implications of this knowledge, and blew some mouse minds in the process. One team, led by Mark Mayford of the Scripps Research Institute, took advantage of some recently invented technology that enables scientists to record and store a mouse’s memory of a familiar place on a microchip. Mayford’s team figured out how to turn specific mouse memories on and off with the flick of a switch – but they were just getting warmed up. The researchers then proceeded to record a memory in one mouse’s brain, transfer it into another mouse’s nervous system, and activate it in conjunction with one of the second mouse’s own memories. The result was a bizarre “hybrid memory” – familiarity with a place the mouse had never visited. Well, not in the flesh, anyway.

1. Videos of Thoughts
Our most exciting neuroscience discovery of 2012 is also one of the most controversial. A team of researchers from the Gallant lab at UC Berkeley discovered a way to reconstruct videos of entire scenes from neural activity in a person’s visual cortex. Those on the cautionary side emphasize that activity in the visual cortex is fairly easy to decode (relatively speaking, of course) and that we’re still a long, long way from decoding videos of imaginary voyages or emotional palettes. In fact, from one perspective, this isn’t much different from converting one file format into another. On the other hand, though, these videos offer the first hints of the technological reality our children may inhabit: A world where the boundaries between the objective external world and our individual subjective experiences are gradually blurred and broken down. When it comes to transforming our relationship with our own consciousness – and those of the people around us – it doesn’t get much more profound than that.

So there you have it: Our picks for 2012's most potentially transformative neuroscience breakthroughs. Did we miss an important one? Did we overstate the importance of something? Almost certainly, yes. So jump into the comments and let us know!

Exercise to improve memory




Mental Exercises to Help Improve Your Memory

Phrenology (photo credit: Wikipedia)
Research has shown that exercising your mind contributes to your mental health and well-being. While physical activity has been shown to aid in sharpening our minds and recall, simple mental exercises can help us remain sharp and improve memory as we age.
Try doing this mental exercise over a 4-week period and you should notice an improvement in both your short- and long-term memory.
When you are ready to go to sleep, go over what you did that day, from the time you got up until the time you got into bed. Follow your entire day step by step, trying to recall as much detail as possible and visualizing each step in your mind from beginning to end. In the beginning, you probably won't remember much detail, and you'll tend to jump rapidly from task to task or think of the day in large blocks of time. Try to slow down and take in as much detail as you can. With time and practice, you will notice significant improvement in your recall of the day's events and details.
This basic mental exercise has the following benefits: 

1. It will improve your memory.

2. Your ability to visualize will improve.

3. You will improve your concentration.

4. You will be more in the moment throughout the day. Because you know you will be recalling your day later, you pay more attention to details throughout the day.

5. Your power of observation will improve. You will probably find yourself performing a modified recall of your day so far as it unfolds, because you know that later that night you will be trying to recall it again.

6. You will likely fall asleep faster, because your mind will tire, much as it does when counting sheep.

Monday, March 25, 2013

Amazing neuroscience 2011

What papers have been most interesting in neuroscience for the past year (2011)?

This obviously reflects a very biased opinion based upon my own limited reading, interests, etc. These also aren't necessarily the best articles, nor are they even scientifically correct, but they are certainly interesting.



  • Cohen Kadosh, R., Levy, N., O'Shea, J., Shea, N. & Savulescu, J. The neuroethics of non-invasive brain stimulation. Curr Biol 22, R108–R111 (2012).
  • Loo, C. K. et al. Transcranial direct current stimulation for depression: 3-week, randomised, sham-controlled trial. The British Journal of Psychiatry 200, 52–59 (2012).
2012 seems to have been a turning point in the growth of non-invasive brain stimulation with the rise of tDCS and tACS for treating psychiatric and neurological disorders and their symptoms, as well as for cognitive enhancement. The first paper looks at the important ethical implications for these technologies and the second shows some promise for their use in treatment.





  • Carhart-Harris, R. L. et al. Implications for psychedelic-assisted psychotherapy: a functional magnetic resonance imaging study with psilocybin. The British Journal of Psychiatry 1–8 (2012).
  • Carhart-Harris, R. L. et al. Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin. Proc Natl Acad Sci USA 1–6 (2012).
This pair of papers is interesting and important for bringing some long-taboo topics back into the scientific discussion.



  • Huth, A. G., Nishimoto, S., Vu, A. T. & Gallant, J. L. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron 76, 1210–1224 (2012).
The Gallant lab (with Alex Huth leading this research) does it again with a killer fMRI study that basically shames previous attempts and redefines our "Victorian" views of the brain as a modular/hierarchical system. Any future fMRI studies looking at "X vs Y" tasks that don't make use of these methods are falling short.

Also, the associated website should be how science is done:
Brain



  • Landau, A. N. & Fries, P. Attention Samples Stimuli Rhythmically. Curr Biol 1–5 (2012).
  • Chakravarthi, R. & Vanrullen, R. Conscious updating is a rhythmic process. Proc Natl Acad Sci USA 109, 10599–10604 (2012).
The blood is in the water and everyone "in the know" is circling around the fact that visual awareness and processing is dependent upon oscillatory brain state, and these two papers show it beautifully. Now if only someone could somehow make use of this information... (FORESHADOWING!)


  • Koralek, A. C., Jin, X., Long, J. D., II, Costa, R. M. & Carmena, J. M. Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills. Nature 483, 331–335 (2012).
  • Berenyi, A., Belluscio, M., Mao, D. & Buzsaki, G. Closed-Loop Control of Epilepsy by Transcranial Electrical Stimulation. Science 337, 735–737 (2012).
The underlying knowledge of how brain-computer interfaces work, from a physiological perspective, and how they can be used in the treatment of disease has really matured with these two papers. Reading these things still makes me feel like I'm living in the future.



  • Halberda, J., Ly, R. & Wilmer, J. B. Number sense across the lifespan as revealed by a massive Internet-based sample. Proc Natl Acad Sci USA, (2012).
Neuroscientists and psychologists are finally learning how to incorporate large datasets into their thinking, making use of the internet to learn more about cognition and aging. This is also very relevant to my interests.



  • Parvizi, J., Jacques, C. & Foster, B. L. Electrical Stimulation of Human Fusiform Face-Selective Regions Distorts Face Perception. The Journal of Neuroscience (2012).
While this paper is by friends of mine out of Stanford, I have to say that I really do like this research a lot. It has a cool combination of methods, in a fortuitously unfortunate situation, to really clearly show a behavioral effect of interfering with a fundamental process in human perception. (I wrote more detail about this on my blog: Face processing in the brain: "That was a trip")



  • Voytek, J. B. & Voytek, B. Automated cognome construction and semi-automated hypothesis generation. Journal of Neuroscience Methods 208, 92–100 (2012).
Yeah, yeah, it's my own. But dammit I really do think it's very interesting. It's even got its own topic on Quora (brainSCANr)! And I have said repeatedly and publicly that it will end up being one of the more interesting research projects of my career. (e.g., Bradley Voytek at SciPle.org, and "Neuroscientist Bradley Voytek is Bringing the Silicon Valley Ethos into Academia" at Forbes)


  • Friston, K. Ten ironic rules for non-statistical reviewers. NeuroImage 1–30 (2012).
Just bitchy in a way that so many of us in the field wish we had the clout to say the way Friston does.