
Wednesday, March 27, 2013

A Vision about Vision (Visual Cortex)


TAKING VISION APART

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and to specific parts of the visual field, a new study reports.
[Image caption: At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.]
If other labs can confirm these results, it will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.
Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – were the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.
See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
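To make that contrast concrete, here’s a toy sketch (entirely my own, not from the study): a bit is all-or-nothing, while a cartoon neuron’s response is a graded value shaped by the strengths of its connections.

```python
import math

def digital_gate(x):
    # A bit: strictly on or off, nothing in between.
    return 1 if x >= 0.5 else 0

def analog_neuron(inputs, weights):
    # A cartoon neuron: a graded response shaped by connection
    # strengths (weights), squashed into a smooth 0..1 range.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

print(digital_gate(0.49), digital_gate(0.51))  # 0 1 -- all or nothing
print(round(analog_neuron([0.2, 0.8], [1.5, 0.5]), 3))  # 0.668 -- a degree
```

Tweak one weight slightly and the analog output shifts slightly; flip one input past the threshold and the digital gate snaps. That difference in kind is the point.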
So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.
But the better we understand brains, the better our tools for studying them become – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports.
The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and how strongly – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:
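The basic logic of that dye-based readout can be sketched in a few lines. This is a hypothetical simplification of mine – the paper’s actual analysis is far more sophisticated – but the idea is just “flag the moments when the glow jumps well above baseline”:

```python
def firing_events(trace, baseline=1.0, threshold=0.2):
    """Return indices where fluorescence rises more than `threshold`
    (as a fraction of baseline) above the resting glow."""
    return [i for i, f in enumerate(trace)
            if (f - baseline) / baseline > threshold]

# Simulated fluorescence trace: mostly resting glow, two bright flashes.
trace = [1.0, 1.02, 0.98, 1.6, 1.1, 1.0, 1.45, 1.0]
print(firing_events(trace))  # [3, 6] -- the two flashes
```

Do that for thousands of neurons at once while the animal watches a stimulus, and you get the kind of activity map the study describes.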
Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.
In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.
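As a loose analogy (my toy code, not the paper’s model), you can think of each layer as reading out one aspect of the stimulus, with the final percept being the combination of all those voices:

```python
def roundness_layer(shape):
    # One "section of the choir": is it round?
    return shape["corners"] == 0

def size_layer(shape):
    # Another section: how big is it?
    return "large" if shape["area"] > 50 else "small"

def color_layer(shape):
    return shape["color"]

def perceive(shape):
    # Each layer adds its own voice to the combined description.
    return {
        "round": roundness_layer(shape),
        "size": size_layer(shape),
        "color": color_layer(shape),
    }

ball = {"corners": 0, "area": 80, "color": "red"}
print(perceive(ball))  # {'round': True, 'size': 'large', 'color': 'red'}
```

The real cortex doesn’t pass around dictionaries, of course – but the division of labor, aspect by aspect rather than shape by shape, is the part the study confirmed.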
Now, other teams have already developed technologies that can reconstruct rough glimpses of memories and dreams from human brain activity – so what’s so amazing about this particular study? The level of detail:
Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.
Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
The gap between mind and machine is getting narrower every day. How does that make you feel?
