Tuesday, April 29, 2008

The mystery of stable images

We know that, optically, the eye is like a simple meniscus camera that projects an image onto the retina. We also know that the cortex carries a holomorphic map of the visual field. However, we know very little about what happens in between. For example, the lateral geniculate nucleus (LGN) has seven layers, and if we stick a toothpick through a point, as through a club sandwich, the layers turn out to be geometrically aligned.

But the human visual system (HVS) is not like a camcorder. Here is a simple test: take a camcorder and film while you move down the street. Try doing this while walking, running, riding a camel, riding in a sports car. While you are doing this, each time you see the same stable scene.

Now remove the context, i.e., sit in a chair and watch what you filmed. The sports-car footage will probably just be blurred; the other pieces will probably give you motion sickness, and you will not make out much visual information. If you have ever used a virtual reality system, you know what I mean.

Add to this that you constantly move your head and that, between fixation points, your eyes keep making saccades. We must conclude that the map on the cortex is not an image of the retina but an image of the real world. How does the HVS perform this feat?

Tim Gollisch of the Max Planck Institute of Neurobiology and Markus Meister of the Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, have solved this riddle and published their data on page 1108 of Science, 22 February 2008.

The trick is performed by the retinal ganglion cells, which come in several types: fast OFF cells, slow OFF cells, ON cells, and biphasic OFF cells, which receive inputs from both the ON and OFF pathways. It turns out that they have different spike latencies, which can be a powerful mechanism to rapidly transmit a new visual scene. Moreover, certain neurons in the visual cortex are exquisitely sensitive to the coincidence of spikes on their afferents, which is one possible readout mechanism for a latency code.
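The idea of a relative latency code can be illustrated with a toy sketch: stronger stimuli evoke earlier first spikes, and the *difference* in latency between a fast cell and a slower reference cell encodes the stimulus independently of when the stimulus actually arrived. The latency functions and all parameters below are illustrative assumptions of mine, not the model from the Gollisch and Meister paper.

```python
# Toy relative spike-latency code: a fast cell and a slower reference cell
# both fire earlier for stronger stimuli, but with different gains, so the
# latency difference between them encodes contrast. All numbers are
# hypothetical, chosen only to make the arithmetic transparent.

def first_spike_latency(contrast, base_ms, gain_ms):
    """Stronger stimuli (contrast near 1) evoke earlier first spikes."""
    return base_ms + gain_ms * (1.0 - contrast)

def encode(contrast, onset_ms):
    """Absolute first-spike times of a fast cell and a slower reference cell."""
    fast = onset_ms + first_spike_latency(contrast, base_ms=20.0, gain_ms=40.0)
    ref = onset_ms + first_spike_latency(contrast, base_ms=50.0, gain_ms=10.0)
    return fast, ref

def decode(fast_ms, ref_ms):
    """Recover contrast from the latency difference alone.

    ref - fast = (50 - 20) + (10 - 40) * (1 - c) = 30 * c,
    so the stimulus onset time cancels out entirely.
    """
    return (ref_ms - fast_ms) / 30.0

if __name__ == "__main__":
    # The same contrast yields the same latency difference regardless of
    # when the new scene appeared -- a readout based on relative timing
    # does not need to know the onset time.
    for onset in (0.0, 123.4):
        fast, ref = encode(0.8, onset)
        print(f"onset={onset:6.1f} ms  fast={fast:.1f}  ref={ref:.1f}  "
              f"decoded contrast={decode(fast, ref):.2f}")
```

A coincidence-sensitive downstream neuron could implement such a readout by responding to the relative timing of spikes on its afferents rather than to their absolute times.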

The authors conclude that it is conceivable that early aspects of sensory processing operate on the basis of the classification of spike-latency patterns.