Monday, April 30, 2007

Non-local realism

Curiosity always draws the student of color perception back to the advances in research on the physiology underlying color vision. The student learns about particles of light called photons hitting a rhodopsin protein like a billiard ball, isomerizing (activating) it, and setting off a phototransduction cascade that hyperpolarizes the cell membrane and cuts off neurotransmitter release to the second-order neurons in the retina. Yet when the stimulus itself is studied, it is not a particle but an electromagnetic wave. What is the correct visualization of a photon, what is a photon's realism?

Albert Einstein, who in 1905, while working at the Swiss patent office, came up with the photon concept in his paper Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt, later in life stated "Gott würfelt nicht," God does not play dice. By this he meant that quantum theory does not provide a complete description of physical reality, because it gives only probabilistic predictions of individual events. In the seminal 1935 EPR paper with Podolsky and Rosen, Einstein wrote "while we have thus shown that the wavefunction does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible."

Such models of physical realism, which hold that the results of observations are a consequence of properties carried by physical systems, are called hidden-variable theories. The idea is that all measurement outcomes depend on pre-existing properties of objects that are independent of the measurement. The limitation of quantum theory would then be that we do not know all the variables; they are hidden from us.

Another important concept is that of locality, which prohibits any influence between events in space-like separated regions. Think of it in terms of Maxwell's equations, where the electric and magnetic fields propagate as plane waves at a constant speed, the speed of light. If one event is to cause another at a distant location, the time delay between them must be at least as long as the time light takes to travel from the first event to the second.
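To make the locality criterion concrete, here is a minimal sketch (my own illustration, with invented distances and times, not anything from the experiments discussed below) that checks whether a signal travelling at most at the speed of light could link two events:

```python
# Sketch: can event B be causally influenced by event A under locality,
# i.e., can light emitted at A reach B before B happens?
# The distances and times below are invented for illustration.

C = 299_792_458.0  # speed of light in m/s

def causally_connectable(dx_m, dt_s, c=C):
    """True if a signal at speed <= c can link two events separated by dx_m and dt_s."""
    return c * dt_s >= abs(dx_m)

# Two detectors 60 m apart, outcomes recorded 50 ns apart:
print(causally_connectable(dx_m=60.0, dt_s=50e-9))  # False: space-like separated
```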

An example of a non-local effect is the quantum phenomenon of entanglement, where, for example, a Ti:sapphire femtosecond laser pumps a type-II beta barium borate (BBO) crystal, which by virtue of its optical birefringence produces two photons sharing the same wave function. The two photons can be directed into two separate arms of an instrument, becoming non-local. Yet, because they share the same wave function, when one photon's state changes the other photon's state must also change at the same time, which is a non-local effect; the two photons appear to have a simultaneous non-local reality.

[Illustration: entanglement]

Many years after the EPR paper, some physicists still debate the photon's reality. For example, every two years SPIE still holds a conference on "The Nature of Light: What Are Photons?" However, most scientists active in the field no longer ask this question. In fact, experimentally observable quantum correlations demonstrate that intuitive features of realism must be abandoned.

This is shown beautifully in a recent article by Gröblacher et al. in the 19 April issue of Nature, An experimental test of non-local realism, which is published in two parts: an experimental part in the printed journal and a theoretical part in an online supplement. The supplement shows elegantly how to construct an explicit non-local hidden-variable model. The experimental part then shows how to build an experimental set-up for testing non-local hidden-variable theories. The salient, and tricky, part of the experimental plan—in which pairs of polarization-entangled photons are generated via spontaneous parametric down-conversion, as mentioned above—is determining the two-photon visibilities so that the test is conclusive.

Where does this leave you when you are trying to understand color perception? Abandon realistic descriptions of photons. You only need a good mathematical model, and everything you need to know about visual stimulation by photons is contained in the color matching functions. Even when you need to consider conditions like color vision deficiencies, you can do so by appropriately manipulating the color matching functions, without requiring a realistic description of phototransduction.
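To make this concrete, here is a minimal sketch (my own illustration; the function name, the 5 nm step, and the idea of loading the CIE 1931 tables are my choices, not part of any standard API) of how a stimulus enters colorimetry only through the color matching functions, as a weighted sum of the spectral power distribution:

```python
# Sketch: tristimulus values from a spectral power distribution (SPD) sampled
# on the same wavelength grid as the color matching functions. In practice you
# would load the CIE 1931 2-degree tables; here the inputs are plain lists.

def tristimulus(spd, xbar, ybar, zbar, step_nm=5.0):
    """Integrate the SPD against the color matching functions."""
    X = step_nm * sum(p * x for p, x in zip(spd, xbar))
    Y = step_nm * sum(p * y for p, y in zip(spd, ybar))
    Z = step_nm * sum(p * z for p, z in zip(spd, zbar))
    return X, Y, Z
```

A color vision deficiency can then be modeled simply by substituting an appropriately modified set of xbar, ybar, zbar, exactly as suggested above.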

Thank you to Dmitri Boiko for the pointer to the Nature article.

PS: Links contributed in the comments:

Sunday, April 29, 2007

MPEG-A—Multimedia Application Formats

Today is another perfect day in Silicon Valley. The weather is a balmy 23ºC with blue skies, perfect for a walk on the beach, a hike in the woods, or a drive down to San Jose to attend the First Multimedia Application Formats Awareness Event organized by NIST (National Institute of Standards and Technology), INCITS (InterNational Committee for Information Technology Standards), and MPEG from 8:30 a.m. to 2:00 p.m.

If you recently strolled through a good shopping mall, you probably saw that spacious store in white and light wood, selling—no, not handbags or scarves—but of all things computers, mass merchandise that usually sells without markup, with the manufacturer pocketing a small profit by selling sticker space on the keyboard and icon space on the desktop for bloatware. But as you walk towards the back of the store, there is a table with a media person giving a free workshop.

These media people are not store clerks who attended a one-hour training session, but knowledgeable users of the medium at hand, be it music creation, music appreciation, blogging, video blogging, movie making, photography, etc. When the workshop is finished, there are always people asking the price of the software used in the workshop. The answer is a laconic "it's free," and when people ask where they can download it and whether it runs on XP or requires Vista, the trainer responds "it comes pre-installed on our computers." A few minutes later you see attendees walking out of the store with a box containing the new computer they just bought. Note these are not just teenagers using their parents' credit cards; they are also octogenarians brandishing their grey preferred care card instead of their AARP card.

Clearly people do not want to just balance their checkbooks or fill in spreadsheets on their computers. They want to use their computers for their creative hobbies. This is possible at an affordable price because today's computers are very fast and ISO (the International Organization for Standardization) has developed a number of excellent technologies for codecs, metadata management, digital rights management, and digital item streaming.

As a reader of this blog, you are already familiar with JPEG and JPEG-2000 [by the way, as a follow-up on that post, this week Microsoft submitted HD Photo as a possible JPEG standard, so all the technical details are now available], which deal with still images. Sound and video, as well as digital items in general, are covered by MPEG. You have already heard of the old family members:

  • MPEG-1 and MPEG-2 provide interoperable ways of representing audiovisual content, commonly used on digital media and on the air
  • MPEG-4 defines how to represent content
  • MPEG-7 specifies how to describe content
  • MPEG-21 provides a truly interoperable multimedia framework

From an implementation point of view, MPEG-1 and MPEG-2 provide codecs for audiovisual streams. MPEG-1 is the standard on which such products as Video CD and MP3 (MPEG-1 Audio Layer III) are based; MPEG-2 is the standard on which such products as Digital Television set top boxes and DVD are based; MPEG-4 is the standard for multimedia for the fixed and mobile web; and MPEG-7 is the standard for description and search of audio and visual content.

[Image: MPEG banner]

Returning now to that store, they are selling neither magic nor snake oil. Looking at their media tools, my guess is that they have an excellent implementation of the various JPEG and MPEG codecs, a very slick XML management system that delivers seamless interoperability across a multitude of tools and an Internet service, and a family of polished user interfaces.

In principle, anybody can do it, because JPEG and MPEG have reference implementations and examples of everything. Learning to listen to customers is a little harder, and the system engineering for architecting the content management framework is even harder, but these days computer scientists are a dime a dozen. The idea behind MPEG-A is to show entrepreneurs how easy it all is, and to even give them example implementations they can modify to build their own products. Here it is in MPEG's own words:

The 1st Multimedia Application Formats Awareness Event introduces ISO's newest multimedia standard, ISO/IEC 23000, also known as MPEG-A. MPEG-A aims to serve clearly identified market needs by facilitating the swift development of innovative and standards-based multimedia applications and services. The corresponding application-driven process results in normative specifications of Multimedia Application Formats (MAF) along with reference software, which demonstrates the use of the MAF, and which offers a head-start in product development for multimedia based applications and services. The ultimate objective of MAFs is to stimulate even more the usage of MPEG standard technologies by providing the user with another degree of interoperability at the application and service levels. A MAF (which formally corresponds to an individual part of the MPEG-A standard) specifies a combination of already standardized MPEG and non-MPEG tools providing an appropriate technical solution for a class of applications. This type of standard application and service provide the solutions needed for managing, searching, filtering, and accessing of the exponential growth of public and private multimedia content from the environments of Internet, digital broadcast networks, and mobile devices. The MAF awareness event demonstrates powerful application and service technologies and lays out a migration path towards widespread usage of multimedia content for any organization wishing to provide a set of standard comprehensive and cost-effective content management and distribution service solutions for their customers.

As a consumer, with MPEG-A, you are no longer constrained to white gadgets whose name starts with an 'i'. You can buy mobile phones, cameras, music players, GPS navigators, TVs, etc. from any brand as long as they are MPEG-A compliant.

I have not written anything on the event itself, but that is not necessary, because it is all available on the Web, just look at your tax dollars at work at http://maf.nist.gov.

Thank you to Touradj Ebrahimi for the invitation to the event.

PS: Links contributed in the comments:

Tuesday, April 24, 2007

HP Tech Con

This week it is easy to get a good parking spot at HP Labs, because HP's technical conference is taking place in San Antonio. HP Tech Con, the premiere internal conference for HP's technologists, brings together a cross-section of technical leaders from a wide range of disciplines from around the company. The conference recognizes some of the company's most promising work, enhances collaboration and visibility for top-tier technologies, and continues to be a driving force for technical innovation at HP.

I posted a couple of entries in this blog on the demise of industrial research in the U.S. HP is an exception, because as CEO Mark Hurd likes to point out, "we are one of only a handful of systems companies left on the planet that invests in significant R&D. My goal is for HP to be the R&D leader in the areas strategic to HP and our customers. I want them to think of HP as a company that's driving useful innovation and bringing it to market in the most efficient way possible to help them solve problems or improve their lives."

In this spirit, EVP Shane Robison, one of the world's most influential chief technology officers, organizes an HP-wide technical conference every year. This year's is the fifth. HP's top 1.7% of technologists are invited to this event, which is HP's primary vehicle for sharing, communicating, and displaying technical work across disciplines and organizations.

It is an investment in HP's technology leadership, and "by invitation" means that it is centrally organized and financed. Thus, it does not impact project budgets, and top performers are truly rewarded.

Tech Con is designed to foster a sense of community and stimulate collaboration and enthusiasm across HP's diverse, talented community of technical contributors. Attending technologists are energized in seminars by key executives and take part in team-building activities. Technologies such as blogs and SharePoint are used to keep everybody involved 24 hours a day in total immersion. In addition, these technologies allow HP's other technologists to participate in this important event, shifted in time and space.

The conference hosts topics spanning areas where HP possesses unique technical capability that can be applied to current business interests and/or leveraged across HP’s portfolio to drive new growth.

"HP’s unique competitive advantage and technical edge are based on our ability to collaborate globally across a wide range of disciplines," said Shane Robison, EVP, and Chief Strategy and Technology Officer, who sponsors HP Tech Con ’07. "This year, I look forward to the conference and the continued impact that it will have on HP. Tech Con ’07 promises to be as stimulating as those in the past."

PS: Here are the links mentioned in the comments:

Friday, April 20, 2007

Mini review. Psychophysics of Reading in Normal and Low Vision

In February I wrote about Silvia Zuffi and Carla Brambilla's work on the readability of colored text on a colored background (permalink). In the second paragraph I mentioned Gordon Legge's work. Legge has revisited his work and compiled it into a must-have book for anybody doing graphical user interfaces or working on digital publishing.

In Europe, and even more so in Japan, there is great concern about low vision, because the population is rapidly aging and, concomitantly, there is more reliance on electronic displays. The eye has not evolved for the longevity of today's Homo sapiens, and an aging population means that an increasing percentage of it has to cope with diseases like glaucoma and macular degeneration.

At the same time, we no longer make a phone call by turning a rotary dial but by selecting a number from a phone list displayed on a tiny and dim LCD screen, we no longer stay in line at a government office to speak to a representative but fill out a Web form, we no longer have instruments or warning lights in our cars but read the car status and our location on a navigation panel, etc.

In the United States the problem is not yet as grave—according to Legge's book there are only an estimated four million individuals with low vision. However, the federal government has regulations for the accessibility of federal documents. It is not just a question of being considerate to our elders; it has become a requirement for doing business with the government.

My first encounter with Legge's work goes back to about 1987 or so, when I was working for Gary Starkweather, who asked me to look into the trade-off of resolution vs. gray levels on the new ionographic printers that had started showing up on our desks. At that time, I had the chance to meet Legge at an OSA vision meeting in San Francisco and ask him for advice on setting up a readability experiment. He did not tell me to read a paper or a book—he listened patiently to my problem description and then sketched out how he would design the experiment if he were in my place.

For the historian of science it is interesting to read Legge's papers and learn how his knowledge evolved and expanded. However, his book is what is valuable for us doing science, because in it Legge revisits all his research with what he knows now. This is why I think it is a must-have book. Here is the catalog information:

Gordon E. Legge
Psychophysics of Reading in Normal and Low Vision
Lawrence Erlbaum Associates, Mahwah, 2007
ISBN: 0-8058-4328-0

At $110.00 the book is quite expensive. The price includes a CD-ROM with full reprints of the twenty original articles in the Psychophysics of Reading series. The text of the articles has been typeset in a uniform format, and the figures have been reprinted from the originals. The CD-ROM also contains a cumulative reference list with all the citations from the book and the twenty articles, unfortunately only in PDF, not in BibTeX format. Supporting material for the MNREAD test is also included—a score sheet which can be printed for use, and computer source code in C, MATLAB, and Perl for estimating MNREAD parameters from test data.
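For readers curious what such an estimation might look like, here is a rough sketch (my own simplification, not the code shipped on the CD-ROM) of one common way to summarize MNREAD-style data: take the maximum reading speed as the mean speed over the plateau and the critical print size as the smallest print size still on that plateau.

```python
# Sketch of one simple way to summarize MNREAD-style data (not the CD-ROM code).
# Print sizes are in logMAR, speeds in words per minute; the numbers are invented.

def mnread_summary(print_sizes, speeds, criterion=0.8):
    """Estimate maximum reading speed (MRS) and critical print size (CPS)."""
    top = max(speeds)
    # Plateau: print sizes read at >= criterion * top speed.
    plateau = [(s, v) for s, v in zip(print_sizes, speeds) if v >= criterion * top]
    mrs = sum(v for _, v in plateau) / len(plateau)  # mean speed over the plateau
    cps = min(s for s, _ in plateau)                 # smallest print size on the plateau
    return mrs, cps

sizes = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
wpm   = [180, 185, 178, 150, 90, 30]
print(mnread_summary(sizes, wpm))
```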

Wednesday, April 11, 2007

Stuffing the toolbox

In industrial and government research organizations there are often religious wars over what tools researchers should have in their toolboxes. In part this is due to onerous purchasing processes, which tend to make researchers cling to whatever tools they already have, in a sort of conservative reflex—need to pound a nail? Use your shoe! In academia the situation is healthier, because generous educational discounts allow researchers to use whatever tools let them accomplish their job most efficiently by the deadline.

Every transition in your research is a good time to look at your toolbox and reconsider what you have. If I did not keep revisiting my toolbox, I would still be doing my backups on paper tape; after all, it would still work—at least sort of.

Artisans of days past used to build their own tools at the beginning of their careers and then kept improving them, because this was their competitive advantage over their colleagues. This is not a good paradigm for a modern research lab, because today's tools are so sophisticated that they take a lifetime to build. I would not even advise anybody to build something as elementary as an integrating sphere; it would take ages to get it completely smooth, and you would probably never figure out where to mount the baffles, let alone determine their geometry.

Every color scientist should have a spectrophotometer. Measuring color is difficult, and it requires a lot of intuition to assess the correctness of a measurement; the only way to build that intuition is through daily practice. The instrument should be tested regularly, so you become confident in it but not overconfident. Do use your spectrophotometer in emission mode to regularly calibrate your monitor, because that is also a tool in which you need to have confidence.
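As an example of the kind of routine check this enables, here is a small sketch (invented measurements, and only the gamma of a single gray ramp; a real calibration also handles black level, white point, and per-channel curves) that fits a display gamma from emission-mode luminance readings:

```python
# Sketch: estimate display gamma from luminance measurements of a gray ramp,
# taken with a spectrophotometer in emission mode. Values are invented.
import math

def estimate_gamma(levels, luminances):
    """Least-squares fit of gamma in L = L_max * (d/d_max)**gamma, in the log domain."""
    d_max, l_max = levels[-1], luminances[-1]
    xs = [math.log(d / d_max) for d in levels[:-1]]
    ys = [math.log(l / l_max) for l in luminances[:-1]]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

digital = [32, 64, 96, 128, 160, 192, 224, 255]
cd_m2   = [1.2, 5.3, 13.0, 25.1, 42.0, 64.5, 92.8, 120.0]
print(round(estimate_gamma(digital, cd_m2), 2))  # roughly 2.2 for this ramp
```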

You should periodically self-administer the Farnsworth-Munsell 100 hue test, so you know what you can see and you know when you are getting out of shape.
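If you like to track your scores over time, the total error score is easy to compute; here is a simplified sketch (my own reading of the usual scoring rule, treating the 85 caps as one circular arrangement rather than the four boxes of the physical test):

```python
# Simplified sketch of Farnsworth-Munsell 100 hue scoring: each cap's score is
# the sum of the absolute differences to its two neighbors, and the total error
# score subtracts the 2 points a perfect arrangement earns per cap. Treating all
# 85 caps as one circle is a simplification of the boxed test.

def total_error_score(arrangement, n_caps=85):
    def diff(a, b):
        d = abs(a - b)
        return min(d, n_caps - d)  # wrap around the hue circle
    tes = 0
    for i, cap in enumerate(arrangement):
        left = arrangement[i - 1]
        right = arrangement[(i + 1) % len(arrangement)]
        tes += diff(cap, left) + diff(cap, right) - 2
    return tes

perfect = list(range(1, 86))
print(total_error_score(perfect))   # 0
swapped = perfect.copy()
swapped[10], swapped[11] = swapped[11], swapped[10]
print(total_error_score(swapped))   # small positive score for one transposition
```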

The other tools depend on your specific area in color research. In this post, I will focus on the software, because you will have to use software tools and you will have to write software. Let us first look at the programming environment.

What you use depends on your deliverable. When your deliverable changes, your programming tools should be reconsidered. If you are doing wet color science, your deliverable will be an experimental procedure. Your software will be used to design the experiment, run it, and evaluate it. A nice recent example of wet color science is in the 30 March 2007 issue of Science, where Buschman and Miller developed a novel electrode system that allowed them to record from up to 50 channels simultaneously, letting them study top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices.

In such a case, the best strategy is to ask the vendor of the instrument you are interfacing with for a recommendation, because they have often chosen a system for which they write drivers and demo software first. The MathWorks and Wolfram Research are two software vendors with interpreted environments that allow you to cut software development down to a minimum and get quickly to your computational results. Your organization probably has a site license for one or the other, so you do not need to go through a purchasing process. There are a number of freely available toolboxes for this software that can give you a running start—just run a search on the Internet. If you end up using MATLAB, invest in Westland and Ripamonti's computational color science book.

If you are doing computational color science, sooner or later you will have to deliver a system, so you should have a full-fledged system in your toolbox. Talk first to your colleagues. Have they built a shared set of tools? Sometimes a colleague who used to work on your new project has a system, no longer wants to maintain it, and would be very happy to have you take it over. Since it has become hard these days to get budget for tools, look second for open source software; SourceForge is the best place to start looking. If you find a pertinent system, download it and give it a spin. If you like it, join the team as a contributor and get engaged.

If you have to develop your own system, there are a few points to note. Do not use an integrated development environment (IDE) targeted at software developers, because you do not need all the bells and whistles, and over time you would spend too much time keeping up with the stream of new feature updates and dealing with complex team version-management tools. Ideally you want an IDE that allows you to easily switch platforms. Check out Eclipse.

For the programming language, you want a language that is platform independent and commonly used in open source projects. Inheritance is a good thing to have because you can then, for example, implement a generic color model operator, which you then subclass every time you need to add a new color space. A garbage collector is also handy because then you can forget about memory management. For high-level work Java and C++ are popular choices, for low-level work OpenGL is a popular choice. If you have to write stand-alone Java applications, use SWT with JFace and, if you like visual programming, SWT Designer.
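Here is a minimal sketch of that generic color model operator pattern (in Python rather than Java or C++, purely for brevity; the matrix is the standard linear-light sRGB-to-XYZ matrix, and a production version would also handle the transfer function and chromatic adaptation):

```python
# Sketch of a generic color model operator: the base class fixes the interface,
# and each new color space is added as a subclass.
from abc import ABC, abstractmethod

class ColorSpace(ABC):
    @abstractmethod
    def to_xyz(self, values):
        ...

    @abstractmethod
    def from_xyz(self, xyz):
        ...

class LinearSRGB(ColorSpace):
    # Linear-light sRGB to XYZ (D65); the nonlinear transfer function is omitted.
    M = [(0.4124, 0.3576, 0.1805),
         (0.2126, 0.7152, 0.0722),
         (0.0193, 0.1192, 0.9505)]

    def to_xyz(self, rgb):
        return tuple(sum(m * c for m, c in zip(row, rgb)) for row in self.M)

    def from_xyz(self, xyz):
        raise NotImplementedError("invert M here")

print(LinearSRGB().to_xyz((1.0, 1.0, 1.0)))  # approximately the D65 white point
```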

A recent general book on computational color science is Kang's Computational Color Technology, published by SPIE Press.

Interpreted systems are not a good choice when you have to develop a system. They are difficult to maintain, and even though they all have good documentation facilities, these are rarely used, so after a year you will have forgotten all the details of your implementation. Since there is no error handling, you cannot debug the system; you would spend less time simply rewriting it. Use the right tool for the job!

If you develop for color management, SourceForge hosts a color management system called Little CMS that will give you a running start.

If you develop for the Web, stay away from scripting like the plague. Scripts are not only impossible to debug, but they consume an input, try to guess what it means, and then execute powerful server code to process the putative data. There is a legion of entrepreneurs who exploit this "feature" to run businesses for spam, phishing, identity theft, peer-to-peer sharing, etc., hijacking your server. This would piss off your organization, and instead of working on your research you would be patching holes in your system.

The best approach for color science on the Web is to rely on a robust system like Apache and Tomcat. Program in a strongly typed language and religiously perform consistency checks on the input. Never reuse a string you receive from the Internet; use it only to create a new string with fresh bytes, because you have no idea what a creative entrepreneur can hide in a string. Initialize all your variables, check boundaries, and religiously catch all exceptions.
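In that spirit, here is a small defensive sketch (illustrative only; the length limit and the character whitelist are arbitrary choices, not a complete security layer) that copies an incoming string into fresh bytes, validates it, and catches everything:

```python
# Sketch: defensive handling of a string received from the network. Copy it into
# fresh bytes, decode strictly, whitelist the characters, bound the length, and
# catch every exception. Illustrative only, not a security framework.
import re
from typing import Optional

MAX_LEN = 64
ALLOWED = re.compile(r"^[A-Za-z0-9_.\-]+$")

def sanitize(raw: bytes) -> Optional[str]:
    try:
        text = bytes(raw).decode("ascii")  # fresh bytes, strict ASCII decoding
        if len(text) > MAX_LEN or not ALLOWED.match(text):
            return None
        return str(text)                   # a new string, not the received object
    except Exception:
        return None                        # never let bad input propagate

print(sanitize(b"delta_E_report.csv"))           # accepted
print(sanitize(b"<script>alert(1)</script>"))    # rejected: None
```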

Your data is valuable, and you will be glad to be able to access it in the future to mine it. So store it in a database you can search and access from any programming environment. Do not forget to include all the metadata you can, because the more metadata you have, the more valuable your data will be in the future. A good relational SQL database is MySQL. By the way, spreadsheets are for modeling and trying out (financial) models and scenarios; they are not for storing data.
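As a sketch of what this looks like in practice (using SQLite only to keep the example self-contained; the table name, columns, and sample values are invented, and the same schema works on a MySQL server):

```python
# Sketch: store a spectral measurement together with its metadata in a
# relational database. SQLite keeps the example self-contained; the same
# schema and SQL also work with MySQL.
import json
import sqlite3

conn = sqlite3.connect("colorlab.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measurement (
        id          INTEGER PRIMARY KEY,
        taken_at    TEXT NOT NULL,    -- ISO 8601 timestamp
        instrument  TEXT NOT NULL,    -- which spectrophotometer was used
        sample      TEXT NOT NULL,
        illuminant  TEXT,
        observer    TEXT,
        spectrum    TEXT NOT NULL     -- JSON list of (nm, value) pairs
    )
""")
conn.execute(
    "INSERT INTO measurement (taken_at, instrument, sample, illuminant, observer, spectrum) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("2007-04-11T10:30:00", "bench spectrophotometer", "patch 42", "D50", "2 deg",
     json.dumps([(400, 0.12), (450, 0.18), (500, 0.25)])),
)
conn.commit()
conn.close()
```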

Periodically, when you discover something interesting, you have to publish your results. If you are using Mathematica or MATLAB, use the built-in editor to take advantage of the seamlessly integrated system. Otherwise, there is still no better technical typographic design system than LaTeX, which uses the best-in-class TeX typesetter. Today you no longer have to type your text in Emacs; you can use a modern GUI for LaTeX, for example TeXShop.

If you write a long document like a book or the slides for a course, you need a document preparation system that allows you to break the document into a file per chapter or module, can manage hypertext links across files, autonumber across files, generate indices and tables of contents, typeset formulas, and include floating figures by reference. Unfortunately, only one generally available such system has ever been implemented, and its development stopped a decade ago, so the GUI is quite ancient. However, if your organization still has a FrameMaker site license you are in luck, even if it is for an old version, because not much has changed for technical and scientific document preparation after release 4. Most important, it always works and you never get surprises.

Finally, if you are not using an integrated system, you need graphics software. XGRAPH will create your plots. Xfig can help you draw any kind of illustration. If you need to draw many diagrams, you may want to invest in a diagramming application like OmniGraffle. Last but not least, there is also an inexpensive application for fitting your data, called proFit.

So far, this has been my contribution. Now it is up to you: please write a comment with your own tool recommendations.

Wednesday, April 4, 2007

IS&T 2007 Honors and Awards

A few minutes ago the IS&T announced its 2007 Honors and Awards. For the details see IS&T's press release. Here is the summary.

Honorary Member:  Jan Allebach

“For his many and diverse contributions to imaging science; including halftoning, digital image processing, color management, visual perception, and image quality.”

Carlson Award:  Hiroyuki Kawamoto

“For his significant contributions to practical aspects of the technology of electrophotography.”

 Bowman Award:  Hiroaki Kotera

“For a lifelong dedication in advising young researchers building successful careers in image processing and color science.”

Fellowship:  Ralph Jacobson, Shoji Tominaga, Bahram Javidi, Rob Buckley, Daniele Marini

Jacobson:  “For his contributions to the field of image quality metrics and his leadership in imaging science education.”

Tominaga:  “For his contributions to color imaging science, particularly the interaction of light with materials, color constancy, and illuminant estimation.”

Javidi:  “For his contributions to 3-D imaging science, information security, and image recognition.”

Buckley:  “For his contributions to gamut mapping, color encoding, document encoding, and their standardization.”

Marini:  “For his contributions to computer graphics and his development of a practical approach to Retinex theory.”

Senior Membership:  Jim King, Jim Owens, Franziska Frey

King:  “For his many contributions to the leadership of IS&T and its conferences.”

Owens: “For his dedicated work for IS&T as a conference organizer, national officer, lecturer, and committee chair.”

Frey:  “For her leadership in establishing and promoting the Archiving Conference and her contributions to the organization of many other IS&T conferences.”

Journal Award, Science: Richard P.N. Veregin, Maria N.V. McDougall, Michael S. Hawkins, Coung Vong, Vladislav Skorokhod, and Henry P. Schreiber

 “A Bidirectional Acid-Base Charging Model for Triboelectrification: part I.  Theory” and “part II.  Experimental Verification by Inverse Gas Chromatography and Charging of Metal Oxides,”  Journal of Imaging Science and Technology, 50 #3, 282-287 and 288-293, 2006

Journal Award, Engineering:  Beat Münch and Úlfar Steingrimmsson

”Optimized RGB for Image Data Encoding,” Journal of Imaging Science and Technology, 50 #2, 125-138, 2006

Itek Award:  Veronika Chovankova-Lovell

“Novel Phase Change Inks for Printing Three-Dimensional Structures,” Journal of Imaging Science and Technology, 50 #6, 550-555, 2006

Service Award:  Roger David Hersch

“For his long term contributions to the organization of the Electronic Imaging conferences.”

Gutenberg Prize:  Jeff Folkins

“For his substantial contributions to electrophotography and his more recent innovations in solid ink jet printing.”

Davis Scholarship:  Bhaskar Choubey and Steve Viggiano

Sex and evolution: on becoming a trichromat

From the differences in the amino-acid sequences of the various photoreceptor genes it is clear that the human visual system did not evolve according to a single design. Most mammals have two classes of photopigments, one encoded on an autosome and mostly sensitive to short-wavelength stimuli, and one sex-linked and mostly sensitive to medium wavelengths. To evolve into a trichromat, is it sufficient to shuffle the sex-linked genes to create an additional sensitivity, or does one first have to evolve the opponent mechanisms supported by the midget bipolar and retinal ganglion cells?

This question is addressed in research at UC Santa Barbara and at Johns Hopkins in Baltimore, published in the latest print issue of Science magazine. The full reference is Gerald H. Jacobs, Gary A. Williams, Hugh Cahill, Jeremy Nathans, Emergence of Novel Color Vision in Mice Engineered to Express a Human Cone Photopigment, Science 23 March 2007: Vol. 315, no. 5819, pp. 1723–1725. If you are an AAAS member, the online version is at this link.

For an overview, the following table, which Lucia Ronchi and I compiled at the 1993 AIC meeting in Budapest based on 1983 work by Eberhart Zrenner, may be useful.


Finding                         Rod and S Mechanisms      L and M Mechanisms

Anatomy
  Distribution                  perifoveal                foveal
  Bipolar circuitry             one class (only on)       two classes (on and off)

Psychophysics
  Spatial resolution            low                       high
  Temporal resolution           low                       high
  Weber fraction                high                      low
  Wavelength sensitivity        short                     medium

Electrophysiology
  Response function             saturates                 does not saturate
  Latencies                     long                      short
  ERG off-effect                negative                  positive
  Ganglion cell response        afterpotential            no afterpotential
  Receptive field               large                     small
  Vulnerability                 high                      low

Genetics                        autosomal                 sex-linked

To answer the question of whether it is sufficient to shuffle the sex-linked genes to create an additional sensitivity, or whether one first has to evolve the opponent mechanisms supported by the midget bipolar and retinal ganglion cells, Jacobs et al. designed a human L cone pigment knock-in mouse. Most of the coding sequences for the native mouse M cone pigment were replaced with sequences encoding a human L pigment. These mice were then backcrossed for five generations. Of interest are the heterozygous females, who carry a mixture of the two pigments.

Color vision requires both multiple photopigments and appropriate neural wiring, and it has been argued that the organization of the primate retina, and in particular the low-convergence midget bipolar and ganglion cell system, is such that the addition of a new class of cone photoreceptors may be all that is required for comparing M versus L cone signals. This first step in the evolution of primate trichromacy is what the authors modeled with their knock-in mouse.

The authors asked whether the sudden acquisition of an additional, spectrally distinct pigment and its production in a subset of cones suffice to permit a new dimension of chromatic discrimination, which would imply that

  1. the mammalian brain is sufficiently plastic that it can extract and compare a new dimension of sensory input and
  2. the heterozygous female primate that first inherited an additional X-chromosome allele would have immediately enjoyed a selective advantage with respect to chromatic discrimination.

How do you do psychophysics with mice? To examine whether vision is altered by the added photopigment, Jacobs et al. tested their mice in a behavioral three-alternative forced-choice discrimination task. In this task, the mouse was required to identify which one of the three test panels was illuminated differently from the other two, with the location of the correct choice varying randomly between trials.
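To see what counts as evidence of discrimination in such a task, here is a small sketch (my own illustration with invented trial counts, not the authors' analysis) that compares a proportion of correct choices against the 1/3 chance level with an exact binomial test:

```python
# Sketch: is performance in a three-alternative forced-choice task above the
# 1/3 chance level? One-sided exact binomial test; trial counts are invented.
from math import comb

def p_above_chance(n_correct, n_trials, p_chance=1/3):
    """P(X >= n_correct) if the animal were guessing at the chance level."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

print(p_above_chance(160, 300))  # well above chance: a tiny p-value
print(p_above_chance(105, 300))  # near chance: a large p-value
```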

A first experiment demonstrates that the knock-in mice can extract visual information from the L cones. The next step is to perform brightness-matching experiments, because color vision implies the ability to discriminate variations in spectral composition irrespective of variations in intensity.

In the last experiment, after extensive training (~17,000 trials), an M/L heterozygous female with a balanced M:L ratio (44:56) successfully discriminated 500-nm from 600-nm lights. These results imply that color vision in this M/L heterozygous mouse is based on a comparison of quantal catches between the M and L pigments.

The mouse lacks a midget system, and thus the color vision documented in M/L heterozygotes must be subserved by other means. Most mouse retinal ganglion cells have a receptive field center with an antagonistic surround, albeit a weak one, and chromatic information could be extracted based on differences in M versus L input to these two regions. In a variation on this idea, chromatic information could also be extracted simply based on variation among retinal ganglion cells in the total M versus L weightings.

These results have general implications for the evolution of sensory systems. The behavioral or electrophysiological responses show the predicted expansion or modification of sensitivity. The authors' observation that the mouse brain can use this information to make spectral discriminations implies that alterations in receptor genes might be of immediate selective value, not only because they expand the range or types of stimuli that can be detected, but also because they permit a plastic nervous system to discriminate between new and existing stimuli. Additional genetic changes that refine the downstream neural circuitry to more efficiently extract sensory information could then follow over many generations.

Will this research yield a cure for color blindness? Could we replace the appropriate gene sequence in a color-blind father's spermatozoa with the corresponding sequence from the mother's egg? The answer is a clear no, because the gene sequences do not directly encode the pigment's peak sensitivity.

The X chromosome has a number of base repetitions in the gene sequences encoding peak sensitivity; there is no single, clearly defined L or M gene. The sequence is very labile, i.e., the genes easily get transposed or shuffled, thereby causing color vision defects in males, who have only a single X chromosome. As with all gene sequences, only a portion is active, i.e., transcribed into messenger RNA (mRNA). Therefore, examining the X chromosome in the spermatozoa is not conclusive, and examination of the mRNA can only occur post partum and is destructive for the retina, so it is out of the question.

Currently we know far too little about proteomics to even start thinking about a cure for color vision deficiency.

PS: as usual, since our software does not support links in comments, I am adding the links here