Tuesday, February 27, 2007

Visualizing color data

In most practical applications color is a three-dimensional quantity, for example a red, green, blue triplet or an XYZ tristimulus value. With today’s fast graphics cards it is easy to visualize such data, for example to study gamut mapping. In color research, however, color often has more dimensions, and then it becomes trickier to visualize the data.

For example, if you are modeling a printer, you may have to visualize a space with three CIELAB coordinates and four CMYK coordinates. If you also have to take into account geometric appearance, you have to add gloss and granularity for a nine-dimensional space. When metamerism or fluorescence is also an issue, you may even have to go spectral.
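As a minimal sketch of one common trick for looking at more than three dimensions at once, the Python snippet below draws a parallel-coordinates plot of made-up printer data; the CMYK-to-CIELAB mapping, the column names, and the grouping are purely illustrative assumptions, not a real characterization.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical printer characterization data: 200 CMYK patches plus toy
# CIELAB values standing in for the measured colors.
rng = np.random.default_rng(0)
cmyk = rng.uniform(0.0, 1.0, size=(200, 4))
L = 100.0 * (1.0 - cmyk[:, 3]) * (1.0 - 0.5 * cmyk[:, :3].mean(axis=1))
a = 60.0 * (cmyk[:, 1] - cmyk[:, 0])   # magenta vs. cyan, roughly red-green
b = 60.0 * (cmyk[:, 2] - cmyk[:, 1])   # yellow vs. magenta, roughly yellow-blue

df = pd.DataFrame(np.column_stack([cmyk, L, a, b]),
                  columns=["C", "M", "Y", "K", "L*", "a*", "b*"])

# Normalize each axis to [0, 1] so the seven coordinates share one scale,
# then bin by lightness just to give the lines a color.
norm = (df - df.min()) / (df.max() - df.min())
norm["group"] = pd.cut(df["L*"], 3, labels=["dark", "mid", "light"])

parallel_coordinates(norm, class_column="group", alpha=0.3)
plt.title("Seven-dimensional printer data as parallel coordinates")
plt.show()
```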

Hard disks are not forever

FAST '07 was the 5th USENIX Conference on File and Storage Technologies and took place in San Jose (California) 13-16 February 2007. In the last morning session there were two papers on disk failures. The first was Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You? by Bianca Schroeder and Garth A. Gibson, Carnegie Mellon University (click on the title for the PDF). It received the Best Paper Award.

The authors studied what disk failures look like in the field by analyzing log data from several high-performance computing (HPC) clusters and ISP data centers. Among other things, they found that disk failures are more frequent than the MTTF (mean time to failure) would suggest. They also found that SATA and PATA drives are not less reliable than enterprise-class SCSI and FC drives.

If you believe their data does not apply to the consumer grade disk drives in your home PC, then the following paper, Failure Trends in a Large Disk Drive Population, by Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz André Barroso, Google Inc. (click on the title for the PDF) was just about such disks.

To make a long story short, expect your hard disk to last for about five years, and replace it as soon as you get a scan error. However, the most important thing is that you should back it up religiously. Your hard disk contains a large chunk of your life: your correspondence, your finances, your pictures, your music, and much more. Losing your data can be catastrophic. Fortunately disks are cheap, and you can easily attach a big external hard disk over your FireWire (IEEE 1394) port and do frequent incremental backups. An external disk has the advantage that you can easily store it off-site, and the operating system can power it down when it is idle. I mention FireWire because for the sustained transfers typical of a backup, it is faster than USB 2.0.
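To make “incremental” concrete, here is a minimal sketch in Python that copies only files that are new or changed since the last run; the paths are hypothetical, and, as noted below, for real backups you will want professional software that also handles deletions, metadata, open files, and verified restores.

```python
import os
import shutil
from pathlib import Path

def incremental_backup(src: Path, dst: Path) -> None:
    """Copy only files that are new or have changed since the last run."""
    for root, _dirs, files in os.walk(src):
        rel = Path(root).relative_to(src)
        (dst / rel).mkdir(parents=True, exist_ok=True)
        for name in files:
            s = Path(root) / name
            d = dst / rel / name
            st = s.stat()
            # Skip files whose size and modification time are unchanged.
            if d.exists():
                dt = d.stat()
                if dt.st_size == st.st_size and dt.st_mtime >= st.st_mtime:
                    continue
            shutil.copy2(s, d)

# Example with hypothetical paths:
# incremental_backup(Path("/Users/me/Documents"), Path("/Volumes/Backup/Documents"))
```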

Last but not least, use professional backup software, because it has to be fast (otherwise you will back up less frequently) and above all it must be capable of fully and completely restoring your hard disk when it fails.

I have been using computers since 1968, always backed up my data, and used off-site storage. Despite my evil karma and losing many paper tapes, card decks, and disk drives, I never lost a bit of my data. If I can do it, you can do it too.

PS: as usual, since our software does not support links in comments, I am adding the links here

Wednesday, February 21, 2007

Are hyperthreads good for you?

Although new architectures always sound good and are hailed for a number of advancements, computers are only as good as the people who program them. Computer science really peaked in the 60s and 70s, but unfortunately most of that knowledge has disappeared, in large part because the most experienced computer scientists were pushed into early retirement plans before they could pass on their knowledge.

In the case of hyperthreading this corporate behavior had two consequences. The first is that many operating systems see the threads as CPUs, which they are not, because only critical resources like registers are duplicated. This error causes the scheduling algorithms in the OS to miscompute the available capacity and leads to “missing MIPS.”
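A quick way to see the distinction, assuming the third-party psutil package is installed, is to compare the logical CPU count the OS reports with the number of physical cores:

```python
import os

import psutil  # third-party package; assumed installed

logical = psutil.cpu_count(logical=True)    # what the scheduler sees (includes hyperthreads)
physical = psutil.cpu_count(logical=False)  # actual physical cores
print(f"logical CPUs reported by the OS: {logical}")
print(f"physical cores:                  {physical}")
print(f"os.cpu_count() also counts hyperthreads: {os.cpu_count()}")
```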

The second consequence is that the art of concurrent programming has been mostly lost. Although, for example, the Cedar system had beautifully implemented threads, that implementation had been very difficult to achieve (it cost Doug Wyatt a lot of sweat and tears until the last deadlock in the threaded viewer system was squelched). Today only a few programmers know how to work with threads, and they often get it wrong.
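As a small illustration of why getting threads right is hard, here is the classic lock-ordering trap in Python, together with the usual remedy of imposing a global acquisition order; this is a sketch only, not a reconstruction of the Cedar viewer bug.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone: two code paths acquire the same two locks in opposite order.
# If one thread holds lock_a and another holds lock_b, neither can proceed.
def transfer_1():
    with lock_a:
        with lock_b:
            pass  # ... touch both shared structures ...

def transfer_2():
    with lock_b:      # opposite order: can deadlock against transfer_1
        with lock_a:
            pass

# The usual remedy: a single global ordering, so every thread acquires
# the locks in the same sequence.
def transfer_safe():
    first, second = sorted((lock_a, lock_b), key=id)
    with first:
        with second:
            pass
```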

Because the software I use cannot reliably exploit hyperthreading, I have turned it off. Maybe this is also why Intel’s latest processors are not hyperthreaded.

For more details, read the following paper: Neil J. Gunther, The virtualization spectrum from hyperthreads to grids, Proceedings of the Computer Measurement Group (CMG) 32nd International Conference, Reno (NV), December 3-8, 2006.

PS: Here are the hyperlinks to the two articles mentioned in comment number 3 by reader RocketRoo: http://www.kernelthread.com/publications/virtualization/ and http://www.gotw.ca/publications/concurrency-ddj.htm.

We are all photographers now!

On September 13, 1996, during a formal reception at the Makuhari Messe in Chiba (Japan), Shin Ohno introduced Yoko and me to Tadaaki Tani and his wife Aoi. He wanted us to discuss the future of digital photography. Tadaaki-sensei was very skeptical about digital photography, on the grounds that semiconductors would never be able to achieve the quantum efficiency of AgX, and that at the theoretical limit there is a difference of an order of magnitude (if my memory does not fool me too badly).

Shin-sensei suggested that, according to a polemic pamphlet I had written, the popularization of Internet tools like email and the Web would completely subvert the media, and therefore the performance metrics would change. Indeed, I pointed to the sticker photo booths all around Makuhari to postulate that the future key metric would be immediacy. What would matter to the amateur photographer would not be the image quality but the ability to share an event remotely at the time it was occurring. And the rest is history.

In January 2000, in collaboration with Raimondo Schettini I led a brainstorming meeting for the program committee members of the Internet Imaging conference at EI. The goal was to identify what set apart Internet imaging from other forms of imaging and what was the hardest problem. The answer to the first question was that it is about systems, and the answer to the second question was the lack of a benchmark for content based image retrieval (CBIR).

The latter led to the creation of the Benchathlon effort. Most of the work was done by the Viper Team at the University of Geneva, who had a lot of the required software tools, and by Neil Gunther, one of the foremost experts on performance analysis. My modest task was to provide a corpus of images and have them annotated in a collaborative effort, since I had talked about it in the past (HPL-97-162).

While the Viper team was able to leverage their MRML technology to set up a platform at http://www.benchathlon.net/, and Neil was able to determine that the MPEG-7 metric for CBIR performance is flawed (see HPL-2000-162), my image corpus effort failed.

The important point was to collect typical amateur photographs, which from an imaging point of view are very different from the normalized professional photographs often used at that time. Seven years have passed, and today we are not much farther ahead than in 2000. The annotated amateur photo image corpus is still an unfulfilled desideratum. This is where We are all photographers now! at the Musée de l'Elysée in Lausanne comes into play, an art event of which HP is a co-sponsor.

As you can see on the submission form, all images are annotated when they are submitted, and as you can read in the terms and conditions (article 1, paragraph d), the corpus will be available for research. There are two ways for you to participate:

  1. contribute your images to the corpus and enjoy this art event
  2. use the corpus when it becomes available for research

Now rush to http://www.allphotographersnow.ch/index.php to learn more.

Sunday, February 18, 2007

Readability of colored text on a colored background

We have all encountered Web pages with colored text on a colored background that gave us trouble. Sometimes they were hard to read, especially for those of us with color vision deficiencies. Other times we could read them, but we felt irritated. Did you know you can do something about it? Read on to find out how.

The readability of text for readers with normal vision, low vision, or color vision deficiency (CVD) has been studied for decades by Gordon Legge and his coworkers at the University of Minnesota. However, most of their color work was done on monochrome displays with colored phosphors. The W3C has proposed a recommendation for the accessibility of Web pages based on new work that does not build on this existing knowledge.
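For concreteness, the test I recall from the W3C's accessibility evaluation techniques draft boils down to two thresholds on 0–255 RGB values, a brightness difference of at least 125 and a color difference of at least 500; the sketch below is my reading of it, so check the recommendation itself before relying on the exact numbers.

```python
def w3c_brightness(rgb):
    """Perceived brightness as in the W3C accessibility-evaluation draft (0-255 values)."""
    r, g, b = rgb
    return (299 * r + 587 * g + 114 * b) / 1000

def w3c_readable(fg, bg):
    """True if the pair passes both draft thresholds:
    brightness difference >= 125 and color difference >= 500."""
    brightness_diff = abs(w3c_brightness(fg) - w3c_brightness(bg))
    color_diff = sum(abs(f - b) for f, b in zip(fg, bg))
    return brightness_diff >= 125 and color_diff >= 500

# Example: mid-gray text on white fails the color-difference test,
# black on white passes both.
print(w3c_readable((119, 119, 119), (255, 255, 255)))  # False
print(w3c_readable((0, 0, 0), (255, 255, 255)))        # True
```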

In 2005 the Italian Government, specifically the Ministero per l'Istruzione, le Università, e la Ricerca (MIUR), instructed the company designing its Web sites to revise the pages to conform to the W3C recommendation. This company, Vocabola in Venice, quickly found out that the recommendations correlate very poorly with perceptual readability.

True to their passion, Vocabola set up a Web site to discuss this problem, http://www.contrastocolori.org/, which soon showed that most experienced Web designers agree that the W3C recommendation is misguided. However, nobody in their circle knew how to solve the accessibility problem. This is when Silvia Postai, the owner of Vocabola and author of the Italian bible for Web designers, contacted the well-known color researcher Silvia Zuffi, who works at the Consiglio Nazionale delle Ricerche (CNR) in Milano and is also a professor at the University of Milano-Bicocca. At that time Silvia was working on a color selection tool for Web designers.

Silvia remembered that I had shown a program addressing this problem at the AIC meeting in Granada earlier that year [for you managers among the readers: see, this is why we have to attend conferences], so she contacted me. Silvia enlisted the help of a well-known mathematical statistician, Carla Brambilla, and they performed a psychophysics experiment to estimate the contrast requirement. The results were published in HPL-2005-216.

Silvia then had the idea that this problem would be ideal for attempting a psychophysics experiment on the Web, as such experiments have been tried a few times before. Silvia and Carla repeated the traditional experiment under very tightly controlled conditions to have a reference point. Then they designed a Web version and determined that the two experiments yield the same predictions. This work is described in HPL-2006-187.

Contrast is only one dimension, and the data can be used to study other factors that determine the visual quality of a Web page. This is where you can help.

One would expect that on the Web one would get orders of magnitude more data than from a controlled experiment. However, as others who have done psychophysics on the Web have experienced, only a few people actually contribute data, and a large percentage of the data is unusable. To help, please visit the Web page for the experiment and give us about an hour of observations, i.e., try to do about 50 repetitions. Then let a friend know and have them also contribute. The URL is

http://daedalus.itc.cnr.it/readability/

We are also very interested in hearing your suggestions and comments. Please use the comment feature in this blog to let us know how this work can be improved.

PS: Links contributed in comments:

Tuesday, February 13, 2007

Color fidelity as a goal: oxymoron → color integrity

In May 1996 I gave a controversial internal presentation at HP Labs with the title WWW + Structure = Knowledge. It was sufficiently provocative that word spread in the Valley and I was asked to make an external version and give it in other venues, such as the PARC Forum. I finally decided to strip it down and make a conference paper, which is available here (subscription required); there are also a preprint and a copy of the slides.

Anyway, the conference at which I presented the paper was the Color Imaging conference at the 1997 Electronic Imaging Symposium, and the room was full of color scientists. The last bullet in the last slide was

Color fidelity as a goal: oxymoron → color integrity

What I explained was that when color printing happened in a closed system, it was relatively easy. When systems became open, things became more complicated, because now we had to deal with color management, device profiles, appearance modes, adaptation, remote proofing, etc.

Then, leaving the audience in shock, I claimed that on the Internet color fidelity is an oxymoron, because users do not know about all those things and do not want to know about them. Therefore, while there still is a traditional graphic arts market where color fidelity is a must, for publishing on the Internet color scientists have to come up with a different technology, namely color integrity.

While color fidelity entails color matching, color integrity means that a color palette must be preserved as a whole: individual colors can change, but their relations must be maintained. I gave the example of a page selling blue jeans, claiming that the color patches showing the available colors do not have to match the true colors, because people will not hold up their sweaters against the screen, which does not really mean anything. Instead, if the same vendor also sells that sweater, then if the sweater matches the jeans, their displayed images must also match, regardless of what the device calibration is. Also, if there are stone-washed jeans, their color must be lighter than that of the original jeans, and the black jeans must look darker.

A possible algorithm I sketched at the time was that under any conditions on any practical device, no color in the palette should move over a color name boundary, and if one takes the vector field of the error vectors, there should be no divergence. The latter condition meant that there should not be any “virtual” light source, so the human visual system can adapt to whatever the conditions are.
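Here is a toy numerical reading of the zero-divergence condition, with an entirely made-up device error field sampled on a CIELAB grid; it illustrates the condition, not the algorithm I sketched then.

```python
import numpy as np

# Sample a made-up device error field on a CIELAB grid: E is the displacement
# (reproduced color minus intended color) at each grid point.
L = np.linspace(0, 100, 21)
a = np.linspace(-80, 80, 33)
b = np.linspace(-80, 80, 33)
LL, AA, BB = np.meshgrid(L, a, b, indexing="ij")

E_L = 0.02 * (50.0 - LL)        # toy device: compresses lightness toward mid-gray
E_a = np.full_like(AA, 1.5)     # constant shift toward red
E_b = np.full_like(BB, -1.0)    # constant shift away from yellow

# Numerical divergence of the error field; a pure shift has zero divergence,
# while the lightness compression acts like a "virtual" light source.
div = (np.gradient(E_L, L, axis=0)
       + np.gradient(E_a, a, axis=1)
       + np.gradient(E_b, b, axis=2))
print("max |divergence| over the gamut:", np.abs(div).max())
```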

At the end of the session Lindsay MacDonald told me that I had to redeem myself and participate in a round-table discussion with that title at the following Color Imaging Conference in Scottsdale. Never afraid to make a fool of myself in front of a big audience, I agreed.

Thursday, November 20, 1997 at 8:00 pm, the big conference room at the Radisson Resort in Scottsdale was full. The debate unfortunately was a little fragmented and raucous because the representative from Microsoft, maybe not used to Scottsdale’s hot sun, kept interrupting and blurting in the microphone “You guys do not worry about color, just use ICM and Microsoft Windows takes care of everything.”

Despite this annoyance I was able to make my point. At the beginning of the discussion, panel moderator Jim King asked the audience to show by raising their hands whether they believed in color fidelity or in color integrity. Only a few hands rose for integrity, but all rose for fidelity. However, when at the end Jim asked the question again, the vote was 50/50.

Ten years have passed. I would like to know what people think about this today. Please open your heart and post your opinion as a comment. Let us reopen the discussion with the advantage of hindsight.

Monday, February 12, 2007

2007 Japan Prize in technology

This year’s Japan Prize in technology, awarded for breakthrough basic research with a high industrial impact, goes to Albert Fert and Peter Grünberg, who independently described giant magnetoresistance (GMR), in which the electrical resistance of certain materials drops when a magnetic field is applied. GMR enables the high capacity found in today’s hard disk drives.

The Japan Prize is awarded to world-class scientists and technologists credited with original and outstanding achievements and who have contributed to the advancement of science and technology, thereby furthering the cause of peace and the prosperity of mankind.

The Presentation Ceremony will be held in the presence of Their Majesties the Emperor and Empress in Tokyo in April. The events will also be attended by the Prime Minister, the Speaker of the House of Representatives, the President of the House of Councillors, the Chief Justice of the Supreme Court, foreign ambassadors to Japan, and about a thousand other guests, including eminent academics, researchers, and representatives of political, business, and press circles.

The week in which the Japan Prize is presented is designated as "Japan Prize Week." During this period, the laureates give commemorative lectures and attend academic discussion meetings. They take part in various other activities, including a visit to the Prime Minister and The Japan Academy.

Every year, the Science and Technology Foundation of Japan (JSTF) selects two categories in which scientists and technologists who are exemplary role models are awarded the Japan Prize. For 2007 the two categories are Innovative Devices Inspired by Basic Research and Science and Technology of Harmonious Co-Existence.

In the first category, the motivation is that basic research in science plays an important role as a cornerstone of our modern society. Breakthroughs in physics, chemistry, and other fields of basic research often come to fruition in the form of new materials, or devices, eventually leading to the development of a new industry. The award for 2007 is focused on an accomplishment in developing original findings in basic research into the invention of an innovative device which will likely create a new industry.

Fert and Grünberg’s achievement is the independent discovery of Giant Magneto-Resistance (GMR) and its contribution to the development of innovative spin-electronics devices.

Previous magnetic read heads used magnetoresistance (MR) components. Magnetoresistance is the change in the electrical resistance of a material when it is subjected to a magnetic field. Because a change in resistance causes a change in current, the data written on the hard disk can be read by detecting the current. The resistance change ratio of an MR component is at most only a few percent.

In contrast, when a GMR component is used the resistance change ratio rises to several tens of percent. In other words, even a weak magnetic field produces a measurable response, which amounts to a vast increase in sensitivity. This means that even when a large amount of magnetic data is stored on a small hard disk it is easy to read, and this has resulted in great improvements in storage capacity. Thanks to the development of magnetic heads that exploit the GMR effect in the late 1990s, the performance and effectiveness of hard disks has been improving at a faster and faster rate.
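For reference, the resistance change ratio mentioned above is conventionally defined as (conventions differ on which state appears in the denominator)

$$\mathrm{MR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}},$$

where $R_{\mathrm{P}}$ and $R_{\mathrm{AP}}$ are the resistances with the magnetizations of adjacent layers parallel and antiparallel; this ratio is a few percent for ordinary MR heads and several tens of percent for GMR.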

What is called the MR effect—namely the effect of magnetization on the electrical resistance of ferromagnetic materials—has posed a challenge for a long time in both fundamental and applied physics. From the 1970s onward, Fert carried out comprehensive, pioneering studies toward a quantum mechanical understanding of electrical transport properties in ferromagnetic alloys. Through his research, it became foreseeable in the mid-1980s that the effect of spin-dependent scattering would give rise to magneto-resistance effects of unprecedented magnitude, provided one finds a means to switch the relative orientation of the magnetization of successive magnetic layers in a multi-layer from parallel to antiparallel.

In this situation Grünberg, who had a long-standing record of improving the growth and characterizing the properties of magnetic layers, found that two Fe layers separated by a Cr interlayer couple antiparallel to each other for a certain Cr thickness, and that they can be aligned parallel to each other by applying an external magnetic field. Eventually, in 1988, he discovered a GMR effect of about 1% at room temperature in such a Fe/Cr/Fe tri-layer system.

Less than a decade after its discovery, the GMR effect was used in the magnetic heads of large-capacity, small-size hard drives, which are integrated in commercial devices such as personal computers, video recorders, and portable music players. It is quite remarkable that a scientific discovery led to practical applications in such a short period of time.

The new paradigm of spin-electronics pioneered by Fert and Grünberg triggered a great advance in basic research linking electrical transport and magnetic phenomena, as well as in innovative applied research, such as nonvolatile memory (MRAM), that makes use of the finding. They have opened the way for "innovative devices inspired by basic research."

Compiled from information on the JSTF web site. For more information on MR and GMR, see Janice Nickel, Magnetoresistance Overview, HPL-95-60.

Tuesday, February 6, 2007

Five EI leaders elected to SPIE Fellows

Although the domains covered by SPIE are by now mostly digital and rely heavily on electronic imaging, until this year only a handful of SPIE members active in electronic imaging had been honored as Fellows.

This has just changed. At the Fellows luncheon held on the occasion of Photonics West in San Jose, SPIE announced that five leaders who have been very engaged in organizing the IS&T/SPIE Annual Symposia on Electronic Imaging Science and Technology will be honored as new Fellows of the Society this year. They are:

Jan P. Allebach
Purdue University, USA, for specific achievements in electronic imaging
Jaakko T. Astola
Tampere University of Technology, Finland, for specific achievements in electronic imaging and image processing
Chang Wen Chen
Florida Institute of Technology, USA, for specific achievements in electronic imaging and visual communications
Gabriel G. Marcu
Apple Computer Inc., USA, for specific achievements in electronic imaging
Thrasyvoulos N. Pappas
Northwestern University, USA, for specific achievements in electronic imaging

Their portraits are at <http://spie.org/x32.xml>.

Monday, February 5, 2007

Loss aversion in decision-making

Prospect theory, a behavioral model of decision making under risk and uncertainty, explains risk aversion for mixed (gain/loss) gambles using the concept of loss aversion: people are more sensitive to the possibility of losing money than they are to gaining the same amounts of money; the subjective impact of losses is roughly twice that of gains. In a recent Science article a team from UCLA shows how neuroimaging can be used to directly test predictions from behavioral theories.
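For readers who want the formula behind “roughly twice,” the standard parametric value function of prospect theory is

$$v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0, \end{cases}$$

where the commonly cited estimates from Tversky and Kahneman are a loss-aversion coefficient $\lambda \approx 2.25$ and curvature exponents $\alpha \approx \beta \approx 0.88$; these are textbook values, not parameters from the paper discussed here.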

In their paper The Neuronal Basis of Loss Aversion in Decision-Making Under Risk, Science, Volume 315, 26 January 2007, pp. 515–518 (subscription required), Sabrina Tom, Craig Fox, Christopher Trepel, and Russell Poldrack collected functional magnetic resonance imaging (fMRI) data while participants decided whether to accept or reject mixed gambles that offered a 50/50 chance of either gaining one amount of money or losing another amount.

Their study shows that in the context of decision-making, potential losses are represented by decreasing activity in regions that seem to code for subjective value rather than by increasing activity in regions associated with negative emotions. In other words, loss aversion does not appear to be driven by a negative affective response, such as fear, vigilance, discomfort, or anxiety.

Finally, their results provide evidence in favor of one of the fundamental claims of prospect theory, namely that the function that maps money to subjective value is markedly steeper for losses than gains.

EI wrap-up

Yesterday afternoon I drove the last VIP to the airport, and also for this year EI is over (the event, that is, for the new collaborations and insights sparked by the symposium will keep us busy over the next year). The main benefit of conferences and symposia is not the papers per se; those will be available online from the SPIE Digital Library in a few weeks, and you can locate them using the program.

Rather, the benefit is the networking with other scientists involved in similar research: talking to authors to find out why their research took a certain twist, and what was tried and did not work. Cases in point are the papers by John McCann of McCann Imaging.

Almost every year John presents us a new research result in a very lucid presentation, unraveling the result step by step in a clear and logical sequence. Yet at the end of the presentation we ask ourselves: what has hit us? John has just presented a totally non-obvious new deep insight, but what does it really mean? Only by sequestering John for a lunch or dinner and discussing with others does one have a chance to really grasp the impact of his work, which he has developed during many hours of hard labor and then distilled into a 20-minute presentation.

Of the three papers he presented, the one on veiling glare, written with Alessandro Rizzi, was the most surprising to me. Essentially it suggests that the human visual system has evolved its Retinex circuitry to compensate for the eye’s strong veiling glare. The practical impact is that it teaches us how to deal with high dynamic range scenes. Since John and Alessandro put a lot of effort into writing their abstract, I will just reproduce it verbatim instead of rewriting it in my own words at the risk of getting it wrong.

Veiling glare: the dynamic range limit of HDR images. High Dynamic Range (HDR) images are superior to conventional images. However, veiling glare is a physical limit to HDR image acquisition and display. We performed camera calibration experiments using a single test target with 40 luminance patches covering a luminance range of 18,619:1. Veiling glare is a scene-dependent physical limit of the camera and the lens. Multiple exposures cannot accurately reconstruct scene luminances beyond the veiling glare limit. Human observer experiments, using the same targets, showed that image-dependent intraocular scatter changes identical display luminances into different retinal luminances. Vision’s contrast mechanism further distorts any correlation of scene luminance and appearance. There must be reasons, other than accurate luminance, that explains the improvement in HDR images. The multiple exposure technique significantly improves digital quantization. The improved quantization allows displays to present better spatial information to humans. When human vision looks at high-dynamic range displays, it processes them using spatial comparisons.

There were many other excellent papers, for example in the Digital Publishing Special Session. However, in this limited space I will mention only one other paper, which stood out for an extraordinarily high quality experimental procedure. The research was by Kenichiro Masaoka, Masaki Emoto, Masayuki Sugawara, and Yuji Nojiri of the NHK Science and Technical Research Labs. in Japan. Here is their abstract:

Comparing realness between real objects and images at various resolutions. Image resolution is one of the important factors for visual realness. We performed subjective assessments to examine the realness of images at six different resolutions, ranging from 19.5 cpd (cycles per degree) to 156 cpd. A paired-comparison procedure was used to quantify the realness of six images versus each other or versus the real object. Three objects were used. Both real objects and images were viewed through a synopter, which removed horizontal disparity and presented the same image to both eyes. Sixty-five observers were asked to choose the viewed image which was closer to the real object and appeared to be there naturally for each pair of stimuli selected from the group of six images and the real object. It was undisclosed to the observers that real objects were included in the stimuli. The paired comparison data were analyzed using the Bradley-Terry model. The results indicated that realness of an image increased as the image resolution increased up to about 40-50 cpd, which corresponded to the discrimination threshold calculated based on the observers' visual acuity, and reached a plateau above this threshold.
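Since the abstract mentions the Bradley-Terry model, here is a minimal sketch of how such paired-comparison data can be fit with the standard minorization-maximization iteration; the toy win matrix is made up, and the authors' actual analysis may of course differ.

```python
import numpy as np

def fit_bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry worths from a matrix of pairwise wins.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    A sketch only: no handling of degenerate comparison graphs.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    total = wins + wins.T            # comparisons between each pair
    w = wins.sum(axis=1)             # total wins per stimulus
    p = np.ones(n)
    for _ in range(n_iter):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j and total[i, j] > 0:
                    denom[i] += total[i, j] / (p[i] + p[j])
        p = w / denom
        p /= p.sum()                 # fix the arbitrary scale
    return p

# Toy example: 3 stimuli, stimulus 0 preferred most often.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
print(fit_bradley_terry(wins))
```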

As for the two papers I presented, the one on readability went particularly well, mainly because a few hours before the presentation I received from Silvia Zuffi and Carla Brambilla a new set of slides with new results on color preferences for colored text on a colored background.