Sunday, February 22, 2009

Fresh Wind in DC

A fresh wind blows in D.C. — there has been a multi-level reset; a good excuse for a multi-level post, which I have not done for a while.

George Washington Masonic Memorial

Across the Potomac River from the White House palace lies the town of Alexandria. As the taxi rolls off the Beltway, the driver notes the many police patrol cars. He explains that a couple of days ago three kids shot and killed a taxi driver right here. All they got was a hundred dollars in cash and a GPS. He continues: since the economic meltdown and all the money flowing into bailouts, there is no longer a bad or a good part of town; crime is way up all over.

We are in the part of town between Duke Street and Eisenhower Avenue, formerly called Spring Garden Farm. It is located just beyond the town's old Corporation limits — and was of course exempt from taxation — along a major commercial artery, Little River Turnpike. It was the site of the Duke Street Tanyard.

Duke Street Tanyard, Alexandria West End

Peter Wise, a city councilman and tanner, established the Duke Street Tanyard by 1797. The business was situated near a stone bridge on the east bank of Hooff's Run by West End Village. The tannery's ownership and name changed many times: Quakers operated the large tannery from 1812 until it was destroyed by fire in 1853.

West End, Alexandria's first suburb, was a processing center for cattle, which were brought here for slaughter and butchering. The hides were then taken to the tannery to be processed into leather by soaking them in solutions of lime, tree bark, and animal dung. The tanned hides were curried by oiling, scraping, and pounding and then made into saddles, harnesses, and boots.

Carlyle District, USPTO administration building

Today, this area is very different and has been renamed the Carlyle District. Of course, some very bad things like slaughter and butchering still happen here today, as in the building below. In the Albert V. Bryan United States Courthouse the government deals with the worst of its citizens.

Albert V. Bryan United States Courthouse

However, the mindful visitor notes that the Carlyle District is the habitat of a very different human being than the West End was. People walk in and out of the many huge office buildings at every hour of the day or night, seven days a week. They have the fast and decisive gait of busy professionals, scientists, and engineers.

Whole Foods Market

All around are fancy condominium towers, and the Whole Foods Market is an order of magnitude larger than the one in posh Palo Alto, which was their first store outside Texas and filled the entire former Oldsmobile dealership. The deli food section is particularly large and multi-cultural, a sure sign of a very busy customer base.

Are we looking at the main Microsoft or Google campus? No, those are on the West Coast.

James Madison Building

Let us check out the monument in front of the main building. The geodesic dome must have a plaque with the sponsoring company name.

Geodesic dome

Oh! Everything in the Carlyle District is labeled with an inventor's name and a patent number. Well, almost everything; some establishments have an actual name, like the bar and grille for the lawyers doing business in the courthouse:

Trademark Bar

Yes, Virginia, the Carlyle District is the home of the United States Patent and Trademark Office

Carlyle district map

and most buildings carry the sign below, even those that are not light blue on the map above. It seems a new building goes up every year.

USPTO sign

So, what is this fresh wind? The USPTO in Alexandria has nothing to do with the old USPTO that was across the Potomac River in D.C. It is not a bureaucratic entity but an engine of growth, provided by the Department of Commerce for the benefit of society.

The USPTO now offers many services for those who want to contribute to society through technology. Check out their web site and you will be pleasantly surprised at the many services available to make you and your technology business successful.

As you can see at the top right of their home page, the USPTO is hiring. It is maybe the last organization in the U.S. hiring technologists in these troubled times.

Today's examiners are skilled scientists and engineers. They work in modern closed offices with the latest and best tools; gone are the "shoes" of yesteryear. They research each application with the same competence, skill, and care as the technologist who wrote the application.

Despite all the new services, the main function of the Patent Office is to protect the inventors of a technology so they can recoup their research investment. This is especially important today.

The lesson learned from Japan's lost decade after its bubble economy burst in the early 1990s is that the way a society digs itself out of a deep economic crisis is for companies to start investing in new manufacturing equipment so they can produce products superior to those of the competition. It is a slow spiral, and it is powered by technology.

Therefore, hurry up and make a breakthrough invention, then rush to Alexandria

Carlyle district sign

and file a patent application.

USPTO customer service window

The lesson from Japan is why it is good to see that the USPTO is still actively hiring.

However, when you stroll the streets of the Carlyle District or choose your meal at the Whole Foods Market's vast deli, you hear troubling things. The budget is very tight, and the managers are struggling. Some training is deferred and support personnel are scarce. There is a sense of urgency and apprehension in the air.

The government needs its money to bail out industry. There is a fear that the snakes in suits have come up with a new milking scheme. Instead of grabbing consumers by the ankles and shaking the money out of their pockets, they now go to the government for a bailout and let the government take the taxpayers by the ankles and shake out their money.

If this becomes the new modus operandi, then we are doomed. If you encounter a snake in a suit, take action. These entrepreneurial pretenders are only a small minority of executives; the other 96.5% are genuine. Make sure they put their technologists on the spiral out of this economic crisis.

A stop sign does not mean you should fall asleep at the wheel. After you have stopped, looked for crossing traffic and pedestrians, and checked your destination, press on. Urgently, please…

stop, but then move on

Monday, February 16, 2009

HDTV: To calibrate or not to calibrate

The January 2009 issue of SID's Information Display has an interesting article by Pete Putman titled To Calibrate, or Not to Calibrate? Mr. Putman is an expert on TV calibration who has held the ISF (Imaging Science Foundation) certification for this specialty since its inception.

The article has some interesting conclusions that can save you a lot of time and money. For example, if you have just basic cable service and a red-laser DVD player, an HDTV with 720p/768p resolution is more than adequate.

As for calibration, it is only really necessary for home-theater front projectors. Today's direct-view HDTVs are so good that all you have to do is choose the correct factory image presets, such as the low-level Cinema and Movie modes.

In summary, here are the five quick steps to set up your HDTV:

  1. Set the HDTV's contrast between 60 and 80 and brightness between 50 and 60.
  2. Switch from "Dynamic" to "Standard" or "Cinema/Movie" picture mode.
  3. Select a warm-color-temperature preset.
  4. Turn down the sharpness control to 20% or less.
  5. Turn off any other edge-enhancement processing. (Think about it: Why would HDTV content need detail enhancement?)
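For the tinkerers, the five steps above can be sketched as a small checklist validator. This is a hypothetical encoding — the control names, scales, and mode labels here are assumptions and vary by manufacturer:

```python
# Hypothetical setting names and scales; real HDTV menus differ by maker.
RECOMMENDED = {
    "contrast": (60, 80),     # step 1: contrast between 60 and 80
    "brightness": (50, 60),   # step 1: brightness between 50 and 60
    "sharpness": (0, 20),     # step 4: sharpness at 20% or less
}

def check_setup(settings):
    """Return a list of complaints; an empty list means all five steps pass."""
    problems = []
    for name, (lo, hi) in RECOMMENDED.items():
        value = settings.get(name)
        if value is None or not lo <= value <= hi:
            problems.append(f"{name} should be in [{lo}, {hi}], got {value}")
    if settings.get("picture_mode") == "Dynamic":        # step 2
        problems.append('switch picture mode to "Standard" or "Cinema/Movie"')
    if settings.get("color_temperature") != "warm":      # step 3
        problems.append("select a warm color-temperature preset")
    if settings.get("edge_enhancement", False):          # step 5
        problems.append("turn off edge-enhancement processing")
    return problems

print(check_setup({"contrast": 70, "brightness": 55, "sharpness": 10,
                   "picture_mode": "Cinema", "color_temperature": "warm",
                   "edge_enhancement": False}))  # → []
```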

This is for your HDTV. As a color scientist you still need to regularly calibrate your monitors and printers.

Thank you, SID and Mr. Putman!

Wednesday, February 4, 2009

Multimedia Grand Challenge

In the past, conferences used to have heated panel discussions on the future research vectors in their field. As I mentioned in my 16 January 2009 post, there is a revival. The color conference at the EI symposium had a successful new session on the Dark Side of Color, and at HP Labs Gary Dispoto and Ingeborg Tastl organized a successful panel discussion on Research in Color Science: Next Challenges.

The trend continues unabated. ACM Multimedia 2009 has announced a call for submissions and participation in a new competition track called The Multimedia Grand Challenge. It is a set of problems and issues from industry leaders like HP, Google, Yahoo, Radvision, CeWe, Nokia and other companies, geared to engage the multimedia research community in solving relevant, interesting and challenging questions about the industry's 2-5 year horizon for multimedia.

The top submissions will be presented in a special event. Based on the presentation, winners will be selected for Grand Challenge awards. "Competition" means that there will also be prizes. 19-24 October 2009 will be hot in Beijing!

Young Beijing couple watching TV, woman holding remote control

Tuesday, February 3, 2009

Summary of the panel discussion Research in Color Science: Next Challenges

Host: Gary Dispoto

Moderator: Ingeborg Tastl

Panel members:
Jan Allebach,
Jennifer Gille,
Gabriel Marcu,
John McCann,
Michael Kriss,
Carinna Parraman,
Alessandro Rizzi,
Gaurav Sharma,
Sabine Süsstrunk


Sabine Süsstrunk

Color science is part of imaging science. One billion cameras were sold in 2007. Pictures are now taken for visual information, not for photography, so we have to ask what people do with them. Important functions appear to be archiving and searching.

  • Theory and practical problem.
  • Color imaging, spatial imaging, mixing, halftoning, new needs, production digital printing, color halftoning design.
  • Theory and science. Usable, implementation, usability, turn everything off. Application camera color management.
  • Aesthetics (what would a designer do); tool that helps.
  • Image quality (spatial, measurements, contents)

Gaurav Sharma

The organization of color images is a major challenge. From a color imaging perspective, we need to take a lesson from film and anticipate changes that we can expect to see. Several businesses in color/printing with regular annuity streams are disappearing (film, newspapers), and the emerging businesses replacing them have a very different and faster dynamic. Google is an example: it makes money through click charges, but at extremely small amounts per click, and relies on tremendous efficiencies. I think we will see more of this in the color imaging industry.

Understanding the human visual system (HVS) continues to be a formidable challenge in color imaging systems. In this aspect, the newer color imaging devices also represent a very significant opportunity. Experiments that David MacAdam (at the University of Rochester) and other pioneers in color perception conducted in the early days of color science can now be performed with regular "desktop" imaging equipment at a fraction of the original price and with much higher accuracy. I think we need to invest in this area.

  • Printing media and displays.
  • Technology, paper.
  • Variability of the printed paper (more at the print).
  • Display: smooth gamut; paper: larger gamut.
  • Experience of the viewer. Is the feeling close enough?

Alessandro Rizzi

Colorimetry was not created to work for images. It was designed for the perception of color with nothing else in view. We need to move to colorimetry in context.

Colorimetry is remarkably good because it works even outside its original scope. However, a working appearance model is still to come.

  • Color management always works and never works (management limits, user satisfaction and HVS robustness)
  • Color and context. HVS into the process.

Carinna Parraman

At the Centre for Fine Print Research, I represent the fine art market and creative industry. The development of digital tools and hardware for mixing colour has significant market potential within the creative industry: how to develop alternative methods of mixing colours that are more intuitive and less dependent on traditional colour spaces, how to develop methods that assist in modifications to printer hardware, and flexible approaches to how colours are printed.

Colour is considered from the more subjective point of view of the user and its practical implications: Camera to print? — a clear imaging pipeline for users — what are the limitations of colour management, and what can be an alternative? What if? A return to a messy approach to mixing — how to mix a digital colour in a pot? Chronology of colour? Overlayering colours, multi-printing methods, translucent colours over opaque and vice versa. The printed surface qualities on paper, plastics, textiles, ceramic, and metal. How long does it last? Dyes versus pigments versus solvents. Printing colour in three dimensions: the development of a new range of possibilities that are not constrained by the limitations of colour management.

Michael Kriss

In terms of things to do for future products that HP might be involved in, consider the following (you are probably doing them already):

  1. Image Understanding — Via a single scan of an image, determine the proper classification, optimize color, noise, sharpness, lighting issues (multi-illuminants) and then print. In the home, at a kiosk or via the web, this is what will sell. In the end, most customers just want a good print to show their family and friends and care little how it gets done.
  2. Image Synthesis — More and more the vast entertainment and training industries will want to create life-like images without using expensive sets and actors. Getting the right color will be part of that. The work here will also help Image Understanding.

There were some good questions about quality metrics. I would like to expand on the answer I gave. My almost 40 years in the business have led me to believe that exact image quality metrics are not required and may not be attainable. There is a constant need for new quality analytics, particularly on artifacts and tone scale, but they should be more of a guide down the path of development than an absolute measure of quality. Also, well-defined metrics may be of more use in understanding how a system works than in measuring its exact quality — this was very true for MTF in the film world. Finally, solid analytical modeling should get you to the 85% level; the last 15% has to be more of an art, due to the many variables in any imaging system.

John McCann

The science of “how we see” is the foundation of all imaging. Vision is a complex spatial calculation. This part of our research does not change with time. It simply progresses. There are two independent characteristics of vision science that are central to image quality. Image resolution (pixels/inch) influences sharpness of edges. Tone scale (responses between white and black) influences the magnitude of edges between objects (Digital Flash). Both are critical to making high quality images.

There are two kinds of change that control imaging products. All of us are fascinated by the first, namely the technology of making images. Image making is a triumph of science with technology, and it is fun. The other major force is how we use images, as a society.

The technology of video tape recorders was slowly replacing amateur silver-halide movie cameras. Video recording was a tiny venture, until movie rentals created a massive business. Digital cameras, in the early years, were also limited, until the internet provided instant global access to images. This change in usage changed image quality standards. The previous minimum quality standard, an artifact-free 8 by 10 inch print, was modified by the cost in time required to transmit the image. We use JPEG compression and image sharpening to compress resolution data while preserving appearances. We use tone-scale mapping and spatial processing to render HDR scenes into LDR displays.
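The HDR-to-LDR rendering mentioned above can be illustrated with a simple global tone-scale operator. This is a minimal Reinhard-style sketch of my own choosing, not the spatial processing McCann refers to; the `key` value of 0.18 is the conventional mid-gray assumption:

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """Map HDR scene luminances into [0, 1) with a global tone-scale operator.

    Minimal sketch: scale the image so its log-average luminance sits at
    `key` (mid-gray), then compress with L / (1 + L). Real renderers add
    spatial (local) processing, which this deliberately omits."""
    lum = np.asarray(luminance, dtype=float)
    log_avg = np.exp(np.mean(np.log(1e-6 + lum)))  # log-average luminance
    scaled = key * lum / log_avg                   # exposure normalization
    return scaled / (1.0 + scaled)                 # compress into [0, 1)

# Five decades of scene luminance squeezed into display range.
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
print(np.round(reinhard_tonemap(hdr), 3))
```

Note the design choice: the compression curve is nearly linear for dark values and saturates smoothly toward 1, so shadow detail survives while highlights are rolled off rather than clipped.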

There are many new display technologies, mostly in color. Tools need to anticipate these technologies. Some day LED displays will seem as antiquated as CRTs are seen today. The evolution of imaging hardware has caused many revolutions in image usage. In evaluating image quality one has to start at the final use, and work backwards. If we know whether an image will be seen as a large print in a gallery, vs. on a cellphone, we know the image quality we need. These extremes in image usage demand completely different quality properties.

Gabriel Marcu

There is a large number of new display technologies for which the current color models are not sufficient to predict color performance, just as in the past the CRT model proved insufficient for predicting LCD performance. New models need to be constructed for these technologies and for their correct integration into color reproduction systems. Companies that do not understand this will fail to take advantage of them at their full potential. Conferences are a good way to learn from experts. The conference shows (expos) are also a very cheap way to get a glimpse of the future.

The most likely display future is integration. Just as the CRT has been replaced by the LCD in the last few years, the LCD, as a hybrid technology (backlight, polarizers, LC), will be replaced by a fully integrated technology such as OLED, in the same way the transistor was replaced by the integrated circuit.

New display technologies responding to the increasing needs for mobility, versatility, and expandability will open new market segments.

Jennifer Gille

Two challenges for color scientists for displays:

  • What is generally called "color management" has been complicated because of new technologies, new sizes for displays, and new uses, especially out-of-doors. Old specifications don't really describe how displays look.
  • Science papers often do not have answers to engineering problems; the data are aimed at theory rather than practice. In the real world color depends on spatial and temporal qualities, e.g. texture, and properties of the light itself other than color spectral content.

Jan Allebach

How and what colors to mix at the microscopic level is halftoning; likely to be an area of significant work. The usability and interoperability of color management systems are not well described. Esthetics for multimedia design and content development. Image quality: spatial measures can add psychophysics.

  • Resolution is much better.
  • Glowing phosphor, display metallic look, light is just different.
  • Change of printing (newspapers, YouTube).
  • Books, usability.


  • Displays as wavefront reconstruction devices; nanotechnology should allow control of emissive or reflective wavefront.
  • EI paper, stacked phase using multiple projectors.
  • Sensors:

    • IR sensors; capture it in different ways, new optics, OLED worse signal, overlapped.
    • Right color; controlled system; pleasing color; open system; dependable model; understand the modality; 90% solutions make it easy; mixing until you get the right color; intuitive; more intuitive.
    • Aesthetics
  • About an aesthetic measure, it is possible to use machine learning methods to learn from graphic artists what is a good design (color well balanced). What are the features we should look into?
    • It is not done right now. It is a very complex problem and different viewing conditions make the visual balance measure different and therefore difficult.
    • Automating the "machine learning" aspects doesn't quite work.
    • When you try it, most of the time you get something that looks pretty good, but once in a while the algorithm fails so miserably that the whole system is unacceptable.
  • Very small pixels; nano-pixels, sensors, Lippmann holograms.
  • Sensor phase measure depth.
  • Projectors; macro, replacement technology.
  • Reflective technology, what is the minimum?
  • 8x10 picture for no artifacts.
    • Good enough is enough; but other aspects.
    • Content is important you look up with it.
  • Good vs. 75%.
  • Evolution.
  • Price, technically good enough.
  • Thread on color for displays
  • Thread on trends in printing and color pages
  • Thread on color testing and validation


  • 40 to 30 years ago, concepts like High Fidelity and Audiophile became mainstream among young adults. People became obsessed with the linearity of frequency response, phase preservation, and distortions. The stereo-equipment stores that opened in cities around the world to cater to this market began to disappear some 15 years ago, as people switched to good-enough equipment bought in mass-merchandising stores. Today, people listen to music encoded in MP3, with all kinds of phase distortions, through ear-buds, etc. But they do care a lot about the social aspect of sharing their playlists and their music files. With this background, it is not surprising that yesterday's single-lens-reflex (SLR) amateurs are today happy with good-enough color.
  • Many years ago, in the Xerox Webster cafeteria, young researchers regularly sat at Hans Neugebauer's table and proudly told him they were studying his model to build a better color reproduction system. Hans used to tell them that his model was just a dissertation, and it fulfilled its purpose by giving him his Ph.D. diploma. He continued, that color printing is too complex for a simple mathematical model, and that in such a case one should use a look-up table and interpolation.
  • The next step up for modeling unknown complex systems are neural networks. Shoji Tominaga proved experimentally, that color management with neural networks delivers excellent color reproduction. Alas, the training phase is way too slow for practical use, and look-up tables with interpolation are good enough while having acceptable creation times.
  • The next step up is machine learning, using algorithms like hidden Markov models and support vector machines. However, they still have the training-phase problem.
  • One doctoral student at Chiba University worked on tone reproduction. In his research, he achieved a particularly deep intuitive understanding of tone reproduction. After he obtained his Ph.D., he went to work for Panasonic, where he expanded his deep intuition in tone reproduction to camera sensors. The first camera with his algorithm sold 7 million units, and since then Panasonic has become a market leader. The commercial success of a high-tech company depends on the scientists it hires and how it grooms them. This picture was taken with that camera:

Shoji Tominaga, Gabriel Marcu, John McCann
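The look-up-table-and-interpolation approach that Neugebauer recommended can be sketched as trilinear interpolation in a small 3D LUT. This is a minimal illustration with an identity table, not a real device profile; production color management typically uses larger, measured grids and often tetrahedral rather than trilinear interpolation:

```python
import numpy as np

def trilinear_lut(lut, rgb):
    """Look up an RGB value (components in [0, 1]) in a cubic LUT of
    shape (N, N, N, 3) by trilinear interpolation."""
    n = lut.shape[0]
    # Scale to grid coordinates; split into integer cell corner + fraction.
    g = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    i0 = np.minimum(g.astype(int), n - 2)  # lower corner of enclosing cell
    f = g - i0                             # fractional position in the cell
    out = np.zeros(3)
    # Weighted sum over the 8 corners of the enclosing cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
    return out

# Identity LUT: each grid node stores its own RGB coordinate, so the
# interpolated output should reproduce the input.
n = 5
axis = np.linspace(0.0, 1.0, n)
lut = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
print(trilinear_lut(lut, (0.3, 0.6, 0.9)))  # ≈ [0.3, 0.6, 0.9]
```

In practice the grid nodes would hold measured device responses, which is exactly why Neugebauer's advice works: the table captures the device's complexity, and the interpolation only has to bridge short distances between measurements.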


  1. There does seem to be a trend toward broadening Color Science to take into account the context in which the color appears. Is this because color in isolation is a solved problem? Or is it no longer an interesting problem? Or does it no longer speak to the problems we are currently facing?
  2. Is Color Research somehow tied cyclically to the advent of new display technologies? Are the productive times in color science those spent reapplying known principles to a new technology? Why does display technology drive color research and not the other way around?
  3. What is the effect of color on the development and use of different media? Why don't color properties drive the development of new media? Do the color properties of one medium differentiate its use cases from another's? For instance, will color in print always be better than color on a dynamic display? If so, is there a role for print media that can never be covered by dynamic displays?


There was a strong thread on what is good-enough color, which led on one side to the quality metrics (see Michael's blurb) and on the other side to the warning not to keep doing incremental things like increasing color fidelity, because there will be paradigm shifts and we do not want to miss them.

As I wrote to the Panel Members in the thank you note for coming to HP Labs after EI and illuminating us, they have clearly shown us that a good scientist is not after some holy grail, but relies on good intuition, hard work, and hope for serendipity.

This is different from what goes on in engineering or what other researchers do, where the focus is on improving current technology and advancing the status quo. When others embark on a project, they probably have a pretty good idea of how and when their work will pay off. The problems they address are well defined. They help improve the color-science state of the art by going one step farther along a well-plotted path.

The Panel Members told us there is no plotted path. The problems they work on are the ones they help to invent. When they embark on their projects, they are prepared to go in directions they could not have predicted at the outset. They explained to us that they feel challenged to take risks and to give up cherished methods or beliefs in order to find new approaches. They told us they encounter periods of deep uncertainty and frustration when it seems that their efforts are leading nowhere.

From the panel discussion we now understand why following our instinct is so important. Only by having deep intuitions, being able to trust them, and knowing how to run with them are we able to keep our bearings and guide ourselves through uncharted territory. The Panel Members showed us that the ability to do research that gets to the root is what separates merely good researchers from world-class ones like them. The former react to a predictable future; the latter enact a qualitatively new one.