Tuesday, March 4, 2008

Encoding color

No matter what color space you use, how you compress an image's spatial content, or in what file format you encapsulate the image, you have to choose a color encoding standard. In this post I will write about color encoding.

One of the first color encoding standards was proposed by Xerox in their document XNSS 289005, dated May 1990, and based on their earlier Raster Encoding Standard, which was the basis for Interpress. Color coordinates were normalized to the [0, 1] interval, but not constrained to it, to allow for the correct representation of out-of-gamut colors. For each image and each color channel, offsets and scaling factors were specified to represent the coordinate values with a given number of bits.
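As a minimal sketch of how such an offset-and-scale encoding works (the function names and parameter choices below are mine, not the XNSS spec's), consider quantizing one channel whose encodable range extends beyond [0, 1] so that out-of-gamut excursions survive:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Quantize one channel to 'bits' bits. The per-channel offset and
    // scale are chosen per image; e.g. offset = -0.25, scale = 1.5
    // encodes the range [-0.25, 1.25], leaving room for coordinates
    // outside the nominal [0, 1] interval.
    uint16_t encodeChannel(double v, double offset, double scale, int bits) {
        long maxCode = (1L << bits) - 1;
        long code = std::lround((v - offset) / scale * maxCode);
        return (uint16_t)std::clamp(code, 0L, maxCode);
    }

    double decodeChannel(uint16_t code, double offset, double scale, int bits) {
        long maxCode = (1L << bits) - 1;
        return offset + scale * (double)code / maxCode;
    }

The decoder only needs the same offset and scale, which is why the standard records them with each image and channel.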

In the Eighties personal computers were slow and had little memory, so efficiency was very important. It turns out that if an image is represented in the CIELAB color space with a realistic gamut, 8 bits are sufficient to store a color coordinate with enough precision to avoid artifacts. We then invested considerable effort in decoding the color coordinates in the fewest possible number of clock cycles. To implement a color management system you had to be a Cedar wizard. You also had to have a very good understanding of scientific computing, because the Xerox Color Encoding Standard also had the concept of tolerances as an integral part of encoded color.
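To give an idea of the kind of optimization involved (an illustrative sketch, not the original Cedar code), the decode step above collapses into a single table lookup when the codes are only 8 bits wide:

    #include <cstdint>

    // Precompute the 256 possible reconstructed values for one channel,
    // so decoding an 8-bit code costs one indexed load instead of a
    // multiply and an add per pixel.
    struct ChannelDecoder {
        float table[256];
        ChannelDecoder(float offset, float scale) {
            for (int c = 0; c < 256; ++c)
                table[c] = offset + scale * (float)c / 255.0f;
        }
        float operator()(uint8_t code) const { return table[code]; }
    };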

A few years later a much more pragmatic approach was taken. Color had become cheap enough to be used in the office, and it was necessary that an ordinary engineer be able to implement a color management system. The thinking was that by then almost all CRT monitors used the ITU-R BT.709-2 primaries, had a D65 white point and a gamma of 2.2, and achieved a luminance level of 80 cd/m2. With this, one could require that all input and output devices be built to this specification, then simply normalize the RGB coordinates to [0, 255] and forget about color management systems. For good measure, the gamma non-linearity was thrown into the encoding (before, for efficiency, we used only linear color model operators and did the gamma operation in firmware in the display controller).
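In code, the encoding amounts to applying the transfer curve and scaling to 8 bits. A sketch follows; note that the sRGB standard (IEC 61966-2-1) actually specifies a piecewise curve, with a short linear segment near black, that approximates an overall gamma of 2.2:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Encode a linear-light value as an 8-bit sRGB code. Anything
    // outside [0, 1] is clipped: this encoding has no way to represent
    // out-of-gamut colors, unlike the Xerox scheme above.
    uint8_t linearToSRGB8(double v) {
        v = std::clamp(v, 0.0, 1.0);
        double s = (v <= 0.0031308)
                 ? 12.92 * v
                 : 1.055 * std::pow(v, 1.0 / 2.4) - 0.055;
        return (uint8_t)std::lround(s * 255.0);
    }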

This sRGB trick worked well for a decade. Today, however, after 45 years of R&D, LCD displays have taken over and CRTs have virtually disappeared. LCDs do not have an intrinsic gamma and are not limited to the ITU-R BT.709-2 phosphor gamut. The backlight unit (BLU) is typically an active local-area dimming (ALD) direct array of LEDs. These LEDs have as much as 70% external quantum efficiency and use multi-spectrum phosphors to generate a very wide color gamut.

As an aside, today's LCD technology is so fast that color-field-sequential approaches are again being considered; they eliminate the inefficiency of the color filters in the panel.

Today's panels have a depth of 12 bits in each channel, a gamut that almost covers the visual system's, and can blast out 500 cd/m2 or more. True, your visual system is in film mode and you will adapt to the display, but if you just send it sRGB coordinates, the result will look very ugly. When a modern display is unleashed, color must be managed.

Furthermore, a modern consumer digital camera can capture up to 14 bits per channel. Why would you want to throw them away by encoding your images in sRGB? Keep at least 12!

How, then, should you encode your color images? Fortunately there is no need to reinvent the wheel. The people in the digital effects business had to deal with these issues many years ago and developed several high dynamic range (HDR) color encoding standards. Most display controllers support at least one of them in hardware, namely Industrial Light & Magic's OpenEXR (EXR). "Open" here refers to the C++ source code published by ILM for reading and writing OpenEXR image format files.
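To show how little code ILM's library requires, here is a sketch that writes a small scene-linear HDR image through OpenEXR's simple RGBA interface (the file name and the ramp values are arbitrary choices of mine):

    #include <ImfRgbaFile.h>   // OpenEXR's simple RGBA interface
    #include <vector>

    // Write a grayscale ramp running from 0 to 8; values above 1.0
    // are perfectly legal in OpenEXR's 16-bit "half" floats.
    void writeHdrRamp(const char *fileName, int width, int height) {
        std::vector<Imf::Rgba> pixels(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float v = 8.0f * x / (width - 1);
                pixels[y * width + x] = Imf::Rgba(v, v, v, 1.0f);
            }
        Imf::RgbaOutputFile file(fileName, width, height, Imf::WRITE_RGBA);
        file.setFrameBuffer(pixels.data(), 1, width);
        file.writePixels(height);
    }

Because each channel is a 16-bit float, the file preserves both the deep shadows and the highlights that an 8-bit sRGB encoding would clip.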

To learn more on this topic, Greg Ward's white paper High Dynamic Range Image Encodings is a good starting point.
