Thursday, March 31, 2011

Parallel Error Diffusion Update

In January I wrote a post on parallel error diffusion. In the meantime the paper has been published with this citation: Yao Zhang, John L. Recker, Robert Ulichney, Giordano B. Beretta, Ingeborg Tastl, I-Jong Lin and John D. Owens, "A parallel error diffusion implementation on a GPU", Proc. SPIE 7872, 78720K (2011); doi:10.1117/12.872616. In that paper we focused on achieving as efficient a CUDA implementation of the BIPED algorithm as possible.

A new paper, Yan Zhou, Chun Chen, Qiang Wang, Jiajun Bu and Hua Zhou, "Block-based threshold modulation error diffusion", J. Electron. Imaging 20, 013018 (Mar 25, 2011); doi:10.1117/1.3555132, has just appeared in JEI. Their focus is on achieving the highest possible image quality with BIPED. Lacking performance data, I do not know how it performs compared to sequential error diffusion.

IBEX Camera Sees a Ribbon in the Sky

The NASA IBEX (Interstellar Boundary Explorer) mission (the size of a kitchen table) was launched in 2008 to map the heliosphere that surrounds our solar system. It carries a High-Energy Neutral Atom (HENA) camera that images energetic neutral atoms, rather than photons, to create maps of the boundary region between our solar system and the rest of our galaxy.

The surprise result (so far) is that the energy and particles at the galactic boundary are confined to a "ribbon" structure that envelops the heliosphere. For reference, the Voyager spacecraft are just now passing through the heliopause, at about 100 AU, after more than 30 years of in-flight operation. Both the heliosphere and heliopause are shown below on a logarithmic scale.

For the first ten billion kilometers of its radius, the solar wind travels at over a million kilometers per hour. As it begins to interact with the interstellar medium, it slows down before finally ceasing altogether. The point where the solar wind slows down is the termination shock; the point where the interstellar medium and solar wind pressures balance is called the heliopause; the point where the interstellar medium, traveling in the opposite direction, slows down as it collides with the heliosphere is the bow shock. [Source: Wikipedia]

Tuesday, March 29, 2011


When we think about places we have never visited, we build on other information about the place—the stereotypes—we have gained from various information sources, like friends, movies, documentaries, books, and newspapers—or YouTube in this day and age. About Silicon Valley, the stereotype is that of the young entrepreneur who drops out of college to start a company and become a billionaire before the tender age of 25.

In reality, most technologists here are just gnomes who work hard to make a contribution to humanity. The difference from other places in the world is that we do have the opportunity to create billion-dollar businesses in technology, but as for our personal lives, they tend to be very modest, both in monetary terms and in terms of fame or peer recognition. After all, being charismatically challenged is one of the reasons for becoming a programmer.

When I moved to the Valley with a freshly minted doctorate in computational geometry, for three years I worked on design rule checking. The task was not easy, especially from the point of view of the group dynamics. The project was building the next generation workstation—called Dragon—using full custom VLSI design instead of the ECL bit-slice technology common at the time.

The bootstrap problem was that there were no tools to design chips of such complexity (the Dragon had four to eight processors with separate IFU and EU chips, plus bus arbiter, memory controller, floating point unit, display controller, etc.). We leaned on principles from UCB's Magic and Spice tools to create our own. The key difference was that to handle the complexity of Dragon (each chip had an individual designer using a 32-bit Dorado with 8 MB of RAM), the tools we were inventing were hierarchical.

Although at first doing a hierarchical design instead of a flat design looked like a stroke of genius because of the bit parallelism, in practice it was a fata morgana. Indeed, designers tended to use the hierarchical features as macros, and the design was effectively flat. The cells just contained the repetitive geometry, with the key logic added flat on top of the hierarchy, globally across the chip.

Therefore, maintaining the hierarchical design rule checker was in large part an act of self-flagellation. Nevertheless, I was puzzled by the enormous number of design rule violations I was seeing. The designers were the best of the best in the world; why would they make so many mistakes?

I was most puzzled by the very high incidence of diffusion of the wrong sex over wells. Originally the underlying technology was NMOS, but by the time I joined the project they had already switched to CMOS, and as the designers were learning the new technique, the sex of the diffusion was one thing on which they were really focusing. Why did they err so frequently?

I decided to study the problem and talked to each designer, asking them to explain to me a layout created by a different designer. I quickly noticed that they were not able to read the layout: they had to physically deconstruct it in order to navigate it. In my view, this was a shortcoming of the layout editor, and I thought I could fix it by using a more appropriate color scheme.

As I learned, the specific colors came from Carver Mead and Lynn Conway's book written at PARC. At that time the thickness of the layers on a chip was in the range of visible light, so when you looked at a chip under a microscope in transmission mode, you would see each layer in a different color, according to its thickness.

Detail of an NMOS chip

Since the whole point of Mead and Conway's design technique was to abstract from the physical reality, I thought this coloring was arbitrary and I could come up with a better one. However, I immediately found myself accused of anathema: by religion, the colors must be red for polysilicon, green for diffusion, yellow for gates, and blue for metal! Never mind that the wells were also yellow and there were two metal layers.

Wary of religious wars, I decided to learn about color so I could nudge the colors to make layout more readable. I contacted the color scientists in Gary Starkweather's group and Mik Lamming kindly lent me his copy of Wyszecki and Stiles, telling me it contained all I needed to know about color.

After reading about 150 pages, I had learned enough to come up with a coloring scheme, which essentially consisted of nudging the colors so that poly and diffusion would preserve their lightness, while the metal layers would preserve their hue. This made the layout appear transparent, so one could follow a wire no matter what other wires were under or above it.
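The two nudging constraints can be sketched in a few lines. This is an illustration only: the original scheme worked in a colorimetric space, while this sketch uses Python's stdlib HLS model as a crude stand-in for a lightness/hue decomposition.

```python
import colorsys

# Sketch of the two constraints, with HLS standing in for a proper
# colorimetric space. Poly and diffusion move to a new hue while keeping
# their lightness; metal layers change lightness while keeping their hue.

def nudge_preserve_lightness(rgb, new_hue):
    """Move a layer color to new_hue while keeping its HLS lightness."""
    _, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(new_hue, l, s)

def nudge_preserve_hue(rgb, new_lightness):
    """Change a layer's lightness (e.g. a metal layer) while keeping its hue."""
    h, _, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, new_lightness, s)
```

Keeping the lightness of poly and diffusion stable is what preserves their legibility when other layers are stacked on top.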

This concept of transparency in IC designs is different from that of real world transparency, because—for example—it must prevent large power or clock wires from hiding the layout under them.

At that time my assignment was to automate the printing of checkplots, so I integrated the new color scheme in the plotter driver. This is how the layout for an inverter looked with the old driver:

Inverter checkplot

Although with the new color scheme the layout was much more readable, most of the designers were shocked by the radical change. I sat down with each designer and tried to reach a compromise on the color scheme.

It was at this time that I realized some of the designers had serious color discrimination problems. Unfortunately, they declined to be tested for color vision deficiency, but I developed a strong suspicion that one designer was a dichromat and another was either a dichromat or a severely anomalous trichromat.

Anyway, due to memory restrictions, the designers were driving the color displays in 8-bit mode, and each workstation had both a color and a black-and-white display, because text was too fuzzy on the color displays of the time. I wrote a little graphical tool called Meta-Palette, running on the black-and-white display, that showed a chromaticity diagram with a mark for each color map entry, whose RGB values I could change by simply dragging the corresponding mark around. With the designers, I then created a couple of consensus palettes, which I made user-selectable in the printer driver.
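The computation behind each mark, from an RGB color map entry to a point (x, y) on the chromaticity diagram, can be sketched as follows. The matrix below assumes linear sRGB primaries with a D65 white point, a modern stand-in for the phosphor primaries of the monitors we actually used.

```python
# Sketch: project an RGB color map entry onto the CIE xy chromaticity
# diagram. The matrix assumes linear sRGB primaries (D65 white), a
# modern stand-in for the monitor phosphors of the time.

SRGB_TO_XYZ = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def rgb_to_xy(r, g, b):
    """Map linear RGB to CIE xy chromaticity coordinates."""
    X, Y, Z = (m[0] * r + m[1] * g + m[2] * b for m in SRGB_TO_XYZ)
    total = X + Y + Z
    if total == 0.0:  # black carries no chromaticity; park it at the white point
        return (0.3127, 0.3290)
    return (X / total, Y / total)
```

Dragging a mark in the diagram amounts to inverting this projection at a chosen luminance, then writing the resulting RGB triple back into the 8-bit color map entry.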

This is the same inverter layout as above rendered with one of the preferred color palettes:

Inverter checkplot

The number of design errors dropped dramatically to a manageable number, but the intervention still had strong religious opposition.

I wrapped up my work in a technical report and moved on to greener pastures in the new Electronic Documents Lab (EDL):


This report was sort of a kitchen sink, focusing more on the system integration aspects than on the color problems in rendering logical circuits for VLSI design. Therefore, I later followed up with a shorter report just on the color problem:


I never submitted them anywhere because I immediately went on to tackle the more general problem of selecting colors for creating electronic documents. I used the same implementation strategy as for the VLSI design tool. However, the illustrator Gargoyle was used mostly in full color mode (24 bits), so to edit the colors by dragging marks in chromaticity diagrams I had to copy the colors into a hash table (metaphor: apply turpentine). This is feasible because a typical 512-pixel-square image contains only about 26,000 different colors, and most often fewer than 256.

After editing the colors in the color map I had to write them back into the Gargoyle data structure (metaphor: apply fixative). This is shown in the video at the top of this post.
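The two metaphors can be sketched as a pair of functions. The flat list of (r, g, b) tuples below is a hypothetical stand-in for Gargoyle's actual data structures.

```python
# Sketch of the turpentine/fixative metaphors: lift the distinct colors
# of an image into a hash table (turpentine), edit them as a palette,
# then write them back pixel by pixel (fixative). A flat list of
# (r, g, b) tuples stands in for Gargoyle's real data structures.

def apply_turpentine(image):
    """Collect the distinct colors; a 512x512 image has 262,144 pixels
    but typically far fewer distinct colors, often under 256."""
    return {color: color for color in image}

def apply_fixative(image, palette):
    """Write the edited palette entries back into the image."""
    return [palette[color] for color in image]
```

Between the two calls, each palette entry can be edited colorimetrically, exactly as a color map entry was in the 8-bit case.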

Despite eloping to EDL, I did not escape the religious color wars. When I implemented the Xerox Color Encoding Standard as a color management system, I carefully optimized the inner loops so that managed color would render faster than unmanaged color, assuming it would be generally adopted by the Cedar community.

However, despite the efforts of my more charismatic colleagues to explain colorimetric color reproduction, and the idea that a device-independent colorimetric color specification would be a good universal solution for portable color documents, the general belief was that any device RGB values specified by an author were the holy, untouchable truth.

The idea that a printer produced a different color appearance for the same device coordinates than a display monitor was considered a failure of the printer designers. The religious fervor was so strong that many people preferred to manually gamut-map colors one by one, by modifying color values in a simulation of the print, rather than accept color management and its automatic, algorithmic gamut mapping.
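For contrast, the crudest possible automatic gamut-mapping algorithm, per-channel clipping, fits in a few lines. Real gamut mapping works in a perceptual space; this sketch only illustrates the kind of operation the opponents preferred to redo by hand, color by color.

```python
# The crudest automatic gamut-mapping algorithm: clamp each channel of
# an out-of-gamut color into the reproducible range. Serious gamut
# mapping operates in a perceptual space and preserves appearance
# relationships; this is only the baseline the manual approach competed
# against.

def clip_to_gamut(rgb, lo=0.0, hi=1.0):
    """Clamp each channel into [lo, hi]."""
    return tuple(min(max(c, lo), hi) for c in rgb)
```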

Despite this fervor, most people did not really understand the concept of gamut mapping, let alone additive and subtractive color. Encouraged by the unexpected success of the flamingo movie, yet unable to defend my work in a talk, I decided to make a video explaining my work and let people watch that instead. This is the video at the top of this post.

In summary, Meta-Palette is an interactive tool to edit a color palette colorimetrically. To achieve device independence, I implemented a color management system (CMS) based on the Xerox Color Encoding Standard. Since device-independent color was not generally accepted at the time, I used the CMS not just to match color across devices, but also to simulate how the same device coordinates are rendered on different devices.

With this, you might just think I was a moron who lacked the persuasive skills to evangelize device-independent color reproduction. This is not so. Half a decade later, Adobe created PostScript Level 2 (PS2) with colorimetric color reproduction. PS2 being device independent, you would expect color to be encoded in a device-independent colorimetric manner. However, based on the feedback from its professional users during the design and implementation phase, Adobe stored the color data in the input device's coordinates, along with the device's profile.

The reason printer gamuts were so limited was that the inks and toners still had toxicity problems, especially in liquid electrophotography.

Versatec liquid electrography color printer with checkplot

It would take a decade for printers to achieve a gamut comparable to that of a CRT display monitor. This progress, however, did not bring a renaissance of colorimetric color reproduction. Instead, it brought sRGB, where the same device coordinates are sent to every device. In retrospect, the skeptics of yore were right.

As for colorimetric color reproduction, it has achieved full maturity with ICC version 4. However, mostly due to its ignorance of workflow, managed color is still a nightmare almost 30 years later.

Tuesday, March 22, 2011

Little structured data

Today we are mostly interested in large data sets, like the megaimages we mentioned recently. Moreover, we are happy with flat unstructured data, which we comb and mine as needed. Personally, I prefer navigation and structure, but that is a matter of taste. Anyway, what is the trend for little data?

The Totally Color Channel

Monday, March 21, 2011

Large tiled images

Remember the large tiled multiresolution images from Live Picture's IVUE file format and its descendant FlashPix? Current architectures allow them to make a comeback: new hardware can substantially reduce processing time for gigapixel and terapixel images.

Read the article in the SPIE Newsroom: Multicore speedup for automated stitching of large images.

Saturday, March 19, 2011

Japan Prize Ceremony cancelled

The Board of the Japan Prize Foundation has reached a deliberate conclusion to call off a series of events that were planned to honor the 2011 Japan Prize laureates, given the circumstances in the aftermath of the devastating earthquake and tsunami that hit the northeastern coastal area of the Japanese main island on March 11.

No ceremony will be held, but the Foundation is planning to hand the medals and certificates of the Japan Prize to the laureates directly: Dr. Dennis Ritchie and Dr. Ken Thompson in the field of "Information and Communication," and Dr. Tadamitsu Kishimoto and Dr. Toshio Hirano in the field of "Bioscience and Medical Science." The Foundation is also planning to invite them to next year's ceremony as the Foundation's special guests, where it would like to celebrate their achievements.

Wednesday, March 16, 2011

Visions of Africa

When I teach my color course I always stress that the visual system does not work like a camcorder: there is no fixed pixel array, no bitmap, and no homunculus in our head watching the bitmap frames on a biological display. True, in the LGN and cortex we can record (distorted) maps of the visual field, but this does not explain color vision.

In the course I have diagrams illustrating how vision is not hierarchical but a network of bidirectional paths. I also reorder the factors in the tristimulus formulæ so it is evident that the color matching functions are measures in the mathematical sense, i.e., probabilities for the catch of a photon of a certain energy. The latter means that, for example, an M cone cannot know whether the photon it just caught is green or some other color.

Photon detection in the retina is a quantum effect and we can only describe probabilities. The fact that the brain cannot know the color of a point in the visual field at a given time is generally known as the principle of univariance and was originally formulated by William Albert Hugh Rushton (1901–1980).
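The reordered view can be made concrete with a discrete sketch of the tristimulus integration, in which the color matching function acts purely as a weight, a measure on the photon catch at each wavelength. The three-sample spectra below are toy values for illustration; real computations use tabulated CIE observer data from 380 to 780 nm at 1 to 5 nm steps.

```python
# Discrete sketch of one tristimulus integral, ordered so the color
# matching function reads as a measure: a weight on the expected photon
# catch at each wavelength, never a property of any single photon
# (univariance).

def tristimulus(spd, cmf, dl):
    """Sum a spectral power distribution weighted by one color matching
    function; dl is the wavelength step in nm."""
    return sum(weight * power for weight, power in zip(cmf, spd)) * dl
```

The same function evaluated with the x-bar, y-bar, and z-bar weightings yields the X, Y, and Z tristimulus values.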

That said, the geometric distribution of the L, M, and S cones in the retina is puzzling. Science Now reports on a recent hypothesis claiming that on an African savanna 10 million years ago, our ancestors awoke to the sun rising over dry, rolling grasslands, vast skies, and patterned wildlife. This complex scenery influenced the evolution of our eyes, according to a new study, guiding the arrangement of light-sensitive cone cells. The findings might allow researchers to develop machines with more humanlike vision: efficient, accurate, and attuned to the natural world.

Read more at this link: Visions of Africa Shaped Eye Evolution

Monday, March 7, 2011


Honeysuckle is not a color term in Nathan Moroney's color thesaurus, meaning none of the contributors to the online color naming experiment ever mentioned this term. It is a reddish pink, which is a color term in his dictionary. Unfortunately, Google's Books Ngram Viewer is not of much help, because it cannot distinguish the color term from the widely distributed climbing shrub with tubular flowers that are typically fragrant and of two colors or shades, opening in the evening for pollination by moths (genera Lonicera and Diervilla, family Caprifoliaceae, the honeysuckle family).

Indeed, the graph (click on it for a larger view) shows a much higher frequency for honeysuckle than for rose pink, which is just a color term and would otherwise be expected to be more frequent than honeysuckle.

Click for larger view

Nevertheless, this year you will hear it often used as a color term. Indeed, Pantone has declared it the color for 2011. Its formal specification is PANTONE 18-2120 Honeysuckle.


"In times of stress, we need something to lift our spirits. Honeysuckle is a captivating, stimulating color that gets the adrenaline going — perfect to ward off the blues," explains Leatrice Eiseman, executive director of the Pantone Color Institute. "Honeysuckle derives its positive qualities from a powerful bond to its mother color red, the most physical, viscerally alive hue in the spectrum."

Eiseman continues, "The intensity of this festive reddish pink allures and engages. In fact, this color, not the sweet fragrance of the flower blossoms for which it was named, is what attracts hummingbirds to nectar. Honeysuckle may also bring a wave of nostalgia for its associated delicious scent reminiscent of the carefree days of spring and summer."

Now we have to see how long it will take for honeysuckle to show up in Nathan's color dictionary and thesaurus.

Tuesday, March 1, 2011

The appearance of a Flamingo

flamingo group

Once upon a time, a day came when the management at Xerox PARC decided to hold elaborate Open Lab events to share our knowledge and achievements in pursuit of synergies. In the color project we had just finished building a research lab, and our director instructed us that we had better have a good demo in the Gray Lab, justifying its construction.

In fact, we had achieved quite a bit of notoriety, because we had the lab painted in gray, which was taken as a joke by our colleagues, who had expected us to build a colorful room. We even had it painted twice: the first time, when we instructed the painting company to add pure black to a white base and nothing else, because we needed a spectrally flat color, they thought they were smarter than us and mixed a multitude of pigments to match the gray Munsell Sheet of Color we gave them as the standard.

When they called us upon finishing their job, their boss proudly held the Munsell sheet against the wall, but we could see immediately that something was fishy, because the wall had a different color where it was hit by the light from the hallway (the lamps in the room were D50 simulators). We simply showed them the measured spectrum, and they had to repaint the lab at their expense.

Other than the instruments and display monitors, the lab was completely bare, so as to avoid contaminating the retina during psychophysics experiments. On the side we also had a small room, completely painted in black, with a spectroradiometer for the measurements. All lamps were D50, so we did not have to wait for our visual system to adapt, and we could reset it at any time by staring at a wall.

The announcement of the Open Lab event came with a big surprise: all the other team members would be on sabbatical or vacation that week, so I would have to set up the demo all by myself, including dealing with the crowd.

After some reflection, I concluded this was an impossible task, because all the other demos were very high concept. I decided instead to shoot a video in the lab and then just put a cart with a big TV and a U-matic tape player at the door to the lab. The question now was: what experiment could I tape to demonstrate the need for a gray lab?

Chilean flamingo

One Sunday I surveyed the offices of my colleagues working in graphics and imaging, in search of an error possibly due to inaccurate color evaluation. Of course each office had pictures of Utah teapots showing off the occupant's algorithms, but I noticed that images of flamingos were quite common. I was amazed that all these flamingos were of a vivid pink, unlike the vermilion I remembered from a zoo visit when I was a child.

So I thought I would drive to Marine World/Africa U.S.A., which had just moved from Redwood City (now the site of Oracle) to Vallejo, get a flamingo feather, measure it, and achieve a perfectly matching reproduction on our monitors and printers, showing off the importance of chromatic adaptation and cross-device color reproduction.

My plan was to keep a professional Betacam in my office and just opportunistically record material, so I could make up a story at the end depending on what I was able to gather. When I showed the first drafts to my colleagues, they educated me that when Americans think of flamingos, they do not think of the bird at all, but instead they think of pink plastic lawn flamingos.

Well, so much for a naive boy from the Alps. There was not enough time for a different demo, so I stared at my hours of video sequences and made up this movie:

[If there is a problem with the above stream or you have a slow connection, you can download the movie from this link. If you stream from this link, there will be a buffering delay due to the slow connection.]

Unfortunately, the original U-matic cassette is no longer available, and my VHS copy is all gummed up. Unlike U-matic, VHS does not have SMPTE time code, and the signal bandwidth is very narrow, so it took me two months of conditioning the tape and attempting replays until I got most of the frames.

I would have loved to have had Peter Schnorf's digital video editor with its lost-video-frame protection, because then I would just have run the digitization process a few times and the system would have assembled the complete video.

I used a semiprofessional VHS player and first split the signal into separate luma and chroma components. I then adjusted each signal to fill its gamut and, after analog-to-digital conversion, denoised each signal. Since the signal was pretty bad, I did not try any enhancements, as they would have amplified the defects: I simply transformed the digital video stream into MPEG-4 using QuickTime.

In the VHS device gamut of the YIQ color space, very little of the bandwidth is allocated to the magenta region; therefore the flamingos look terrible in the movie, washed out and as if followed by a ghost.
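The washout can be illustrated with a toy simulation. Python's stdlib colorsys provides the YIQ transform; the 0.3 chroma attenuation below is an illustrative assumption standing in for the severe low-pass filtering of VHS color-under recording, not a measured VHS characteristic.

```python
import colorsys

# Toy simulation of VHS chroma loss: NTSC encodes color as YIQ, and VHS
# drastically low-passes the chroma (I, Q) signals. Crudely attenuating
# I and Q shows how a saturated magenta washes out while the luma
# survives. The 0.3 factor is an illustrative assumption.

def vhs_wash(rgb, chroma_gain=0.3):
    """Attenuate the chroma of an RGB color in YIQ space."""
    y, i, q = colorsys.rgb_to_yiq(*rgb)
    return colorsys.yiq_to_rgb(y, i * chroma_gain, q * chroma_gain)
```

Feeding a pure magenta through this function yields a pale, desaturated pink with the same luma, which is roughly what happened to the flamingos.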

And now a lame flamingo joke: Why do flamingos stand on one leg?


If they lifted the other leg as well, they would fall over.