Thursday, December 13, 2007

High-Dynamic-Range (HDR) Photographic Survey

Last August, Prof. Mark D. Fairchild, Professor of Color Science and Director of the Munsell Color Science Laboratory in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology, released an image database for research in High-Dynamic-Range (HDR) imaging. This database is called the HDR Photographic Survey.

These days, when camera sensors have a bit depth of 12 or 14 bits and LCD panels are at 12 bits, developing good compression and rendering algorithms for HDR images is increasingly important. Also, most SLR cameras now feature automatic exposure bracketing, so it is relatively easy to create HDR images. The problem is having well-characterized reference images that everybody can use, so that algorithms can be compared.

One aim of Prof. Fairchild's HDR Photographic Survey is to provide such images in the public domain to researchers working on HDR systems and perception with a key feature being the inclusion of camera characterization data to allow conversion to accurate device-independent image data, colorimetric measurements of original scene elements, color appearance scaling of scene elements, and other scene data allowing increased utility of the images.

There are 106 images in all. Twenty-eight have accompanying colorimetric and appearance data. The remaining images have various data associated with them, but as a minimum have an absolute luminance calibration.

The images are saved as OpenEXR files. The OpenEXR format retains the image data as 32-bit floating-point values. These minimally-processed — and photometrically linear — OpenEXR files are what is available in the database. Each HDR image retains a dynamic range of about 800,000:1 (19-20 stops, or bits, of real image information).
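Since each stop is a doubling of luminance, the quoted figures can be checked with a one-line base-2 logarithm. Here is a minimal sketch (the `stops` helper is hypothetical, not part of the survey's tools):

```python
import math

def stops(contrast_ratio: float) -> float:
    """Number of photographic stops (doublings) spanned by a contrast ratio."""
    return math.log2(contrast_ratio)

# An 800,000:1 dynamic range spans about 19.6 stops, consistent with
# the 19-20 stops of real image information quoted for the survey.
print(round(stops(800_000), 1))  # 19.6
```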

The advantages of knowledge of the original scenes, both direct and through the appearance scaling and colorimetric data, cannot be overstated. It is already evident that having such data combined with HDR images can greatly enhance research on human visual perception and imaging techniques.

Image content includes many natural landscapes, but also portraiture and indoor/outdoor scenes of man-made objects. Lastly, some extra images are also available that are simply nice photographs.

The database images and data are in the public domain for research purposes. It is only requested that they not be reproduced for commercial purposes and that the source of the images be acknowledged in any publications or presentations resulting from research in which they are used.

Monday, December 10, 2007

Income level: economists are wrong

In traditional economic models of decision-making, the most important determinant of individual well-being is the absolute level of income. A recent study based on brain activity observed using functional magnetic resonance imaging (fMRI) proves these models wrong. Indeed, social comparison affects individuals' subjective well-being, and thus behavior.

The study was done at the Neuroeconomics Lab at the University of Bonn, located at the Life&Brain Research Center, by Dr. Armin Falk and his co-workers, and is reported in the 23 November 2007 issue of Science, Vol. 318, No. 5854, pp. 1305-1308, in the paper Social Comparison Affects Reward-Related Brain Activity in the Human Ventral Striatum.

The team in Bonn had access to two MRI machines placed side by side and was able to simultaneously give the same task to two subjects while rewarding them differently in case of coincident success. They recruited nineteen subject pairs and analyzed data from 33 subjects.

The task involved estimating the number of dots on a screen. At the end of each of 300 trials, both subjects received feedback. This feedback provided information about both subjects' performance (whether the estimates were correct or incorrect), as well as about both subjects' payments in a given trial. Subjects solved the estimation task correctly in 81 percent of the trials.

Analysis of variance suggested that the importance of relative comparison is independent of the level of payment. In addition, there was no significant impact of the side of the activation or the scanner type. Thus, the results provide neurophysiological evidence for the importance of social comparison on reward processing in the human brain.

Compensation boards should keep this in mind, if they want their organization's success to be sustainable.

Thursday, December 6, 2007

Bit rot antidote

PDF 1.7 is now a Draft International Standard, soon to become ISO 32000.

In my October 30 post on Photo permanence and durability I wrote about the problem of bit rot. For documents, since 1993 the best antidote for bit rot has been PDF (Portable Document Format). In fact, since then I have always kept a copy of my documents in their original format and in PDF. Most of my tutorial materials are created in FrameMaker, but since Adobe abandoned it on the Mac platform, I often end up making small changes directly in the PDF file, which I can edit in Illustrator or in PitStop, depending on the edit's nature.

Now PDF 1.7 has become an even more potent antidote to bit rot, because ISO has promoted it to a Draft International Standard. To learn more about this, read Jim King's PDF blog entry of December 4, titled ISO Ballot for PDF 1.7 Passed!

Above I wrote explicitly about documents, not about pictures. If you put your pictures in a PDF file you will always be able to read them, but note that PDF is a structured file format, not a file format for images. Therefore, inside a PDF file, the image will be encoded, for example, in JPEG or JPEG 2000. These encodings are also ISO standards, so there will always be decoders for them. However, the JPEG standard does not specify a file format, so wrapping JPEG images in a PDF file is better protection against bit rot. This is not an issue for JPEG 2000, which also specifies file formats.

Wednesday, November 28, 2007

Postcard from Albuquerque

In big corporations the hand often does not know where the foot is, and then shoots itself in the foot. Now I am finally able to get at my email, after my old mailbox was secretly deleted over two weeks before I got access to the new mailbox. There I found a postcard from Albuquerque I would like to share with you. It was sent by John McCann, who shot it with his HP PhotoSmart C945 camera and kindly gave permission to reproduce it in this post.

The crown jewels of a learned society are its Fellows. The Fellowship Award has three purposes: it recognizes individuals with a lifelong contribution to their field, it points out eminent examples for the young to look up to, and it ranks the conferring society by the quality of its Fellow members.

When a society evaluates members for the Fellowship Award, it has to ponder whether their promotion increases the average quality of its Fellows compared to that of other societies. Roughly, two criteria are evaluated: the scientific quality of their research, and the structural impact they have had on their field and on society in general.

Robert Buckley, Shoji Tominaga, Daniele Marini. (c) 2007 John McCann

The three gentlemen in McCann's postcard from Albuquerque are freshly minted IS&T Fellows who scored particularly high in structural impact, while obtaining outstanding achievements in imaging. Let me introduce them from left to right, first with their citation then with their bio sketch:

Robert R. Buckley
for his contributions to gamut mapping, color encoding, document encoding, and their standardization

A Research Fellow at Xerox Research Center in Webster, NY, Dr. Robert Buckley began his career at Xerox Corporation's Palo Alto Research Center (PARC) in 1981, after receiving a PhD in Electrical Engineering from MIT. He holds an MA in Psychology and Physiology from the University of Oxford, where he was a Rhodes Scholar, and a BSc in Electrical Engineering from the University of New Brunswick. During his career at Xerox, he has held research management and project leadership positions in color imaging and systems, and has worked on color printing, image processing, enterprise coherence, and standards for color documents and images.

Dr. Buckley pioneered gamut mapping and led the way in the use of uniform color spaces in the processing and coding of color images. He co-authored the first color encoding standard, invented the Mixed Raster Compression method, and co-invented object-optimized printing technology.

In the area of standards, Dr. Buckley influenced the color fax standard and was the lead author of the IETF standard file format for internet fax. He chaired the CIE Technical Committee on the Communication of Color and was project editor for Part 6 of the JPEG2000 standard. Dr. Buckley has lectured and consulted on the use of JPEG2000 in the cultural heritage community, designing the profile that the Library of Congress uses in the National Digital Newspaper Program.

Dr. Buckley has been active in the IS&T/SID Color Imaging Conference since its inception, serving on the Organizing Committee and co-chairing CIC in the second and twelfth year. More recently, he served as the founding co-chair of the new IS&T Archiving Conference for its first two years. He received the IS&T Service Award in 2005; in 2006, he became president of the Inter-Society Color Council and chaired the ISCC/CIE Symposium that celebrated the twin 75th Anniversaries of ISCC and the CIE Standard Observer.

Shoji Tominaga
for his contributions to color imaging science, particularly the interaction of light with materials, color constancy, and illuminant estimation

Shoji Tominaga was born in Hyogo Prefecture, Japan (1947) and received his BE, MS, and PhD in Electrical Engineering from Osaka University (1970, 1972, and 1975, respectively).

Since 2006 he has been professor in the Department of Information Science of the Graduate School of Advanced Integration Science at Chiba University in Japan. Prior to that he was with Electrotechnical Laboratory in Osaka (1975-1976) and Osaka Electro-Communication University (1976-2006). While at Osaka Electro-Communication University, Tominaga was professor in the Department of Engineering Informatics (1986-2006) and Dean of the Faculty of Information Science and Arts (2003-2006). During the 1987-1988 academic year, he was a Visiting Scholar in the Department of Psychology at Stanford University in California.

Dr. Tominaga's research is in the field of color imaging science. His interests include interaction of light with materials, color constancy, illuminant estimation, spectral imaging, digital archiving, color image rendering, omnidirectional imaging, imaging processing algorithms, and color image appearance.

Dr. Tominaga is active in several academic societies. He served on AIC Kyoto as an organizing committee member (1996-1997), and as a program committee member for the IS&T/SID Color Imaging Conference (1996-2004). In 2000, he founded the Visual Information Research Workshop in the Kansai Section of the Information Processing Society, Japan, and in 2001 the Visual Information Research Institute at Osaka Electro-Communication University, where he conducted many research projects as the chairman. He was conference co-Chair of the Eighth International Symposium on Multispectral Color Science (2006) and is now president of the Color Science Association of Japan. He has authored more than 150 scientific publications and received the Scientific Technology Award from the Suga Weathering Technology Foundation in Japan (2002) and an IEEE Fellow Award (2005).

Daniele Marini
for his contributions to computer graphics and his development of a practical approach to Retinex theory

Daniele Marini graduated with a degree in Physics from the Università di Milano in 1972. Since 1978, his research has encompassed several areas of graphics and image processing, with specific reference to visual simulation, realistic rendering, classification, image recognition and compression, color science and computational color models, and virtual reality. He taught Computer Graphics for the Graduate Program on Industrial Design at the Architecture Faculty of Politecnico di Milano (1996-1997) and is Associate Professor at the University of Milano, teaching Computer Graphics and Image Processing in the undergraduate programs in Informatics and Digital Communications. He is presently a member of the Dipartimento di Informatica e Comunicazione.

Prof. Marini pioneered the field of image synthesis in Italy, contributed to the founding of the Italian journal PIXEL, and was one of the founders of the Italian Aicographics Association. He founded Eidos, the first Italian company to specialize in advanced image processing, and created the Laboratorio di Eidomatica at the Dipartimento di Scienze dell'Informazione.

He has been scientific secretary of the National Commission "Conoscenza per Immagini" of the National Committee of Science and Information Technology of the National Research Council, and a member of the Commission for the SMAU Prize for Software Industrial Design, the National University Council (1997-2006), and the Academic Senate of the Università di Milano (2003-2006). In 1998, he was appointed supervisor and coordinator of the initiatives on multimedia at Triennale di Milano. This year Prof. Marini started a new initiative on virtual reality, installing the first University Virtual Theater at the Università di Milano. Prof. Marini has published more than 130 scientific and dissemination papers, as well as authored two books. He has been consultant for many Italian private companies, including Laben, Agusta Sistemi, ACS, SEA Informatica, Olivetti, CISE, VTR, Delphi, UIC, AIS, Artech Video Record, and STMicroelectronics, and has coordinated many national and international research programs.

So much for the postcard from Albuquerque. Other 2007 IS&T Fellowship awardees were Ralph E. Jacobson, for his contribution in the field of image quality metrics and his leadership in imaging science education, and Bahram Javidi, for his contributions to 3-D imaging science, information security, and image recognition.

Thursday, November 22, 2007

Fine art ink jet

The French magazine Réponses Photo just published its fifth special edition issue. It has a very interesting survey of ink jet printers for fine arts.

It starts on page 39 with a brief overview. Then it presents the technologies and product line-ups of Epson, HP, and Canon. Interspersed is a glossary that elucidates terms like bronzing and metamerism.

After a section on papers, you will find a short article on creating black and white prints. After a question and answer section, Réponses Photo presents a series of interviews with gallery owners and curators.

This survey of ink jet printers for fine arts concludes on page 60 with a summary and a discussion of the designation of prints. Réponses Photo recommends not using the term digital print, as the original image can be AgX or digital, while a printer can be digital or laser on AgX. In France, where Epson test-marketed the first ink jet printers for fine arts, and where fine art ink jet prints have therefore been around for many years, the term Digigraphie is common, but it is not known in Anglo-Saxon countries. For example, in the US the French term giclée is commonly used, while in France the same term is unknown in this context (the verb gicler simply means to squirt or spray).

Réponses Photo recommends writing on the back of each print the ink type, paper, and printer model, which will help a future restoration if the print becomes valuable. For the nomenclature, it recommends at least distinguishing between giclée pigmentaire and giclée dye.

It should go without saying that this survey is not a product test. Rather, it gives you the knowledge necessary to form your own buying decision depending on your artistic message.

Monday, November 19, 2007

The blue hour

In the Francophone world you often come across theatres and hotels named "l'heure bleue." When clocks are depicted in suggestive paintings, they are often set at the blue hour. When is the blue hour, and why is it important to painters and photographers?

Vincent van Gogh: Nuit étoilée (Saint-Rémy-de-Provence), 1889

The blue hour is at four o'clock in the morning, before the opulent and busy morning has started and while many people still sleep. During the blue hour, when the night plays with dawn, light has a rare quality born of the sky's cold blue and the stars' warm yellow light, which bathe objects in two opponent illuminants. At this mesopic illuminance level our visual system is tetrachromatic, with rods and cones all contributing to color appearance.

In Silicon Valley, light pollution is so high that this special time of the day cannot be appreciated, but in the Alps, for example, nature is still pristine and nights are dark. I invite you to experience the blue hour, and even dawn, on top of a pristine mountain. Then, please, help fight light pollution and turn off your lights when you sleep.

By the way, five and six o'clock are not colored; they are "l'aube" (dawn) and "le lever" (getting up).

In reality, l'heure bleue, this quiet time of the day when nocturnal animals have already gone to sleep and diurnal animals are still sleeping, has a different meaning to artists. That perfumes have been named L'Heure Bleue is a hint. As Félix Vallotton's 1899 "La visite" shows, it is the time for lovers to say goodbye, and the time on the alarm clock on the gentleman's night stand indicates the blue hour.

Félix Vallotton: La visite, 1899

Wednesday, November 7, 2007

digital photo workflow for the rest of us

Recently a colleague a few cubicles down showed me some prints he made on his HP Photosmart Pro B9180. I was impressed with the image quality, and I am wondering if the time is ripe for the rest of us to switch from AgX to digital photography. You get your print in just 90 seconds, and as Ingeborg Tastl's fade simulator illustrates, the permanence and durability are excellent.

Let me first explain what I mean by "the rest of us." In these days of transitioning organizations, the masses have become "transitioning consumers," obediently updating their gadgets every time a new release comes out. Besides the fact that for us researchers in transition such a strategy is not affordable, it is not meaningful from a quality point of view.

In fact, if you are not a commercial photographer, to win in competitions you must be able to produce a gallery-quality print in a reasonable amount of time. In my darkroom, I can crank out a 12x16 inch print every 10 minutes, including cropping and exposing a black frame to separate the photograph from the white frame. I am using an old Focomat and a Gossen exposure meter, so I do not need to focus or make exposure trials.

This means that it is not only a pecuniary question but also a question of being skilled with the equipment. Developing implicit knowledge takes time and perseverance. Therefore, you cannot be a transitioning consumer but have to do it the old way, by paradigm shifts or technology disruptions.

For a long time I was shooting with a Nikon FM titanium and manual fixed focal length lenses. Then came the time my eyes had gotten sufficiently old that I was no longer able to focus instantaneously, so I had to make a paradigm shift to an F100 and a couple of autofocus lenses.

Today, digital cameras offer a substantial speed advantage over traditional cameras in that they use imaging technology to do quite a lot of processing between the time you press the release and the moment the image is recorded. In fact, the cameras first sample an ambient-light picture, then shoot one or two measurement flash bursts, compare the two images, and then compute the ideal exposure parameters and flash duration. All this takes place during the few milliseconds it takes to move up the mirror, a fraction of the time it takes the old way of controlling contrast with filters.

In summary, the printing technology and the camera technology are both ready for a paradigm shift. What I am not sure about is the workflow. As it stands now, a digital workflow is an order of magnitude slower than a wet-chemistry workflow. Maybe you can tell me what I am doing wrong, so I will describe what I have found out.

I downloaded a number of trial programs (fully functional software with a 30 day expiration date) and had to discover that little of it is practical.

Let me start with my hardware. For my tests I am using a D70 body I had around for producing legal documentation. My PC has a 667 MHz processor, 512 MB of RAM, and a 133 MHz bus, which are all sufficient for my day job as a researcher in computational color science.

Since today's sensors have a bit depth of 12 bits per pixel and today's LCD panels also have a bit depth of 12 bits per pixel, it does not make sense to use a JPEG workflow, which today requires retinexing the image down to 8 bits per pixel. Yes, ISO is adding a new layer to JPEG for high dynamic range (HDR) images and is considering JPEG XR, but they are not yet out and hence not implemented in any cameras.

Therefore, there is no other choice than using a "raw" workflow, in which the raw bits from the sensor are processed directly. Instead of raw file, the term "digital negative" is also used.
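To see concretely what an 8-bit workflow gives up, here is a minimal sketch of naively requantizing a 12-bit linear sensor value to 8 bits (illustrative only; a real raw converter applies demosaicing, white balance, and a tone curve rather than a plain bit shift):

```python
def to_8bit(raw12: int) -> int:
    """Naively requantize a 12-bit linear sample (0-4095) to 8 bits (0-255)."""
    return raw12 >> 4  # discard the four least significant bits

# Sixteen distinct sensor levels collapse into one 8-bit level, so
# fine tonal gradations captured by the sensor are irreversibly lost.
print(to_8bit(4095), to_8bit(16), to_8bit(15))  # 255 1 0
```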

There are a number of quite powerful photo management programs for the consumer market. However, I quickly found out that when you throw a compact flash card full of raw images at them, they quickly choke and become unusable.

At the other end of the spectrum, professional photo management programs are unusably slow on my PC. It appears you need at least a quad-core, some 4 GB of RAM, and a 1.5 GHz bus, which is not everybody's kind of iron.

What I found to be usable is Nikon Transfer to download the images. The main advantages are that it completes the embedded XMP/IPTC metadata, renames the files to a systematic archival name, and automatically stores backup files on a second medium, which is more important than a conventional backup for my legal documentation images.

As you can see in the embedded EXIF metadata in the image below, this program has a bug in dealing with the date and time. During the transfer it did correctly update the clock in the camera, but it did not correct the time stamp in the EXIF data by the changed amount, which happens to be 8 hours because I had forgotten to reset the Daylight Saving Time and the offset from Coordinated Universal Time (UTC) for my current location.

digital negative developed with ViewNX
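A fixed time-stamp error like the eight-hour one described above can be corrected after the fact by shifting the EXIF date strings. Here is a sketch using only the Python standard library (the `shift_exif_time` helper and the sample time stamp are hypothetical):

```python
from datetime import datetime, timedelta

# EXIF DateTimeOriginal values use this fixed "YYYY:MM:DD HH:MM:SS" format.
EXIF_FMT = "%Y:%m:%d %H:%M:%S"

def shift_exif_time(stamp: str, hours: int) -> str:
    """Shift an EXIF time stamp string by a whole number of hours."""
    t = datetime.strptime(stamp, EXIF_FMT)
    return (t + timedelta(hours=hours)).strftime(EXIF_FMT)

# Undo the 8-hour error (Daylight Saving Time plus UTC offset).
print(shift_exif_time("2007:11:07 02:15:30", 8))  # 2007:11:07 10:15:30
```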

The next step in the workflow is to individualize the metadata, tag the images, organize them in folders, and then convert copies to JPEGs that can be thrown into a consumer-level photo management program. Since I keep my negatives in binders with their contact sheets, I also want contact sheets from my digital images, so I can keep them in a binder for quick browsing.

Since Adobe did a good job with XMP, I would like to have all the metadata in the image file, not in a separate database or in sidecar files. Essentially the software for this workflow step should just be a veneer over the operating system's native file system.

Adobe's Bridge is attractive, but it is designed for sharing files between Creative Suite programs and does not fit well in the workflow I have come up with so far.

Nikon's ViewNX is a better fit, but it has some quirks. For example, to convert the image for the above figure, when I specified the size for this blog's column width, it changed the size to 640x426 pixels. Also, the conversion to JPEG is so slow that I have to let it run as a batch job overnight. However, organizing the images is very fast, because the program appears to use the preview image in the metadata instead of rendering the raw image.

Before the images are converted, there should be a step for manipulating them. For example, I like to apply an unsharp mask, do some minor local contrast manipulation, and correct the inevitable optical distortions of some lenses. I tried to use Capture NX, but on my PC it is way too slow, and the program also has a problem allocating memory, because often the image becomes just a black rectangle.

At this point I am stopping, because I really would like to hear about your experience. What did you try? What are you happy with?

Thursday, November 1, 2007

Colour: Design & Creativity

From the AIC e-news, November 2007:

Colour: Design & Creativity, the new online journal from the Society of Dyers and Colourists (SDC), is intended to fill a gap in the marketplace by appealing to a multidisciplinary audience seeking a better understanding of colour and its application in design, theory and practice. In particular, the journal will emphasise the synergy between colour and design, as opposed to their individual importance.

Comments the editor, Prof. Stephen Westland: ‘Although design is for convenience recognised as a discrete discipline, it is truly multidisciplinary, involving aspects of science, technology, art, crafts and business. Design represents one of the significant interfaces between art and science, and the journal will be dedicated to exploring this interface.’

Among the topics covered in the inaugural issue are colour and emotions, analysis of colours in branding, design concepts using thermochromic colour change, colour forecasting and preference in the fashion industry, and an article that describes what the author calls ‘a new colour form’.

The journal will appeal to colourists, designers, scientists, artists and other professionals alike and seeks submissions of research related to colour: from explanatory papers, case studies and essays, to reviews of books, events, collections and installations. Being published online, a key feature will be the inclusion of ‘galleries’ of work, as well as material in movie and audio format. All articles will undergo a process of peer review, managed by the editor and assisted by a body of international experts forming an advisory panel.

In the first instance, Colour: Design & Creativity will be open access, thanks to funding received from the Worshipful Company of Dyers. Anyone active in colour research, development and application is invited to submit material for subsequent issues, or to contact the editor by email.

Tuesday, October 30, 2007

Photo permanence and durability

A "house altar" depicting Akhenaten, Nefertiti and three of their Daughters

Experience your memory fading away.

Historically, we humans have been willing to spend any amount of money on communication. We have been willing to spend even more money on communicating with our descendants after our passing; the sky is the limit.

In the past you had to be a head of state to afford having your life cast in stone for posterity, or you had to be so good and work so hard that you would leave behind a historical legacy through the history books.

Today, everyone in our society can afford to leave behind their legacy in the form of digital items, such as scanned or digital photos, HD videos, AAC files, etc. Although such an archive is compact and convenient, it is still subject to bit rot. Actually, with the acceleration of technological progress, the span of time until bits have rotted is getting shorter.

With this in mind, there is still a strong argument for printing your memories and keeping them around as atoms instead of bits. There is also a remarkable convenience to hard copies, especially when they are in the form of photo books.

Atoms can also rot; in particular, the dyes and pigments in inks fade. Therefore, it is useful to know how long your prints will last. More specifically, you want to know how long prints on a specific medium printed with a specific ink will last.

As a concrete example, imagine you took a photograph during your honeymoon trip and would like to still be able to enjoy it at your golden anniversary. Can you use refilled cartridges on generic paper, or should you shell out money for fancy HP Vivera inks and special photo paper?

My colleague Ingeborg Tastl wanted to find a good answer to just this kind of problem: how will the photo with my memory fade away over the years? Since Ingeborg is our ICC profiling specialist, she built an interactive tool that allows you to simulate the fading of your memory when printed with two different ink and paper combinations. This is how the tool looks on my PC:

Yoko in Luzern, 1. August 1991. Häppy 700th börsdey Switzerland!

Ingeborg being a very nice person, she lets you have her tool. You can download it from here and look at your own memory fading away. Of course, the lawyers had to have their say to keep us out of court, so the selections are somewhat restricted, but it is still an interesting tool. For now the simulator is Windows only, but you can run it on a virtual PC on a Unix system like a Mac.

Monday, October 29, 2007

An On-Line Color Thesaurus

Color names are a powerful means of selecting and communicating colors. There are a variety of color vocabularies and dictionaries available, but there has been less work on capturing the similarities and differences in color naming. This post is a tool post, in that the online color thesaurus is embedded directly in the post.

Getting Started
To use the online color thesaurus, click on the screen-shot below to get to the thesaurus page. There, simply type the color name in the text field. Once you have typed in your color name, click on "submit" and the results will be displayed. You can use the "clear" button to clear the text field.

The result that is returned is a large color square with a rendering of the color, if it was found. If the name was not found, for example if "greeb" was entered, then the nearest color name in terms of edit distance, in this case "green", will be returned. So now you won't have to remember how to spell fuchsia.

In addition to the colored square are the corresponding RGB and hexadecimal values. Finally, there is a note about how common the color name is. Below this are the color synonyms and antonyms. Each column has smaller color squares rendering the color names, with links so that you can easily click through to these names. The results are based on an analysis of a database of more than 20,000 color names in English, collected in an ongoing online color naming experiment spanning more than 20 languages.
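The nearest-name fallback described above behaves like a minimum edit distance (Levenshtein) search over the dictionary. Here is a sketch of how such a lookup might work (the four-name dictionary is illustrative, not the actual 20,000-name database):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed row by row with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def nearest_name(query: str, names: list[str]) -> str:
    """Return the dictionary name with the smallest edit distance to the query."""
    return min(names, key=lambda n: edit_distance(query, n))

names = ["green", "red", "blue", "fuchsia"]
print(nearest_name("greeb", names))    # green
print(nearest_name("fuschia", names))  # fuchsia
```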

Wednesday, October 24, 2007

MPEG-21 blog

If you are interested in Multimedia Communication for Universal Media Access (UMA) and MPEG-21, you may want to keep an eye on Christian Timmerer's blog on Multimedia Communication. Christian has been involved in MPEG-21 from the beginning and is a very active expert in the committee.

If you do not know anything about MPEG-21, you may be interested in the notes from the course I gave at VCIP 2003. They are a bit dated, but after you go through them you can quickly get up to speed with the current status on Christian's blog.

See also my previous post on MPEG-A.

Tuesday, October 23, 2007

Software patents

Earlier today RocketRoo left a comment on my earlier post on A color scientist's role, but it really is a new thread because my post had nothing to do with patents, so I am answering with a new post. Here is the comment:

Regarding the role of patents, one of the 2007 Nobel laureates in economics, Eric Maskin, also did research at MIT on the value of software patents. He determined that software was a market where innovations tended to be sequential, in that they were built closely on the work of predecessors, and innovators could take many different paths to the same goal. In such markets, he concluded, patents might serve as an innovation inhibitor rather than an innovation incubator.

Personally, I have mixed feelings about patents. The original idea was for society to reward inventors for their contribution by allowing them the right to exclusively reap the commercial benefits of their invention. However, now the system is broken.

If there is a date to be set for when the system broke, it is probably the day the patent for the intermittent windshield wiper was enforced. But in reality, the system was broken by submarine patents.

Some submarine patents came into being innocently. As I wrote in the post on A scientist's role, discoveries are in the air and the skill is in being the first to grasp them. An inventor's antenna can pick one up, and he or she can intuitively reduce it to practice and file for a patent before the rest of the pack does. But when this happens too far ahead of the bleeding edge, the discovery is not yet well defined in the ether and intuition plays a larger role. Because of this, it is very difficult for a disconnected outsider to appreciate the invention, especially since we no longer show up at the Patent Office with our physical prototype.

Patent examiners are thus in a difficult position, and it can take a lot of back and forth until the examiner is satisfied that all that implicit knowledge has been made explicit and can grant the patent.

But then there are the slackers or parasites, known more scientifically as defectors, who, when they detect a discovery in the air, submit a vague patent application based on a hunch, without understanding the issue or attempting to reduce it to practice. An outsider, such as an examiner, has no way to tell a defector from a cooperator, so they have to give the benefit of the doubt while at the same time pursuing due diligence.

In Japan, applications are laid open after six months and anybody can comment on them. The system is fairer, but it comes at a high cost for the engineers who must spend a few hours every day working through reams of applications.

Here in the US a similar system is being studied, and HP is one of the companies behind this effort. If you are very experienced, may I suggest you join this collaborative effort as a reviewer: just go to the Peer-to-Patent site, enroll, and review those patent applications that are in your field of expertise.

So much for the ethical issues. Your comment was about software patents. One problem is that software patent applications have been allowed only since about 1989 (the AT&T Bell Labs traveling salesman patent). By that time computer technology was as advanced in several research labs as it is now in the commercial world. However, computer development has since changed so much, with the use of wizards and frameworks, that today's programmers have no knowledge of what was standard practice decades ago. Hence, the wheel keeps being reinvented.

So, shall we get rid of software patents? It depends. We had this discussion about five years ago in one of the Swiss National Science Foundation review panels. We came to the conclusion that an invention should be patented only if doing so is necessary to protect a new business venture. When this protection is not necessary, the funding should be invested in new engineering, not in patenting, because they cost more or less the same and in the long term engineering is better for society.

This also leads to agile companies that must innovate faster than the competition can copy. The Swiss have recognized this as a competitive advantage of their industry.

Monday, October 22, 2007

Color space dimensionality

Today RocketRoo posted a comment to my short August post on a paper on Multiscale contrast enhancement. Since that post is a few months old, I will reply with a new post. Here is the comment:

Re: Multiscale contrast: achromatic dims

In this recent study, Vladusich, Lucassen and Cornelissen provide evidence that brightness and darkness form the dimensions of a two-dimensional achromatic color space. This color space may play a role in the representation of object surfaces viewed against natural backgrounds, which simultaneously induce both brightness and darkness signals. The 2-D model generalizes to the chromatic dimensions of color perception, indicating that redness and greenness (blueness and yellowness) also form perceptual dimensions. Collectively, these findings suggest that human color space is composed of six dimensions, rather than the conventional three.

Posted by RocketRoo on 10/22/2007 11:45 AM

Let me first admit that I just read the first few paragraphs and not the complete article cited.

I am in violent disagreement with the authors. The opponent color model was first proposed by Leonardo da Vinci (see chapters CLX and CLXII of his Trattato della Pittura, Langlois, Paris, 2nd edition, 1701), then discussed by Johann Wolfgang von Goethe (see also here) in his virtual diatribe with Isaac Newton. The first modern theory of color opponency was proposed in 1872 by Ewald Hering and was very hotly debated until G.E. Müller and Erwin Schrödinger reconciled Helmholtz's and Hering's theories in the zone theory of color vision, which they based on the 1904 law of coefficients proposed by Johannes A. von Kries.

The matter was finally settled in 1956, when Gunnar Svaetichin was able to record from horizontal cells in fish retinas and show opponent responses in red-green and yellow-blue potentials. At the time, he proposed that each horizontal cell inhibits either its bipolar cells or the receptors, with further processing occurring in the amacrine cells and in the retinal ganglion cells.
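To make the red-green and yellow-blue opponency concrete, here is a minimal sketch of a second-stage recombination in the spirit of the zone theory: cone-like responses feed a stage that produces one achromatic and two opponent channels. The channel weights are illustrative assumptions, not any specific published model.

```python
def opponent_channels(L, M, S):
    """Map cone responses (L, M, S) to (achromatic, red-green, yellow-blue)."""
    A = L + M             # achromatic channel (S contributes little to luminance)
    RG = L - M            # red-green opponent channel
    YB = (L + M) - 2 * S  # yellow-blue opponent channel
    return A, RG, YB

# An equal-energy stimulus drives both opponent channels to zero:
print(opponent_channels(1.0, 1.0, 1.0))  # → (2.0, 0.0, 0.0)
```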

You can always postulate a mathematical model approximating some phenomenon, but in the end what counts is the physics, and you cannot contradict the physiology on which color vision is based.

What may be confusing the authors is that much of color science is for aperture color. If the appearance mode is different, then part of ordinary colorimetry falls apart. This is why there are color appearance models, and the authors should have based their work on these.

Thursday, October 18, 2007

Hue Angles blog

The ISCC (Inter-Society Color Council) now has a blog, which allows color scientists to interact on articles published in the Hue Angles column of the ISCC Newsletter. Here is how its editor, Dr. Michael H. Brill of Datacolor, describes this new blog, whose name is Hue Angles:

In fall 2006, Hue Angles began as a column for the ISCC News, devoted to tidbits of interesting lore shared by ISCC members in short-essay form. In its first year, the topics spanned color in wetland preservation, spinning disks under colored lights, personal recollections of selling color-matching systems, green in the fashion industry, how to measure color using a beer cooler, and color contextual effects. Almost any color-related topic is fair game. As of fall 2007, Hue Angles is also being posted here to facilitate lively discussion. As always, you can submit ideas or contributions for the column itself to Michael H. Brill.

Prof. Osvaldo da Pos, University of Padua, Italy, contributed the first post, which is on colors and contextual effects.

Wednesday, October 17, 2007

A color scientist's role

Here are some thoughts about a color scientist's role in society.

When people ask me what I do, I answer "color scientist," with scientist being the subject and color being the object. One reason is that when I just say "color," people ask me fashion or design questions, which I cannot answer because I am not a color consultant. My field is called "color science" and that makes me a "color scientist." But is it not arrogant to call oneself a scientist? After all, I am not wearing a lab coat…

"Scientist" is not a bragging word. It is a qualification that also brings with it social responsibilities. Bertolt Brecht collected a lot of material on this subject, gathered in Werner Hecht's Materialien zu Brechts »Leben des Galilei«, so I'll just mention a short conversation I had last night.

Yesterday evening I attended as a guest the Computer History Museum Fellows Awards Dinner and Ceremony. I was cruising the room in which the cocktails were hosted to greet old buddies, when in one group somebody noted how many former or current PARC scientists were in the room, commenting on the huge impact they had in the valley.

Nobody attends the event for the food. There are many restaurants where you get incomparably better food for $500 per person. People attend such events for the air — or better, for what is in the air. Maybe "ether" is a more appropriate word than "air."

When I was working at Canon, I had the problem that my boss kept telling me that at my level I was not allowed to do technical work, that my role was to inspire people. This was a problem for me, because I am not an evangelist, quite the opposite. In fact, at my previous job at PARC, where we tended to work in a team of a talker and a doer, I was much more of a doer than a talker.

With the benefit of hindsight, I now know that my boss at Canon was right — at least in part. When you research as a scientist, you do not sit on a chair and squeeze your brains until a world-changing idea pops out. You can see this best in pharmaceutical research.

The research for new drugs is very expensive, very difficult, and takes a team of people a long time. Given this, you would suppose that a successful team then invents the miraculous drug that conquers yet another great disease, while everybody else is surprised and stands in awe. In reality, it is not like this. When you look at the patent awards, you will find that there is always a small number of different companies that file for the same discovery a few weeks apart.

This is not what you would expect given the duration of the research and the secrecy in which the companies operate.

The explanation is that discoveries are in the air, or ether. Discoveries happen when the time is ripe for them, and at that time many people will have the same insight within an interval of a few weeks or months. Research is very expensive; it is a high-risk investment. Timing is everything, otherwise you lose your investment.

Timing means that you need to be at the right place at the right moment. This is why we are in an expensive location like Palo Alto, just a couple of freeway exits from the Computer History Museum. And this is why we spend $500 for a plate of ravioli — which allows us to get the buzz from the ether, emanating from all those reunited luminaries, before the guys working for the competition get it.

The social responsibility of scientists is to put out their antennas and transceive. You cannot do this kind of visceral networking on LinkedIn. You have to be there. There are no shortcuts, no miracles.

Scientists are like bees. A bee can be a busy bee, a worker bee, etc., but by itself it is not worth much. Wham!!! … and you can whack it with a newspaper. Try that with a beehive. The art of managing research is like the art of a beekeeper who has learned to create and groom a beehive.

You are reading my contribution to society, emanating through the ether from my antenna. And this is where my manager at Canon had it wrong — you need to get your hands dirty and do real work, otherwise there is nothing to transmit and you do not know on which channel to tune in. This is why in the wardrobe separating their offices, Bill Hewlett and Dave Packard kept a cart with an oscilloscope, a soldering iron, and small tools.

These days the difficulty is to survive without having your neck broken 24 years later when the job is done, as Dr. Faustus would have told you if his brother-in-law Mephistopheles had not grabbed him first on that fatal day in 1539, as illustrated above.

Monday, October 15, 2007

Blog action day: the environment

Today is blog action day and this year's issue is the environment. This blog is on color perception, so I should write about the visual perception of the environment. However, I am not working on complex color and have nothing new and original to write on this. I could brag about all the things HP does for the environment, but you can read that on our Global Citizenship Report site. Instead, I will do something completely different…

Bloggers Unite - Blog Action Day

Dave Packard and Bill Hewlett were lifelong environmentalists who bought quite a bit of land for conservation. They even established a large, well-equipped park in the Santa Cruz mountains so employees and their families could enjoy nature. And they really enjoyed inviting all employees to BBQs in their parks.

In particular, Bill Hewlett had a lifelong interest in nature. He photographed and cataloged hundreds of flowers over the years. A decade ago I asked him to send me a few of his favorites. What I got are photos of some of the most beautiful wildflowers of the Western United States.

My contribution to blog action day is to share these photos and let you reflect on nature's beauty.

Rosa Californica

California Wild Rose Rosa Californica

Wild rose is one of less than a dozen species of Rose native to California, where it occurs in moist sites below 1800 meters, mostly west of the Sierra Nevada. The flowers of this species have been used for perfume, jelly, candy, and tea. The hip, or mature fruit, rivals oranges for its vitamin C content. Upon removal of the seeds, the small apple-like hips can also be used for making tea or jelly.

Mentzelia Lindleyi

Blazing Star Mentzelia Lindleyi

As might be inferred by the common name, this plant produces flowers of a rich golden color. The silky textured petals expand to expose the many stamens that stand upright to form a large tuft in the center of the flower that brushes insect visitors with a generous supply of pollen. Plants of Blazing Star are covered with barbed hairs that cause them to cling to whatever they come in contact with. These plants grow on rocky slopes, coastal scrub, and oak/pine woodlands in California typically at elevations below 800 meters.

Epipactis Gigantea

Stream Orchid Epipactis Gigantea

Because of its wide distribution in California and western North America generally and its ability to tolerate a wide range of habitats from near sea level to 2600 meters in the mountains, the stream orchid has avoided the threats that so many of its relatives are up against worldwide. This orchid attracts pollinators by mimicking their food choices without providing a true reward. It is pollinated by syrphid flies that are attracted by a floral odor that mimics the "honeydew" fragrance given off by aphids, but the aphids are nowhere to be found in the flowers of this orchid.

Achillea Millefolium

Yarrow Achillea Millefolium

Yarrow is widely distributed in most countries of the northern hemisphere. Its finely divided fernlike leaves and flat-topped or umbrella-like clusters of flowers make it one of the easiest members of the sunflower family to identify. Its dried leaves which are occasionally used in tea have a mint-like flavor. This plant is probably best known for its medicinal properties. Achilles, for whom the genus is named, evidently used extracts from this species to treat the wounds of his soldiers in the battle of Troy. It avoids the deserts of California but is otherwise common in many habitats below 3500 meters.

Triteleia Laxa

Ithuriel's Spear Triteleia Laxa

The blue to blue-purple flowers of Ithuriel's spear can add dazzling color to the California landscape in years with good winter rainfall. The corms which can be eaten raw or cooked were a favorite food of early California Indians. Ithuriel was an angel in Milton's Paradise Lost who found Satan squat like a toad, close at the ear of Eve, and transformed him by a touch of his spear to his proper form.

Papaver Nudicaule

Iceland Poppy Papaver Nudicaule

Iceland Poppy, originally described from Siberia is a widespread species of arctic regions of North America and Eurasia where it is one of the commonest yet most colorful wildflowers. The silky petals range in color from yellow, white, pinkish-coral, and orange. It is best known in California because it is a favorite garden plant in the cool coastal climate of the Pacific states. Each flower which measures 10-12 cm (4-5 inches) across is borne on wiry stems. They make superb cut flowers lasting up to a week if the flowers are cut in bud and the stalk tip scalded in boiling water before being placed in a vase.

Tragopogon Porrifolius

Oyster Plant Tragopogon Porrifolius

Oyster Plant, a close relative of Chicory, is distinctive because of its narrow grass-like leaves, dull lilac or purple flower heads, and milky sap. In Mediterranean Europe where this plant is native, the young green shoots are added to salads. It is also cultivated for the swollen fleshy rootstock that is cooked and said to have the flavor of oysters. In California, where this plant is introduced, it is a widespread weed of waste places largely unappreciated for its culinary virtues.

Bill's Blooming Hobby

Visitors to a select private Northern California campground have a unique tool for identifying the trees and flowers they see — an album of photographs and copies of identifying leaves assembled by Bill Hewlett. For nearly 50 years, Bill has been studying the plants and trees in all the places where he has spent time. An avid outdoorsman all his life, Bill's career as a part-time naturalist was sparked when the Army stationed Bill and his late wife, Flora, in Washington D.C. during World War II. On one of their frequent visits to Rock Creek Park, he realized that he didn't recognize any of the trees in the area. And when he returned to California, he realized he didn't know much about the trees and flowers here, either.

After reading to acquire a background in botany, he was soon photographing and identifying the trees and wildflowers he saw on camping, hiking, mountain climbing, and fishing trips. Over the years, his collection of photographs has grown to more than 400 different trees and flowers, from areas as diverse as the Santa Cruz and Sierra mountains of California, the American Great Plains, and the mountains of Europe.

Among his favorites from all the beautiful flowers he has photographed are those with the common name Mariposa, including the White Mariposa (Calochortus venustus). The name ties these flowers to the butterflies and Sequoia groves in the foothills and mountains of Mariposa County in eastern California.

The dream of every naturalist, amateur or professional, is to discover an as yet unnamed flower or plant and bring it to the attention of the scientific community. While this has not happened in Bill's years as a naturalist, he still enjoys the challenge of making a difficult identification.

"It is not too hard to make an educated guess as to the genus," he said. "It is the species that is difficult, but the average person is not interested in whether it is an 'Iris douglasiana' or an 'Iris macrosiphon.' Except for the expert, it is sufficient to know that it is an 'Iris.' But there is a challenge to try and find out the species. It is the difference between a job well done and a job half done."

And, as he notes happily, "there will always be new plants to identify."

Thursday, October 11, 2007

More on How Canon got its flash back

As reader juadlam suggests in his or her comments to my previous post on the book about Fujio Mitarai, the comments and questions raised require a new post. First, here is the comment:

So did the book include much about digital photography? The title seems spot on for a good bit of discussion about how their digital cameras came to be so strong in the market. I'd be curious if their analysis covers how they seem to have made the transition to digital so well. Also, creating a new division seems like quite an undertaking for a research lab. This almost sounds like another post. I expect that this is especially challenging if the new division has any overlap with the existing divisions. It's probably equally challenging if there is zero overlap with the existing divisions.

Posted by juadlam on 10/9/2007 3:58 PM

The book is on Fujio Mitarai and not on Canon's technology, but let me try to answer your questions anyway. The question on transitioning from analog to digital has to do with the culture of a company's head honcho, as we affectionately call presidents here in the Silicon Valley. When companies have a lock on a market, their financial success can be increased more easily by investing in a big sales force than by investing in technologists. As a corollary, when a leader advances through the ranks to become the president, this leader is likely to come from sales, not technology.

In sales, the formula for success is to not kill the goose that lays the golden eggs, and a president with a sales background will conservatively tend to muzzle anybody trying to rock the boat. In contrast, a president who has risen through the ranks as a technologist will declare that you're not paranoid if they really are out to get you, and the competitors will indeed be out to top your technology.

You can find many case studies on this in business books. Burroughs was a classic example of a company of the first kind. More recent examples are Xerox, where in the late 70s Gary Starkweather (who later coined the cliché that Xerox "fumbled the future") had built the Lilac color laser printer/copier and explained that you can do color xerography only digitally, while his corporate management wanted to hold on to light-lens processing. And of course Kodak, which early on invented many digital color technologies only to have corporate management stuck on AgX and photochemistry.

For the second kind of company, the most vocal one is perhaps Intel, with its motto that only the paranoid survive. At HP, Dave Packard had the business rule that at least 80% of the products in the catalog had to have been there for 18 months or less, and Bill Hewlett's mantra was that HP had to create new divisions that killed the old divisions before the competition did.

Canon is such a technology company. While Xerox was busy fighting over digital vs. light-lens, Canon was busy developing the digital color laser copier CLC-1, which was an immediate smash hit. Behind the scenes, Susumu Sugiura (a.k.a. Sid Sugiura in Australia) had built a large team with deep knowledge of digital color imaging. At the Canon developer conferences in 1991 and 1992 they held workshops on color appearance modeling, demonstrating that they were ahead of the bleeding edge.

In 1993 the Imaging Research Center in Shimomaruko started the Digital Eye project with an initial staff of 100 R&D personnel. At the 1996 EI conference, the discussion of Yoshiro Udagawa's paper Color image processing in Canon's digital camera demonstrated a very deep understanding of the image processing for digital cameras and especially of how to make trade-offs between the various parameters.

In 2000, Canon started a company-wide movement to establish a unified standard for high image quality in all of its products, from input to output, which it called the "concept of Canon's unified high-quality color system." The technical wizards behind this effort were Shuichi Kumada and Osamu Yamada — portrayed at right — and the result was the Kyuanos color management system.

Essentially, Kumada and Yamada tossed the sRGB color model and the ICC profiles, with all their limitations, out of the window and built a new system from first principles, based on color appearance modeling. Kyuanos is implemented in all Canon products; in the case of the digital cameras you are asking about, it is implemented in hardware as part of the DIGIC chip, which is at the core of all of Canon's cameras.
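Canon has not published Kyuanos's internals, but to give a flavor of the kind of first-principles step a color appearance pipeline builds on, here is a sketch of a von Kries chromatic adaptation, a canonical building block of such models. The function name and the simple diagonal scaling are my assumptions for illustration, not a description of Kyuanos itself.

```python
def von_kries_adapt(lms, white_src, white_dst):
    """Chromatically adapt cone responses from a source white point to a
    destination white point by scaling each channel independently."""
    return tuple(c * wd / ws for c, ws, wd in zip(lms, white_src, white_dst))

# A mid-gray seen under a source white adapts toward the destination white:
adapted = von_kries_adapt((0.5, 0.5, 0.5), (1.0, 1.0, 1.0), (2.0, 2.0, 2.0))
print(adapted)  # → (1.0, 1.0, 1.0)
```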

In essence, Canon's image processing is so good because they have been at it consistently for more than 25 years. The people behind it have become so good at what they are doing, that part of Kyuanos was even adopted by Microsoft for their Windows Vista operating system.

As I mentioned in my previous post, grooming people to excel as leaders is a difficult task, but it is a crucial one for technology companies. In Canon's case, in Phase III of its Excellent Global Corporation Plan, one of the key strategies is to nurture truly autonomous individuals to promote everlasting corporate innovation, which it expresses as follows:

For Canon to become a world-class company, our employees must strive for excellence. From a human-resource development standpoint, we will further enhance our education and training programs to cultivate capable employees who are trusted by society, and encourage employees to put into practice Canon's "Three Selfs" guiding principle. At the same time, we will step up efforts to develop insightful global leaders and business managers who actively contribute to not only progress at Canon, but also to the business world and society as a whole.

In the case of science and technology, this results in the Canon Academy of Technology with the theme Specialists Cultivating Technology.

Tuesday, October 9, 2007

The arcane art of leadership gestation

Today I will lift the kimono a bit to give you a glimpse of this aspect of governance. I barely have enough time to stay alive, so I apologize for using a compact European writing style instead of the more eloquent American style I am supposed to use in this blog. The occasion is today's Nobel Prize announcement.

When I used to have work assignments in corporate governance, the only business book that really helped me was Gordon Bell's High-Tech Ventures. Of course the most important lesson was on how to organically grow a balanced company, but there was also the lesson on the pygmy principle and how to build the company's leadership team.

In most herd animals, leaders are selected in duels. Early on, however, humans developed the art of gestating — or grooming, in Silicon Valley lingo — leaders. It probably started with shamans, but by the time of Egypt's first dynasties it was already a well-developed, structured, and formal process assigned to the monasteries, an institution the Pharaohs most likely invented for this specific purpose.

In Far Eastern cultures the main contributor to this art was Confucius, who coined the term "naming names" for what here in Silicon Valley we today call pygmy hiring when it is done poorly (see, for example, Ryûichi Abé's The Weaving of Mantra — Kûkai and the Construction of Esoteric Buddhist Discourse for a detailed historical analysis of how the specific method used for naming names profoundly influenced Japanese culture and was the germinating event for the formation of Shingon). The reason I mention this specific book is that this art always was — and in good part still is — beyond the reach of the general population, i.e., it is esoteric.

If in the past it was esoteric, today it is mostly based on wisdom and implicit knowledge, which allow the leader gestator to extrapolate current trends, assign them as directions to follow, select gifted individuals, nurture them, and finally, when they have achieved, laud them publicly so society can follow their example.

When are the gestators themselves recognized? Quietly, when they have successfully predicted leadership. Today, out of sight, a former HP Labs director and the members of a selection committee in Japan are quietly celebrating their successful early identification of leaders.

Today the event is giant magnetoresistance (GMR). You can read about it and its inventors all over today's press and blogosphere, because they just received the Nobel Prize in physics.

Recognition goes to Chuck Moorhouse, who recognized its merits early on and had HP pursue research on this theme.

Recognition goes to Koichi Kitazawa, Takehiko Ishiguro, Hidetoshi Fukuyama, Tatsuo Izawa, Tetsuya Osaka, Katsuaki Sato, Junichi Sone, and Kohei Tamao for recognizing the importance of this basic research in inspiring innovative devices and awarding the 2007 Japan Prize.

And now let's close the kimono and move over to Albert Fert and Peter Grünberg, and their laudation.

Monday, October 8, 2007

Color stereoscopic images

Researchers in Israel have shown that we perceive 3-D color images even when we are presented with only one color image in a stereoscopic pair, with no depth perception degradation and only limited color degradation.

The latest print issue of SPIE's Optical Engineering, dated August 2007 (Volume 46, Issue 8), has an interesting article on page (or should I write Citation Identifier, CID) 087003, titled Color stereoscopic images requiring only one color image. This paper is a beautiful piece of color psychophysics, in which the experiments were conducted both with a 1905 stereoscope and with a state-of-the-art head-mounted display (HMD).

Stereoscopic images yield much improved depth perception and operator performance. However, the amount of information transmitted is doubled. Obviously the left and right images contain a lot of redundant data, and various methods to compress motion images have been proposed to reduce the data stream, though at a considerable computational cost.

The authors asked themselves whether the human visual system's fusion capability can be exploited by keeping the color of only one eye's image and processing the image for the other eye in luminance alone. This would cut down both device cost and data volume before any compression is performed.

Indeed, the psychophysics results show that subjects perceived 3-D color images even when they were presented with only one color image in a stereoscopic pair, with no depth perception degradation and only limited color degradation in the form of a loss in vividness.
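To make the proposed data reduction concrete, here is a minimal sketch: one eye's image is kept in full color, and the other is reduced to a single luma sample per pixel. The Rec. 601 luma weights below are an assumption for illustration; the paper's exact processing may differ.

```python
def to_luminance(pixels):
    """Replace each (r, g, b) pixel with a single luma value (Rec. 601 weights)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]

left = [(200, 100, 50), (10, 20, 30)]  # full-color image for one eye
right = to_luminance(left)             # luminance-only image for the other eye
# Per pixel, the second image now carries one sample instead of three,
# cutting the raw stereo payload by a third before any compression.
```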

Monday, October 1, 2007

Mini review. How Canon got its flash back

Published in 2004, this book is not new. However, it was published by John Wiley Asia in Singapore, so unless, like me, you periodically check out a Kinokuniya bookstore, you probably never came across it.

How Canon got its flash back was written by the editorial team of NIKKEI, which stands for Nihon Keizai Shimbun, Inc., the Japanese equivalent of Dow Jones here in the U.S.

In my opinion, the title promises more than the book delivers, because it is not a critical business review of Canon, as we are used to getting when we read similar American books about high-tech companies. In fact, the subtitle The innovative turnaround tactics of Fujio Mitarai would have been a much more appropriate title, because the book is a laudation of Fujio Mitarai.

Indeed, we learn about the positive changes Fujio Mitarai has introduced, like a better integration of the hundreds of companies that make up Canon, the introduction of a consolidated balance sheet, accountability, the ability to get an up-to-date status of the company, and the restoration of lifetime employment.

Here and there the book relates to the reader Fujio Mitarai's thoughts on various corporate governance topics, like the appointment of external directors, the role of auditors, and how you implement meritocracy in a traditional Japanese company.

Regarding manufacturing—which is a key Canon competency—the book explains in detail how the cell method and ma-jime (closing the gap) were introduced and how they paid off (Chapter 2).

What we are never told in this book is what happened before Fujio Mitarai. On page 155 we learn that "The rapid appreciation of the yen in 1986 led to a sharp drop in the company's profitability. When this was then compounded by the deflating of the bubble in the Japanese domestic economy, the period from the mid-1980s to the mid-1990s turned into something of a 'lost decade' for Canon."

This concept of the lost decade comes up several times in the book, but we are never given a satisfactory explanation. In fact, the bubble did burst in 1993, and Canon had a very rough time, with layoffs and abysmal employee morale. However, this cannot be the whole story.

Reading the book, we are left with the impression that the lost decade was more akin to the Warring States period in Japan, also known as the Sengoku period. The book should have had a chapter on this lost decade, answering the many questions the book leaves open. Indeed, while the book covers in detail the period of Canon's first president, Takeshi Mitarai, it is completely silent about the presidents between the founder and Fujio Mitarai: Takeo Maeda (1974-), Ryuzaburo Kaku (1977-), Keizo Yamaji (1989-), and Hajime Mitarai (1993-).

Did they screw up? Were they unable to control the "war lords"? If so, who were these war lords? During the lost decade, when I asked Canon Inc. employees why something had happened, the standard answer was to watch Ran (Chaos), and then I would understand. I got an idea, but I did not really understand who King Lear was and who Hidetora's sons were.

So the book has these strange voids, such as the Central Research Lab being like a magic castle that suddenly disappeared from Atsugi only to reappear in remote Susono in Shizuoka prefecture, beyond Hakone. Was there carnage like when Oda Nobunaga destroyed the Enryaku-ji monastery in 1571?

Why was the Central Research Lab not moved to the Shimomaruko campus, as Yamaji did with the Headquarters? From the book we gather that the scientists must have been more unruly than Enryaku-ji's sôhei (warrior monks), because there is a whole section entitled "Discipline paramount." Why did Canon have to implement the rule of the Five Ss: proper arrangement (seiri), cleanliness (seiso), orderliness (seiton), neatness (seiketsu), and discipline (shitsuke), as well as Communal Possession and Functional Beauty?

When the authors write on page 78 that of these shitsuke is the most important, and on page 81 that a dress code had to be drawn up, which forced researchers to wear a prescribed jacket and forbade the wearing of jeans, one must think that these researchers must have been quite an unruly pack. This is difficult to understand when Canon historically has had the tradition of cultivating its staff as heroes and still continues to do so, as is evident from its Web site The Minds Behind Magic Special Interview.

Indeed, historically Canon has excelled by virtue of its principle of strategy being a top-down process and tactics being a bottom-up process. For Canon, science and technology have never been intangible assets, but always brains attached to bodies that are nurtured. Today this is exemplified by the Canon Academy of Technology, as depicted in the Web site Specialists Cultivating Technology.

Compared to HP Labs, where the emphasis is on alignment with the Divisions, in Canon's Central Research Lab the emphasis is on the creation of new Divisions (page 156). Thus, one would expect its researchers to be disruptive revolutionaries or sôhei, not disciplined soldiers. Indeed, the emphasis on discipline contradicts Canon's Three Selfs concept (page 110): self-motivation, self-management, and self-awareness.

Finally, there is the mystery of the prologue, which chronicles the exit from the PC business. This is described as the divestiture of FirePower. The FirePower system was not a business or consumer PC; it was a workstation. Its architecture, with two PowerPC processors and a signal processor, made it one of the best imaging systems available at the time, and it would have been the ideal platform for embedded systems in a high-end printer and copier architecture.

Equally mysterious is the complete lack of any reference to Canon's competitors, such as Ricoh, Fuji Xerox, Nikon, Epson, etc. Without an idea of the ecosystem in which Canon operates, it is hard to form an overall appreciation of Fujio Mitarai's merits.

Friday, September 21, 2007

Imaging Entanglement

How a conventional tool of material science — neutron beams produced at particle accelerators and nuclear reactors — can be used to produce images of the ghostly entangled states of the quantum world.

Thank you to RocketRoo for this post:

This press release from University College London shows a computer-generated image based on neutron-beam scattering of (anti-ferro)magnetically aligned electron spins which are entangled. So, now we have the complementary set as far as this blog is concerned: imaging with entanglement (e.g., quantum ghost imaging with photons), and imaging of entanglement (with neutrons).

Aside: The astute reader may be wondering how neutrons (which are electrically neutral by definition) can be used to image entangled electrons that are negatively charged. How can there be any interaction between these particles, a necessary condition for imaging anything?

Although electrically neutral (as is an atom that is not ionized), the neutron is a baryon and therefore composed of 3 quarks, 1 of which (the 'up' quark) has +2/3 the magnitude of the electron charge, while the other 2 ('down' quarks) each have −1/3. If the neutron comes close enough to an electron, the individual charges will begin to influence each other and cause scattering.
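The charge bookkeeping is easy to verify; a quick sketch in Python, with quark charges expressed in units of the elementary charge:

```python
from fractions import Fraction

# Standard-model quark charges, in units of the elementary charge e:
up = Fraction(2, 3)     # 'up' quark: +2/3 e
down = Fraction(-1, 3)  # 'down' quark: -1/3 e

# A neutron is one up quark plus two down quarks (udd):
neutron_charge = up + 2 * down
print(neutron_charge)  # 0 -- electrically neutral overall
```

The constituent charges cancel exactly, which is why only a close approach, where the electron sees the individual quarks rather than the net charge, produces scattering.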

It's also blog-worthy that just last week it was reported that the neutron has a negative charge both in its inner core and its outer region, with a positive charge sandwiched in between to make the particle electrically neutral. Previously, Fermi had proposed in 1947 (pre-quark model) that the neutron core was positive with the outer region negative.

Credits: RocketRoo

Thursday, September 20, 2007

Mini review. In sheep's clothing

This is my fourth mini review in the 301.7—terrorism @ home series. In this post I review a practical booklet that can help you if you or somebody for whom you care feels terrorized by somebody in their ecosystem.

In my first three mini reviews in this series I acquainted you with books intended to build awareness: The sociopath next door, Without conscience, and Snakes in suits. These books start by informing you that 1% of the population are psychopaths and 4% are sociopaths, hence each day you come across a psychopath and four sociopaths. After stating that they are gaining more and more acceptance in society — for example in business, where companies in transition have become psychopath-friendly — they present composite case studies to illustrate the havoc they wreak.

However, they mostly build awareness; they are not practical guides (except for hiring, in Snakes in suits). In fact, they show how difficult these people are to diagnose and tell you never to label anyone a sociopath or psychopath. Their only advice is to steer clear of them.

This is where Dr. George K. Simon's little booklet In Sheep's Clothing: Understanding and dealing with manipulative people comes in. In short, it teaches how to recognize manipulators, label them, and deal with them by being assertive.

Again, it is important to understand the concepts of personality, which derives from the Greek word persona for mask, and character, which refers to those aspects of an individual's personality that reflect the extent to which he or she has developed and maintained personal integrity and a commitment to responsible social conduct.

Dr. Simon explains how the society of the Victorian era was repressive and caused many people to become neurotic, in response to which Freud et al. developed psychology as a technique to help people overcome neurosis. In the meantime — partly through the influence of such thought leaders as Ayn Rand and her 1957 Atlas shrugged — society has become more and more permissive, but the field of psychology still hangs on to the premises of the Victorian era. The mission of his book is to help correct this situation.

The book explains how personality traits form a multidimensional space, one dimension of which is the axis of neurosis. When this axis is extended in the opposite direction, it reaches the psychopath syndrome. Dr. Simon teaches that when you consider just this portion of the axis, you do not have to use the term "psychopath", just the general trait, and therefore you can label people on this portion of the axis. This also frees you from having to make a formal diagnosis; you just recognize a general trait.

Dr. Simon uses terms like manipulators, covert-aggressive personalities, and disordered characters, which are all terms you can use informally to label people. Aggression refers to the forceful energy we all spend in our daily bids to survive, advance ourselves, secure the things we believe will bring us some kind of pleasure, and remove obstacles to those ends [p. 5]. When we fight, but not aggressively, we are assertive, and when we do not fight at all, we are neurotic. This is the axis, and Dr. Simon wants to help us stay at the healthy, neutral, assertive location. In short, if a person is making himself miserable, he is probably neurotic, and if he makes everyone else miserable, he is probably character-disordered.

neurotic personality axis

The tactics of manipulation are explained by exposing the powerful deception techniques manipulators use. Dr. Simon shows how hard it is to think clearly when someone has you emotionally on the run, and therefore even harder to recognize the tactics for what they really are. He writes: Severely disturbed covert aggressives are capable of masking a considerable degree of ruthlessness and power-thirstiness under a deceptively civil and even alluring social façade […], but even though a covert-aggressive personality can be a lot more than just a manipulator, habitual manipulators are almost always covert-aggressive personalities. The primary characteristic of covert-aggressive personalities is that they value winning over everything.

While the book's first part is about understanding manipulative personalities, the second part is about dealing effectively with manipulative people. Dr. Simon teaches you that to guard against victimization, you must:

  • be free of potentially harmful misconceptions about human nature and behavior
  • know how to correctly assess the character of others
  • have high self-awareness, especially regarding those aspects of your own character that might increase your vulnerability to manipulation
  • recognize and correctly label the tactics of manipulation and respond to them appropriately
  • avoid fighting losing battles

If you are dealing with a person who rarely gives you a straight answer to a straight question, is always making excuses for doing hurtful things, tries to make you feel guilty, or uses any of the other tactics to throw you on the defensive and get their way, you can assume you are dealing with a person who — no matter what else he may be — is covertly aggressive.

Dr. Simon concludes [p. 142]: In many arenas of life today — political, legal, corporate, athletic, personal relationships, etc. — we have become a nation of unscrupulous, undisciplined fighters, and we are greatly damaging ourselves and our society in the process. More than ever, we need to recover a guiding set of principles about how we must conduct the daily battle to survive, prosper, and succeed.

This mini review is somewhat out of line with this blog on research. I will make up for it in the next and final post in this series on 301.7—terrorism @ home with a review of current research on psychopaths.

Sunday, September 16, 2007

Positronium molecules

RocketRoo has contributed another interesting comment to the post on non-local realism of last April. A long time has passed since then, and as this comment is more of a new post than a comment, I am taking the liberty of reposting it here.

UC Riverside physicists have apparently created the first observed diatomic positronium molecule.

I suppose if I write Pi = (e+e-) for positronium [it has to be a capital pi, since lower case 'pi' is a meson, a (quark-antiquark) pair], then what they have seen is Pi2. Their formal paper has appeared in the Sept. 13 issue of Nature.

This is interesting for another reason having to do with entanglement and coherence; the subjects of this blog thread.

Positronium is basically unstable, and when it decays by falling into itself (like falling down a set of quantum stairs) it usually gives off 1, 2, 3, … photons (depending on the number of stairs). The most common decay channel is 2 photons. John Wheeler (he of the so-called "delayed-choice" interferometer, amongst other things) suggested c. 1945 that these photons should have complementary polarizations. In fact, they were the first entangled photons produced in the lab, c. 1949, by Wu and Shaknov at Columbia Univ. In today's lingo, they are type-II entangled.

Because of the annihilation energy involved, however, these are gamma-ray photons. So, we have the odd situation where it is "easier" to produce entangled gamma-photons than coherent gamma-photons! That's where the Pi2 comes in. The diatomic form occurs on a silica (sand) substrate. One goal is to get enough of these groupings on the substrate to form a BEC (see Chaotic light sources comments). That, it seems, would allow one to have more than one source emitting simultaneously and therefore phase-coherently. Voilà! The gamma-ray laser.
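For scale: in the dominant two-photon channel each photon carries essentially the electron rest energy (the ~6.8 eV positronium binding energy is negligible), which puts the photons deep in the gamma-ray band. A back-of-envelope sketch with rounded constants:

```python
# Energy and wavelength of the photons from two-photon positronium
# annihilation, ignoring the ~6.8 eV binding energy. Rounded constants.
m_e_c2_keV = 511.0          # electron rest energy, keV
h = 6.626e-34               # Planck constant, J*s
c = 2.998e8                 # speed of light, m/s
J_per_keV = 1.602e-16       # joules per keV

E_photon = m_e_c2_keV       # each photon carries one electron rest energy
wavelength = h * c / (E_photon * J_per_keV)
print(f"{E_photon:.0f} keV, {wavelength * 1e12:.2f} pm")  # 511 keV, 2.43 pm
```

A 2.43 pm wavelength (the electron Compton wavelength) is some five orders of magnitude shorter than visible light, which is why a phase-coherent source at this energy — the gamma-ray laser mentioned above — would be so remarkable.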

From this I can't tell what the binding orbitals are, how the diatoms bind to the substrate, or what temperatures apply. Perhaps someone who takes a look at the Nature paper can report on that.

Credits: RocketRoo

Wednesday, September 12, 2007

Retinoid metabolism in the eye

Our regular reader RocketRoo has recently contributed an interesting comment to the post on non-local realism of last April. A long time has passed since then, and as this comment is more of a two-part post than a comment, I am taking the liberty of reposting it here. This is the second part:


The arrangement of the retina is like connecting a bunch of CCDs such that all the connecting wires lie in front, between the light source and the detectors.

The metabolism behind photo-detection in the eye involves a kind of charge-discharge cycle, similar to the ATP (adenosine triphosphate) cycle used in bioluminescence (photo-production vs. photo-detection), e.g., in fireflies. The chemical energy barrier is lowered via the clever use of enzymes (luciferase in the case of the firefly). In vision chemistry, the enzyme is lecithin:retinol acyltransferase (aka LRAT).

Vitamin A and retinene, the carotenoid precursors of rhodopsin, occur in a variety of molecular shapes, cis-trans isomers of one another. For the synthesis of rhodopsin a specific cis isomer of vitamin A is needed. Ordinary crystalline vitamin A, like the commercial synthetic product, is primarily the all-trans isomer, aka all-trans-retinol, and is ineffective. The -ol ending means the molecule overall acts like an alcohol. It is synthesized in the human body from precursor compounds like beta-carotene (a carotenoid), which is why carrots are said to improve night vision. The major role of vitamin A in the eye is to provide the chromophore of the visual pigment, the molecule responsible for the detection of incoming photons.

The cis-trans conversion in rhodopsin occurs in picoseconds!

Esterification is the process of combining an alcohol with an acid. An ester can be thought of as the organic analog of a salt. An inorganic salt is formed by reacting a base (e.g., sodium hydroxide) with an acid (e.g., sulfuric acid) to produce sodium sulphate and water. In biological systems, the acid is often a carboxylic acid (e.g., vinegar: acetic acid) and the base is replaced by an alcohol (in the organic chemistry sense). The esterification of ethanol (common "alcohol") and acetic acid produces ethyl acetate, which gives certain wines their fruity aroma.

The visual pigment is composed of a chromophore, 11-cis-retinal (the corresponding aldehyde), covalently linked to a protein, opsin, and is concentrated in the outer parts of the rod and cone photoreceptors; the cells responsible for the conversion of light to an electrical signal. Light isomerizes the rhodopsin retinyl chromophore into an all-trans configuration. The chromophore is released and reduced in the rod to form all-trans-retinol. All-trans-retinol is transported to the retinal pigment epithelial cells, where it is esterified by LRAT. All-trans-retinyl esters are stored in the retinosomes and/or utilized for production of 11-cis-retinol through enzymatic hydrolysis and isomerization. Oxidation of 11-cis-retinol to retinal, the subsequent transport to rod outer segments, and binding to opsin complete the cycle.

Credits: RocketRoo

Two-Photon Microscopy

Our regular reader RocketRoo has recently contributed an interesting comment to the post on non-local realism of last April. A long time has passed since then, and as this comment is more of a two-part post than a comment, I am taking the liberty of reposting it here. This is the first part:

A very interesting example of non-local realism appears in the paper entitled "Two-Photon Microscopy: Shedding Light on the Chemistry of Vision" (Biochemistry 2007, v46, 9674-9684). Since it is written by chemists, the going is a little tough in parts, so here are some way-points for the interested reader:


Fluorescence typically involves single-photon production from a particular atomic transition in either inorganic or organic materials. TPEM relies instead on the simultaneous absorption of two photons. The key point is that, unlike ordinary fluorescence microscopy, TPEM enables 3-D imaging of living tissues and has the potential to allow noninvasive study of biochemical processes in vivo.

The TPEM effect was predicted in 1930 by Max Born's (female) student, Maria Göppert-Mayer.

TPEM circumvents the high phototoxicity and the limited penetration depth of UV light. In addition, imaging using two-photon excitation sidesteps the need for expensive optics optimized for UV excitation and suffers less from chromatic aberration problems.

Phototoxicity and fluorophore bleaching can sometimes present a significant problem for confocal microscopy, as the intense light is shone repeatedly through the specimen. Since 1990, TPEM has revolutionized the (in vivo) study of biological structure and function by exciting fluorophores in biological specimens through the simultaneous absorption of two IR photons. This is achieved by focusing an infrared laser beam (700-1100 nm) on the specimen, so that the high concentration of photons at the focal plane substantially increases the probability of the simultaneous absorption of two photons by a molecule of the fluorophore. In TPEM, the requirement of a high infrared light intensity necessitates the use of a laser (e.g., Ti:Saph). The near-IR and red (600-700 nm) regions are considered to be the “optical window” of cells and tissues.

A variant of TPEM is called Second Harmonic Imaging Microscopy (SHIM). SHIM refers to the induction of a nonlinear polarization by the incident light that results in the production of photons at half the wavelength. This effect seems remarkably similar to the production of type-II entangled photons by spontaneous down-conversion. (See the non-local realism discussion above.)

Collagen and elastin emit enough fluorescence to provide suitable contrast for imaging. In the case of the eye, SHIM imaging has been used to investigate the organization of the collagen in the cornea and the sclera.

Credits: RocketRoo