Wednesday, November 7, 2007

digital photo workflow for the rest of us

Recently a colleague a few cubicles down showed me some prints he had made on his HP Photosmart Pro B9180. I was impressed with the image quality, and I am wondering whether the time is ripe for the rest of us to switch from AgX to digital photography. You get your print in just 90 seconds, and as Ingeborg Tastl's fade simulator illustrates, the permanence and durability are excellent.

Let me first explain what I mean by "the rest of us." In these times of transitioning organizations, the masses have become "transitioning consumers," obediently updating their gadgets every time a new release comes out. Besides the fact that for us researchers in transition such a strategy is not affordable, it is also not meaningful from a quality point of view.

In fact, unless you are a commercial photographer, to be competitive you must be able to produce a gallery-quality print in a reasonable amount of time. In my darkroom, I can crank out a 12x16 inch print every 10 minutes, including cropping and exposing a black frame to separate the photograph from the white frame. I use an old Focomat and a Gossen exposure meter, so I do not need to focus or make exposure trials.

This means that it is not only a pecuniary question but also a question of being skilled with the equipment. Developing implicit knowledge takes time and perseverance. Therefore, you cannot be a transitioning consumer; you have to do it the old way, upgrading only at paradigm shifts or technology disruptions.

For a long time I shot with a Nikon FM titanium and manual fixed-focal-length lenses. Then came the time when my eyes had gotten sufficiently old that I was no longer able to focus instantaneously, so I had to make a paradigm shift to an F100 and a couple of autofocus lenses.

Today, digital cameras offer a substantial speed advantage over traditional cameras in that they use imaging technology to do quite a lot of processing between the time you press the release and the moment the image is recorded. In fact, the camera first samples an ambient-light picture, then fires one or two measurement flash bursts, compares the two images, and then computes the ideal exposure parameters and flash duration. All this takes place during the few milliseconds it takes to flip up the mirror, a fraction of the time it takes to control contrast with filters the old way.
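The measurement logic can be sketched as simple arithmetic. This is a toy model of pre-flash metering, not Nikon's actual algorithm; all names and numbers here are my assumptions:

```python
def flash_duration(ambient, preflash, target, preflash_duration_ms=1.0):
    """Toy model of through-the-lens pre-flash metering.

    ambient, preflash: mean sensor readings (arbitrary linear units) of the
    ambient-light frame and the frame taken during a short measurement burst.
    target: the reading that would give ideal exposure.
    Returns the main-flash duration in milliseconds, assuming light output
    scales linearly with duration.
    """
    flash_contribution = preflash - ambient  # light added by the burst
    deficit = target - ambient               # light still needed
    if deficit <= 0:
        return 0.0                           # ambient light already suffices
    return preflash_duration_ms * deficit / flash_contribution

# A 1 ms burst raised the reading from 20 to 60; to reach 100 we need
# twice that flash energy, hence a 2 ms main flash.
flash_duration(20, 60, 100)  # -> 2.0
```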

In summary, printing technology and camera technology are both ready for a paradigm shift. What I am not sure about is the workflow. As things stand now, a digital workflow is an order of magnitude slower than a wet-chemistry workflow. Maybe you can tell me what I am doing wrong, so let me describe what I have found out.

I downloaded a number of trial programs (fully functional software with a 30-day expiration date), only to discover that little of it is practical.

Let me start with my hardware. For my tests I am using a D70 body I had around for producing legal documentation. My PC has a 667 MHz processor, 512 MB of RAM, and a 133 MHz bus, which are all sufficient for my day job as a researcher in computational color science.

Since today's sensors have a bit depth of 12 bits per pixel and today's LCD panels also have a bit depth of 12 bits per pixel, it does not make sense to use a JPEG workflow, which today requires retinexing the image down to 8 bits per pixel. Yes, ISO is adding a new layer to JPEG for high dynamic range (HDR) images and is considering JPEG XR, but they are not yet out and hence not implemented in any cameras.
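The loss is easy to quantify: an 8-bit JPEG can hold only 256 of the 4096 levels a 12-bit sensor delivers. A minimal sketch of linear requantization only, ignoring gamma and tone mapping:

```python
def to_8bit(value_12bit):
    """Linearly requantize a 12-bit sample (0..4095) to 8 bits (0..255)."""
    return value_12bit >> 4  # drop the 4 least significant bits

# 16 adjacent 12-bit highlight levels collapse into a single 8-bit level:
levels = {to_8bit(v) for v in range(4080, 4096)}  # -> {255}
```

This is why subtle tonal gradations that survive in the raw file can band or block up once the image is squeezed into 8 bits.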

Therefore, there is no choice but to use a "raw" workflow, in which the raw bits from the sensor are processed directly. Instead of raw file, the term "digital negative" is also used.
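"Processing the raw bits" means, at minimum, demosaicing the sensor's Bayer pattern, where each photosite records only one color. A toy demosaic for an RGGB pattern, averaging each 2x2 cell into one RGB pixel; real converters use far more sophisticated interpolation that preserves full resolution:

```python
def demosaic_rggb(raw, width, height):
    """Toy demosaic: raw is a flat list of sensor values laid out in an
    RGGB Bayer pattern. Each 2x2 cell (R, G / G, B) becomes one RGB pixel,
    averaging the two green photosites. Returns a list of (r, g, b) tuples."""
    pixels = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            r = raw[y * width + x]
            g1 = raw[y * width + x + 1]
            g2 = raw[(y + 1) * width + x]
            b = raw[(y + 1) * width + x + 1]
            pixels.append((r, (g1 + g2) / 2, b))
    return pixels
```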

There are a number of quite powerful photo management programs for the consumer market. However, I quickly found out that when you throw a CompactFlash card full of raw images at them, they quickly choke and become unusable.

At the other end of the spectrum, professional photo management programs are unusably slow on my PC. It appears you need at least a quad-core processor, some 4 GB of RAM, and a 1.5 GHz bus — not everybody's kind of iron.

What I found to be usable is Nikon Transfer for downloading the images. The main advantages are that it completes the embedded XMP/IPTC metadata, renames the files to a systematic archival name, and automatically stores backup copies on a second medium, which for my legal documentation images is more important than a conventional backup.

As you can see in the embedded EXIF metadata in the image below, this program has a bug in dealing with the date and time. During the transfer it correctly updated the clock in the camera, but it did not correct the time stamp in the EXIF data by the changed amount, which happens to be 8 hours because I had forgotten to reset the Daylight Saving Time setting and the offset from Coordinated Universal Time (UTC) for my current location.
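Until such a bug is fixed, the time stamps can at least be repaired after the fact. EXIF stores DateTimeOriginal as a plain `YYYY:MM:DD HH:MM:SS` string, so shifting it by a fixed error is simple arithmetic; this sketch shows only the correction itself, since writing the value back into the file requires an EXIF library:

```python
from datetime import datetime, timedelta

def shift_exif_timestamp(stamp, hours):
    """Shift an EXIF DateTimeOriginal string ('YYYY:MM:DD HH:MM:SS')
    by a given number of hours and return it in the same format."""
    t = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")
    return (t + timedelta(hours=hours)).strftime("%Y:%m:%d %H:%M:%S")

# Undo an 8-hour error, correctly rolling back across the date boundary:
shift_exif_timestamp("2007:11:07 06:15:30", -8)  # -> "2007:11:06 22:15:30"
```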

digital negative developed with ViewNX

The next step in the workflow is to individualize the metadata, tag the images, organize them in folders, and then convert copies to JPEGs that can be thrown into a consumer-level photo management program. Since I keep my negatives in binders with their contact sheets, I also want contact sheets of my digital images, so I can keep them in a binder for quick browsing.
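The contact sheet layout itself is just arithmetic. A sketch computing thumbnail positions on a letter-size sheet; the cell sizes and margins are my assumptions, and actual rendering would use an imaging library:

```python
def contact_sheet_layout(n_images, page_w=8.5, page_h=11.0,
                         cell_w=1.7, cell_h=1.25, margin=0.25):
    """Return (column, row, x, y) placements in inches for each thumbnail,
    filling the page left to right, top to bottom."""
    cols = int((page_w - 2 * margin) // cell_w)
    rows = int((page_h - 2 * margin) // cell_h)
    slots = []
    for i in range(min(n_images, cols * rows)):
        c, r = i % cols, i // cols
        slots.append((c, r, margin + c * cell_w, margin + r * cell_h))
    return slots
```

With these defaults a letter page holds a 4x8 grid of 32 thumbnails, close to the 36 frames of a roll of film.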

Since Adobe did a good job with XMP, I would like to have all the metadata in the image file, not in a separate database or in sidecar files. Essentially the software for this workflow step should just be a veneer over the operating system's native file system.

Adobe's Bridge is attractive, but it is designed for sharing files between Creative Suite programs and does not fit well in the workflow I have come up with so far.

Nikon's ViewNX is a better fit, but it has some quirks. For example, when converting the image for the above figure and specifying this blog's column width as the target size, it changed the size to 640x426 pixels. Also, the conversion to JPEG is so slow that I have to let it run as a batch job overnight. However, organizing the images is very fast, because the program appears to use the preview image in the metadata instead of rendering the raw image.
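One plausible explanation for the 640x426 size, assuming ViewNX preserves the aspect ratio, is the D70's 3:2 frame: the sensor delivers 3008x2000 pixels, and scaling to a 640-pixel width forces the height to round to 426. The arithmetic:

```python
def fit_width(orig_w, orig_h, target_w):
    """Scale an image to a target width, preserving the aspect ratio."""
    return target_w, round(orig_h * target_w / orig_w)

# D70 frame (3008x2000) scaled to a 640-pixel column width:
fit_width(3008, 2000, 640)  # -> (640, 426), since 2000 * 640 / 3008 = 425.53
```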

Before the images are converted, there should be a step for manipulating them. For example, I like to apply an unsharp mask, do some minor local contrast manipulation, and correct the inevitable optical distortions of some lenses. I tried to use Capture NX, but on my PC it is way too slow, and the program also appears to have a memory allocation problem, because often the image becomes just a black rectangle.
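Unsharp masking itself is simple enough to sketch: blur the image, then add back a scaled difference between the original and the blur. A one-dimensional toy version using a 3-tap box blur; real implementations work in two dimensions with a Gaussian blur and a threshold:

```python
def unsharp_mask(samples, amount=0.5):
    """1-D unsharp mask: sharpen by adding back the scaled difference
    between each sample and a 3-tap box blur of its neighborhood."""
    n = len(samples)
    out = []
    for i in range(n):
        left = samples[max(i - 1, 0)]       # clamp at the edges
        right = samples[min(i + 1, n - 1)]
        blurred = (left + samples[i] + right) / 3
        out.append(samples[i] + amount * (samples[i] - blurred))
    return out
```

Flat regions pass through unchanged, while edges get the characteristic overshoot on both sides that our eyes read as increased sharpness.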

At this point I am stopping, because I really would like to hear about your experience. What did you try? What are you happy with?