On December 1st, I wrote about the story behind data glyph technology and the preservation of digital images (here is the link). As it happens, two papers at the Electronic Imaging Symposium two weeks ago presented recent progress on this technology.
Gaurav Sharma's presentation disclosed an extension of Tom Holladay's rotated dots from grayscale to full color. The authors were particularly concerned about the æsthetic quality of the images with the added payload. They conclude in their paper:
In this paper, we present a high capacity image barcode scheme for applications that require both high capacity and pleasing visual appearance of the encoded region. The scheme combines orientation modulation based data encoding on per-channel basis and color separation. We demonstrate that significant performance improvements can be obtained in terms of embedding rates by sacrificing image fidelity in favor of embedding robustness. Our simulation and experimental results indicate that dot orientation modulation based data embedding can achieve high embedding rates and well suited for per-colorant channel based data encoding in printed documents.
Link to the paper: http://dx.doi.org/10.1117/12.872215. Citation: Orhan Bulan, Basak Oztan and Gaurav Sharma, "High capacity image barcodes using color separability", Proc. SPIE 7866, 78660N (2011); doi:10.1117/12.872215.
Robert Ulichney's presentation described a system specifically for solving the image bit rot problem. The method presented works in grayscale, so the payload is carried not in the image's own halftoning but in a logo or other monochrome ornamental artifact.
The novelty is that the halftoning method is not Tom Holladay's rotated dots but a new algorithm called stegatones. Compared to the rotated dots, which allow a binary code, stegatones consist of 1-bit to 3-bit carriers, thus allowing a much higher capacity payload. The authors conclude:
We have improved on the scheme reported earlier for hardcopy image backup by embedding metadata into a steganographic halftone object. The advantages of this approach are:
- a better æsthetic presentation of the photo archive
- the elimination of the need to solve the complex OCR problem
- a more compact representation of the color tiles and metadata
- a layout for which auto-alignment is easier and thus the data is more recoverable
Building on the original motivation to use an analog hardcopy means of long-term image storage, our solution transcends hardware obsolescence by requiring any means of scanning the data coupled with the recovery software. While we can predict that hardware for reading digital storage media will likely not be available decades from now, some means of hardcopy scanning will be. So our strategy shifts the need to archive recovery hardware, to archiving recovery software. Long term recovery then depends on the availability of generic source code that includes means to read the accompanying stegatone.
Unfortunately, the authors do not address the requirement to preserve a system capable of running the recovery software, so we are still stuck in the PhotoCD problem.
Link to the paper: http://dx.doi.org/10.1117/12.872612. Citation: Robert Ulichney, Ingeborg Tastl and Eric Hoarau, "Analog image backup with steganographic halftones", Proc. SPIE 7866, 78661I (2011); doi:10.1117/12.872612.
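The stegatone principle, carrying bits in where a dot cluster sits rather than in how much ink is laid down, can be sketched with a toy example. This is an illustration only, not the published algorithm: real stegatones perturb the clusters of an actual clustered-dot screen and carry 1 to 3 bits per cell depending on the local gray level, whereas this sketch uses a fixed 2×2 cluster at one of four offsets in a 4×4 cell, i.e. a flat 2 bits per cell.

```python
# Toy sketch of the stegatone idea: a clustered-dot halftone cell
# carries data in the *position* of its dot cluster, so the tone
# (ink coverage) is untouched.  Each 4x4 cell holds a 2x2 cluster
# whose top-left corner sits at one of four offsets = 2 bits/cell.

OFFSETS = [(0, 0), (0, 2), (2, 0), (2, 2)]  # 2-bit symbol -> cluster corner

def make_cell(symbol: int) -> list:
    """Return a 4x4 binary cell with a 2x2 ink cluster at the offset."""
    r0, c0 = OFFSETS[symbol]
    cell = [[0] * 4 for _ in range(4)]
    for r in range(r0, r0 + 2):
        for c in range(c0, c0 + 2):
            cell[r][c] = 1
    return cell

def read_cell(cell: list) -> int:
    """Recover the symbol by locating the cluster's top-left corner."""
    for symbol, (r0, c0) in enumerate(OFFSETS):
        if cell[r0][c0] == 1:
            return symbol
    raise ValueError("no cluster found")
```

Every cell contains exactly four on-pixels regardless of the symbol, which is why the embedded data does not alter the perceived gray level.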
If you missed the conference, you can easily read the two papers by downloading them from the two links above. What you would have missed, however, is the conversation in the hall after Ulichney's talk. Actually, Elvis had already left the building when a conversation started with Reiner E. from Rochester and Keith K. from Kihei.
We were wondering how far back this and the related technologies go. Reiner now has the date: 1982. During his first visit to the DGaO Conference (Deutsche Gesellschaft für angewandte Optik e.V.), he got a 'free ride' in exchange for operating the slide projector.
The talk argued as follows: since digital storage is too expensive and cumbersome :-), and since it is always better to store in human-readable form (all other forms will disappear over time), create a system that stores digital data in a human-readable format. The data came from satellite images (or some other high-quality imaging system).
Each data pixel (M bits, with M > N) was converted to an N-bit signal; those N bits serve as the human-readable signal and are directly converted into an 'explicit' halftone, meaning each pixel gets its own halftone cell with the corresponding number of elements set to "on". Since M > N, the map is many-to-one, so a lookup table of explicit halftones is built in which all M-bit values that map to the same N-bit level have the identical number of 'on' dots but in different spatial arrangements; the arrangement carries the remaining M − N bits. Such a system was known in the digital field, but Reiner is not sure about its name. It is a less than optimal system in terms of information density.
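The lookup-table scheme above can be made concrete with a small Python sketch. The parameters are invented for illustration (M = 4-bit pixels, N = 2-bit visible tone, a 3×3 cell); the original 1982 system's actual cell size and bit depths were not given.

```python
# Toy "explicit halftone" lookup table under assumed parameters:
# M = 4-bit pixel values, N = 2-bit visible tone.  The high N bits fix
# *how many* dots are on in a 3x3 cell (the human-readable gray
# level); the remaining M - N bits select *which* same-count dot
# arrangement is used, so the map from 4-bit values to cells is
# invertible even though the tone alone is many-to-one.

from itertools import combinations

POSITIONS = [(r, c) for r in range(3) for c in range(3)]

def build_lut():
    """4-bit value -> frozenset of on-dot positions, plus the inverse."""
    lut, inv = {}, {}
    for value in range(16):
        level, arrangement = value >> 2, value & 0b11
        n_dots = level + 2                  # tone: 2..5 dots out of 9
        dots = frozenset(list(combinations(POSITIONS, n_dots))[arrangement])
        lut[value] = dots
        inv[dots] = value
    return lut, inv

LUT, INV = build_lut()

def encode_pixel(value: int):
    """M-bit value -> set of cell positions to ink."""
    return LUT[value]

def decode_cell(dots) -> int:
    """Recover the M-bit value from a scanned cell's dot positions."""
    return INV[frozenset(dots)]
```

Counting the dots alone recovers only the N-bit tone; the decoder needs the same table to read the arrangement, which is exactly why the information density is suboptimal: most of the cell's combinatorial capacity goes unused.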