Thursday, March 23, 2017

Breaking the barriers to true augmented reality

Today, when you run a job on a digital press, you just turn it on, load the stock, and start printing. An army of sensors and feedback loops working with mathematical models sets up the press. In the heyday of color printing, the situation was very different: skilled press operators would spend hours making the press ready, with only a densitometer and their eyes. It took them years of experience to achieve their master status.

A big breakthrough came in 1968, when Felix Brunner invented the print control strip, which turned press make-ready from a magic ceremony into more of a technical process. Felix Brunner lived in Corippo, in the Val Verzasca.

Corippo seen from Fiorenzo Scaroni's rustico in Lavertezzo. © 13 July 2003 by Yoko Nonaka

Corippo is a beautiful village, but it had been largely abandoned by emigrants like the family of Michael Silacci of the Opus One Winery, whose grandparents came to California and never went back. Corippo is still the smallest municipality in Switzerland, with a population of just 13.

Corippo is so stunning that in 1975 it became a protected heritage village. This was quite difficult because the village had become dilapidated. Switzerland raised the funds to transform it into a state-of-the-art modern village that would attract sophisticated residents like Felix Brunner. The challenge was to rebuild it to modern architectural standards without changing its atmosphere and look.

The architecture department at the ETH in Zurich built a 3D model of the entire village, then one by one they started rebuilding the interiors of the houses to the state of the art. The department acquired an Evans & Sutherland Picture System, and at each planning step the commission walked through the virtual village to ascertain that nothing changed the outdoor spirit. For example, if a roof was raised, it was not allowed to cast new and unexpected shadows. If a window was changed, the character of the street was not allowed to change for a passerby, and the view had to feel original from any window.

Although the Picture System was limited to 35,000 polygons, the experience was truly impressive for the planners. If you have a chance to visit Corippo, you will be surprised by how well it was realized. The system was such a breakthrough for urbanists that UNESCO used it for the restoration of Venice. I was also sufficiently impressed to sit down and implement an interactive 3D rendering system, although on a PDP-11 with 56 KB of memory running RT-11, I could only display wireframes.

My next related experience was in 1993, when Canon had developed a wearable display and was looking for an acquirer of the rendering software. While the 1975 system for Corippo rendered coarse polygons, by the early 1990s it was possible to do ray tracing, although it required an SGI RealityEngine for each eye. One application was training astronauts to build a space station.

On the quest to find an interested party for the software, I had the chance to visit almost all companies in the San Francisco Bay Area that were developing wearable displays. Using ray tracing instead of rendering plain solid-color polygons made the scene feel more natural, but the big advantage over the Picture System was being immersed in the virtual scene instead of looking at a display.

There were still quite a few drawbacks. For one, the helmets felt like they were made of lead. The models were still crude: to follow the head movements, the refresh rate should ideally have been 90 Hz, but even with simple scenes it was typically just 15 or 30 Hz. The worst perceptual problem, however, was the lag, which upset the physiological equilibrium system and caused motion sickness. Another positive development was the transition from the dials and joysticks of 1975 to gloves providing a haptic user interface.

People from my generation spent 13 years in school learning technical drawing, which allows us to mentally visualize a 3D scene from three orthographic projections or from an axonometric projection with perspective. In general, however, understanding a 3D scene from projections is difficult for most people. The value of an immersive display is that you can move your head and thus decode the scene more easily. Consequently, there is still high interest in wearable displays.

Today, a decent smartphone with a CPU, GPU, and DSP has sufficient computing power to do all the rendering necessary for a wearable display. The electronics are so light that they fit in a pair of big spectacles that are relatively comfortable to wear and affordable for professionals to buy. Last year, Bernard Kress predicted that 2017 would be the year of the wearable display, with dozens of brands and prices affordable for consumers. Why is it not happening?

On March 14, 2017, Prof. Christian Sandor of the Nara Institute of Science and Technology (NAIST) gave a talk titled Breaking the Barriers to True Augmented Reality at SCIEN at Stanford, where he suggested the problem might be that today's developers are not able to augment reality so that the viewer cannot tell what is real. He showed the example of Burnar, in which flames are mixed with the user's hands: some users had to interrupt the experiment because their hands felt too hot.

Christian Sandor, Burnar

True AR has the following two requirements:

  1. undetectable modification of the user's perception
  2. the goal of a seamless blend of the real and virtual worlds

On a continuum from manipulating atoms with controlled matter to manipulating perception with implanted AR, current systems should aim for surround AR (a full light-field display) or personalized AR (a perceivable subset). In a full light-field display, the display functions as a window, but with the problem of matching accommodation and vergence. Personalized AR is a smarter approach: the human visual system is measured and only a subset of the light field is generated, reducing the required display pixels by several orders of magnitude.

In many current systems, the part of the image generated from a computer model is just rendered as a semitransparent blue overlay, hence it is perceived as separate from the real world. True AR requires a seamless blend. The most difficult step is the alignment calibration with the single point active alignment method (SPAAM). The breakthrough from NAIST is that they need to perform SPAAM only once: after that, they use eye tracking for calibration.
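To give an idea of what SPAAM computes, here is a minimal numpy sketch of the classic formulation: the user repeatedly aligns a crosshair on the display with a tracked 3D point, and the collected 2D-3D pairs are used to solve for a 3x4 projection matrix by direct linear transformation. This is only an illustration of the general method, not NAIST's implementation; the function names and the choice of solver are my assumptions.

import numpy as np

def spaam_projection(world_pts, screen_pts):
    # world_pts: (N, 3) tracked 3D points; screen_pts: (N, 2) display
    # positions the user aligned with them; needs N >= 6 pairs.
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The 3x4 projection is the right singular vector with the smallest
    # singular value (the null-space direction of A).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, point3d):
    # Map a tracked 3D point to display coordinates with the calibration.
    x = P @ np.append(point3d, 1.0)
    return x[:2] / x[2]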

The technology is hard to implement. The HoloLens has solved the latency problem, but Microsoft has invested thousands of man-years in developing the system. The optics are very difficult, and only a few universities teach the subject.

Thursday, February 9, 2017

mirror mirror on the wall

Last November, I mentioned an app that makes you look like you are wearing makeup when you are in a teleconference. Now Panasonic lets you take it a step further: a new mirror analyzes the skin on your face and prints out makeup that you can apply directly to your face.

The aim of the Snow Beauty Mirror is “to let people become what they want to be,” said Panasonic’s Sachiko Kawaguchi, who is in charge of the product’s development. “Since 2012 or 2013, many female high school students have taken advantage of blogs and other platforms to spread their own messages,” Kawaguchi said. “Now the trend is that, in this digital era, they change their faces (on a photo) as they like to make them appear as they want to be.”

When one sits in front of the computerized mirror, a camera and sensors start scanning the face to check the skin. It then shines a light to analyze reflection and absorption rates, find flaws like dark spots, wrinkles, and large pores, and offer tips on how to improve appearances.

But this is when the real “magic” begins. Tap print on the results screen and a special printer for the mirror churns out an ultrathin, 100-nanometer makeup-coated patch that is tailor-made for the person examined. The patch is made of a safe material often used for surgery so it can be directly applied to the face. Once the patch settles, it is barely noticeable and resists falling off unless sprayed with water.

The technologies behind the patch involve Panasonic’s know-how in organic light-emitting diodes (OLED), Kawaguchi said. By using the company’s technology to spray OLED material precisely onto display substrates, the printer connected to the computerized mirror prints a makeup ink that is made of material similar to that used in foundation, she added.

Read the full article by Shusuke Murai in the Japan Times News.

Panasonic Corp. engineer Masayo Fuchigami displays an ultrathin makeup patch during a demonstration of the Snow Beauty Mirror on Dec. 1 in Tokyo. | Shusuke Murai

Wednesday, February 8, 2017

Konica Minolta, Pioneer set to merge OLED lighting ops

Konica Minolta and Pioneer are concluding talks to merge their OLED lighting businesses under a 50–50 joint venture as early as spring. The Japanese companies will spin off their organic light-emitting diode development and sales operations into a new venture that will be an equity-method affiliate for both.

The two companies aim primarily to gain an edge in the automotive OLED market, which is seen expanding rapidly. Konica Minolta's strength in bendable lighting materials made with plastic-film substrates will be combined with Pioneer's own OLED expertise and broad business network in the automotive industry. Taillights and interior lighting are likely automotive applications.

Read the full story in Nikkei Asian Review.

yellow may tire autistic children

A research team including Nobuo Masataka, a professor at Kyoto University’s Primate Research Institute, has found that boys with autism spectrum disorder (ASD) tend not to like yellow but show a preference for green. “Yellow may tire autistic children. I want people to take this into account when they use the color on signboards and elsewhere,” Masataka said.

The team, also including France’s University of Rennes 1, has confirmed the color preference of boys with the disorder, according to an article recently published in the journal Frontiers in Psychology. In the study, the color preference of 29 autistic boys aged 4 to 17 was compared with that of 38 age-matched typically developing (TD) boys. All participants were recruited in France, which has clear diagnostic criteria for autism spectrum disorder.

Shown cards of six colors (red, blue, yellow, green, brown, and pink), the children were asked which color they liked. Yellow was well liked by the TD boys but far less preferred by the ASD boys. On the other hand, green and brown were liked more by boys in the ASD group than by those in the TD group, while red and blue were favored to similar degrees by both groups. Pink was unpopular in both groups.

Given the relatively small sample size in each of the three age groups, the failure to find any difference in preference scores between TD children and children with ASD with regard to red, blue and pink might be attributable to a ceiling/floor effect.

The article said yellow has the highest luminance value among the six colors. “The observed aversion to this color might reflect hypersensitivity” of children with ASD, the article said. There is also a general consensus that yellow is the most fatiguing color. When yellow is perceived, both the L and M cones must be involved, so the perception of yellow should be the most heavily sensory-loaded of any color. This load is bearable for TD children but could overload children with ASD, whose sensitivity to sensory stimulation is enhanced.

Marine Grandgeorge and Nobuo Masataka: "Atypical Color Preference in Children with Autism Spectrum Disorder," Front. Psychol., 23 December 2016, https://doi.org/10.3389/fpsyg.2016.01976


the sun can make the bamboo straw wall of a tea house repulsive

that すずみだい (suzumidai, a bench for cooling off outdoors) might not be that restful after all

is a golden obi the best choice?

Thursday, January 19, 2017

Unable to complete backup. An error occurred while creating the backup folder

For the past four years, I have been backing up my laptop on a G-Technology FireWire disk connected to the hub in my display. So far it had worked without a hitch, but a few days ago I started to get the error message

Time Machine couldn’t complete the backup to “hikae”.
Unable to complete backup. An error occurred while creating the backup folder.

The message appeared without a time pattern, so it was not clear what the cause could be. The drive could not be unmounted; it had to be force-ejected and power-cycled, after which it worked again until the next irregular event, maybe one backup out of ten.

When I ran Disk Utility to see if something was wrong with the drive, it told me the boot block was corrupted. After fixing it, the Time Machine problem did not go away, so I must have corrupted the boot block with the force-eject. Time to find out what is going on.

The next time it happened, I tried to eject the drive from Disk Utility, which gave me the message

Disk cannot be unmounted because it is in use.

Who on Earth would be using it? Did Time Machine hang? Unix to the rescue; let us get the list of open files:

sudo lsof /Volumes/hikae

The user is root and the commands are mds and mds_store, working on index files: they are indexing the drive for Spotlight. Why on Earth would an operating system index a backup drive by default? Let us get rid of that:

sudo mdutil -i off /Volumes/hikae

However, in this state, the command returns "Error: unable to perform operation. (-400) Error: unknown indexing state." This might mean Spotlight has crashed or is otherwise hanging.

Force-eject and power-cycle the drive. This time mdutil works:

/Volumes/hikae:
2017-01-18 17:10:00.657 mdutil[25737:7707511] mdutil disabling Spotlight: /Volumes/hikae -> kMDConfigSearchLevelFSSearchOnly
Indexing and searching disabled.

For the past two days, the problem has not recurred.
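If you prefer to script this check rather than remember the commands, the same mdutil and lsof calls can be wrapped in a few lines of Python. This is just a convenience sketch under the assumption that you run it from an administrator account; the volume path is of course specific to my setup.

import subprocess

BACKUP_VOLUME = "/Volumes/hikae"   # adjust to your own backup drive

def spotlight_status(volume):
    # Wraps `mdutil -s`, which reports whether indexing is enabled.
    return subprocess.run(["mdutil", "-s", volume],
                          capture_output=True, text=True).stdout

def open_files(volume):
    # Wraps `sudo lsof` to list processes holding files open on the volume.
    return subprocess.run(["sudo", "lsof", volume],
                          capture_output=True, text=True).stdout

def disable_spotlight(volume):
    # Wraps `sudo mdutil -i off` to turn Spotlight indexing off.
    subprocess.run(["sudo", "mdutil", "-i", "off", volume], check=True)

if __name__ == "__main__":
    status = spotlight_status(BACKUP_VOLUME)
    print(status)
    if "enabled" in status.lower():
        disable_spotlight(BACKUP_VOLUME)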

If you are the product manager, why is Spotlight indexing backup drives by default?

If you prefer using a GUI, drag and drop your backup drive icon into the privacy pane of the Spotlight preference window (I did not try this):

Tell Spotlight not to index your backup drive

Wednesday, January 11, 2017

Designing and assessing near-eye displays to increase user inclusivity

Today, Emily Cooper of the Psychological and Brain Sciences Department at Dartmouth College gave a talk on designing and assessing near-eye displays to increase user inclusivity. A near-eye display is a wearable display, for example, an augmented reality (AR) or a virtual reality (VR) display.

With most near-eye displays it is not possible, or not recommended, to wear glasses. Some displays, like the HTC Vive, offer lenses to correct accommodation. We do want to integrate flexible correction into near-eye displays. This can be achieved with a liquid polymer lens whose membrane can be tuned.

In her lab, for the refraction self-test, the presenter uses an EyeNetra auto-refractometer, which is controlled with a smartphone.

The near-eye display correction is as good as with contact lenses, both in sharpness and in fusion correction. Therefore, it is not necessary to make users wear their correction glasses.

There are two factors determining the image quality of a near-eye display: accommodation and vergence. When the vergence is incorrect, users get tired after 20 minutes and their reaction time slows down.
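To make the accommodation-vergence conflict concrete, here is a small illustrative Python calculation; the 63 mm interpupillary distance and the example distances are my assumptions, not numbers from the talk.

import numpy as np

IPD_M = 0.063   # assumed interpupillary distance in meters

def vergence_deg(distance_m, ipd=IPD_M):
    # Angle by which the two eyes must converge to fixate a point at distance_m.
    return np.degrees(2 * np.arctan((ipd / 2) / distance_m))

def conflict_diopters(fixation_m, focal_plane_m):
    # Accommodation-vergence mismatch expressed as a dioptric difference.
    return abs(1 / fixation_m - 1 / focal_plane_m)

# Example: a virtual object rendered at 0.5 m on a headset whose optics
# place the focal plane at 1.5 m.
print(vergence_deg(0.5))            # about 7.2 degrees of convergence
print(conflict_diopters(0.5, 1.5))  # about 1.33 diopters of mismatch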

The solution is to use tunable optics to match the user's visual shortcomings.

A different problem is presbyopia, which is a reduction of the accommodation range. For people older than 45 years, an uncorrected stereo display provides better image quality than correcting the accommodation. However, tunable optics provide better vergence for older people.

A harder problem is posed by people with low vision, regardless of their age. In her lab, Emily Cooper investigated whether consumer-grade augmented reality displays are good enough to help users with low vision.

She used the HoloLens, whose near-infrared depth camera is the key feature for addressing this problem. Her proposal is to overlay the depth information as a luminance map over the image so that near objects are light and far objects are dark. This allows the users to get by with their residual vision.

Instead of a luminance overlay, a color overlay also works: in this approach, the hue varies along a warm-to-cold scale with distance. She also tried to encode depth with flicker, but it does not work well.
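As an illustration of what such an overlay computation could look like, here is a minimal numpy sketch of the two depth encodings described above; the near and far clipping values and the blending weight are placeholders I chose, not parameters from the talk.

import numpy as np

def depth_to_luminance(depth_m, near=0.5, far=5.0):
    # Near objects become light, far objects dark.
    d = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    return ((1.0 - d) * 255).astype(np.uint8)

def depth_to_hue(depth_m, near=0.5, far=5.0):
    # Warm-to-cold ramp: 0 degrees (red) for near, 240 degrees (blue) for far.
    d = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    return d * 240.0

def luminance_overlay(image_rgb, depth_m, alpha=0.5):
    # Blend the depth-derived luminance over the camera image.
    lum = depth_to_luminance(depth_m)
    lum_rgb = np.repeat(lum[..., None], 3, axis=2)
    return (alpha * lum_rgb + (1.0 - alpha) * image_rgb).astype(np.uint8)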

With the HoloLens, it is possible to integrate OCR in the near-eye display and then read aloud all the text in the field of view through the HoloLens's four speakers, making the sound come from the location where the text is written.

Saturday, December 31, 2016

Business backs the basics

The last third of the year has been very busy and I did not have a chance to stay current with my reading. Consequently, I do not have anything to write about.

Editors write editorials, which are rarely read. Indeed, editorials are useful mostly for the editors themselves, because writing them forces them to structure their journal or conference. Unfortunately, they are usually written under time pressure and are not always well rounded. Still, it beats my writer's block: here is an editorial written by Subra Suresh and Robert A. Bradway for their CEOs and Leaders for Science retreat at Sunnylands in Rancho Mirage, as it was published in Science, 14 Oct 2016, Vol. 354, Issue 6309, p. 151, DOI: 10.1126/science.aal1580.

Earlier this year, a number of leaders from major U.S. corporations gathered at Sunnylands in California to discuss the critical importance of basic scientific research. For decades, the private sector has withdrawn from some areas of basic research, as accelerating market pressures, the speed of innovation, and the need to protect intellectual property in a global marketplace made a Bell Labs–style, in-house model of discovery and development hard to sustain. However, the leaders who gathered for the “CEOs and Leaders for Science” retreat (which we convened) agreed that basic research will make or break corporations in the long term. Why?

Long-term basic research, substantially funded by the U.S. government, underlies some of industry's most profitable innovations. Global positioning system technology, now a staple in every mobile phone, emerged from Cold War Defense Department research and decades of National Science Foundation explorations. As well, long-term public–private partnerships in basic research have driven U.S. leadership, from information technology to drug development and medical advancement. For example, the Human Genome Project combined $14.5 billion in federal investment with a private-sector initiative, generating nearly $1 trillion in jobs, personal wealth for entrepreneurs, and taxes by 2013. Such endeavors created a science ecosystem that in turn generated the talent pipeline upon which it depended.

Although for-profit corporations still invest in proprietary product development and expensive clinical trials, industry finds itself unable to invest in basic research the way it once did. The need for increased corporate secrecy, market force–driven short-term decision-making, and narrowing windows to monetize new technologies have whittled away industry's willingness and ability to conduct basic research. This change threatens U.S. preeminence in research. For instance, the nation may lose its ability to attract and retain the finest talent from around the world. A good fraction of the students who earn advanced degrees in science and technology in the United States come from abroad because of the nation's scientific excellence. For decades, American companies could attract and retain the finest talent from around the world. But if the U.S. loses its edge in research, it may also lose this vital resource of expertise and innovation.

Consequently, business leaders assembled at Sunnylands resolved to use their individual and collective credibility, and their stature as heads of enterprises that fuel the economy, to advocate for greater government support for basic scientific research to revitalize the science ecosystem. However, they will need to lift sagging public opinion because many Americans now see basic research as a luxury rather than a necessity. A 2015 Pew poll found that Americans who view publicly funded basic research as “not worth it” rose from 18 to 24% between 2009 and 2014. At the same time, those who believe private investment is enough to ensure scientific progress also increased from 29 to 34%.

With that in mind, the CEOs will partner with academic leaders to educate the public about the importance of basic research. Together, they will advocate for this in meetings with federal officials, through various media channels, and by asking presidents in the Association of American Universities to identify corporate leaders in their respective communities to join the effort. The hope is that this concerted action positions basic research atop the next U.S. president's agenda.

History has shown that investments in basic research are the primary engine by which humanity has advanced, and major economic gains—often unanticipated when the research was initially funded—have been realized. In the United States, that will require a long-term commitment from the government, complementing the ongoing investment of risk capital and key industry sectors.

America's leadership role in scientific innovation is an inherited responsibility and an economic imperative. It must not be neglected.

Credit: Emily Gadek. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Friday, November 11, 2016

App adds makeup to faces on video conferences

In a potential boost for the government’s drive to get more people telecommuting, cosmetics company Shiseido Co. has developed an app that makes users look as if they are wearing makeup. It amounts to an instant makeover for the unfortunate worker called to appear on screen from home at an awkward hour.

Read the article in the Japan Times.

Yoko's lips

Thursday, October 13, 2016

Facebook Surround

Yesterday afternoon, Brian Cabral, Director of Engineering at Facebook, gave a talk at the Stanford Center for Image Systems Engineering (SCIEN) with the title "The Soul of a New Camera: The design of Facebook's Surround Open Source 3D-360 video camera." Here is his abstract:

Around a year ago we set out to create an open-source reference design for a 3D-360 camera. In nine months, we had designed and built the camera and published the specs and code. Our team leveraged a series of maturing technologies in this effort. Advances and availability in sensor technology, 20+ years of computer vision algorithm development, 3D printing, rapid design prototyping, and computational photography allowed our team to move extremely fast. We will delve into the roles each of these technologies played in the designing of the camera, giving an overview of the system components and discussing the tradeoffs made during the design process. The engineering complexities and technical elements of 360 stereoscopic video capture will be discussed as well. We will end with some demos of the system and its output.

The design goals for the Surround were the following:

  • High-quality 3D-360 video
  • Reliable and durable
  • Fully spherical
  • Open and accessible
  • End-to-end system

These goals cannot be achieved by strapping together GoPro cameras, because they get too hot and it is very difficult to make them work reliably. Monoscopic capture is old and no longer interesting; the challenge for VR is to do it stereoscopically, so the goal is stereoscopic 3D-360 capture.

They use 14 Point Grey cameras with wide-angle lenses around the equator and a camera with a fisheye lens at the north pole. At the south pole they use two fisheye cameras, so that the pole holding up the Surround can be removed from the image.

A rolling shutter is much worse in 3D than in 2D, so it is necessary to use a global shutter, at the expense of SNR. Brian Cabral discussed the various trade-offs between the number and size of cameras, spatial resolution, wide-angle vs. fisheye lenses, and physical size.

Today, rapid prototyping has progressed so much that we can just try things out in the lab. For this application, the hardware is easy, but stitching together the images is difficult. The solution is to use optical flow and to simulate slit cameras.
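As a rough illustration of the optical-flow idea, the sketch below synthesizes a view halfway between two adjacent cameras by warping one image halfway along the dense flow field; a stereo panorama is then assembled by taking a narrow vertical slit from each real or synthesized viewpoint. This is a first-order approximation written with OpenCV that ignores occlusions and calibration; it is not Facebook's pipeline.

import cv2
import numpy as np

def midpoint_view(left_img, right_img):
    # Dense optical flow from the left to the right camera image.
    l_gray = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    r_gray = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(l_gray, r_gray, None,
                                        0.5, 3, 31, 3, 5, 1.2, 0)
    # Backward-warp the left image halfway along the flow to approximate
    # the view from a virtual camera between the two real ones.
    h, w = l_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(left_img, map_x, map_y, cv2.INTER_LINEAR)

def center_slit(view, slit_width=4):
    # A stereo panorama stacks one narrow vertical slit per viewpoint.
    w = view.shape[1]
    return view[:, (w - slit_width) // 2:(w + slit_width) // 2]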

No attempt is made to compress the data. The images are copied completely raw to a RAID of SSD drives. The rendering then takes 30 seconds per frame.

The Surround has been used for a multi-million-dollar shoot at Grand Central Station. The camera is being open sourced because so far it is only 1% of the solution, and making it open will encourage many people to contribute the remaining 99%.

At the end of the presentation, two VR displays were available to experience the result. I did not quite dare to strap in front of my eyes a recalled smartphone that can explode at any time, so I passed on the demo. However, the brave people commented that you can rotate your head but not move sideways, because then the image falls apart. It was also noted that the frame rate should be at least 90 Hz. Finally, people reported vergence problems and slight nausea.

Facebook Surround kit

Dataset metadata for search engine optimization

Last week I wrote a post on metadata. Google is experimenting with a new metadata schema it calls Science Datasets, which will allow it to make public datasets more discoverable.

The mechanism is under development and they are currently soliciting interested parties with the following kinds of public data:

  • A table or a CSV file with some data
  • A file in a proprietary format that contains data
  • A collection of files that together constitute some meaningful dataset
  • A structured object with data in some other format that you might want to load into a special tool for processing
  • Images capturing the data
  • Anything that looks like a dataset to you

In your metadata schema you can use any of the schema.org dataset properties, but it should contain at least the following basic properties: name, description, url, sameAs, version, keywords, variableMeasured, and creator.name. If your dataset is part of a corpus, you can reference the corpus in the includedInDataCatalog property.

There are also properties for download information, temporal coverage, spatial coverage, citations and publications, and provenance and license information.
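As a hypothetical illustration of what such markup might look like, the Python sketch below builds a schema.org Dataset description with the basic properties and serializes it as JSON-LD; all the values are made up, and you should check Google's Science Datasets documentation for the current requirements.

import json

# All values below are invented for illustration.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example spectral reflectance measurements",
    "description": "Reflectance spectra of printed color patches, 380-730 nm.",
    "url": "https://example.org/datasets/reflectance",
    "sameAs": "https://doi.org/10.0000/example",
    "version": "1.0",
    "keywords": ["color", "spectra", "printing"],
    "variableMeasured": "spectral reflectance",
    "creator": {"@type": "Person", "name": "Jane Doe"},
    "includedInDataCatalog": {"@type": "DataCatalog",
                              "name": "Example Lab Data Catalog"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# The resulting JSON-LD would go into a <script type="application/ld+json">
# tag on the page that describes the dataset.
print(json.dumps(dataset_metadata, indent=2))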

This is a worthwhile effort to make your research and public datasets more useful to the community.

Credit: Google (Creative Commons License)