Thursday, February 9, 2017

mirror mirror on the wall

Last November, I mentioned an app that makes you look as if you are wearing makeup during a teleconference. Now Panasonic lets you take it a step further. A new mirror analyzes the skin on your face and prints out makeup that you can apply directly to your face.

The aim of the Snow Beauty Mirror is “to let people become what they want to be,” said Panasonic’s Sachiko Kawaguchi, who is in charge of the product’s development. “Since 2012 or 2013, many female high school students have taken advantage of blogs and other platforms to spread their own messages,” Kawaguchi said. “Now the trend is that, in this digital era, they change their faces (on a photo) as they like to make them appear as they want to be.”

When one sits in front of the computerized mirror, a camera and sensors scan the face to check the skin. The mirror then shines light on the skin to analyze reflection and absorption rates, finds flaws like dark spots, wrinkles, and large pores, and offers tips on how to improve one's appearance.

But this is when the real “magic” begins. Tap print on the results screen and a special printer for the mirror churns out an ultrathin makeup-coated patch, about 100 nanometers thick, that is tailor-made for the person examined. The patch is made of a safe material often used in surgery, so it can be applied directly to the face. Once the patch settles, it is barely noticeable and does not come off unless sprayed with water.

The technologies behind the patch involve Panasonic’s know-how in organic light-emitting diodes (OLED), Kawaguchi said. By using the company’s technology to spray OLED material precisely onto display substrates, the printer connected to the computerized mirror prints a makeup ink that is made of material similar to that used in foundation, she added.

Read the full article by Shusuke Murai in The Japan Times.

Panasonic Corp. engineer Masayo Fuchigami displays an ultrathin makeup patch during a demonstration of the Snow Beauty Mirror on Dec. 1 in Tokyo. | Shusuke Murai

Wednesday, February 8, 2017

Konica Minolta, Pioneer set to merge OLED lighting ops

Konica Minolta and Pioneer are in final talks to merge their OLED lighting businesses under a 50–50 joint venture as early as this spring. The Japanese companies will spin off their organic light-emitting diode development and sales operations into a new venture that will be an equity-method affiliate of both.

The two companies aim primarily to gain an edge in the automotive OLED market, which is seen expanding rapidly. Konica Minolta's strength in bendable lighting materials made with plastic-film substrates will be combined with Pioneer's own OLED expertise and broad business network in the automotive industry. Taillights and interior lighting are likely automotive applications.

Read the full story in Nikkei Asian Review.

yellow may tire autistic children

A research team including Nobuo Masataka, a professor at Kyoto University’s Primate Research Institute, has found that boys with autism spectrum disorder (ASD) tend not to like yellow but show a preference for green. “Yellow may tire autistic children. I want people to take this into account when they use the color on signboards and elsewhere,” Masataka said.

The team, which also included researchers from France’s University of Rennes 1, confirmed the color preference of boys with the disorder, according to an article recently published in the journal Frontiers in Psychology. In the study, the color preference of 29 autistic boys aged 4 to 17 was compared with that of 38 age-matched typically developing (TD) boys. All participants were recruited in France, which has clear diagnostic criteria for autism spectrum disorder.

Shown cards in six colors (red, blue, yellow, green, brown, and pink), the children were asked which color they liked. Yellow was well liked by the TD boys but far less preferred by the ASD boys. On the other hand, green and brown were liked more by the boys in the ASD group than by those in the TD group, while red and blue were favored to similar degrees by both groups. Pink was unpopular in both groups.

Given the relatively small sample size in each of the three age groups, the failure to find any difference in preference scores between TD children and children with ASD with regard to red, blue and pink might be attributable to a ceiling/floor effect.

The article said yellow has the highest luminance value among the six colors. “The observed aversion to this color might reflect hypersensitivity” of children with ASD, the article said. There is also a general consensus that yellow is the most fatiguing color. When yellow is perceived, both the L (long-wavelength) and the M (medium-wavelength) cones must be involved, so the perception of yellow should be the most heavily sensory-loaded of any color. This load is bearable for TD children but could overwhelm children with ASD, whose sensitivity to sensory stimulation is enhanced.

Marine Grandgeorge and Nobuo Masataka: "Atypical Color Preference in Children with Autism Spectrum Disorder," Front. Psychol., 23 December 2016, https://doi.org/10.3389/fpsyg.2016.01976


the sun can make the bamboo straw wall of a tea house repulsive

that すずみだい (suzumidai, a bench for cooling off on summer evenings) might not be that restful after all

is a golden obi the best choice?

Thursday, January 19, 2017

Unable to complete backup. An error occurred while creating the backup folder

For the past four years, I have been backing up my laptop on a G-Technology FireWire disk connected to the hub in my display. It worked without a hitch until a few days ago, when I started to get the error message

Time Machine couldn’t complete the backup to “hikae”.
Unable to complete backup. An error occurred while creating the backup folder.

The message appeared with no temporal pattern, so it was not clear what the cause could be. The drive could not be unmounted and had to be force-ejected and power-cycled; it then worked again until the next irregular failure, maybe one backup out of ten.

When I ran Disk Utility to see if something was wrong with the drive, it told me the boot block was corrupted. Since fixing it did not make the Time Machine problem go away, the corrupted boot block must have been a consequence of the force-ejects rather than the cause. Time to find out what is going on.

The next time it happened, I tried to eject the drive from Disk Utility, which gave me the message

Disk cannot be unmounted because it is in use.

Who on Earth would be using it? Did Time Machine hang? Unix to the rescue: let us get the list of open files on the volume

sudo lsof /Volumes/hikae

The user is root and the processes are mds and mds_stores, working on index files: they are indexing the drive for Spotlight. Why on Earth would an operating system index a backup drive by default? Let us get rid of that.

sudo mdutil -i off /Volumes/hikae

However, in this state the command returns "Error: unable to perform operation. (-400) Error: unknown indexing state." This might mean Spotlight has crashed or is otherwise hung.

Force-eject and power-cycle the drive. This time mdutil works:

/Volumes/hikae:
2017-01-18 17:10:00.657 mdutil[25737:7707511] mdutil disabling Spotlight: /Volumes/hikae -> kMDConfigSearchLevelFSSearchOnly
Indexing and searching disabled.

For the past two days, the problem has not recurred.
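For reference, here is the full sequence as one would run it on a healthy volume. This is a minimal sketch, assuming the backup volume is mounted at /Volumes/hikae; the .metadata_never_index trick at the end is something I have only read about and have not tried:

sudo mdutil -s /Volumes/hikae                    # report the current indexing status
sudo mdutil -i off /Volumes/hikae                # disable Spotlight indexing on this volume
sudo mdutil -E /Volumes/hikae                    # erase the index that was already built, reclaiming space
sudo touch /Volumes/hikae/.metadata_never_index  # reportedly keeps some macOS versions from ever re-indexing (untested)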

If you are the product manager, why is Spotlight indexing backup drives by default?

If you prefer using a GUI, drag and drop your backup drive icon into the Privacy pane of the Spotlight preference panel (I did not try this):

Tell Spotlight not to index your backup drive

Wednesday, January 11, 2017

Designing and assessing near-eye displays to increase user inclusivity

Today Emily Cooper, of the Department of Psychological and Brain Sciences at Dartmouth College, gave a talk on designing and assessing near-eye displays to increase user inclusivity. A near-eye display is a wearable display, for example an augmented reality (AR) or virtual reality (VR) display.

With most near-eye displays it is not possible, or not recommended, to wear glasses. Some displays, like the HTC Vive, offer lenses to correct the accommodation. Ideally, we want to integrate flexible correction into the near-eye display itself. This can be achieved with a liquid polymer lens whose membrane can be tuned.

In her lab, for the refraction self-test, the presenter uses an EyeNetra autorefractor, which is controlled with a smartphone.

The near-eye display correction is as good as with contact lenses, both in sharpness and in fusion correction. Therefore, it is not necessary to make users wear their correction glasses.

Two factors determine the image quality of a near-eye display: accommodation and vergence. When the vergence is incorrect (for example, when the eyes must converge on a virtual object at arm's length while accommodating on a focal plane a couple of meters away), users get tired after about 20 minutes and their reaction time slows.

The solution is to use tunable optics to compensate for the user's visual shortcomings.

A different problem is presbyopia, which is a reduction of the accommodation range. For people older than 45 years, an uncorrected stereo display provides better image quality than correcting the accommodation. However, tunable optics provide better vergence for older people.

A harder problem is posed by people with low vision, regardless of their age. In her lab, Emily Cooper investigated whether consumer-grade augmented reality displays are good enough to help users with low vision.

She used the HoloLens, whose near-infrared (NIR) depth camera is the key feature for addressing this problem. Her proposal is to overlay the depth information as a luminance map on the image, so that near objects appear light and far objects dark. This allows users to get by with their residual vision.
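To illustrate just this mapping (a minimal offline sketch with ImageMagick on hypothetical files, not the real-time HoloLens pipeline), assume a depth map depth.png in which far points are bright; inverting it makes near objects light, and the result can be blended over the scene image:

convert depth.png -negate depth_inverted.png                  # invert: near objects become light, far objects dark
composite -blend 60 depth_inverted.png scene.png overlay.png  # superimpose the luminance map on the scene image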

Instead of a luminance overlay, a color overlay also works. In this approach, the hue varies along a scale from warm to cold colors depending on distance. She also tried to encode depth with flicker, but it does not work well.

With the HoloLens, it is possible to integrate OCR in the near-eye display and have all text in the field of view read aloud through the HoloLens's four speakers, with the sound spatialized so it appears to come from the location where the text is written.

Saturday, December 31, 2016

Business backs the basics

The last third of the year has been very busy and I did not have a chance to stay current with my reading. Consequently, I do not have anything to write about.

Editors write editorials, which are rarely read. Indeed, editorials are useful mostly for the editors themselves, because writing them forces them to structure their journal or conference. Unfortunately, they are usually written under time pressure and are not always well rounded. Still, better than my writer's block, here is an editorial written by Subra Suresh and Robert A. Bradway for their CEOs and Leaders for Science retreat at Sunnylands in Rancho Mirage, as it was published in Science, 14 October 2016, Vol. 354, Issue 6309, p. 151, DOI: 10.1126/science.aal1580.

Earlier this year, a number of leaders from major U.S. corporations gathered at Sunnylands in California to discuss the critical importance of basic scientific research. For decades, the private sector has withdrawn from some areas of basic research, as accelerating market pressures, the speed of innovation, and the need to protect intellectual property in a global marketplace made a Bell Labs–style, in-house model of discovery and development hard to sustain. However, the leaders who gathered for the “CEOs and Leaders for Science” retreat (which we convened) agreed that basic research will make or break corporations in the long term. Why?

Long-term basic research, substantially funded by the U.S. government, underlies some of industry's most profitable innovations. Global positioning system technology, now a staple in every mobile phone, emerged from Cold War Defense Department research and decades of National Science Foundation explorations. As well, long-term public–private partnerships in basic research have driven U.S. leadership, from information technology to drug development and medical advancement. For example, the Human Genome Project combined $14.5 billion in federal investment with a private-sector initiative, generating nearly $1 trillion in jobs, personal wealth for entrepreneurs, and taxes by 2013. Such endeavors created a science ecosystem that in turn generated the talent pipeline upon which it depended.

Although for-profit corporations still invest in proprietary product development and expensive clinical trials, industry finds itself unable to invest in basic research the way it once did. The need for increased corporate secrecy, market force–driven short-term decision-making, and narrowing windows to monetize new technologies have whittled away industry's willingness and ability to conduct basic research. This change threatens U.S. preeminence in research. For instance, the nation may lose its ability to attract and retain the finest talent from around the world. A good fraction of the students who earn advanced degrees in science and technology in the United States come from abroad because of the nation's scientific excellence. For decades, American companies could attract and retain the finest talent from around the world. But if the U.S. loses its edge in research, it may also lose this vital resource of expertise and innovation.

Consequently, business leaders assembled at Sunnylands resolved to use their individual and collective credibility, and their stature as heads of enterprises that fuel the economy, to advocate for greater government support for basic scientific research to revitalize the science ecosystem. However, they will need to lift sagging public opinion because many Americans now see basic research as a luxury rather than a necessity. A 2015 Pew poll found that the share of Americans who view publicly funded basic research as “not worth it” rose from 18 to 24% between 2009 and 2014. At the same time, the share of those who believe private investment is enough to ensure scientific progress increased from 29 to 34%.

With that in mind, the CEOs will partner with academic leaders to educate the public about the importance of basic research. Together, they will advocate for this in meetings with federal officials, through various media channels, and by asking presidents in the Association of American Universities to identify corporate leaders in their respective communities to join the effort. The hope is that this concerted action positions basic research atop the next U.S. president's agenda.

History has shown that investments in basic research are the primary engine by which humanity has advanced, and major economic gains—often unanticipated when the research was initially funded—have been realized. In the United States, that will require a long-term commitment from the government, complementing the ongoing investment of risk capital and key industry sectors.

America's leadership role in scientific innovation is an inherited responsibility and an economic imperative. It must not be neglected.

Credit: Emily Gadek. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Friday, November 11, 2016

App adds makeup to faces on video conferences

In a potential boost for the government’s drive to get more people telecommuting, cosmetics company Shiseido Co. has developed an app that makes users look as if they are wearing makeup. It amounts to an instant makeover for the unfortunate worker called to appear on screen from home at an awkward hour.

Read the article in The Japan Times.

Yoko's lips

Thursday, October 13, 2016

Facebook Surround

Yesterday afternoon, Brian Cabral, Director of Engineering at Facebook, gave a talk at the Stanford Center for Image Systems Engineering (SCIEN) with the title "The Soul of a New Camera: The design of Facebook's Surround Open Source 3D-360 video camera." Here is his abstract:

Around a year ago we set out to create an open-source reference design for a 3D-360 camera. In nine months, we had designed and built the camera and published the specs and code. Our team leveraged a series of maturing technologies in this effort. Advances and availability in sensor technology, 20+ years of computer vision algorithm development, 3D printing, rapid design prototyping and computational photography allowed our team to move extremely fast. We will delve into the roles each of these technologies played in the designing of the camera, giving an overview of the system components and discussing the tradeoffs made during the design process. The engineering complexities and technical elements of 360 stereoscopic video capture will be discussed as well. We will end with some demos of the system and its output.

The design goals for the Surround were the following:

  • High-quality 3D-360 video
  • Reliable and durable
  • Fully spherical
  • Open and accessible
  • End-to-end system

These goals cannot be achieved by strapping together GoPro cameras, because they get too hot and it is very difficult to make them work reliably. Monoscopic 360 video is old and no longer interesting; the challenge for VR is to do it stereoscopically, so the goal is stereoscopic 3D-360 capture.

They are using 14 Point Grey cameras with wide-angle lenses around the equator and one camera with a fisheye lens at the north pole. At the south pole they are using two fisheye cameras, so that the pole holding up the Surround can be edited out of the image.

A rolling shutter is much worse in 3D than in 2D, so it is necessary to use a global shutter, at the expense of SNR. Brian Cabral discussed the various trade-offs between number and size of cameras, spatial resolution, wide angle vs. fisheye lenses and physical size.

Today, rapid prototyping has progressed to the point that one can just try things out in the lab. For this application the hardware is easy; stitching together the images is what is difficult. The solution is to use optical flow and to simulate slit cameras.

No attempt is made to compress the data. The images are copied completely raw to a RAID of SSD drives. The rendering then takes 30 seconds per frame.

The Surround has been used for a multi-million-dollar shoot at Grand Central Station. The camera is being open-sourced because so far it is only 1% of the solution, and making it open will encourage many people to contribute to the remaining 99%.

At the end of the presentation, two VR displays were available to experience the result. I did not quite dare to strap in front of my eyes a recalled smartphone that can explode at any time, so I passed on the demo. However, the brave people commented that you can rotate your head but not move sideways, because then the image falls apart. It was also remarked that the frame rate should be at least 90 Hz. Finally, people reported vergence problems and slight nausea.

Facebook Surround kit

Dataset metadata for search engine optimization

Last week I wrote a post on metadata. Google is experimenting with a new metadata schema it calls Science Datasets, which will allow it to make public datasets more discoverable.

The mechanism is under development and they are currently soliciting interested parties with the following kinds of public data:

  • A table or a CSV file with some data
  • A file in a proprietary format that contains data
  • A collection of files that together constitute some meaningful dataset
  • A structured object with data in some other format that you might want to load into a special tool for processing
  • Images capturing the data
  • Anything that looks like a dataset to you

In your metadata you can use any of the schema.org dataset properties, but it should contain at least the following basic properties: name, description, url, sameAs, version, keywords, variableMeasured, and creator.name. If your dataset is part of a corpus, you can reference the corpus in the includedInDataCatalog property.

There are also properties for download information, temporal coverage, spatial coverage, citations and publications, and provenance and license information.
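As an illustration, here is a minimal JSON-LD description of a dataset carrying the basic properties listed above; all names, URLs, and values are made up for this sketch:

{
  "@context": "http://schema.org",
  "@type": "Dataset",
  "name": "Example reflectance spectra",
  "description": "Spectral reflectances of 24 color patches measured in our lab.",
  "url": "http://example.com/datasets/reflectance",
  "sameAs": "http://example.com/archive/reflectance-2016",
  "version": "1.0",
  "keywords": "color, spectral reflectance",
  "variableMeasured": "spectral reflectance",
  "creator": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}

On a web page, this block goes inside a <script type="application/ld+json"> element so that the crawler can find it.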

This is a worthwhile effort to make your research and public datasets more useful to the community.


Thursday, October 6, 2016

Progress in wearable displays

Yesterday afternoon, Bernard Kress, Partner Optical Architect on the HoloLens project at Microsoft Corp., gave a talk at the Stanford Center for Image Systems Engineering (SCIEN) with the title "Human-centric optical design: a key for next generation AR and VR optics." Here is the abstract:

The ultimate wearable display is an information device that people can use all day. It should be as forgettable as a pair of glasses or a watch, but more useful than a smartphone. It should be small, light, low-power, high-resolution and have a large field of view (FOV). Oh, and one more thing, it should be able to switch from VR to AR.

These requirements pose challenges for hardware and, most importantly, optical design. In this talk, I will review existing AR and VR optical architectures and explain why it is difficult to create a small, light and high-resolution display that has a wide FOV. Because comfort is king, new optical designs for the next-generation AR and VR system should be guided by an understanding of the capabilities and limitations of the human visual system.

There are three kinds of wearable displays:

  • Smart eyewear: extension of eyewear. Example: Google Glass
  • Augmented reality (AR) and mixed reality (MR): extension of the computer. An MR display has a built-in 3D scanner to create a 3D model of the world
  • Virtual reality (VR): extension of the gaming console

Bernard surveyed all avenues in wearable displays, from their inception to projections into the future. The speed of the presentation and the amount of material made it impossible to follow the talk unless you are an expert in the field. After the presentation, Bernard told me the size of his PowerPoint file is about 250 MB!

My takeaway was that the biggest issue in wearable displays is cost. So far, the optics engineers designed with cameras in mind and over-designed. The current breakthrough is that the optics engineers are starting to understand the human visual system (HVS), so they can design systems that are just good enough for the MTF of our eyes instead of over-designing them. Bernard claims that so far the industry has been mostly about hype, but in 2017 products will take off and the new challenge is "show me the money."

By Microsoft Sweden [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons