Monday, December 18, 2017

Lakota Waldorf School Fighting Poverty on an Indian Reservation

If you are still looking to give a Christmas or year-end gift that can make a big impact, consider a school in one of the poorest counties in the USA: the Lakota Waldorf School in Kyle, South Dakota, the only Waldorf School on an Indian Reservation.

Swiss native Isabel Stadnick is one of the founders and the current administrator, and her husband Robert Stadnick is a tribal member. In 1992, they traveled to the Goetheanum with two additional Lakota tribal members, where Dr. Heinz Zimmermann—the head of the Waldorf movement—encouraged the founders to incorporate the Lakota language and culture.

The Lakota Waldorf School's mission is to empower the children and initiate their educational process with creativity, positivity, community, and Lakota culture. The Lakota Waldorf School is a small school, surrounded by never-ending prairie, in the midst of the Pine Ridge Indian Reservation. The reservation lies in one of the poorest counties in the United States, with an unemployment rate of 75% to 80%. Many of the local people suffer from severe alcohol and drug abuse, and much of the reservation is considered a food desert.

Because of these circumstances, the Lakota Waldorf School is an incredible support system for the 24 children who attend it. The school provides the children with wholesome meals and sends them home on Friday afternoons with a weekend pack filled with healthy snacks, since many families do not have the resources for nutritious meals.

Each morning, the children are greeted with the wonderful smell of a healthy breakfast: oatmeal, scrambled eggs from the school's chickens, or rice pudding with honey and raisins. Lunch consists of only organic food: vegetables from the school garden and bison meat from a local store. All meals are cooked at the school.

The sixteen kindergartners and eight first and second graders who make up the Lakota Waldorf School begin their day with the morning verse in the Lakota language, Lakota songs, music, and stories. The curriculum includes language arts, math, science, and social studies, as well as handwork, flute music, painting, drawing, and modeling classes, with storytelling throughout the day.

Currently, the entire school consists of one small building which houses the kindergarten, kitchen, and office. There is a separate small building for the first and second grade. To continue supporting students and their families, the school is planning to add grades 3, 4, and 5, and up to 8th grade in the coming years. Plans are also underway for an urgently needed additional building, housing a bigger kitchen, three or four additional classrooms, and a healthy café shop. The new building would be built entirely with straw bales and powered by solar and wind energy. Jeff Dickinson, well known as a Waldorf and clean-energy architect, is involved in designing the addition.

Not only is Waldorf education important for these children, but the support they receive is crucial to their overall well-being. The families cannot afford to pay tuition; therefore, the school is 100% donation-funded. The Lakota Waldorf School, its administration, and its current and future students and families would appreciate any donation, small or large, to sustain Waldorf education on the Pine Ridge Reservation.

People who volunteered and spent a week working with the students, who are growing up in severe poverty and some in traumatic circumstances, can personally attest to the positive impact the school has on each of their lives. The sincere hope is that the Lakota Waldorf School will continue to thrive and educate young ones for years to come.

Please visit the website www.lakotawaldorfschool.org and consider making a donation to ensure the survival of this extraordinary school.

Lakota Waldorf School

Monday, December 4, 2017

3D printing of bacteria into functional complex materials

A team from ETH Zurich and University College Dublin has demonstrated a 3D printing approach to create bacteria-derived functional materials by combining the naturally diverse metabolism of bacteria with the shape-design freedom of additive manufacturing.

They have developed a biocompatible hydrogel with optimized rheological properties that allows for the immobilization of bacteria into 3D-printed architectures at a high accuracy. They have demonstrated two applications: degrading environmental toxins, and making cellulose, which can be used as scaffolds for skin replacements and coatings for biomedical devices that help protect patients against organ rejection.

Immobilizing Pseudomonas putida, a known phenol degrader, lets printed structures degrade phenol into biomass, showing the potential of the 3D bacteria-printing platform for biotechnological applications. Immobilizing Acetobacter xylinum in a predesigned 3D matrix enables the in situ formation of bacterial cellulose scaffolds on nonplanar surfaces, relevant for personalized biomedical applications.

Science Advances 01 Dec 2017: Vol. 3, no. 12, eaao6804 DOI: 10.1126/sciadv.aao6804


Schematics of the 3D bacteria-printing platform for the creation of functional living materials

Tuesday, November 14, 2017

AAAS Statement on Scientific Freedom and Responsibility

Scientific freedom and scientific responsibility are essential to the advancement of human knowledge for the benefit of all. Scientific freedom is the freedom to engage in scientific inquiry, pursue and apply knowledge, and communicate openly. This freedom is inextricably linked to and must be exercised in accordance with scientific responsibility. Scientific responsibility is the duty to conduct and apply science with integrity, in the interest of humanity, in a spirit of stewardship for the environment, and with respect for human rights.

For more information: https://www.aaas.org/page/aaas-statement-scientific-freedom-responsibility

Camille Flammarion: “Urbi et Orbi”, in L'atmosphère: météorologie populaire, 1888

Where the sky and the Earth touch

Thursday, November 9, 2017

Panasonic buying deep learning startup

Arimo, founded as Adatao in 2013, is being acquired by Panasonic. It calls its product Behavioral AI and targets it at machine learning for Industry 4.0.

It started with two tools. pAnalytics is a Spark environment providing an API where developers can work with the data and expose it to end users with charts and graphs. pInsights is the end-user layer, which takes natural-language queries. This tool learns from the end user's interactions and can suggest possible queries.

This approach is used to learn from the past behavior of equipment to identify complex anomalies that are hard to predict with traditional statistical modeling. The same deep learning algorithms can also be used to predict retail shoppers' behavior to offer them incentives and optimize store inventories. A related solution area is financial services, where the technology can find signals and anomalies in large-scale transactional data to detect fraud, model risk, and predict investor or consumer behavior.

Panasonic first aims to apply the technology to data on business refrigerators for supermarkets and convenience stores. It envisions a service reducing energy consumption for a store chain overall by setting optimal operating patterns for individual stores, based on past data on refrigerators' internal temperature and energy use. Panasonic can then expand the application to industrial air conditioners.

In the future, Panasonic plans services to manage the physical health of the elderly based on data from appliances and a range of sensors. Since Panasonic has few data analysis experts, Arimo will be a training ground for its employees.

Kansai

Friday, November 3, 2017

3d face recognition

This morning, #45 announced massive tax relief for the American people. Also as of this morning, the new iPhone X is available for purchase in Apple stores.

If you are investing your massive tax relief in an iPhone X, do not just look at the gorgeous OLED screen, but also at the 3d face recognition sensor, because you have been reading about the underlying physics on this blog.

It has been over a dozen years since Neil J. Gunther of Performance Dynamics, annoyed by a Harvard professor's claim of having disproved Bohr's complementarity principle, proposed following the idea of VLSI design rules to formulate practical design rules for quantum communications and quantum imaging devices.

We performed interference experiments in Neil's kitchen using a green laser and a paper clip to form an image. Sergio Magistri noticed that doing physics is good, but creating an artifact that we could sell would be better. He hooked us up with Edoardo Charbon, who had invented a CMOS SPAD array.

After lengthy discussions, Edoardo—who in the meantime had become a professor at EPFL—was willing to reduce our ideas to practice. We received a 500,000 franc grant from the Swiss National Science Foundation to buy the lab equipment and a matching grant from the European Union to hire Dmitri Boiko as a postdoc.

To form the image, we used the metal plate that creates the nozzles in an inkjet cartridge to obtain an array of pinholes.

We performed experiments supporting the concept of a g2-camera, summarized on this blog. The statistical post-analysis was so challenging that Neil had to implement it in the fast processor of an oscilloscope. We wrote two papers with the early details.

The blog posts hot body, excited particles, and the north sky and chaotic light sources are the basis for telling apart the sources for the photons reaching the SPAD array.

It is amazing that today the computations can be done on a small, inexpensive smartphone. However, it took 13 years and hundreds if not thousands of people to get to today's device, a simpler version of which, by the way, is also used in Bosch laser measures.

石の上にも三年 (three years on a stone: perseverance prevails)

Thursday, October 5, 2017

Computational Near-Eye Displays with Focus Cues

The SCIEN seminar series has resumed at Stanford with the talk Computational Near-Eye Displays with Focus Cues by Gordon Wetzstein. The presentation is an overview of his research at Stanford.

Inflection points in near-eye displays:

  • 1838 Stereoscopes by Wheatstone, Brewster, …
  • 1968 Ivan Sutherland
  • 1995 Nintendo Virtual Boy
  • 2012–2017 VR explosion

Currently, the big enablers are the smartphone components.

The main purpose of the lenses in near-eye displays is to set the virtual image further away because we cannot focus too close.
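
To make this concrete with the thin-lens equation (the numbers below are illustrative, not from the talk): a display placed just inside the focal length of the lens produces a distant virtual image. With

    1/f = 1/do + 1/di

a display at do = 38 mm behind a lens of focal length f = 40 mm gives 1/di = 1/40 − 1/38 = −1/760 per mm, i.e., di = −760 mm: a virtual image about 76 cm in front of the eye, at a distance on which we can comfortably accommodate.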

Stereopsis is binocular; the mechanism of vergence is cued by binocular disparity. Focus cues are monocular; the mechanism of accommodation is cued by retinal blur.

The big problem is the vergence-accommodation conflict.

Gaze-contingent focus. For non-presbyopes, the adaptive focus is like the real world, but it requires eye tracking. Presbyopes need a fixed focal plane with correction.

Light field displays are not yet well-developed. The idea is to project multiple different perspectives into different parts of the pupil. Example: tensor displays. Light field displays are limited by diffraction.

The next step is multifocal lenses: point spread function engineering.

The challenges for AR are

  1. Design thin beam combiners using waveguides
  2. Eye box vs. field of view trade-off
  3. Eye tracking
  4. Chromatic aberrations
  5. Occlusions; difficulty: need to block real light

Only a few millimeters of physical display displacement result in a large change of the perceived virtual image.

Monday, September 11, 2017

Retinotopy

The latest issue of Science magazine has an article explaining how the retinotopic map is built during the development of the eye. The authors show that glial cells that ensheath axons relay cues from photoreceptors to induce the differentiation of the photoreceptor target field—the so-called lamina neurons. Thus, glia can play an instructive role in differentiation, helping to direct the spatiotemporal patterning of neurogenesis.

Science 01 Sep 2017: Vol. 357, Issue 6354, pp. 886–891, DOI: 10.1126/science.aan3174

Another recent article demonstrated that there is no retinotopic map further up in the visual system where object recognition takes place.

Science 18 Aug 2017: Vol. 357, Issue 6352, pp. 687–692, DOI: 10.1126/science.aan4800

Glia relay differentiation cues to coordinate neuronal development in Drosophila

Monday, August 21, 2017

biology of color

The 4 August issue of Science (Vol. 357, Issue 6350, eaan0221) has a valuable article on the biology of color, describing the state of the art in the interdisciplinary field of animal coloration, which has seen significant progress in the past 20 years.
mantis shrimp

Thursday, August 17, 2017

Critical thinking in a changing world

Monday evening, Gioia Deucher, the new CEO of swissnex San Francisco on Pier 17, hosted a double event on critical thinking. The first event was only for ETH alumni and consisted of networking, followed by a speech by ETH President Lino Guzzella and a general discussion. Prof. Guzzella noted that students have changed in recent years: despite social media, they have become much nerdier and more socially isolated. Consequently, the ETH has to change how it teaches.

As a professor of mechanical engineering, Guzzella does not expect any new breakthroughs in the physics for building mechanical equipment. What is more important for a mechanical engineer is to understand the context requiring a new machine, grasp the problem holistically, and propose a new approach.

The human genetic code has not changed over the ages; it is still the same as that of hunter-gatherers. Critical thinking is essential, but it is hard to criticize oneself: we depend on a group that mutually criticizes and debates.

This autumn, the ETH is introducing significant changes. In teaching, the emphasis will be more on understanding and solving problems than on rote learning. Students will have the option of project-oriented study and more personal coaching with group study. Among the study directions, the ETH is starting a new department of medicine, which will allow a proper medical curriculum. Initially, the new department will extend only to the bachelor's level, after which students can transfer directly to a Swiss university with a medicine program or switch to a more traditional ETH department like bioinformatics. As we live longer and longer, significant medical progress is necessary to maintain quality of life into old age.

When a question came up about ETH's plans for massive open online courses (MOOCs), Prof. Guzzella stated that they run counter to the new direction of fostering critical thinking and teamwork: students need physical proximity and a shared experience to become extraordinary people.

The public second event, which had unexpectedly high attendance, started with lightning talks and a panel discussion, followed by a discussion with the audience and finally a standing dinner with animated discussions and networking.

The speakers were Lino Guzzella, President of ETH Zurich and Professor for thermotronics; Gerd Folkers, Chair Science Studies and Critical Thinking Initiative at ETH and former Head of the Collegium Helveticum, a joint think-tank of ETH and University of Zurich; Hans Ulrich Gumbrecht, Professor in Literature in the Departments of Comparative Literature and of French & Italian at Stanford; Philippe Kahn, the CEO of Fullpower, the creative team behind the Sleeptracker IoT Smartbed technology platform and the MotionX Wearable Technology platform. The moderator was Chris Luebkeman, Arup Fellow and Global Director of Arup Foresight.

There was a consensus that, to contribute to the well-being and progress of society, it is indispensable to excel in critical thinking and bring about paradigm shifts. There is no point for a bright mind in just doing repetitive intellectual tasks, as at the Academy of Projectors. Critical thinking requires a fertile environment; therefore, creating groups and projects is more important than promoting individual excellence.

Publications are a very bad metric. A paper requires the unpaid work of three reviewers and carries a high social cost, yet 52% of publications are never cited and consequently have no value because they do not contribute to society.

Excellence in research requires freedom and money. Professors should not be told which research to conduct and should not waste time chasing grants. Science is for the good of society, and society should fund research and tuition at universities (I never had to pay a penny of tuition for my diploma in mathematics and my doctorate in informatics). Critical thinking is what prevents the Lagado of Gulliver's third voyage: a university should be a habitat for scientists thinking critically in a changing world, not an Academy of Projectors.

When Stanford wanted to introduce the option for STEM students to major or minor in literature, Prof. Gumbrecht was the strongest opponent. However, after the first year, he realized that his best students had all come from STEM, and he has become a strong advocate for the program.

speculative learning machine at the Academy of Projectors in Lagado

Thursday, July 6, 2017

Well-being in the San Francisco Bay Area

At the University of Pennsylvania’s Positive Psychology Center, Martin Seligman and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project used machine learning and natural language processing to sift through Twitter. They have been able to rank each of the 3235 U.S. counties according to well-being, depression, trust, and five personality traits.

For the Bay Area, the rankings are:

Rank    County           Percentile
8       Marin            99.8
17      Sonoma           99.5
26      San Francisco    99.2
36      Napa             98.9
37      San Mateo        98.9
61      Santa Clara      98.1
120     Santa Cruz       96.3
341     Alameda          89.5
455     Sacramento       86.0
505     Contra Costa     84.4
1429    Solano           55.9

If you live in the U.S., you can check your county in their online map. For example, Kings County in New York ranks 448, while the District of Columbia ranks 49.

How is your well-being?

well-being ranks of Bay Area counties

Tuesday, July 4, 2017

Imaging and Astronomy

At the 2018 IS&T International Symposium on Electronic Imaging (EI 2018), taking place 28 January – 1 February 2018 at the Hyatt Regency San Francisco Airport, Burlingame, California, Prof. Daniele Marini is organizing a joint session on imaging and astronomy.

This new session brings together amateur and professional astronomers, vision scientists, color scientists, astrophysicists, data visualization specialists, and all others with an interest in astronomy and photography. Astronomers and others interested are invited to submit papers considering the aspects of digital imaging that are relevant for astronomical imaging, image processing, and data visualization, e.g., color reproduction, display, quality, and noise.

We anticipate that the astronomical imaging community will have an exceptional opportunity to connect with digital imaging professionals and exchange experiences. If you work in the field of photography of astronomical subjects, we would be delighted to have you as a speaker discussing your work. Please use this link for your submission.

Rob Buckley, Shoji Tominaga, and Daniele Marini

Daniele Marini (right) receives the IS&T Fellow award with Shoji Tominaga (center) and Rob Buckley (left).

Tuesday, June 20, 2017

Regenerating optic pathways

Less than a mile down Embarcadero Road from Newell Road is Stanford University's Ophthalmology department. In Science Vol. 356, Issue 6342, pp. 1031–1034, three researchers from the School of Medicine report on the current status of retinal ganglion cell (RGC, pink in the figure) regeneration. When the optic nerve is severed, for example after an accident or with glaucoma, the retinal ganglion cells quickly die off. Even if the rest of the retina remains intact, sight is lost.

The retinal ganglion cells are part of the central nervous system, thus unlike in the peripheral nervous system, severed axons do not regenerate. After injury and inflammation, in the eye, there is a balance of activating and inhibiting factors. For example, amacrine cells (orange in the figure) release zinc, which is an inhibitor, while the lens can cause macrophages to release oncomodulin, a protein that promotes RGC axon extension. The challenge is to understand these balancing mechanisms. A further challenge is to regrow the axons correctly all the way to the lateral geniculate nucleus (LGN).

The authors outline three possible avenues for restoring the RGCs and thus sight.

retinal ganglion and amacrine cells in the retina

Monday, May 22, 2017

To bees, edges are green

In color science, we like to start from spectral data. To obtain the relative response of a photoreceptor, we multiply the reflectance function of a stimulus by the illuminant spectrum and then integrate over the visual spectral range, using the receptor's spectral sensitivity function as the integration measure, up to a normalization factor.

Most often, what changes are the stimuli. Sometimes, we change the illuminant to predict the response under a different light source. When we study the response of people with color vision deficiencies, we swap the spectral sensitivity functions; for example, we shift the peak wavelengths of the M or L catch probabilities to simulate deuteranomaly or protanomaly, respectively. For people with normal color vision, the standard values for the peak sensitivities are approximately 430 nm (S-cones), 540 nm (M-cones), and 570 nm (L-cones).
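
As a minimal sketch of this computation in Python (the Gaussian sensitivity curves, flat illuminant, and step-function stimulus below are illustrative stand-ins, not real cone fundamentals or measured data):

    import numpy as np

    wl = np.arange(380.0, 701.0, 1.0)  # visible wavelengths, nm

    def gaussian(peak, width=40.0):
        # stand-in for a receptor sensitivity curve peaking at `peak` nm
        return np.exp(-0.5 * ((wl - peak) / width) ** 2)

    sensitivities = {"S": gaussian(430), "M": gaussian(540), "L": gaussian(570)}
    illuminant = np.ones_like(wl)               # flat, equal-energy illuminant
    reflectance = np.where(wl > 550, 0.9, 0.1)  # a reddish stimulus

    for name, s in sensitivities.items():
        response = np.trapz(reflectance * illuminant * s, wl)
        norm = np.trapz(illuminant * s, wl)     # the normalization factor
        print(f"{name}: {response / norm:.3f}")

    # simulating deuteranomaly amounts to shifting the M peak toward L:
    deuteranomalous_M = gaussian(550)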

The approach is not limited to humans. For example, bees also have three receptors, with peak sensitivities at 344 nm (S), 436 nm (M), and 544 nm (L): their visual spectrum is shifted towards the ultraviolet. If we taught color naming to bees, their red would correspond to our green. Actually, looking at the honeybee (Apis mellifera) sensitivity functions, their color vision is different from ours because they have a secondary peak in the UV region. With only 10,000 ommatidia, their vision also has a much lower spatial resolution.

spectral sensitivity functions of the honeybee (Apis mellifera) receptors

In their paper Multispectral images of flowers reveal the adaptive significance of using long-wavelength-sensitive receptors for edge detection in bees, Vera Vasas et al. use a collection of multispectral photographs of flowers preferred by bees.

Assuming bees and flowers coevolved to maximize pollination, the authors perform an interesting statistical analysis of what bees would see, to determine under what condition they can best recognize the flower's center areas where the nectar and the stamens/carpels are located. An important boundary condition is that the process has to work in the presence of movement, as flowers are swayed by Zephyr.

The statistical analysis suggests that bees use only the L-receptors to identify edges, segment the visual field, and detect movement. This process is different from ours: we humans use the M- and L-receptors to analyze an essentially monochromatic image.
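
A sketch of what such single-channel edge detection might look like, assuming a hypothetical multispectral image cube and a Gaussian stand-in for the bee L-receptor curve:

    import numpy as np

    wl = np.arange(300.0, 651.0, 10.0)                  # spectral bands, nm
    L_sens = np.exp(-0.5 * ((wl - 544.0) / 40.0) ** 2)  # bee L peak from above

    rng = np.random.default_rng(0)
    multispectral = rng.random((64, 64, wl.size))       # placeholder flower image

    # per-pixel L catch: integrate the spectrum against the L sensitivity
    L_channel = np.trapz(multispectral * L_sens, wl, axis=2)

    # the gradient magnitude of the single L channel yields the edge map
    gy, gx = np.gradient(L_channel)
    edges = np.hypot(gx, gy)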

Citation: Vasas, V., Hanley, D., Kevan, P.G. et al. J Comp Physiol A (2017) 203: 301. doi:10.1007/s00359-017-1156-x

Monday, April 24, 2017

Juggling Tools

Discussions about imaging invariably mention imaging pipelines. A simple pipeline to transform the image data to a different color space may have three stages: a lookup table to linearize the signal, a linear approximation to the second color space, and a lookup table to model the non-linearity of the target space. As an imaging product evolves, engineers add more pipeline stages: tone correction, gamut mapping, anti-aliasing, de-noising, sharpening, blurring, etc.
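
A minimal sketch of such a three-stage pipeline (the table sizes and matrix values are illustrative only):

    import numpy as np

    # stage 1: lookup table that linearizes an 8-bit signal (undo a 2.2 gamma)
    linearize_lut = np.linspace(0.0, 1.0, 256) ** 2.2
    # stage 2: linear approximation to the second color space
    M = np.array([[0.41, 0.36, 0.18],
                  [0.21, 0.72, 0.07],
                  [0.02, 0.12, 0.95]])
    # stage 3: lookup table modeling the nonlinearity of the target space
    encode_lut = np.linspace(0.0, 1.0, 4096) ** (1 / 2.2)

    def pipeline(rgb8):
        linear = linearize_lut[rgb8]              # stage 1
        target = np.clip(linear @ M.T, 0.0, 1.0)  # stage 2
        idx = (target * 4095).astype(int)         # discretization happens here
        return encode_lut[idx]                    # stage 3

    print(pipeline(np.array([200, 128, 30])))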

In the early days of digital image processing, researchers quickly realized that imaging pipelines should be considered harmful because, due to discretization, the image space becomes increasingly sparse at each stage. However, in the early 1990s, with the early digital cameras and consumer color printers, imaging pipelines came back. After some 25 years of experience, engineers have become more careful with pipelines, but they are still a trap.

In data analytics, people often make a similar mistake. There are also three basic steps, namely data wrangling, statistical analysis, and presentation of the results. As development progresses, the analysis becomes richer: when the data is a signal, it is filtered in various ways to create different views, statistical analyses are applied, the data is modeled, classifiers are deployed, estimates and inferences are computed, etc. Each step is often considered a separate task, encapsulated in a script that parses a comma-separated values (CSV) data file, calls one or more functions, and then writes out a new CSV file for the next stage.

The pipeline is not a good model to use when architecting a complex data processing endeavor.

I cannot remember if it was 1976 or 1978 when at PARC the design of the Dorado was finished and Chuck Thacker hand-wrote the first formal note on the next workstation: the Dragon. While the Dorado had a bit-sliced processor in ECL technology, the Dragon was designed as a multi-processor full-custom VLSI system in nMOS technology.

The design was much more complex than any chip design previously attempted, especially after the underlying technology was switched from nMOS to CMOS. It became immediately evident that new design automation (DA) tools were needed to handle such big VLSI chips.

A system based on full-custom VLSI design was a sequence of iterations of the following steps: design a circuit as a schematic, lay out the symbolic circuit geometry, check the design rules, perform logic and timing analysis, create a MOSIS tape, debug the chip. Using stepwise refinement, the process was repeated at the cadence of the MOSIS runs. In reality, the process was very messy because, at the same time, the physicists were working on the CMOS fab, the designers were creating the layout, the DA people were writing the tools, and the system people were porting the Cedar operating system. In the Computer Science Laboratory alone, about 50 scientists were working on the Dragon project.

The design rule checker Spinifex played a critical role because it parsed the layout created with ChipNDale, analyzed the geometry, flagged the design rule errors, and generated the various input files for the logic simulator Rosemary and the timing simulator Thyme. Originally, Spinifex was an elegant hierarchical design rule checker that could verify all the geometry of a layout in memory. However, with the transition from nMOS to CMOS, the designers moved more and more to a partially flat design, which broke Spinifex. The situation was exacerbated by the endless negotiations between designers and physicists to allow for exceptions to the rules, leading to a number of complementary specialized design rule checkers.

With 50 scientists on the project, ChipNDale, Rosemary, and Thyme were also evolving rapidly. With the time pressure of the tape-outs, there were often inconsistencies among the various parsers. As the whipping boy in the middle of all this, one morning while showering, I had an idea: the concept of a pipeline was contra naturam compared to the work process. The Smalltalk researchers at the other end of the building had an implementation process in which a tree structure described some gestalt, and methods were written to decorate this representation of the gestalt.

In the following meeting, I proposed to define a data structure representing a chip. Tools like the circuit designer, the layout design tool, and the routers would add to the structure while tools like the design rule checkers and simulators would analyze the structure, with their output being further decorations added to the data structure. Even the documentation tools could be integrated. I did not expect this to have any consequence, but there were some very smart researchers in the room. Bertrand Serlet and Rick Barth implemented this paradigm and project representation and called it Core.

The power was immediately manifest. Everybody chipped in: Christian Jacobi, Christian Le Cocq, Pradeep Sindhu, Louis Monier, Mike Spreitzer and others joined Bertrand and Rick in rewriting the entire tool set around Core. Bob Hagman wrote the Summoner, which summoned all Dorados at PARC and dispatched parallel builds.

Core became an incredible game changer. While before there was never an entirely consistent system, now we could do nightly builds of the tools and the chips. Besides, the tools were no longer broken at the interfaces all the time.

The lubricant of Silicon Valley is the brains wandering from one company to the other. When one brain wandered to the other side of Coyote Hill, the core concept gradually became an important architectural paradigm that is at the basis of some modern operating systems.

If you are a data scientist, do not think in terms of scripts for pipelines connected by CSV files. Think of a core structure representing your data and the problem you are trying to solve. Think of literate programs that decorate your core structure. When you make the core structure persistent, think rich metadata and databases, not files with plain tables. Last but not least, your report should also be generated automatically by the system.
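
A toy sketch of the decorated-core idea in Python (all names are invented; the point is the shape of the design, not the original Cedar implementation):

    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict

    @dataclass
    class Core:
        data: Any                                  # the gestalt: your raw data
        decorations: Dict[str, Any] = field(default_factory=dict)

        def decorate(self, name: str, tool: Callable[["Core"], Any]) -> "Core":
            # each tool adds a decoration; nothing is re-parsed from files
            self.decorations[name] = tool(self)
            return self

    core = Core(data=[1.0, 2.0, 4.0, 8.0])
    core.decorate("mean", lambda c: sum(c.data) / len(c.data))
    core.decorate("report", lambda c: f"n={len(c.data)}, mean={c.decorations['mean']}")
    print(core.decorations["report"])   # the report is generated, not written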

data + structure = knowledge

Thursday, April 20, 2017

Free Citizenship Workshop May 12

On May 12th the International Rescue Committee (IRC) is holding a free citizenship workshop hosted at and supported by Airbnb HQ located at 888 Brannan St. in San Francisco. The event starts at 1:30pm and ends at 4:30pm. Flyers are available online: English Flyer & Spanish Flyer. There will be free food and each client will be offered a $10 Clipper Card to help with transportation.

At the workshop, clients will get help from the IRC to apply for citizenship (submit the N-400), submit a fee waiver request (it's $725 to apply otherwise), and prepare for the naturalization test. All cases will be reviewed, filed, and expertly managed by an IRC Department of Justice accredited legal representative, who will serve as the clients' legal representative with USCIS, alert clients to updates in their cases, and provide advice throughout the entire process. All services are free, and the event is open to the public. Registration is required: register online at http://www.citizenshipSF.eventbrite.com, by phone at (408) 658-9206, or by email to Kayla.Ladd@Rescue.org. Lots of options!

The International Rescue Committee (IRC) is an international non-profit organization founded in 1933 at the request of Albert Einstein. IRC is at work in more than 40 countries and 28 U.S. cities and each year its programs serve 23 million people worldwide.

Thursday, April 13, 2017

Computational Imaging for Robust Sensing and Vision

In the early days of digital imaging, we were excited about having the images in numerical form and not being bound by the laws of physics. We had big ideas and quickly set out to realize them. However, we immediately reached the boundaries of the digital world: the computers of the day were too slow to process images, did not have enough memory, and the I/O was inadequate (from limited sensors to non-existent color printers).

Now the time has finally come when these dreams can be realized: computational color imaging has become possible thanks to good sensors and displays and racks full of general-purpose graphics processing units (GPGPUs) with hundreds of gigabytes of primary memory and petabytes of secondary storage. All this at an affordable price.

On Wednesday, 12 April 2017, Felix Heide gave a talk at the Stanford Center for Image Systems Engineering (SCIEN) with the title Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision. He presented three implementations.

One application is image classification. In the last couple of years, we have seen what is possible with deep learning when you have a big Hadoop server farm and millions of users who provide large, carefully labeled data sets, creating gigantic training sets for machine learning. Felix Heide uses Bayesian inference to implement a much better system that is robust and fast: it makes better use of the available ground truth and uses proximal optimization to reduce the computational cost.

To facilitate the development of new algorithms, Felix Heide has created the ProxImaL Python-embedded modeling language for image optimization problems, available from www.proximal-lang.org.
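
ProxImaL expresses such problems declaratively; without reproducing its API, here is a generic proximal-gradient (ISTA) sketch of the kind of problem it solves, minimizing 0.5*||Ax - b||^2 + lam*||x||_1 with a soft-thresholding proximal step:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))             # wide measurement operator
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.5, -2.0, 0.7]         # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(40)

    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(100)
    for _ in range(500):
        grad = A.T @ (A @ x - b)                   # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)  # prox of lam*||.||_1

    print(np.flatnonzero(np.abs(x) > 0.1))         # recovers the support [5 37 80]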

computational imaging

Quantum imaging beyond the classical Rayleigh limit

A decade has passed since we worked on quantum imaging, as we reported in an article in the New Journal of Physics that was downloaded 2316 times. We had described the experimental set-up in a second article in Optics Express that was viewed 540 times. Interestingly, the second article was most popular in May 2016, indicating we were some 6 years ahead of our time with this publication, and over 10 years ahead counting from when Neil Gunther started actively working on the experiment. The problem with coming too early is that it is more difficult to get funding.

Edoardo Charbon continued the research at the Technical University of Delft, where he built a camera that used a built-in flash to create a three-dimensional model of the scene and the sunlight to create a texture map that could be draped over the 3-d model. This is possible because the photons from the built-in flash—a chaotic light source that produces photons from excited particles—and those from the sun—a thermal radiator (hot body)—have different statistics.

We looked at the first- and second-order correlation functions to tell the photons from the flash apart from those originating in the sun. Since the camera controlled the flash, the photons' time of flight could be computed to create the 3-d model. The camera worked well up to a distance of 50 meters.
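
A toy illustration of the second-order correlation measurement (simulated counts, not our experimental data): for chaotic light g2(0) approaches 2, for coherent light it approaches 1, and this difference is what tells the two photon populations apart:

    import numpy as np

    rng = np.random.default_rng(2)
    frames = 100_000

    # chaotic source: exponentially distributed intensity drives Poisson counts
    intensity = rng.exponential(scale=2.0, size=frames)
    n1 = rng.poisson(intensity)                    # SPAD pixel 1
    n2 = rng.poisson(intensity)                    # SPAD pixel 2, same speckle

    def g2_zero(a, b):
        return np.mean(a * b) / (np.mean(a) * np.mean(b))

    print(f"chaotic:  g2(0) ~ {g2_zero(n1, n2):.2f}")     # about 2

    coherent = rng.poisson(2.0, size=(2, frames))         # constant intensity
    print(f"coherent: g2(0) ~ {g2_zero(*coherent):.2f}")  # about 1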

I am glad that Dmitri Boiko is still continuing this line of research. With a group at the Fondazione Bruno Kessler (FBK) in Trento, Italy, and a group at the Institute of Applied Physics at the University of Bern in Bern, Switzerland, he is working on a new generation of optical microscope systems that exploit the properties of entangled photons to acquire images at a resolution beyond the classical Rayleigh limit.

Read the SPIE Newsroom article Novel CMOS sensors for improved quantum imaging and the open access invited paper SUPERTWIN: towards 100kpixel CMOS quantum image sensors for quantum optics applications in Proc. SPIE 10111, Quantum Sensing and Nano Electronics and Photonics XIV, 101112L (January 27, 2017).

Thursday, March 23, 2017

Breaking the barriers to true augmented reality

Today, when you run a job on a digital press, you just turn it on, load the stock, and start printing. An army of sensors and feedback loops, working with mathematical models, sets up the press. In the heyday of color printing, the situation was very different: skilled press operators would spend hours making the press ready, with only a densitometer and their eyes. It took them years of experience to achieve their master status.

A big breakthrough came in 1968, when Felix Brunner invented the print control strip, which made press make-ready more of a technical process than a magic ceremony. Felix Brunner lived in Corippo, Val Verzasca.

Corippo seen from Fiorenzo Scaroni's rustico in Lavertezzo. © 13 July 2003 by Yoko Nonaka

Corippo is a beautiful village, but it had been abandoned by emigrants—people like Michael Silacci of the Opus One Winery, whose grandparents had come to California and never went back. Corippo is still the smallest municipality in Switzerland, with a population of just 13.

Corippo is so stunning that in 1975 it became a protected heritage village. This was quite difficult because the village had become dilapidated. Switzerland raised the funds to transform it into a state-of-the-art modern village that would attract a sophisticated population like Felix Brunner. The challenge was to rebuild it to modern architectural standards without changing its atmosphere and look.

The architecture department at ETH Zurich built a 3D model of the entire village; then, one by one, they started rebuilding the interiors of the houses to the state of the art. The department acquired an Evans and Sutherland Picture System, and at each planning step the commission walked through the virtual village to ascertain that nothing changed the spirit outdoors. For example, if a roof was raised, it was not allowed to cast new and unexpected shadows. If a window was changed, the character of the street was not allowed to change for a passerby, and the view had to feel original from any window.

Although the Picture System was limited to 35,000 polygons, the experience was truly impressive for the planners. If you have a chance to visit Corippo, you will be surprised by the realization. The system was such a breakthrough for urbanists that UNESCO used it for the restoration of Venice. I was also sufficiently impressed to sit down and implement an interactive 3D rendering system, although on the PDP-11 with 56 KB of memory running RT-11, I could only display wireframes.

My next related experience was in 1993, when Canon had developed a wearable display and was looking for an acquirer of the rendering software. While the 1975 system for Corippo rendered coarse polygons, by the early 1990s it was possible to do ray tracing, although it took an SGI RealityEngine for each eye. One application was to train astronauts for building a space station.

In the quest to find an interested party for the software, I had the chance to visit almost all companies in the San Francisco Bay Area that were developing wearable displays. On one side, using ray tracing instead of rendering plain solid-color polygons made the scene feel more natural, but the big advantage over the Picture System was being immersed in the virtual scene instead of looking at a display.

There were still quite a few drawbacks. For one, the helmets felt like they were made of lead. The models were still crude: to follow the head movements, the refresh rate should ideally have been 90 Hz, but even with simple scenes it was typically just 15 or 30 Hz. The worst perceptual problem, however, was the lag, which disabled the physiological equilibrium system and caused motion sickness. A positive development was the transition from the dials and joysticks of 1975 to gloves providing a haptic user interface.

People from my generation spent 13 years in school learning technical drawing, which allows us to mentally visualize a 3D scene from three orthographic projections or from an axonometric projection with perspective. In general, however, understanding a 3D scene from projections is difficult for most people. The value of an immersive display is that you can move your head and thus decode the scene more easily. Consequently, there is still high interest in wearable displays.

Today, a decent smartphone with CPU, GPU, and DSP has sufficient computing power to do all the rendering necessary for a wearable display. The electronics are so light that they fit in a pair of big spectacles that are relatively comfortable to wear and affordable for professionals. Last year, Bernard Kress predicted that 2017 would be the year of the wearable display, with dozens of brands at prices affordable to consumers. Why is it not happening?

On March 14, 2017, Prof. Christian Sandor of the Nara Institute of Science and Technology (NAIST) gave a talk with the title Breaking the Barriers to True Augmented Reality at SCIEN at Stanford, where he suggested the problem might be that today's developers are not able to augment reality so that the viewer cannot tell what is real. He showed the example of Burnar, where flames are mixed with the view of the user's hands; some users had to interrupt the experiment because their hands felt too hot.

Christian Sandor, Burnar

True AR has the following two requirements:

  1. undetectable modification of user's perception
  2. goal: seamless blend of real and virtual world

On a spectrum from manipulating atoms with controlled matter to manipulating perception with implanted AR, current systems should aim for surround AR (a full light-field display) or personalized AR (a perceivable subset). In a full light-field display, the display functions as a window, but with the problem of matching accommodation and vergence. Personalized AR is the smarter approach: the human visual system is measured, and only a perceivable subset of the light field is generated, reducing the required display pixels by several orders of magnitude.

In many current systems, the part of the image generated from a computer model is rendered as a semitransparent blue overlay, hence it is perceived as separate from the real world. True AR requires a seamless blend. The most difficult step is the alignment calibration with the single point active alignment method (SPAAM). The breakthrough from NAIST is that they need to perform SPAAM only once; after that, they use eye tracking for calibration.
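
At its core, SPAAM estimates a 3×4 projection matrix from the 2D–3D correspondences collected while the user aligns a crosshair with a world point. Below is a minimal sketch of that estimation step using the textbook direct linear transform on synthetic, noise-free data (this is not NAIST's code):

    import numpy as np

    rng = np.random.default_rng(3)
    P_true = rng.standard_normal((3, 4))                 # unknown projection
    X = np.c_[rng.uniform(-1, 1, (12, 3)), np.ones(12)]  # 3D points, homogeneous
    uvw = X @ P_true.T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # observed 2D alignments

    rows = []
    for (x, y, z, w), (u, v) in zip(X, uv):
        rows.append([x, y, z, w, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u*w])
        rows.append([0, 0, 0, 0, x, y, z, w, -v*x, -v*y, -v*z, -v*w])

    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)                             # null-space solution
    P /= np.linalg.norm(P)
    P_true /= np.linalg.norm(P_true)
    print(np.allclose(np.abs(P), np.abs(P_true), atol=1e-6))  # equal up to scale/sign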

The technology is hard to implement. The HoloLens has solved the latency problem, but Microsoft invested thousands of person-years in developing the system. The optics are very difficult, and only a few universities teach the subject.

Thursday, February 9, 2017

mirror mirror on the wall

Last November, I mentioned an app that makes you look like you are wearing makeup during a teleconference. Now Panasonic lets you take it a step further: a new mirror analyzes the skin on your face and prints out makeup that you can apply directly to your face.

The aim of the Snow Beauty Mirror is “to let people become what they want to be,” said Panasonic’s Sachiko Kawaguchi, who is in charge of the product’s development. “Since 2012 or 2013, many female high school students have taken advantage of blogs and other platforms to spread their own messages,” Kawaguchi said. “Now the trend is that, in this digital era, they change their faces (on a photo) as they like to make them appear as they want to be.”

When one sits in front of the computerized mirror, a camera and sensors start scanning the face to check the skin. It then shines a light to analyze reflection and absorption rates, find flaws like dark spots, wrinkles, and large pores, and offer tips on how to improve appearances.

But this is when the real “magic” begins. Tap print on the results screen and a special printer for the mirror churns out an ultrathin, 100-nanometer makeup-coated patch that is tailor-made for the person examined. The patch is made of a safe material often used for surgery so it can be directly applied to the face. Once the patch settles, it is barely noticeable and resists falling off unless sprayed with water.

The technologies behind the patch involve Panasonic’s know-how in organic light-emitting diodes (OLED), Kawaguchi said. By using the company’s technology to spray OLED material precisely onto display substrates, the printer connected to the computerized mirror prints a makeup ink that is made of material similar to that used in foundation, she added.

Read the full article by Shusuke Murai in the Japan Times News.

Panasonic Corp. engineer Masayo Fuchigami displays an ultrathin makeup patch during a demonstration of the Snow Beauty Mirror on Dec. 1 in Tokyo. | Shusuke Murai

Wednesday, February 8, 2017

Konica Minolta, Pioneer set to merge OLED lighting ops

Konica Minolta and Pioneer are concluding talks to merge their OLED lighting businesses under a 50–50 joint venture as early as spring. The Japanese companies will spin off their organic light-emitting diode development and sales operations into a new venture that will be an equity-method affiliate for both.

The two companies aim primarily to gain an edge in the automotive OLED market, which is seen expanding rapidly. Konica Minolta's strength in bendable lighting materials made with plastic-film substrates will be combined with Pioneer's own OLED expertise and broad business network in the automotive industry. Taillights and interior lighting are likely automotive applications.

Read the full story in Nikkei Asian Review.

yellow may tire autistic children

A research team including Nobuo Masataka, a professor at Kyoto University’s Primate Research Institute, has found that boys with autism spectrum disorder (ASD) tend not to like yellow but show a preference for green. “Yellow may tire autistic children. I want people to take this into account when they use the color on signboards and elsewhere,” Masataka said.

The team, which also includes researchers from France's University of Rennes 1, confirmed the color preference of boys with the disorder, according to an article recently published in the journal Frontiers in Psychology. In the study, the color preferences of 29 autistic boys aged 4 to 17 were compared with those of 38 age-matched typically developing (TD) boys. All participants were recruited in France, which has clear diagnostic criteria for autism spectrum disorder.

Shown cards of six colors—red, blue, yellow, green, brown and pink—the children were asked which color they liked. Yellow was liked by the TD boys but far less preferred by the ASD boys. On the other hand, green and brown were liked more by the boys in the ASD group than by those in the TD group, while red and blue were favored to similar degrees by both groups. Pink was unpopular in both groups.

Given the relatively small sample size in each of the three age groups, the failure to find any difference in preference scores between TD children and children with ASD with regard to red, blue and pink might be attributable to a ceiling/floor effect.

The article said yellow has the highest luminance value among the six colors. “The observed aversion to this color might reflect hypersensitivity” of children with ASD, the article said. There is also a general consensus that yellow is the most fatiguing color. When yellow is perceived, both the L and M cones must be involved, so the perception of yellow should be the most heavily sensory-loaded of any color. It is bearable for TD children but could be overloading for children with ASD, whose sensitivity to sensory stimulation is enhanced.

Marine Grandgeorge and Nobuo Masataka: "Atypical Color Preference in Children with Autism Spectrum Disorder," Front. Psychol., 23 December 2016, https://doi.org/10.3389/fpsyg.2016.01976


the sun can make the bamboo straw wall of a tea house repulsive

that すずみだい (suzumidai, a platform for enjoying the evening cool) might not be that restful after all

is a golden obi the best choice?

Thursday, January 19, 2017

Unable to complete backup. An error occurred while creating the backup folder

For the past four years, I have been backing up my laptop on a G-Technology Firewire disk connected to the hub in my display. So far it worked without a hitch, but a few days ago I started to get the error message

Time Machine couldn’t complete the backup to “hikae”.
Unable to complete backup. An error occurred while creating the backup folder.

The message appeared without a time pattern, so it was not clear what the cause could be. The drive could not be unmounted; it had to be force-ejected and power-cycled, after which it worked again until the next irregular event, maybe one backup out of ten.

When I ran Disk Utility to see if something was wrong with the drive, it told me the boot block was corrupted. Fixing it did not make the Time Machine problem go away, so I must have corrupted the boot block with the force-ejects. Time to find out what was going on.

The next time it happened, I tried to eject the drive from Disk Utility, which gave me the message

Disk cannot be unmounted because it is in use.

Who on Earth would be using it? Did Time Machine hang? Unix to the rescue; let us get the list of open files:

sudo lsof /Volumes/hikae

The user is root and the commands are mds and mds_stores, working on index files: they are indexing the drive for Spotlight. Why on Earth would an operating system index a backup drive by default? Let us get rid of that:

sudo mdutil -i off /Volumes/hikae

However, in this state, the command returns "Error: unable to perform operation. (-400) Error: unknown indexing state." This might mean Spotlight has crashed or is otherwise hanging.

Force-eject and power-cycle the drive. This time mdutil works:

/Volumes/hikae:
2017-01-18 17:10:00.657 mdutil[25737:7707511] mdutil disabling Spotlight: /Volumes/hikae -> kMDConfigSearchLevelFSSearchOnly
Indexing and searching disabled.

For the past two days, I have no longer experienced the problem.

If you are the product manager, why is Spotlight indexing backup drives by default?

If you prefer using a GUI, drag and drop your backup drive icon into the privacy pane of the Spotlight preference window (I did not try this):

Tell Spotlight not to index your backup drive

Wednesday, January 11, 2017

Designing and assessing near-eye displays to increase user inclusivity

Today Emily Cooper, of the Psychological and Brain Sciences Department at Dartmouth College, gave a talk on designing and assessing near-eye displays to increase user inclusivity. A near-eye display is a wearable display, for example an augmented reality (AR) or a virtual reality (VR) display.

With most near-eye displays, it is not possible or recommended to wear glasses. Some displays, like the HTC Vive, have lenses available to correct accommodation. We do want to integrate flexible correction into near-eye displays. This can be achieved with a liquid polymer lens whose membrane can be tuned.

In her lab, for the refraction self-test, the presenter uses an EyeNetra auto-refractometer, which is controlled with a smartphone.

The near-eye display correction is as good as with contact lenses, both in sharpness and in fusion correction. Therefore, it is not necessary to make users wear their corrective glasses.

There are two factors determining the image quality of a near-eye display: accommodation and vergence. When vergence is incorrect, users get tired after 20 minutes and their reaction time is slower.

The solution is to use tunable optics to match the user's visual shortcomings.

A different problem is presbyopia, which is a reduction of the accommodation range. For people older than 45 years, an uncorrected stereo display provides better image quality than correcting the accommodation. However, tunable optics provide better vergence for older people.

A harder problem is people with low vision, regardless of their age. In her lab, Emily Cooper investigated whether consumer-grade augmented reality displays are good enough to help users with low vision.

She used the HoloLens, whose near-infrared (NIR) depth camera is the key feature for addressing this problem. Her proposal is to overlay the depth information as a luminance map over the image, so that near objects are light and far objects are dark. This allows users to get by with their residual vision.

Instead of a luminance overlay, a color overlay also works: the hue varies along a gradient from warm to cold colors depending on distance. She also tried to encode depth with flicker, but it does not work well.
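
A sketch of the depth-to-overlay mapping described above (the depth map, image, and blending factor are placeholders):

    import numpy as np

    rng = np.random.default_rng(4)
    depth = rng.uniform(0.5, 5.0, (120, 160))  # meters; stands in for the NIR depth map
    image = rng.random((120, 160))             # placeholder camera image

    near, far = depth.min(), depth.max()
    luminance = 1.0 - (depth - near) / (far - near)    # near -> light, far -> dark

    alpha = 0.6                                        # overlay strength
    overlay = alpha * luminance + (1 - alpha) * image

    # color variant: map normalized depth to a warm-to-cold hue ramp
    hue = (2.0 / 3.0) * (depth - near) / (far - near)  # 0 = red (near), 2/3 = blue (far)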

With the HoloLens, it is possible to integrate OCR in the near-eye display and then read aloud all text in the field of view using the HoloLens's four speakers, making the sound come from the location where the text is written.