Thursday, October 5, 2017

Computational Near-Eye Displays with Focus Cues

SCIEN has resumed at Stanford with the talk "Computational Near-Eye Displays with Focus Cues" by Gordon Wetzstein. The presentation is an overview of Wetzstein's research at Stanford.

Inflection points in near-eye displays:

  • 1838 Stereoscopes by Wheatstone, Brewster, …
  • 1968 Ivan Sutherland's head-mounted display
  • 1995 Nintendo Virtual Boy
  • 2012–2017 VR explosion

Currently, the big enablers are smartphone components.

The main purpose of the lenses in a near-eye display is to push the virtual image farther away, because the eye cannot focus on a screen placed just a few centimeters in front of it.
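For the underlying optics, a standard thin-lens sketch (the symbols are mine, not from the talk): with a lens of focal length f and the display placed at a distance d < f from it, the virtual image appears at a distance

    \[ d' = \frac{f\,d}{f - d}, \]

so as the display approaches the focal plane (d → f) the virtual image recedes toward infinity, letting the eye focus comfortably.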

Stereopsis is binocular: vergence is driven by binocular disparity. Focus cues are monocular: accommodation is driven by retinal blur.
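For a sense of scale of the vergence cue, a back-of-envelope calculation (assuming a typical interpupillary distance of 64 mm):

    \[ \theta = 2\arctan\frac{\mathrm{IPD}}{2d}, \]

so fixating at d = 0.5 m gives a vergence angle of about 7.3°, while at d = 2 m it is only about 1.8°.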

The big problem is the vergence-accommodation conflict: a conventional near-eye display has a single fixed focal plane, so the eyes must accommodate to that plane while converging on stereoscopic content rendered at a different depth.

Gaze-contingent focus: for non-presbyopes, adaptive focus behaves like the real world, but it requires eye tracking. Presbyopes, who can no longer accommodate, instead need a fixed focal plane with their refractive correction.
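A minimal sketch of what gaze-contingent (varifocal) focus involves computationally; the display interface (set_focus_diopters) and the simple triangulation are hypothetical illustrations, not the system described in the talk:

    import numpy as np

    def vergence_depth(left_gaze_dir, right_gaze_dir, ipd_m=0.064):
        """Crudely triangulate the fixation distance (meters) from two unit gaze vectors."""
        cos_a = np.clip(np.dot(left_gaze_dir, right_gaze_dir), -1.0, 1.0)
        vergence_angle = max(np.arccos(cos_a), 1e-4)   # avoid division by zero for parallel gaze
        # Symmetric-fixation approximation: half the IPD over the tangent of the half-angle.
        return (ipd_m / 2.0) / np.tan(vergence_angle / 2.0)

    def update_focus(display, left_gaze_dir, right_gaze_dir, presbyope=False):
        """Drive a hypothetical varifocal display from eye-tracking data."""
        if presbyope:
            display.set_focus_diopters(1.0)            # fixed focal plane (here 1 m) plus correction
            return
        d = np.clip(vergence_depth(left_gaze_dir, right_gaze_dir), 0.1, 10.0)
        display.set_focus_diopters(1.0 / d)            # match the focal plane to the fixated depth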

Light field displays are not yet well developed. The idea is to project multiple different perspectives into different parts of the pupil; tensor displays are one example. Light field displays are ultimately limited by diffraction.
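A rough back-of-envelope view of that diffraction limit (my estimate, not a number from the talk): each perspective only passes through a sub-aperture of the pupil, of width a, so its angular blur is on the order of

    \[ \Delta\theta \approx \frac{\lambda}{a}. \]

Splitting a 4 mm pupil into four 1 mm sub-apertures at λ ≈ 550 nm gives Δθ ≈ 5.5×10⁻⁴ rad ≈ 1.9 arcmin, already coarser than the roughly 1 arcmin resolution of the fovea.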

The next step is multifocal lenses: point spread function engineering.

The challenges for AR are:

  1. Design thin beam combiners using waveguides
  2. Eye box vs. field of view trade-off (see the note after this list)
  3. Eye tracking
  4. Chromatic aberrations
  5. Occlusions; difficult because the display must be able to block real-world light
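On item 2, a simplified way to see the eye box vs. field of view trade-off (a conserved-étendue argument, not specific figures from the talk): the combiner optics can only deliver a bounded étendue, roughly

    \[ A_{\text{eye box}} \times \Omega_{\text{FOV}} \lesssim \text{étendue of the combiner}, \]

so for a waveguide of fixed size, widening the field of view necessarily shrinks the region over which the eye can move and still see the whole image.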

Only a few millimeters of physical display displacement result in a large change in the distance of the perceived virtual image.
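To make that concrete with the thin-lens relation above (illustrative numbers, f = 40 mm): a display 38 mm from the lens produces a virtual image at 40·38/(40−38) = 760 mm ≈ 0.76 m, while moving the display only 2 mm closer, to 36 mm, pulls the image in to 40·36/(40−36) = 360 mm ≈ 0.36 m, a change of roughly 1.5 diopters from a 2 mm actuation.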