Thursday, April 13, 2017

Computational Imaging for Robust Sensing and Vision

In the early days of digital imaging, we were excited to have images in numerical form and no longer be bound by the laws of physics. We had big ideas and rushed to realize them. However, we immediately hit the limits of the digital world: the computers of the day were too slow to process images, did not have enough memory, and the I/O was inadequate (from limited sensors to non-existent color printers).

Now the time has finally come when these dreams can be realized and computational color imaging has become possible, thanks to good sensors and displays, and racks full of general-purpose graphics processing units (GPGPUs) with hundreds of gigabytes of primary memory and petabytes of secondary storage. All this, at an affordable price.

On Wednesday, 12 April 2017, Felix Heide gave a talk at the Stanford Center for Image Systems Engineering (SCIEN) titled Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision. He presented three implementations.

One application is image classification. In the last couple of years we have seen what is possible with deep learning when you have a big Hadoop server farm and millions of users who carefully label large data sets, creating gigantic training sets for machine learning. Felix Heide instead uses Bayesian inference to build a system that is more robust and faster: it makes better use of the available ground truth and relies on proximal optimization to reduce the computational cost.
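To give a flavor of what proximal optimization looks like, here is a minimal sketch of the proximal gradient method (ISTA) applied to a generic sparsity-regularized least-squares problem. This is only an illustrative example, not Heide's actual classifier; every name and parameter in it is hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration needs only a gradient step on the smooth data term plus a
    cheap, closed-form proximal step, which is what keeps the cost per
    iteration low.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```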

To facilitate the development of new algorithms, Felix Heide has created the ProxImaL Python-embedded modeling language for image optimization problems, available from www.proximal-lang.org.
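Based on the denoising example in the ProxImaL documentation, a typical problem statement looks roughly like the sketch below; the exact function names and arguments reflect my reading of the library and may differ from the current release.

```python
import numpy as np
from proximal import Variable, Problem, sum_squares, norm1, grad, nonneg

# Noisy observation of an image, scaled to [0, 1] (hypothetical input file).
b = np.load('noisy_image.npy')

# Unknown clean image to recover.
x = Variable(b.shape)

# Data term + total-variation regularizer + nonnegativity constraint.
objective = sum_squares(x - b) + 0.08 * norm1(grad(x)) + nonneg(x)

# ProxImaL compiles the objective into a proximal algorithm and solves it.
prob = Problem(objective)
prob.solve()

denoised = x.value
```

The appeal of such a modeling language is that the objective is written once, declaratively, and the library takes care of splitting it into proximal subproblems and choosing a solver.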

