Computational Mirrors: Blind Inverse Light Transport
by Deep Matrix Factorization

Teaser
[Teaser video: left, input; center, our result; right, ground truth]

Looking at the play of shadows in the cluttered scene on the left, can you tell what is happening behind the camera?

From this input, our method estimates a video of the hidden scene, seen in the center.

For reference, the ground truth video is shown on the right. The input video on the left was recorded while this video was being projected on the hidden back wall of the room. See our supplemental video for more results and an overview of the method.

Abstract

We recover a video of the motion taking place in a hidden scene by observing changes in indirect illumination in a nearby uncalibrated visible region. We solve this problem by factoring the observed video into a matrix product between the unknown hidden scene video and an unknown light transport matrix. This task is extremely ill-posed, as any non-negative factorization will satisfy the data. Inspired by recent work on the Deep Image Prior, we parameterize the factor matrices using randomly initialized convolutional neural networks trained in a one-off manner, and show that this results in decompositions that reflect the true motion in the hidden scene.
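To make the factorization concrete: if Z denotes the observed video with one flattened frame per column, the method seeks non-negative factors T (light transport matrix) and L (hidden-scene video) with Z ≈ T L, where each factor is produced by its own randomly initialized CNN whose weights are optimized against the data term. The sketch below illustrates this setup; it assumes PyTorch, and the symbols Z, T, L, the network shapes, and all variable names are illustrative choices, not the authors' released implementation.

    # Illustrative sketch of blind deep matrix factorization (not the released code).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    n_obs, n_hidden, n_frames = 16 * 16, 8 * 8, 64
    Z = torch.rand(n_obs, n_frames)  # observed video: one flattened frame per column

    class FactorGenerator(nn.Module):
        """Small CNN mapping a fixed random code to a non-negative factor matrix."""
        def __init__(self, rows, cols):
            super().__init__()
            self.code = torch.randn(1, 8, rows, cols)  # fixed random input, Deep Image Prior style
            self.net = nn.Sequential(
                nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 3, padding=1), nn.Softplus(),  # keep the factor non-negative
            )

        def forward(self):
            return self.net(self.code)[0, 0]  # (rows, cols)

    gen_T = FactorGenerator(n_obs, n_hidden)     # generates the light transport matrix T
    gen_L = FactorGenerator(n_hidden, n_frames)  # generates the hidden-scene video L

    opt = torch.optim.Adam(list(gen_T.parameters()) + list(gen_L.parameters()), lr=1e-3)
    for step in range(500):
        T, L = gen_T(), gen_L()
        loss = ((T @ L - Z) ** 2).mean()  # data term: Z ≈ T L
        opt.zero_grad()
        loss.backward()
        opt.step()

    hidden_video = gen_L().detach().reshape(8, 8, n_frames)  # recovered hidden frames

Any non-negative pair (T, L) fits the data equally well in principle; the structure of the convolutional generators is what biases the optimization toward decompositions reflecting the true hidden motion.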

Downloads

  • Code: GitHub
  • Data: dots-sequence raw data. Other datasets are available upon request.
    Place the downloaded folder in './data/light_transport/' to match the instructions on GitHub.

Acknowledgements

This work was supported, in part, by DARPA under Contract No. HR0011-16-C-0030, and by NSF under Grant No. CCF-1816209.