We present a method to incrementally generate complete 2D or 3D scenes with the following properties: (a) the scene is globally consistent at each step according to a learned scene prior, (b) real observations of a scene can be incorporated while maintaining global consistency, (c) unobserved regions can be hallucinated locally, consistently with previous observations, hallucinations, and global priors, and (d) hallucinations are statistical in nature, i.e., different scenes can be generated from the same observations. To achieve this, we model scene generation as the traversal of a virtual scene by an active agent that, at each step, either perceives an observed part of the scene or generates a local hallucination. The latter can be interpreted as the agent's expectation at this step along its path through the scene and can be applied to autonomous navigation. In the limit of observing real data at each step, our method converges to solving the SLAM problem. It can otherwise sample entirely imagined scenes from prior distributions. Besides autonomous agents, applications include problems where large amounts of data are required for building robust real-world systems but few real samples are available. We demonstrate the method's efficacy on various 2D and 3D data.
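To make the perceive-or-hallucinate loop concrete, here is a minimal, purely illustrative sketch (not the authors' implementation): an agent traverses a 1D row of scene cells, copying real observations where available and otherwise sampling a value from a stand-in "prior" conditioned on previously filled neighbours. The functions `sample_prior` and `build_scene` are hypothetical names introduced for this example.

```python
import random

def sample_prior(context, rng):
    """Stand-in for the learned scene prior: averages known neighbours
    plus small noise, so hallucinations stay locally consistent."""
    if context:
        return sum(context) / len(context) + rng.uniform(-0.1, 0.1)
    return rng.uniform(0.0, 1.0)

def build_scene(observations, size, seed=0):
    """Incrementally fill `size` cells; `observations` maps index -> real value."""
    rng = random.Random(seed)
    scene = [None] * size
    for i in range(size):                       # incremental traversal
        if i in observations:                   # (b) incorporate real data
            scene[i] = observations[i]
        else:                                   # (c) hallucinate locally,
            context = [scene[j] for j in (i - 1, i - 2)
                       if 0 <= j < size and scene[j] is not None]
            scene[i] = sample_prior(context, rng)  # conditioned on past cells
    return scene

# (d) Different seeds yield different completions of the same observations.
obs = {0: 0.5, 4: 0.9}
scene_a = build_scene(obs, 8, seed=1)
scene_b = build_scene(obs, 8, seed=2)
```

When every cell is observed, the loop reduces to copying the measurements, mirroring the paper's remark that the method converges to SLAM in the limit of full real data.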
Benjamin Planche (Siemens Corporate Technology)
Benjamin Planche is a passionate Ph.D. student at the University of Passau and Siemens Corporate Technology. He has been working for more than five years in the fields of computer vision and deep learning, in various research labs around the world (LIRIS in France, Mitsubishi Electric in Japan, Siemens in Germany). Benjamin holds a double Master's degree with first-class honors from INSA-Lyon in France and the University of Passau in Germany. His research focuses on developing smarter visual systems with less data, targeting industrial applications. Benjamin also shares his knowledge and experience on online platforms such as StackOverflow, and applies them to the creation of aesthetic demos.