
Lightfield Simulator
A plenoptic camera is a device that samples the light field[1] using a microlens grid positioned in front of the CCD sensor. This grid consists of many microscopic lenses (often on the order of 100 000) with tiny focal lengths. Each microlens splits what would become a single pixel in an ordinary camera into a set of light rays just before they reach the sensor. The resulting raw image (i.e. the light field) is a mosaic of truncated and shifted tiny pictures of the photographed scene.
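To make this mosaic structure concrete, here is a minimal Python/NumPy sketch (not part of the simulator itself) that rearranges such a raw image into a 4D light-field array L[u, v, s, t]. It assumes an idealized square microlens lattice in which every lenslet covers exactly an n × n block of pixels; real raw images would first need calibration and resampling, and all names and sizes here are illustrative.

```python
import numpy as np

def raw_to_4d(raw, n):
    """Rearrange a raw plenoptic image into a 4D light field L[u, v, s, t].

    Assumes an idealized square microlens lattice where each microlens
    covers exactly an n x n block of sensor pixels.
    (u, v) = pixel position under a microlens, (s, t) = microlens index.
    """
    H, W = raw.shape
    S, T = H // n, W // n                  # number of microlenses per axis
    raw = raw[:S * n, :T * n]              # crop to a whole number of blocks
    # (S, n, T, n) -> (u, v, s, t)
    return raw.reshape(S, n, T, n).transpose(1, 3, 0, 2)

# Example with a synthetic "raw" image of 10x10-pixel microlens subimages.
raw = np.random.rand(400, 600)
L = raw_to_4d(raw, n=10)
print(L.shape)        # (10, 10, 40, 60)

# One sub-aperture (perspective) view: the same pixel under every microlens.
view = L[4, 4]        # shape (40, 60)
```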
Using suitable algorithms[2,3], it is possible to reconstruct from this field various images of the original scene corresponding to different focal planes, or even new perspectives obtained by shifting the point of observation. This is particularly useful for synthesizing 3D views of the imaged scene or for computing its depth map with classical computer-vision algorithms.
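As an illustration of the refocusing idea, the sketch below implements plain spatial-domain shift-and-add refocusing on the 4D array from the previous snippet. It is not the Fourier-domain method of [2]; the parameter name alpha, the sign convention, and the interpolation choices are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(L, alpha):
    """Spatial-domain shift-and-add refocusing of a 4D light field.

    L     : array indexed as L[u, v, s, t]
            ((u, v) = position under a microlens, (s, t) = microlens index).
    alpha : relative focal-plane parameter; alpha = 1 reproduces the plane
            the camera was focused on, other values refocus in front/behind.
    """
    n_u, n_v, S, T = L.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(n_u):
        for v in range(n_v):
            # Each sub-aperture view is shifted in proportion to its angular
            # offset from the center, then all views are averaged.
            du = (1.0 - 1.0 / alpha) * (u - cu)
            dv = (1.0 - 1.0 / alpha) * (v - cv)
            out += nd_shift(L[u, v], (du, dv), order=1, mode='nearest')
    return out / (n_u * n_v)
```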
However, this “new” apparatus produces images whose resolution is well below that of today's standard cameras. In short, this is mainly due to the poor sampling of the light field on the CCD sensor. The story, however, is a bit more complicated than that: one cannot simply increase the resolution of the CCD sensor to get a better-sampled light field, because the physics of light diffraction sets the limit here. To address this problem, several approaches reported in the literature[2,3,4] propose specific layouts or positionings of the microlens lattice to improve the sampling.
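A back-of-the-envelope comparison makes the diffraction argument concrete: the diameter of a diffraction-limited spot (the Airy disk) is roughly 2.44 λ N, with N the working f-number, and can be set against a typical pixel pitch. The numbers below are purely illustrative, not measurements from any particular camera.

```python
# Back-of-the-envelope check of the diffraction limit (illustrative numbers).
wavelength  = 550e-9      # green light, metres
f_number    = 4.0         # assumed working f-number of the microlens
pixel_pitch = 1.4e-6      # a typical small sensor pixel, metres

airy_diameter = 2.44 * wavelength * f_number   # diameter of the Airy disk
print(f"Airy disk: {airy_diameter * 1e6:.2f} um, "
      f"pixel pitch: {pixel_pitch * 1e6:.2f} um")
# ~5.4 um vs 1.4 um: the diffraction spot already covers several pixels,
# so finer pixels alone do not yield a better-sampled light field.
```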
With this in mind, and since it is very expensive and cumbersome to change the geometry or the position of the microlens grid in such a device, I set out to design a plenoptic camera simulator based on the theory of wave optics, in order to study the effect of different types of microlenses (shape, f-number, …) and their geometrical layouts on the sampling of the light field.
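To give an idea of the kind of computation involved, the following is a minimal wave-optics sketch, not the simulator itself: a plane wave passes through a single idealized circular microlens (modelled as a thin-lens quadratic phase) and is propagated to the sensor plane with the angular spectrum method. Grid size, pitch, focal length, and diameter are placeholder values; the full simulator would repeat this per microlens and per lattice layout.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled scalar field over a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0.0)       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters only (placeholders, not calibrated values).
n, dx = 256, 0.5e-6                   # grid: 256 x 256 samples, 0.5 um pitch
wavelength = 650e-9                   # red channel
f, diameter = 400e-6, 100e-6          # one microlens, f-number ~ 4

x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2) <= (diameter / 2) ** 2               # circular microlens
lens_phase = np.exp(-1j * np.pi / (wavelength * f) * (X**2 + Y**2))

field = aperture * lens_phase                                 # plane wave x lens
sensor = angular_spectrum(field, wavelength, dx, f)           # to the focal plane
psf = np.abs(sensor) ** 2                                     # intensity on the CCD
```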
The following images show some initial results produced by the simulator. Even though the code runs on a 4 GHz quad-core machine, it is still time-consuming, so the simulation is restricted to a single channel for now (here, the red channel).
Since this is work in progress, this page will be updated accordingly…
[1] M. Levoy and P. Hanrahan. “Light Field Rendering”. Proc. ACM SIGGRAPH '96, pages 31-42.
[2] R. Ng. “Fourier Slice Photography”. Proc. ACM SIGGRAPH '05, pages 735-744.
[3] A. Lumsdaine and T. Georgiev. “The Focused Plenoptic Camera”. Proc. IEEE International Conference on Computational Photography (ICCP), 2009.
[4] C. Perwass and L. Wietzke. “Single Lens 3D-Camera with Extended Depth-of-Field”. Proc. SPIE 8291, Human Vision and Electronic Imaging XVII, 829108, February 2012.



