Shape from Shading: A Neuro-Geometrical Approach

Shape-from-Shading is a computer-vision technique that aims to infer shape from the brightness map (i.e. the intensity values) of an image, assuming simple mathematical models for the interaction between the light source, the surface shape, and the viewer's position.


Image Synthesis, on the other hand, which is in some sense the direct problem of Shape-from-Shading, uses quite complex procedures to compute the brightness map when rendering realistic images of three-dimensional scenes.
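As a minimal illustration of the direct problem, the simplest surface-light model used in both rendering and Shape-from-Shading is the Lambertian (purely diffuse) model, under which rendering reduces to a dot product per pixel. The sketch below shows that model only; it does not reproduce any particular renderer:

```python
import numpy as np

def render_lambertian(normals, light_dir, albedo=1.0):
    """Render a brightness map from per-pixel surface normals.

    normals   : (H, W, 3) array of unit surface normals.
    light_dir : (3,) vector pointing toward the light source.
    Brightness at each pixel is albedo * max(0, n . l), the
    Lambertian (diffuse) reflectance model.
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Dot product of every normal with the light direction, clamped at 0
    # (surfaces facing away from the light receive no direct light).
    shading = np.tensordot(normals, l, axes=([2], [0]))
    return albedo * np.clip(shading, 0.0, None)

# Example: a flat surface facing the viewer, lit head-on.
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0          # all normals point along +z
img = render_lambertian(normals, [0.0, 0.0, 1.0])
```

The inverse problem is to run this map backwards: given `img`, recover `normals` and `light_dir`, which is heavily under-constrained.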

This approach overcomes the source-shape ambiguity problem encountered in previous Shape-from-Shading techniques, and consequently permits the development of a new estimator for the light-source direction.


The contribution of this work to the resolution of the Shape-from-Shading problem is threefold. First, the proposed gradual method resolves the ambiguity between the brightness map and the source-shape information. Second, the adopted approach yields a technique that recovers the light-source direction at every valid pixel of the image. Third, the developed neural networks do not depend on any particular reflectance model.
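The geometric core of such a per-pixel light-direction estimate can be sketched with the law of reflection: the mirror direction m of light l about a unit normal n satisfies m = 2(n . l)n - l, and this relation is its own inverse. The function below illustrates that geometry only, assuming the normal at the pixel is available; it is not the thesis' actual neural estimator:

```python
import numpy as np

def light_from_mirror(normal, mirror_dir):
    """Recover the light direction at one pixel from its mirror
    (specular) direction, given the surface normal there.

    Law of reflection: m = 2 (n . l) n - l, hence l = 2 (n . m) n - m.
    Illustrative geometry only.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    m = np.asarray(mirror_dir, dtype=float)
    m = m / np.linalg.norm(m)
    return 2.0 * np.dot(n, m) * n - m

# Example: flat surface (n along +z), mirror direction at 45 degrees.
l = light_from_mirror([0.0, 0.0, 1.0], [1.0, 0.0, 1.0])
```

Applying this relation at every valid pixel produces a field of per-pixel light-direction estimates rather than a single global one.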


The battery of tests carried out on this novel algorithm shows good results for simple synthetic images. In particular, excellent results have been obtained in recovering the source-shape information from the mirror maps. There is, however, more work to be done to improve the quality of the reconstruction for the whole approach, particularly on real images.

For instance, Jensen, in his book entitled “Realistic Image Synthesis Using Photon Mapping”, discusses techniques that simulate global illumination in complex scenes, such as the “shimmering waves at the bottom of a swimming pool”, diffuse inter-reflections, and “participating media such as clouds or smoke”.


These differences in the modelling of the surface-light-viewer interaction between the direct and inverse problems probably explain why Shape-from-Shading techniques (the inverse problem here) struggle to deliver proper three-dimensional information about the photographed scene.


The way the biological system successfully overcomes these difficulties and recovers the depths and shapes of objects remains intriguing.


Let us put it this way: from a two-dimensional map (i.e. the retinal image), the brain successfully recovers at least the vector fields of the light source and the surface normals, besides other useful information such as surface characteristics, texture patterns, relative depth between objects, etc. This is a massive quantity of information to infer from a mere brightness map!


The above observations suggest some scheme that successfully encodes this wealth of data within the brightness values. The brain, for its part, perfectly descrambles and filters the received signal (the image) for exploitation (identification of objects, avoidance of obstacles, shape recovery, etc.).


The work carried out during my PhD proposes an original approach to inferring shape from the brightness map that is more in step with the way the biological system works. In short, the Shape-from-Shading problem is solved through a two-stage process. First, the algorithm recovers the mirror directions of the reflected light from the brightness map of the image using heuristic neural networks. Second, dedicated mathematical procedures exploit these intermediate results (i.e. the mirror directions) and work out the coveted source-shape information.
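The geometry underlying the second stage can be sketched as follows: once the light direction is known, the surface normal at each pixel is the bisector of the light direction and that pixel's mirror direction, since the normal halves the angle between incident and reflected rays. The vectorized sketch below shows this relation only; it does not reproduce the thesis' actual procedures:

```python
import numpy as np

def normals_from_mirror_map(mirror_map, light_dir):
    """Recover a normal map from a per-pixel mirror-direction map.

    mirror_map : (H, W, 3) unit mirror directions per pixel.
    light_dir  : (3,) vector toward the light source.
    The normal bisects light and mirror directions: n is proportional
    to l + m, normalized per pixel. Illustrative geometry only.
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    h = mirror_map + l                                 # un-normalized bisector
    return h / np.linalg.norm(h, axis=2, keepdims=True)

# Example: mirror directions straight back toward a light at +z
# imply a flat surface whose normals all point along +z.
mirrors = np.zeros((2, 2, 3))
mirrors[..., 2] = 1.0
normal_map = normals_from_mirror_map(mirrors, [0.0, 0.0, 1.0])
```

The resulting normal map can then be integrated to yield the depth map, the final shape information sought by the method.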

© 2015 by Adnane Benhadid
