Camillo J. Taylor (University of Pennsylvania, Computer and Information Science Dept.)
The goal of most image-based rendering systems can be stated as follows: given a set of pictures taken from various vantage points, synthesize the image that would be obtained from a novel viewpoint. This talk will present a new approach to view synthesis that hinges on the observation that human viewers tend to be quite sensitive to the motion of features in the image corresponding to intensity discontinuities, or edges. Our system focuses its efforts on recovering the 3D positions of these features so that their motions can be synthesized correctly. In the current implementation these feature points are recovered from image sequences by employing the epipolar plane image (EPI) analysis techniques proposed by Bolles, Baker, and Marimont. The output of this procedure resembles the output of an edge extraction system in which the edgels are augmented with accurate depth information. This method has the advantage of producing accurate depth estimates for most of the salient features in the scene, including those corresponding to occluding contours.
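The core idea behind EPI analysis is that, for a camera translating laterally at constant speed, a scene feature traces a straight line in the epipolar plane image whose slope is inversely proportional to its depth. The following is a minimal sketch of that slope-to-depth conversion, not the talk's implementation; the function name, parameters, and the assumption of a pinhole camera with constant per-frame baseline are all illustrative.

```python
# Hedged sketch: recover depth from the slope of an edge's track in an
# epipolar plane image (EPI). Assumes a pinhole camera translating
# laterally by a constant baseline per frame; an edge at depth Z then
# moves du/dt = f * T / Z pixels per frame in the EPI.
import numpy as np

def depth_from_epi_slope(u_coords, focal_px, baseline_per_frame):
    """Fit a line u(t) to an edge's horizontal track across frames and
    convert its slope into metric depth via Z = f * T / (du/dt)."""
    t = np.arange(len(u_coords), dtype=float)
    slope, _intercept = np.polyfit(t, np.asarray(u_coords, dtype=float), 1)
    return focal_px * baseline_per_frame / slope

# Example: an edge shifting 2 px per frame, with a 500 px focal length
# and a 0.01 m camera step per frame, lies at Z = 500 * 0.01 / 2 = 2.5 m.
track = [100.0 + 2.0 * k for k in range(10)]
print(depth_from_epi_slope(track, focal_px=500.0, baseline_per_frame=0.01))
```

Because the line is fit over many frames rather than a single stereo pair, the slope estimate (and hence the depth) tends to be robust, which is what makes the procedure behave like an edge extractor whose edgels carry reliable depth.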
The talk will describe a principled approach to reasoning about the 3D structure of the scene based on the quasi-sparse features returned by the reconstruction system. This analysis elucidates an important constraint on the structure of the depth maps that can be produced by a solid object. The same constraint can also be used to refine the results produced by standard stereo and structure-from-motion techniques. Importantly, this 3D analysis allows us to correctly reproduce occlusion and disocclusion effects in the synthetic views.
The talk will describe work done in collaboration with David Jelinek and Sang-Hack Jung.