For video footage from past events, you can visit the individual event pages or go to our YouTube Channel.
List of Past Events
Learning on Riemannian Manifolds for Interpretation of Visual Environments
Monday, February 11, 2008, 01:00pm - 02:00pm
Rutgers University, Computer Science & ECE Department
Classical machine learning techniques provide effective methods for analyzing data when
the parameters of the underlying process lie in a Euclidean space. However, various
parameter spaces commonly occurring in computer vision problems violate this
assumption. We derive novel learning methods for parameter spaces having Riemannian
manifold structure and apply them to three important computer vision problems: motion
estimation, object detection, and tracking.
We start with a modified version of the mean shift algorithm that can find the modes of
a distribution defined on a matrix Lie group. We demonstrate the superior
performance of the proposed algorithm on the multiple 3D motion estimation problem,
where the samples are rigid motion matrices estimated from noisy point correspondences. The
algorithm successfully recovers the number of motions in the scene and the corresponding
motion parameters in the presence of a large number of outliers.
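The core idea of mean shift on a matrix Lie group can be sketched as follows: at each iteration, map the samples into the tangent space (Lie algebra) at the current estimate via the matrix logarithm, take a kernel-weighted mean there, and map back with the matrix exponential. This is a minimal illustration on SO(3) with a Gaussian kernel; the function name, bandwidth, and stopping rule are illustrative assumptions, not the talk's exact formulation.

```python
import numpy as np
from scipy.linalg import expm, logm

def mean_shift_so3(samples, R_init, bandwidth=0.5, iters=50, tol=1e-6):
    """Illustrative intrinsic mean shift on the rotation group SO(3).

    At each step, samples are lifted to the tangent space at the current
    estimate R via log(R^T S), a Gaussian-weighted mean shift vector is
    computed there, and the estimate is updated with the exponential map.
    """
    R = R_init
    for _ in range(iters):
        # Tangent-space (Lie algebra) coordinates of each sample at R.
        tangents = [np.real(logm(R.T @ S)) for S in samples]
        # Squared geodesic distances and Gaussian kernel weights.
        dists2 = np.array([np.sum(t * t) for t in tangents])
        w = np.exp(-dists2 / (2.0 * bandwidth ** 2))
        # Weighted mean in the tangent space = mean shift vector.
        shift = sum(wi * t for wi, t in zip(w, tangents)) / w.sum()
        # Map back to the group.
        R = R @ np.real(expm(shift))
        if np.linalg.norm(shift) < tol:
            break
    return R
```

Clustered samples converge to a mode on the manifold itself, so the result is always a valid rotation matrix, unlike a naive Euclidean average of matrix entries.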
Next we present a new algorithm for detecting humans in still images using covariance
matrices as object descriptors. The space of d-dimensional nonsingular covariance
matrices can be represented as a connected Riemannian manifold. A novel classification
algorithm is derived by incorporating a priori information about the geometry of this
space. The algorithm was tested on the INRIA human database, where superior detection
rates are observed over previous approaches.
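The two ingredients of this approach are easy to sketch: a covariance matrix computed over per-pixel features serves as the region descriptor, and distances between descriptors are measured geodesically on the manifold of symmetric positive-definite (SPD) matrices rather than with the Euclidean norm. The snippet below is a minimal sketch assuming the standard affine-invariant metric; the feature choice and function names are illustrative, not the talk's exact pipeline.

```python
import numpy as np
from scipy.linalg import eigvalsh

def covariance_descriptor(features):
    """Covariance descriptor of a region: `features` has one row per
    pixel and one column per feature (e.g. intensity, gradients)."""
    return np.cov(features, rowvar=False)

def spd_geodesic_distance(A, B):
    """Affine-invariant geodesic distance between SPD matrices A and B:
    sqrt(sum_i log^2 lambda_i), where lambda_i are the generalized
    eigenvalues of the pencil (A, B)."""
    lam = eigvalsh(A, B)  # solves A v = lambda B v
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Respecting the manifold geometry matters because SPD matrices do not form a vector space: the Euclidean midpoint of two covariances can badly distort their determinant, while the geodesic distance is invariant to affine changes of the underlying feature coordinates.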
Finally we describe a novel learning-based tracking model combined with object
detection. The motion model is learned on the Lie algebra of the transformation group,
and the formulation minimizes a first-order approximation to the sum of squared geodesic
errors. The motion estimator is then integrated into an existing pose-dependent object
detector, yielding a pose-invariant object detection algorithm. The proposed model
accurately detects objects in various poses, and the size of its search space is
only a fraction of that of existing object detection methods.
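Learning a motion model on the Lie algebra can be sketched as a regression problem: the increment between consecutive transformations, log(M_prev^-1 M_cur), lives in a vector space, so to first order the sum of squared geodesic errors reduces to ordinary least squares on those log-increments. This is a minimal sketch under that assumption; the linear regressor, matrix sizes, and function names are illustrative choices, not the talk's exact model.

```python
import numpy as np
from scipy.linalg import expm, logm

def fit_motion_model(X, M_pairs):
    """Fit a linear map from observation features to Lie-algebra motion
    increments. To first order, minimizing the sum of squared geodesic
    errors is least squares on the targets log(M_prev^-1 M_cur)."""
    Y = np.array([np.real(logm(np.linalg.inv(Mp) @ Mc)).ravel()
                  for Mp, Mc in M_pairs])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_motion(M_prev, x, W):
    """Predict the next transformation: map the regressed Lie-algebra
    vector back to the group with the exponential map."""
    v = x @ W
    n = M_prev.shape[0]
    return M_prev @ expm(v.reshape(n, n))
```

Because the update is composed multiplicatively through the exponential map, every prediction stays in the transformation group, which is what lets the estimator steer a pose-dependent detector over a much smaller search space.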
Related Work: http://www.caip.rutgers.edu/riul/research.html