Title: Manifold Based Unsupervised Domain Adaptation for Computer Vision Applications
Speaker: Suranjana Samanta (IITM)
Details: Tue, 19 May, 2015 3:00 PM @ BSB 361
Abstract: A basic assumption in many machine learning tasks is that the distributions of the training and test samples are identical. For many real-world problems, the training set does not adequately represent the class-separability information present in the test samples. This is particularly true when the training and test samples are obtained from different environments, producing dissimilar underlying distributions. The training samples are obtained from the 'source domain', while the test samples are obtained from the 'target domain'. Domain adaptation (DA) is a specific type of transfer learning, in which the abundant training samples available in the source domain aid a statistical learning task on test samples in the target domain. To aid the process of knowledge transfer, a few training samples from the target domain are also used. In this talk, we focus on the problem of 'unsupervised domain adaptation', where the class labels of the training samples from the target domain are not available. We focus on finding optimal domain-invariant sub-spaces, in which the discrepancy between the distributions of the two domains is minimized.
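As a rough illustration of the MMD criterion mentioned above, the following sketch (function and variable names are our own, not from the talk) computes the biased empirical squared MMD between two sample sets with an RBF kernel; a larger value indicates a larger discrepancy between the two distributions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of squared Maximum Mean Discrepancy:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 3))       # stand-in "source domain" samples
tgt_near = rng.normal(0.1, 1.0, (200, 3))  # target with a small mean shift
tgt_far = rng.normal(3.0, 1.0, (200, 3))   # target with a large mean shift
print(mmd2(src, tgt_near), mmd2(src, tgt_far))
```

A domain-invariant sub-space, in this view, is one in which this quantity (computed on the projected samples) is driven toward zero.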
A domain-invariant sub-space can be estimated using the concept of Maximum Mean Discrepancy (MMD). The first of our proposed methods of DA uses this concept along with manifold alignment to find an optimal sub-space. Here, the transformation of the instances from both domains produces similar distributions as well as similar underlying manifolds in the two domains.

In another proposed method, instead of finding a one-shot transformation, we obtain a sequentially ordered chain of distributions linking the two domains. We sample domain-invariant sub-spaces from a path on the Grassmann manifold joining the sub-spaces spanning the source and target domains. Projecting the training samples onto each of these sub-spaces captures the sequential change in the properties of the two domains, and from these projections we derive domain-invariant features to be used for classification.

The final proposed method transforms the source domain using the Eigenvalues and Eigenvectors of the two domains; we prove that the transformed source-domain data has Eigen-properties similar to those of the target domain. Results on real-world benchmark datasets show that our proposed methods of DA improve classification accuracy on the tasks of object categorization and event categorization in videos.
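To make the path on the Grassmann manifold concrete, here is a minimal numerical sketch (our own illustration under standard assumptions, not the speaker's code): take the top-k PCA sub-spaces of the source and target data and sample intermediate sub-spaces along the geodesic between them via the principal angles:

```python
import numpy as np

def pca_basis(X, k):
    # Orthonormal basis for the top-k principal sub-space of X
    Xc = X - X.mean(0)
    U, _, _ = np.linalg.svd(Xc.T @ Xc)
    return U[:, :k]

def geodesic_subspace(Us, Ut, t):
    # Basis of the point at fraction t (0 = source, 1 = target) on the
    # Grassmann geodesic between span(Us) and span(Ut).
    V, s, Wt = np.linalg.svd(Us.T @ Ut)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
    # Direction orthogonal to Us along the geodesic (columnwise normalised)
    Q = Ut @ Wt.T - Us @ (V * np.cos(theta))
    Q = Q / np.maximum(np.sin(theta), 1e-12)
    return Us @ (V * np.cos(t * theta)) + Q * np.sin(t * theta)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(100, 5))        # stand-in source-domain samples
Xt = rng.normal(size=(100, 5)) + 1.0  # stand-in target-domain samples
Us, Ut = pca_basis(Xs, 2), pca_basis(Xt, 2)
# Projections onto sub-spaces sampled along the path give the chain of
# intermediate representations between the two domains.
features = [Xs @ geodesic_subspace(Us, Ut, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Concatenating the projections of a sample onto each sampled sub-space yields one way to form features that reflect the gradual change from source to target properties.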