We consider the problem of analyzing data for which no straightforward and meaningful Euclidean representation is available. Specifically, we would like to apply dimensionality reduction to such data, both for visualization and as preprocessing for clustering. In these cases, an appropriate assumption is that the data lies on a statistical manifold, i.e. a manifold of probability density functions (PDFs). In this paper we propose using the properties of information geometry to define similarities between data sets. Traditionally this is done with the Fisher information distance, which requires knowledge of the parametrization of the manifold; such knowledge is usually unavailable. We show that this metric can be approximated using entirely non-parametric methods. Furthermore, using multi-dimensional scaling (MDS) methods, we are able to embed the corresponding PDFs into a low-dimensional Euclidean space. We illustrate these methods on simulated data generated by known statistical manifolds. Rather than an analytic or quantitative study, we present this framework as a proof of concept, demonstrating methods that are immediately applicable to problems of practical interest.
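The pipeline the abstract describes can be sketched non-parametrically: for nearby PDFs the symmetrized Kullback-Leibler divergence approximates the squared Fisher information distance, so summing local square-root-KL edge lengths along shortest paths in a neighborhood graph (as in Isomap) approximates geodesics on the manifold, which classical MDS then embeds in Euclidean space. The sketch below, a minimal illustration rather than the authors' implementation, uses closed-form KL divergences on a grid of univariate Gaussians; the grid size and neighborhood parameter `k` are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def kl_gauss(m1, s1, m2, s2):
    # Closed-form KL divergence D(N(m1,s1^2) || N(m2,s2^2))
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def local_fisher(m1, s1, m2, s2):
    # For nearby PDFs, D(p||q) + D(q||p) ~ d_F(p,q)^2, so the
    # symmetrized square root approximates the Fisher information distance.
    return np.sqrt(kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1))

# Sample points on the univariate Gaussian manifold (mean, std) -- toy grid
params = [(m, s) for m in np.linspace(0.0, 2.0, 10)
                 for s in np.linspace(0.5, 1.5, 10)]
n = len(params)

# Dense matrix of local approximations; keep only k-nearest-neighbor edges
D = np.array([[local_fisher(*p, *q) for q in params] for p in params])
k = 8  # illustrative neighborhood size
G = np.full((n, n), np.inf)  # inf marks a non-edge for csgraph
for i in range(n):
    nn = np.argsort(D[i])[: k + 1]
    G[i, nn] = D[i, nn]
G = np.minimum(G, G.T)  # symmetrize the neighborhood graph

# Graph shortest paths approximate geodesic Fisher information distances
geo = shortest_path(G, method="D")

# Classical MDS: double-center squared distances, take top eigenvectors
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (geo**2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
embedding = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))  # n x 2 coordinates
```

As the sampling of the manifold becomes denser, the graph geodesics converge toward the true Fisher information distance, which is the behavior Figure 1 illustrates for the univariate Gaussian manifold.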
Kevin M. Carter, Raviv Raich, and Alfred O. Hero III, “Learning on statistical manifolds for clustering and visualization,” in Proc. of 45th Annual Allerton Conference on Communication, Control, and Computing, Sept. 2007.
Figure 1. Convergence of Kullback-Leibler geodesic approximation to true Fisher information distance on manifold of univariate Gaussian distributions
Figure 2. 2D embedding of univariate Gaussian manifold with Classical MDS and Isomap
Figure 3. 3D embedding of Swiss roll as a statistical manifold, with each point on the manifold corresponding to a normal PDF
Figure 4. Clustering with statistical manifolds. PDFs generated from the ‘Swiss roll’ and ‘S-Curve’ surfaces defined in Euclidean space