Ensemble estimators for multivariate entropy estimation
Kumar Sricharan, Dennis Wei, Alfred O. Hero III
Abstract
The problem of estimating entropy functionals of probability densities has received much attention in the information theory, machine learning, and statistics communities. Kernel density plug-in estimators are simple, easy to implement, and widely used for entropy estimation. However, kernel plug-in estimators suffer from the curse of dimensionality: the MSE rate of convergence is glacially slow, of order O(T^{-γ/d}), where T is the number of samples, d is the dimension, and γ > 0 is a rate parameter. In this paper, it is shown that for sufficiently smooth densities, an ensemble of kernel plug-in estimators can be combined via a weighted convex combination such that the resulting weighted estimator achieves the superior parametric MSE rate of convergence of order O(T^{-1}). Furthermore, it is shown that these optimal weights can be determined by solving a convex optimization problem which requires neither training data nor knowledge of the underlying density, and can therefore be solved offline. This result is remarkable in that, while each individual kernel plug-in estimator in the ensemble suffers from the curse of dimensionality, appropriate ensemble averaging achieves the parametric convergence rate.
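To make the weighting idea concrete, below is a minimal sketch in Python/NumPy (the released code itself is MATLAB). It combines uniform-kernel plug-in entropy estimates computed at several bandwidth parameters l, with weights chosen offline by minimizing ||w||_2 subject to sum_l w(l) = 1 and sum_l w(l) l^{i/d} = 0 for i = 1, ..., d-1, so that the leading bias terms cancel; with equality constraints only, this reduces to a minimum-norm linear-algebra solve. The function names, the data-splitting scheme, and the bandwidth scaling h_l proportional to l are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def min_norm_weights(bandwidth_params, d):
    """Offline weight design (sketch): minimize ||w||_2 subject to
    sum_l w(l) = 1 and sum_l w(l) * l**(i/d) = 0 for i = 1, ..., d-1,
    cancelling the leading bias terms of the individual estimators.
    With equality constraints only, the minimizer is the minimum-norm
    point w = A.T @ inv(A @ A.T) @ b; needs >= d distinct values of l.
    """
    l = np.asarray(list(bandwidth_params), dtype=float)
    A = np.vstack([l ** (i / d) for i in range(d)])  # row 0 is all ones
    b = np.zeros(d)
    b[0] = 1.0  # only the sum-to-one constraint has a nonzero target
    return A.T @ np.linalg.solve(A @ A.T, b)

def uniform_kernel_entropy(X_eval, X_train, h):
    """Plug-in Shannon entropy estimate -mean(log f_hat), where f_hat is
    a uniform (box) kernel density estimate with side length h."""
    N, d = X_train.shape
    inside = np.all(np.abs(X_eval[:, None, :] - X_train[None, :, :]) <= h / 2.0,
                    axis=2)
    # Crude guard against empty kernels; the paper treats low-density
    # regions more carefully than this.
    counts = np.maximum(inside.sum(axis=1), 1)
    f_hat = counts / (N * h ** d)
    return -np.mean(np.log(f_hat))

def ensemble_entropy(X, bandwidth_params):
    """Weighted ensemble estimator: a weighted combination of plug-in
    estimates computed at bandwidths h_l proportional to l."""
    T, d = X.shape
    X_eval, X_train = X[: T // 2], X[T // 2:]  # data splitting
    M = X_train.shape[0]
    w = min_norm_weights(bandwidth_params, d)
    # Illustrative bandwidth scaling, not the paper's exact choice.
    estimates = np.array([
        uniform_kernel_entropy(X_eval, X_train, l * M ** (-1.0 / (2 * d)))
        for l in bandwidth_params
    ])
    return float(w @ estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 3))       # N(0, I_3) samples
    print(ensemble_entropy(X, range(2, 9)))  # ensemble estimate
    print(1.5 * np.log(2 * np.pi * np.e))    # true entropy of N(0, I_3)
```

Because the weight constraints are linear equalities, the closed-form minimum-norm solve above suffices for this sketch; a general-purpose convex solver would be needed only for variants of the problem with inequality constraints.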
Reference
This work was published in the IEEE Transactions on Information Theory (2013) and in NIPS 2012. arXiv preprint: http://arxiv.org/abs/1203.5829
MATLAB Code
Please find the MATLAB code for weighted-ensemble entropy estimation attached: Weighted_ensembles_uniform-kernel_entropy-estimation