
Sparse image reconstruction for molecular imaging

Michael Ting, Raviv Raich, Alfred O. Hero III

Abstract

The application that motivates this paper is molecular imaging at the atomic level. When discretized at sub-atomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimation have focused on the case where the system matrix H has low coherence; in our application, however, H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. The paper therefore does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules emerges in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
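
For concreteness, the classical hard and soft thresholding rules that the hybrid rule generalizes, together with the iterative soft-thresholding update that yields the lasso estimate for the measurement model <latex>y = H\theta + w</latex>, can be written as follows. This is a standard formulation from the iterative-thresholding literature, not a reproduction of the paper's derivation; the hybrid rule itself is given in the paper. Here <latex>w</latex> denotes the AWGN, <latex>\lambda</latex> the regularization parameter (selected via SURE in the paper), and <latex>c</latex> an upper bound on <latex>\|H\|_2^2</latex>; this notation is ours.

<latex>\eta_{\mathrm{hard}}(x;t) = x\,\mathbf{1}\{|x|>t\}, \qquad \eta_{\mathrm{soft}}(x;t) = \mathrm{sgn}(x)\max(|x|-t,\,0)</latex>

<latex>\hat{\theta}_{\mathrm{lasso}} = \arg\min_{\theta}\ \tfrac{1}{2}\|y-H\theta\|_2^2 + \lambda\|\theta\|_1, \qquad \theta^{(k+1)} = \eta_{\mathrm{soft}}\!\left(\theta^{(k)} + \tfrac{1}{c}H^{T}\big(y-H\theta^{(k)}\big);\ \tfrac{\lambda}{c}\right), \quad c \ge \|H\|_2^2</latex>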

Paper

M. Ting, R. Raich and A.O. Hero, “Sparse image reconstruction for molecular imaging,” IEEE Trans. on Image Processing, vol. 18, no. 6, pp. 1215-1227, June 2009. (.pdf)

Matlab Code

Download the source files. (.tar.bz2)

The data files must be downloaded separately.

Associated sparse image toolbox, written by M. Ting in 2014. The source code was used to generate the results in the book Molecular Imaging in Nano MRI (M. Ting, Wiley, 2014). The latest version is 0.0.1.

https://github.com/mt94/Sparse-Image-Reconstruction

Tested Configurations

The Matlab code has been tested on Matlab 7.1 (R14) on Linux. For instructions on how to generate the results of the paper, refer to the following file. (.pdf)

Figures

  • Figure 1. Hybrid thresholding rule.
  • Figure 2. Illustration of the two types of θ used in the simulations, together with the Gaussian blur psf (a simulation sketch of this setup appears after this list).
  • Figure 3. Reconstructed images for the binary-valued θ under an SNR of 1.76 dB for SBL, StOMP (CFAR), MAP2 (<latex>g^\ast = 2^{-1/2}</latex>), and lasso-SURE.
  • Figure 4. Performance vs. SNR for Landweber iterations, MAP1, MAP2, LASSO-SURE, and H-SURE when applied to the binary-valued θ.
  • Figure 5. Image θ and noiseless projection Hθ used in the MRFM reconstruction example.
  • Figure 6. Landweber reconstruction of the MRFM example under an SNR of 4.77 dB.
  • Figure 7. LASSO-SURE reconstruction of the MRFM example under an SNR of 4.77 dB.
  • Figure 8. Histogram of <latex>\hat{\theta}_i</latex> for the Landweber and LASSO-SURE estimator.
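
As a companion to the figures above, the following Matlab sketch illustrates the kind of simulation setup they describe: a sparse binary-valued image θ, a Gaussian blur psf, AWGN at a prescribed SNR, and a lasso reconstruction computed by iterative soft thresholding. It is an illustrative sketch only, not the code used to generate the figures (that code is in the download above); the image size, psf width, sparsity level, SNR convention, and regularization parameter lambda are all assumptions made here for the example.

  % Simulation setup (all parameters are illustrative assumptions)
  n = 32;                              % image side length
  theta = double(rand(n) < 0.02);      % sparse binary-valued image, about 2% nonzero
  [u, v] = meshgrid(-3:3, -3:3);
  h = exp(-(u.^2 + v.^2) / (2 * 1.5^2));   % Gaussian blur psf, sigma = 1.5 pixels
  h = h / sum(h(:));                   % normalize the psf
  Htheta = conv2(theta, h, 'same');    % noiseless projection H*theta
  snr_dB = 1.76;                       % target SNR in dB (as in Figure 3)
  sigma2 = mean(Htheta(:).^2) / 10^(snr_dB / 10);   % noise variance under an assumed per-pixel SNR convention
  y = Htheta + sqrt(sigma2) * randn(n);             % noisy observation

  % Lasso reconstruction by iterative soft thresholding; lambda is hand-picked
  % here, whereas the paper selects the regularization via SURE.
  lambda = 0.05;
  c = 1.1;                             % upper bound on ||H||^2 for a normalized, nonnegative psf
  thetaHat = zeros(n);
  for k = 1:500
      r = y - conv2(thetaHat, h, 'same');                 % residual y - H*thetaHat
      g = thetaHat + conv2(r, rot90(h, 2), 'same') / c;   % gradient step; adjoint of blur = correlation
      thetaHat = sign(g) .* max(abs(g) - lambda / c, 0);  % soft threshold
  end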

References

  1. Y. Modis, S. Ogata, D. Clements, and S. C. Harrison, “Structure of the dengue virus envelope protein after membrane fusion,” Nature, vol. 427, pp. 313-319, 2004.
  2. D. J. Müller, D. Fotiadis, S. Scheuring, S. A. Müller, and A. Engel, “Electrostatically balanced subnanometer imaging of biological specimens by atomic force microscopy,” Biophysical Journal, vol. 76, pp. 1101-1111, 1999.
  3. Y. G. Kuznetsov, A. J. Malkin, R. W. Lucas, M. Plomp, and A. McPherson, “Imaging of viruses by atomic force microscopy,” Journal of General Virology, vol. 82, pp. 2025-2034, 2001.
  4. D. Rugar, R. Budakian, H. J. Mamin, and B. W. Chui, “Single spin detection by magnetic resonance force microscopy,” Nature, vol. 430, no. 6997, pp. 329-332, 2004.
  5. C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, and D. Rugar, “Magnetic resonance imaging of a biological sample with nanometer resolution,” Science, submitted.
  6. P. Markiewicz and M. C. Goh, “Atomic force microscope tip deconvolution using calibration arrays,” Rev. Sci. Instrum., vol. 66, pp. 3186-3190, 1995.
  7. J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Trans. Inform. Theory, vol. 50, no. 10, pp. 2231-2241, 2004.
  8. J. J. Fuchs, “Recovery of exact sparse representations in the presence of bounded noise,” IEEE Trans. Inform. Theory, vol. 51, no. 10, pp. 3601-3608, 2005.
  9. D. L. Donoho, M. Elad, and V. N. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inform. Theory, vol. 52, no. 1, pp. 6-18, 2006.
  10. J. A. Tropp, “Just Relax: convex programming methods for identifying sparse signals in noise,” IEEE Trans. Inform. Theory, vol. 52, no. 3, pp. 1030-1051, 2006.
  11. D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, “Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit,” Stanford University, Tech. Rep., 2006.
  12. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267-288, 1996.
  13. S. Alliney and S. A. Ruzinsky, “An algorithm for the minimization of mixed l1 and l2 norms with application to Bayesian estimation,” IEEE Trans. Signal Processing, vol. 42, no. 3, pp. 618-627, 1994.
  14. D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. Signal Processing, vol. 52, no. 8, pp. 2153-2164, 2004.
  15. I. M. Johnstone and B. W. Silverman, “Needles and straw in haystacks: empirical Bayes estimates of possibly sparse sequences,” The Annals of Statistics, vol. 32, no. 4, pp. 1594-1649, 2004.
  16. M. A. T. Figueiredo and R. D. Nowak, “An EM Algorithm for Wavelet-Based Image Restoration,” IEEE Trans. Image Processing, vol. 12, no. 8, pp. 906-916, 2003.
  17. I. Daubechies, M. Defrise, and C. de Mol, “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.
  18. H.-Y. Gao and A. G. Bruce, “Waveshrink with firm shrinkage,” Statistica Sinica, vol. 7, pp. 855-874, 1997.
  19. H.-Y. Gao, “Wavelet shrinkage denoising using the non-negative garrote,” Journal of Computational and Graphical Statistics, vol. 7, no. 4, pp. 469-488, 1998.
  20. D. L. Donoho and I. M. Johnstone, “Adapting to unknown smoothness via wavelet shrinkage,” Journal of the American Statistical Association, vol. 90, no. 423, pp. 1200-1224, 1995.
  21. L. Ng and V. Solo, “Optical flow estimation using adaptive wavelet zeroing,” in Proceedings of the IEEE Intl. Conf. on Image Processing, vol. 3, 1999, pp. 722-726.
  22. M. Yuan and Y. Lin, “Efficient empirical Bayes variable selection and estimation in linear models,” Journal of the American Statistical Association, vol. 100, no. 472, pp. 1215-1225, 2005.
  23. A. M. Thompson, J. C. Brown, J. W. Kay, and D. M. Titterington, “A study of methods of choosing the smoothing parameter in image restoration by regularization,” IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 4, pp. 326-339, 1991.
  24. M. Ting, R. Raich, and A. O. Hero, “Sparse image reconstruction using sparse priors,” in Proc. of the IEEE Intl. Conf. on Image Processing, 2006, pp. 1261-1264.
  25. J. A. Fessler, “Image Reconstruction: Algorithms and Analysis,” draft of book.
  26. A. Antoniadis and J. Fan, “Regularization of Wavelet Approximations,” Journal of the American Statistical Association, vol. 96, no. 455, pp. 939-967, 2001.
  27. C. M. Stein, “Estimation of the mean of a multivariate normal distribution,” The Annals of Statistics, vol. 9, no. 6, pp. 1135-1151, 1981.
  28. B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407-499, 2004.
  29. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103-120, 2004.
  30. E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math, vol. 59, pp. 1207-1223, 2005.
  31. K. K. Herrity, A. C. Gilbert, and J. A. Tropp, “Sparse approximation via iterative thresholding,” in Proceedings of the IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 2006.
  32. M. Y. J. Ting, “Signal processing for magnetic resonance force microscopy,” Ph.D. dissertation, The University of Michigan, 2006.
  33. D. J. C. Mackay, “Comparison of Approximate Methods for Handling Hyperparameters,” Neural Computation, vol. 11, pp. 1035-1068, 1999.
  34. S. H. Chou, L. Zhu, and B. R. Reid, “The Unusual Structure of the Human Centromere (GGA)2 Motif: Unpaired Guanosine Residues Stacked Between Sheared G·A Pairs,” J. Molecular Biology, vol. 244, no. 3, pp. 259-268, 1994.
  35. B. Efron, “Selection criteria for scatterplot smoothers,” The Annals of Statistics, vol. 29, no. 2, pp. 470-504, 2001.
  36. K. Lange, D. R. Hunter, and I. Yang, “Optimization transfer using surrogate objective functions,” Journal of Computational and Graphical Statistics, vol. 9, no. 1, pp. 1-20, 2000.
  37. V. Solo, “A sure-fired way to choose smoothing parameters in ill-conditioned inverse problems,” in Proceedings of the IEEE Intl. Conf. on Image Processing, vol. 3, 1996, pp. 89-92.

Contact