Computational imaging refers to producing an image from the very large set of data that a sensor generates. A cellphone camera does this, as do arrays of radio telescopes that create a composite image of a black hole, and MRI machines that compose diagnostic images. All of this requires a good deal of computation, and this book describes algorithms that accomplish it.

Computational imaging as a field in its own right is barely twenty years old. The author has been with it from the beginning, and this book grew out of a graduate class he has taught for several years.

Real imaging systems almost never directly generate usable images. A cellphone camera typically creates data in the form of a two-dimensional array and then, after extensive processing, produces the kind of image that you expect to see. This usually involves at least color processing, deblurring, noise removal, and some image registration before a recognizable image can appear. The mathematical foundations of what happens between image acquisition and presentation are the subject here.

The author treats the process of getting an image from data as an inverse problem: how do you reconstruct an unknown image \( X \) from a physical sensing system that provides measurements \( Y \) depending on \( X \)? The measurements \( Y \) are highly redundant and noisy, and the data representing the significant parts of an actual image constitute a very thin set embedded in a much larger one.

This book takes a model-based approach. Both the physical system and the image \( X \) are treated as random quantities. A forward model of the system is described by \( p(y|x) \), the conditional distribution of the data \( Y \) given the unknown image \( X \); the image itself is described by an assumed prior distribution \( p(x) \). The idea is that \( p(y|x) \) captures how the observations are related to the unknown image, incorporating both the deterministic characteristics of the imaging system and probabilistic elements such as noise. This is a natural setup for Bayesian analysis. The goal is to compute a final estimate that balances a fit to the forward model \( p(y|x) \) against the expectations of the prior model \( p(x) \).
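To make the Bayesian setup concrete, here is a minimal sketch (not taken from the book) of the simplest case: a linear forward model with Gaussian noise and a Gaussian prior, for which the maximum a posteriori (MAP) estimate has a closed form. The operator `A`, the noise level `sigma`, and the prior precision `lmbda` are all assumed for illustration.

```python
import numpy as np

# Hypothetical toy instance of the Bayesian setup:
#   forward model  y = A x + w,  w ~ N(0, sigma^2 I)   (so p(y|x) is Gaussian)
#   prior          x ~ N(0, (1/lmbda) I)               (a simple Gaussian p(x))
# The MAP estimate balances data fit against the prior:
#   x_hat = argmin_x ||y - A x||^2 + lmbda * sigma^2 * ||x||^2
# with closed form  (A^T A + lmbda sigma^2 I)^(-1) A^T y.

rng = np.random.default_rng(0)

n = 8                           # size of the (tiny) unknown image, flattened
A = rng.normal(size=(n, n))     # stand-in for the sensing operator
x_true = rng.normal(size=n)     # the unknown image
sigma, lmbda = 0.05, 1.0        # noise level and prior precision (assumed)

y = A @ x_true + sigma * rng.normal(size=n)   # noisy measurements

# Closed-form MAP estimate for the Gaussian/Gaussian model.
reg = lmbda * sigma**2
x_map = np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ y)

err = np.linalg.norm(x_map - x_true)          # reconstruction error
```

Real images are poorly modeled by Gaussian priors, as the author emphasizes, but this closed-form case shows how the two models \( p(y|x) \) and \( p(x) \) combine into a single reconstruction objective.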

Much of the book considers possible algorithms for model-based reconstruction of images, and many alternatives are weighed. A major constraint is that real images cannot be modeled well using Gaussian distributions or causal models. Markov random field models are among the best alternatives, and the author treats these most extensively, but he also explores other choices: one-dimensional versus two-dimensional analysis, Gaussian versus non-Gaussian distributions, and continuous versus discrete time models. Basic methods of optimization for model-based image processing are also described.
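As a flavor of how a Markov random field prior turns reconstruction into optimization, here is a small sketch (my own, not the author's code) of MAP denoising with a Gaussian MRF prior over 4-connected neighbors, minimized by coordinate descent, the continuous analogue of iterated conditional modes. The weight `lmbda` and the test image are assumptions for illustration.

```python
import numpy as np

# Hypothetical MAP denoising objective with a pairwise MRF prior:
#   cost(x) = sum_i (x_i - y_i)^2 + lmbda * sum_{i~j} (x_i - x_j)^2
# where i~j ranges over 4-connected neighbor pairs in the image.

def mrf_denoise(y, lmbda=1.0, n_iters=50):
    x = y.copy()
    h, w = x.shape
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                # Gather 4-connected neighbors (fewer at the image border).
                nbrs = []
                if i > 0:     nbrs.append(x[i - 1, j])
                if i < h - 1: nbrs.append(x[i + 1, j])
                if j > 0:     nbrs.append(x[i, j - 1])
                if j < w - 1: nbrs.append(x[i, j + 1])
                # Exact minimizer of the cost with all other pixels held fixed.
                x[i, j] = (y[i, j] + lmbda * sum(nbrs)) / (1 + lmbda * len(nbrs))
    return x

rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0                         # a simple synthetic test image
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = mrf_denoise(noisy)
```

The quadratic pair potential used here is the simplest choice; the non-Gaussian potentials the author favors preserve edges better but give up this closed-form pixel update, which is where the book's optimization machinery comes in.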

In many ways, this is a classic text, though it seems odd to say that about a field that is so (relatively) young. It is a very complete treatment of a complex subject with good exercises and an extensive bibliography. It tends heavily toward foundational tools and their theoretical aspects, and is best suited to students with a strong background in probability and statistics.

This is also not the last word in computational imaging. More recent mathematical work with compressed sensing, for example, has led to impressive advances in MRI, with significantly reduced scan times and much less time in the scanner for patients. Much of the author's approach remains relevant, of course, but the data collection burden may be considerably smaller.

Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications. He did his PhD work in dynamical systems and celestial mechanics.