Inverse problems such as the ones discussed in this book use data recovered from indirect measurements to reconstruct the parameters of the physical system from which they arose. Such problems are called inverse because they turn around the usual question of predicting measurements using a known system model with its associated parameters. The goal in solving inverse problems is thus to achieve a kind of reconstruction of causes from effects.

Inverse problems occur commonly in applications such as CAT scans, where an image is computed from a collection of x-ray scans, or when the structure of the earth’s subsurface is determined from surface measurements of the gravity field. The full range of inverse problems extends across the domains of science and engineering, from astronomy to nondestructive testing and even to data analysis and machine learning.

Inverse problems are often ill-posed or ill-conditioned, so that solutions don’t exist, are not unique if they do exist, or don’t depend continuously on the data. Many approaches to inverse problems use a regularization technique to render the problem well posed. Bayesian statistical techniques are also used, but they come with a significant computational burden.

In this book the author limits himself to linear inverse problems where unique solutions exist but do not depend continuously on data. In this setting he considers the general problem \(Kf = g\) where \(K\) is a compact linear operator between Hilbert spaces and where \(g\) represents data and \(f\) represents unknown system parameters. The operator \(K\) is assumed injective and its range dense in the target Hilbert space.

The author begins with a discussion of ill-posed problems. He chooses first to consider an introductory example of numerical differentiation of a \(C^1\) function corrupted by noise because, he argues, the observations made there are relevant throughout the book. After that he formulates the inverse problem in terms of a compact operator from one Hilbert space to another and offers extended examples in computerized tomography, electrocardiography and electric impedance tomography.
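The ill-posedness of that introductory example is easy to see in a few lines of code. The following sketch (my own illustration, not taken from the book) differentiates noisy samples of \(\sin x\) by forward differences: the truncation error shrinks with the step size \(h\), but the noise contribution grows like (noise amplitude)\(/h\), so the reconstruction does not improve continuously as the discretization is refined.

```python
import numpy as np

rng = np.random.default_rng(0)

def fd_error(h, noise=1e-3, n=1000):
    """Worst-case error of a forward-difference derivative of sin(x)
    when every sample carries additive noise of the given amplitude."""
    x = np.linspace(0.0, 1.0, n)
    g = np.sin(x) + noise * rng.standard_normal(n)            # noisy samples at x
    g_shift = np.sin(x + h) + noise * rng.standard_normal(n)  # noisy samples at x + h
    approx = (g_shift - g) / h                                # forward difference
    return np.max(np.abs(approx - np.cos(x)))                 # compare to true derivative

# Truncation error decreases with h, but the noise term blows up like noise/h:
for h in (1e-1, 1e-3, 1e-6):
    print(f"h = {h:8.0e}   max error = {fd_error(h):.3e}")
```

There is an optimal intermediate \(h\) balancing the two error sources; pushing \(h\) toward zero makes the answer worse, which is the hallmark of a solution that fails to depend continuously on the data.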

The main part of the book describes two regularization techniques: one is due to Tikhonov and the other relies on Landweber iteration. Tikhonov regularization adds a penalty term, weighted by a regularization parameter, to the data-misfit functional; the parameter’s value is chosen to balance fidelity to the measurements against the amplifying effect of noise. The Landweber method of regularization can be thought of as iteratively minimizing the norm of the difference between data and model prediction, with a relaxation parameter controlling the step size.
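Both methods can be sketched in a few lines on a discretized problem. In this illustration (mine, not the author’s), an ill-conditioned Hilbert matrix stands in for the compact operator \(K\): Tikhonov regularization solves the penalized normal equations \((K^TK + \alpha I)f = K^Tg\), while Landweber iteration runs \(f_{k+1} = f_k + \omega K^T(g - Kf_k)\) with relaxation parameter \(0 < \omega < 2/\|K\|^2\).

```python
import numpy as np

# A stand-in discretization: the Hilbert matrix is notoriously ill-conditioned,
# mimicking the rapidly decaying singular values of a compact operator.
n = 50
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
f_true = np.ones(n)
g = K @ f_true + 1e-6 * np.random.default_rng(1).standard_normal(n)  # noisy data

# Tikhonov: minimize ||K f - g||^2 + alpha ||f||^2  (alpha chosen ad hoc here)
alpha = 1e-8
f_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

# Landweber: f_{k+1} = f_k + omega K^T (g - K f_k), omega < 2 / ||K||^2
omega = 1.0 / np.linalg.norm(K, 2) ** 2
f_lw = np.zeros(n)
for _ in range(500):
    f_lw = f_lw + omega * K.T @ (g - K @ f_lw)
```

Landweber is simply gradient descent on the data misfit, and stopping the iteration early plays the same stabilizing role that the penalty parameter \(\alpha\) plays for Tikhonov; choosing \(\alpha\) or the stopping index well is where the real work lies.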

The author says that he intends the book to be accessible to mathematics and engineering students with background in undergraduate mathematics “enriched by some basic knowledge of elementary Hilbert space theory”. The book’s real requirements go well beyond that. In the U.S. the actual prerequisites would include at least a first graduate course in real analysis and a fair amount of functional analysis as well.

I would not recommend this book to someone looking for a sense of what inverse problems are about. Despite its offer of a “taste” it delves much too quickly into the details and never provides adequate background or rationale for readers new to the subject. An alternative for anyone curious about inverse problems is *Parameter Estimation and Inverse Problems* by Aster et al.

Bill Satzer (bsatzer@gmail.com) was a senior intellectual property scientist at 3M Company. His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.