
Introduction to Uncertainty Quantification

T. J. Sullivan
Publisher: Springer
Publication Date: 2016
Number of Pages: 342
Format: Hardcover
Series: Texts in Applied Mathematics 63
Price: 79.99
ISBN: 9783319233949
Category: Textbook
[Reviewed by William J. Satzer, on 02/12/2016]

This book aims to provide an introduction to the mathematics of the quantification of uncertainty. It is intended for students in mathematics and statistics; in the US it would be a graduate-level textbook. The author says in the preface that his aim “has been to give a survey of the main objectives in the field … and a few of the mathematical methods by which they can be achieved.”

Two broad classes of uncertainty are considered. The first is uncertainty arising from lack of knowledge; it’s about things that we could know in principle, but don’t know in practice. The author calls this epistemic uncertainty. Such uncertainty could arise when a system to be modeled incorporates unknown and therefore unmodeled phenomena, or when necessary measurements or initial conditions are not sufficiently precise. The other class, not entirely distinct, is statistical uncertainty, where random effects and inherently variable phenomena are prominent; this the author calls aleatoric uncertainty. The general direction of work in uncertainty quantification seems to involve reducing epistemic uncertainties to aleatoric ones.

A typical objective in uncertainty quantification is the forward propagation of uncertainty: individual sources of uncertainty in the model of a system are propagated through the model to predict the overall uncertainty in the system’s response. Another objective is to solve a kind of inverse problem: to identify and estimate model and parameter uncertainty using test data that should correspond to the output of the model.
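To make the idea of forward propagation concrete, here is a minimal Monte Carlo sketch in Python. It is an illustration of my own, not an example from the book: the toy model and the assumed distributions of the two uncertain inputs (a stiffness-like parameter and a load) are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical model: system response as a function of two uncertain inputs.
def model(k, load):
    return load / k + 0.05 * load**2

n_samples = 100_000

# Assumed input uncertainties (illustrative choices, not from the book):
# stiffness k ~ Normal(2.0, 0.1), load ~ Uniform(0.9, 1.1).
k = rng.normal(loc=2.0, scale=0.1, size=n_samples)
load = rng.uniform(low=0.9, high=1.1, size=n_samples)

# Push each input sample through the model to get samples of the response.
response = model(k, load)

# Summarize the induced uncertainty in the output.
print("mean response:", response.mean())
print("std of response:", response.std())
print("95% interval:", np.percentile(response, [2.5, 97.5]))
```

Sampling-based propagation like this is only one of the approaches the book covers; spectral expansions and stochastic Galerkin methods (mentioned below) pursue the same goal by other means.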

The author notes that this is an emerging field without any kind of overarching theory. His emphasis is very much on mathematical methods. Much of what he offers here is a collection of tools and techniques that could be used to quantify uncertainty.

The development begins with measure theory and probability, continues with Hilbert and Banach spaces, and goes on to optimization theory. This is followed by a brief discussion of measures of uncertainty that includes interval bounds, information-theoretic measures such as Shannon information and entropy, and approaches for computing distances between probability measures.

The author continues in this vein with a mix of chapters on general tools (orthogonal polynomials, numerical integration, spectral expansions, stochastic Galerkin methods) and material more directed toward uncertainty (Bayesian inverse problems, sensitivity analysis and model reduction, distributional uncertainty).

Examples are sparse and tend to be very general. This appears to be a deliberate choice. Once again, from the author’s preface: “… practical examples almost always require some ad hoc combination of multiple techniques… Such compound examples have been omitted in the interests of keeping the presentation of the mathematical ideas clean, and in order to focus on examples that will be more useful to instructors and students.”

The problem is that the approach is so clean that it gives the reader very little idea of what the real issues are or what quantification of uncertainty might actually involve. The author is correct: ad hoc is the order of the day. One would hope that a book like this would attempt to apply the techniques it describes to bring some rigor to this world of the ad hoc.

In one chapter the author briefly discusses the Kalman filter. This might have been a good opportunity to explore some of the real issues. The Kalman filter combines a model (typically a set of differential equations) and measurements. It is assumed that both the model and the measurements are imperfect, so uncertainty is expressed as system noise (to represent unmodeled effects) and measurement noise. The algorithm developed by Kalman is an optimal combination of model and measurements in the presence of uncertainty, provided everything is linear. (Nonlinear systems are handled by linearizing; this of course introduces other uncertainties.) Even with linearity, there is a good deal of art involved in getting good results: estimating the level of system noise and measurement error, and deciding when the model needs refinement. Practitioners become very accomplished at the art of setting up and running a Kalman filter, but a good deal of the process is ad hoc. What do the methods of uncertainty quantification have to offer here?
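To show where that art enters, here is a minimal one-dimensional Kalman filter sketch in Python. It is my own illustration, not an example from the book: the scalar linear model and the noise variances Q (system noise) and R (measurement noise) are assumed values, and choosing Q and R is exactly the kind of ad hoc tuning described above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed scalar linear model (illustrative, not from the book):
# state x_{t+1} = a * x_t + process noise, measurement z_t = x_t + measurement noise.
a = 1.0    # state transition
Q = 0.01   # process-noise variance (the "system noise" level one must guess)
R = 0.25   # measurement-noise variance (likewise an estimate in practice)

# Simulate a true trajectory and noisy measurements of it.
T = 50
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(scale=np.sqrt(Q))
z = x_true + rng.normal(scale=np.sqrt(R), size=T)

# Kalman filter: alternate predict and update steps.
x_est, P = 0.0, 1.0   # initial state estimate and its variance
estimates = []
for t in range(T):
    # Predict: propagate the estimate and its uncertainty through the model.
    x_pred = a * x_est
    P_pred = a * P * a + Q
    # Update: blend prediction and measurement, weighted by their uncertainties.
    K = P_pred / (P_pred + R)             # Kalman gain
    x_est = x_pred + K * (z[t] - x_pred)
    P = (1.0 - K) * P_pred
    estimates.append(x_est)

print("final estimate:", estimates[-1], "true value:", x_true[-1])
```

Even in this toy setting, the quality of the estimates hinges on how well Q and R reflect the actual noise, which in practice is rarely known and must be tuned by hand.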

The author has not written a book to address issues like this. He describes and develops some wonderful, if rather general, tools. My concern is that a reader could easily finish the book yet have no idea what uncertainty quantification really means.


Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials. His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.

See the table of contents on the publisher's webpage.