It has been sixteen years since the last edition of this well-known text on matrix computations. This new edition is about 25% longer than the previous one, and much has been added: new topics, expanded treatments, and substantial upgrades. The book is in part a kind of tribute from Van Loan to his co-author Gene Golub, who died in 2007.

This is not an introductory textbook in numerical linear algebra. It is comprehensive and sophisticated, intended for graduate students in mathematics and computer science or for use as a reference. The book makes considerable demands on the reader. For example, Chapter 1 begins a little deceptively with a definition of matrix multiplication and an example with 2×2 matrices. Then, within a few pages, one finds oneself deep in an analysis of Strassen’s algorithm for matrix multiplication with reduced operation count. Those looking for an introduction to the subject would do better to start with Stewart’s *Introduction to Matrix Computations*, for example.
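To give a flavor of what the reader meets so early, here is a minimal sketch of Strassen's recursion, which trades the usual eight block products for seven at the cost of extra additions. This is not the book's code; it assumes square matrices whose dimension is a power of two and falls back to the ordinary product on small blocks.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen's algorithm: seven recursive block products instead of eight.

    Assumes A and B are square with power-of-two dimension; below the
    `leaf` size it falls back to the conventional product.
    """
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products.
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    # Reassemble the four blocks of C = A @ B.
    C = np.empty((n, n), dtype=M1.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The recursion gives an asymptotic operation count of O(n^log2 7) ≈ O(n^2.81), and the book's analysis of its stability and practicality is exactly the kind of depth that arrives within those first few pages.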

Some experience with numerical computation is also an important prerequisite. The subtleties and pitfalls of floating point arithmetic are pervasive themes. Although the book has a mixture of theory and computation, it is unabashedly focused on computation, especially high-performance computing. This is evident from the beginning. The first chapter has sections on fast vector-matrix products, vectorization of operations, and parallel matrix multiplication. Operation counts are important, but equally important (for matching algorithms to computing environments) is how the arithmetic units of a computer system interact with the underlying memory. It’s no good having an algorithm with ultrafast arithmetic operations if there’s a significant bottleneck in getting access to memory.
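The standard remedy the book discusses is blocking: partitioning a product into tiles small enough to stay in fast memory while they are reused. The following sketch illustrates the idea only (the assumed tile size is illustrative, and real performance comes from tuned libraries, not Python loops):

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Tiled matrix product C = A @ B.

    Each bs-by-bs tile of A and B is loaded once and reused across a
    whole block row/column of C, improving arithmetic-to-memory ratio.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((n, p), dtype=np.result_type(A, B))
    for i in range(0, n, bs):
        for k in range(0, m, bs):
            Aik = A[i:i+bs, k:k+bs]        # tile reused for every j
            for j in range(0, p, bs):
                C[i:i+bs, j:j+bs] += Aik @ B[k:k+bs, j:j+bs]
    return C
```

The arithmetic count is identical to the unblocked product; only the order of memory traffic changes, which is precisely the review's point about matching algorithms to computing environments.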

This may seem a little intense for someone just looking for a decent algorithm to compute eigenvalues. Yet this book certainly has that as well. High performance applications have been emphasized more and more in successive editions of this book, probably in part because of the growing emphasis on “big data”.

The book’s major topics are linear systems (general and special), orthogonalization and least squares, eigenvalue problems (symmetric and unsymmetric), functions of matrices (exponential, square root, log), and large sparse problems (both linear systems and eigenvalue problems). A final chapter takes up some special topics. One subject with a lot of current interest is the numerical treatment of tensors: tensor unfolding, contractions and decompositions. A natural way that tensors can arise in this context is by a stacking of matrices, where each matrix in the stack might represent quantities determined at one time in a sequence of discrete times.
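The stacking picture, and the unfolding operation mentioned above, can be sketched in a few lines. The data here are hypothetical, and this mode-n unfolding is one common convention, not necessarily the book's exact layout:

```python
import numpy as np

# Hypothetical data: one 4x3 measurement matrix per discrete time step.
rng = np.random.default_rng(0)
slices = [rng.standard_normal((4, 3)) for _ in range(5)]

# Stacking the matrices yields a third-order tensor; time is the last mode.
T = np.stack(slices, axis=2)          # shape (4, 3, 5)

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-n fibers as columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T0 = unfold(T, 0)                     # a 4 x 15 matrix
```

Unfolding is the bridge back to matrix computations: decompositions of a tensor are often computed by applying matrix algorithms (SVD, QR) to its unfoldings.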

Among the notable new topics treated in this edition are singular value decomposition (SVD) of large sparse matrices, fast transforms (FFT, sine and cosine, Haar wavelet) and structured eigenvalue problems. The treatment of floating point arithmetic has been nicely enhanced. I would strongly recommend the short section called “Become a Floating Point Thinker” to anyone interested in the art of computation.

The annotated bibliographies and list of related books on the subject are comprehensive and impressive. The exercises are relatively few. Someone teaching a course with this text would need to develop a broader and more varied set.

In its latest edition this is probably a better textbook than before. The organization is clearer, the visual presentation of material is more attractive, and important ideas come through more clearly.

Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials. His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.