Practical Linear Algebra: A Geometry Toolbox

Gerald Farin and Dianne Hansford
Publisher: A. K. Peters
Publication Date: 2005
Number of Pages: 384
Format: Hardcover
Price: $67.00
ISBN: 1-56881-234-5
Category: Textbook
[Reviewed by Warren Johnson, on 01/21/2006]

This is an interesting and unusual linear algebra textbook. The authors intend it "to be used at the freshman/sophomore undergraduate level. It serves as an introduction to Linear Algebra for engineers or computer scientists, as well as a general introduction to geometry," and it is largely motivated by their "observations of how little engineers and scientists (non-math majors) retain from classical linear algebra classes." One can only wonder whether most math majors are retaining much more.

The book has eighteen chapters, more than could be covered in one semester. The authors give two possible syllabi, one for a basic linear algebra course (or as close to a basic course as one could give from this book) and one specifically tailored to computer graphics; the former covers thirteen of the chapters and the latter fourteen. In the first half of the book everything is done in two dimensions only. Each syllabus includes seven of these first nine chapters, and about one per week is probably not an unreasonable pace; moreover, although it is on both syllabi, chapter 1 could well be skipped.

Chapters 10–13 are specifically on three dimensions. Chapter 11 consists mostly of material on the geometry of lines and planes that is often done somewhere in the calculus sequence instead. Perhaps for this reason, it is not included in the basic syllabus, although it fits into this sort of course at least as well as it does in calculus. The Gram-Schmidt method appears at the end of chapter 11. We finally reach n dimensions in chapters 14 and 15. Chapter 15 is not included in the graphics syllabus, but in the basic syllabus one would see Gram-Schmidt in section 15.4. Chapter 16 discusses numerical methods. The last two chapters are on computer graphics and would not be done in the basic course.
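Since Gram-Schmidt figures in both syllabi, it may help to recall what it does: it orthonormalizes a set of linearly independent vectors by repeatedly subtracting projections onto the directions already accepted. A minimal sketch in Python with NumPy (my illustration, not code from the book):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize linearly independent vectors (modified Gram-Schmidt)."""
        basis = []
        for v in vectors:
            w = v.astype(float)
            for q in basis:
                # remove the component of w along the accepted direction q
                w = w - np.dot(q, w) * q
            basis.append(w / np.linalg.norm(w))
        return basis

    # e.g., two independent vectors in R^3
    q1, q2 = gram_schmidt([np.array([1, 1, 0]), np.array([1, 0, 1])])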

The preface starts out talking about computer animation in a general way, and then continues "But this is not a book on entertainment. We start with the fundamentals of Linear Algebra and proceed to various applications. So it doesn't become too dry, we replaced mathematical proofs with motivations, examples, or graphics. For a beginning student, this will result in a deeper level of understanding than standard theorem-proof approaches." The preface is capable of giving the impression that the authors don't prove anything, but this would be an overstatement. Symmetric matrices are a typical example: at the top of page 131 the authors state that a (real) symmetric matrix has real eigenvalues, giving no hint of the proof, but on page 132 they show that if a 2x2 symmetric matrix has distinct eigenvalues, then the eigenvectors are orthogonal. (The book is in general more readable for a student than most linear algebra books, but here a few more words explaining how their equation (7.7) follows from (7.5) would be helpful.) At the bottom of page 273 they state that these facts are also true in more than two dimensions. Hermitian matrices are never mentioned.
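For the record, the standard argument is short, though whether it is the authors' route from (7.5) to (7.7) I leave to the reader. If A is symmetric, Ax_1 = λ_1 x_1, Ax_2 = λ_2 x_2, and λ_1 ≠ λ_2, then

    \[
      \lambda_1 \, x_1^{T} x_2 = (A x_1)^{T} x_2 = x_1^{T} A^{T} x_2 = x_1^{T} A x_2 = \lambda_2 \, x_1^{T} x_2,
    \]

so (λ_1 − λ_2) x_1^T x_2 = 0, and the distinctness of the eigenvalues forces x_1^T x_2 = 0.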

It may be worth noting that this book is in sympathy with several of the sentiments expressed by Jeffrey Stuart in his review of some other current linear algebra textbooks in the American Mathematical Monthly in March 2005, pages 281–288 (which is not to say that I think he would endorse it). At 332 pages of text plus 52 pages of appendix, solutions and index it is shorter than any of the four books reviewed by Stuart. There are no exercises whose only aim is to find a reduced row echelon form, and indeed those words are barely even mentioned. I count two exercises where one is supposed to find an inverse matrix by row operations, a 2x2 in chapter 5 and a 3x3 (with five 1's and four 0's) in chapter 14. There are only 189 exercises in the book overall, and five of them are about PostScript, which is the subject of the appendix.

To borrow two of Stuart's phrases, only someone who does not believe that "the abandonment of determinants seems extreme" could think that this book devotes "far too much time" to them. They occur first in section 4.10, as the signed area of the parallelogram determined by two vectors, and the authors infer that the determinant of a product of two (2x2) matrices must be the product of the determinants. They sketch a lovely geometric derivation of the 2x2 case of Cramer's Rule in section 5.3, though an instructor would have to flesh it out a little for most students to understand it. The cross product appears in section 10.2; the authors use a wedge instead of the standard times symbol, reserving the latter for ordinary multiplication. The scalar triple product is in section 10.6, and 3x3 determinants are foreshadowed there. They are defined in section 12.7 as an alternating sum of 2x2 determinants, though the connection with volume is at least mentioned. The nxn determinant is defined in section 14.3 as the product of the diagonal entries after the matrix has been reduced to upper triangular form (with a sign correction for row exchanges), but cofactor expansion is also mentioned, and Cramer's Rule is revisited there. The book does a nice job of blending determinants into the rest of linear algebra, but it proves very little about them.
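That last definition translates almost verbatim into code. Here is a minimal sketch in Python (my illustration, not the book's; the comments borrow the book's "shear" language):

    import numpy as np

    def det_by_elimination(A):
        """Determinant as the product of the diagonal entries after reduction
        to upper triangular form, with one sign flip per row exchange."""
        U = np.array(A, dtype=float)
        n = U.shape[0]
        sign = 1.0
        for k in range(n):
            # choose the largest available pivot in column k (partial pivoting)
            p = k + np.argmax(np.abs(U[k:, k]))
            if U[p, k] == 0.0:
                return 0.0  # no pivot available: the matrix is singular
            if p != k:
                U[[k, p]] = U[[p, k]]  # a row exchange flips the sign
                sign = -sign
            # shears (adding multiples of the pivot row) leave the determinant alone
            U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
        return sign * float(np.prod(np.diag(U)))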

As advertised, the strongest feature of the book is its geometric emphasis. The basic elimination step of adding a multiple of one row to another is consistently referred to as a shear and interpreted geometrically, and similarly the multiplication of a row by a constant is always thought of as scaling. Reflections, rotations and projections appear in chapter 4 and are revisited later in three dimensions, though not in higher dimensions; the general form of a projection matrix is not mentioned, nor is the connection between projection and least squares. (Incidentally, why don't more linear algebra books talk about general reflection matrices? If P projects orthogonally onto a subspace, then R = 2P – I reflects through the same subspace. The R's are precisely the orthogonal matrices that are also symmetric: a gorgeous class of matrices, and one of my favorite sources of exam problems.)
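The verification takes one line in each direction. A sketch, using only the standard facts that an orthogonal projection satisfies P^T = P and P^2 = P:

    \[
      R^{T} = 2P^{T} - I = R,
      \qquad
      R^{T}R = R^{2} = 4P^{2} - 4P + I = I,
    \]

so R is symmetric and orthogonal. Conversely, if R^T = R and R^T R = I, then P = (R + I)/2 satisfies P^T = P and P^2 = (R^2 + 2R + I)/4 = (2R + 2I)/4 = P, so every symmetric orthogonal matrix is 2P – I for an orthogonal projection P.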

There is a nice discussion of 2-dimensional affine maps in chapter 6, and an interesting treatment of conic sections in chapter 9; maybe a course like this is the right place for them. I was also pleasantly surprised to see the general 3x3 rotation matrix at the bottom of page 210. For an elementary derivation see Roger Alperin's note in the College Mathematics Journal, vol. 20 (1989), page 230, which is reprinted in the MAA book Linear Algebra Gems.
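For comparison (I am quoting the standard form, not necessarily the book's arrangement on page 210), the rotation by angle θ about a unit axis u = (u_1, u_2, u_3) can be written in Rodrigues' form:

    \[
      R = \cos\theta \, I + (1 - \cos\theta)\, u u^{T}
        + \sin\theta
          \begin{pmatrix}
            0 & -u_3 & u_2 \\
            u_3 & 0 & -u_1 \\
            -u_2 & u_1 & 0
          \end{pmatrix}.
    \]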

The book has a few minor gaffes. We find "Schwartz" in the Cauchy-Schwarz inequality on page 29 and in the index; this is all the more odd in that one of the authors was educated in Germany. On page 195 we have "eluded" in place of "alluded", and on page 244 the sentence "Each of these are an elementary row operation." Figure 3.4 is nice, but in concert with the brief discussion of overdetermined systems on pages 253–256 it could give a misleading impression of least squares.

The subsection on positive definite symmetric matrices on pages 134–135 may be the worst in the book. There we find first the statement that for A to be positive definite we need x^T Ax > 0 for all nonzero vectors x in R^2, and that it is not sufficient to have this just for all unit vectors. Although the condition is usually stated for all nonzero vectors, and restricting it to unit vectors doesn't really gain anything, it is easy to see that the restriction loses nothing either. Later we find the definition of the symmetric part of A (the average of A and its transpose), and then the statement that a real symmetric matrix is positive definite if and only if the eigenvalues of its symmetric part are positive. Though technically correct, this is silly, since a symmetric matrix equals its own symmetric part.
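The reduction is one line: for any nonzero x, write u = x/||x||, a unit vector; then

    \[
      x^{T} A x = \|x\|^{2}\,\bigl(u^{T} A u\bigr),
    \]

and since ||x||^2 > 0, requiring positivity for all unit vectors is the same as requiring it for all nonzero vectors.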

There are hardly any historical remarks in the book. A curious exception is on page 61 where, without any reference or discussion, it says that "Matrices were first introduced by H. Grassmann in 1844." Although a similar statement has been made by no less an authority than Josiah Willard Gibbs, Grassmann might better have been mentioned, along with Hamilton, in connection with the dot and cross products. (Gibbs's vice-presidential address on this topic at a meeting of the American Association for the Advancement of Science in 1886 is well worth reading even today; see the Proceedings of the AAAS, vol. 35, pages 37–66.) Readers interested in the history of matrix theory should consult a trilogy of articles published in the 1970s by Thomas Hawkins, or an abridged version in the Proceedings of the International Congress of Mathematicians (Vancouver, 1974), pages 561–570. Matrices appear in at least two of the twenty-five papers published in Crelle's Journal by Eisenstein in 1844, the same year as the first edition of Grassmann's Ausdehnungslehre. Incidentally, I happened to read an excellent biography of Hamilton by Thomas Hankins just after I received the book under review. Hankins points out that Eisenstein visited Hamilton in the summer of 1843, and quite naturally wonders whether they might have discussed noncommutative multiplication. Hamilton discovered the quaternions on October 16 of that year.

I would rather use Strang's book or Bretscher's with the types of students I have been fortunate enough to encounter in linear algebra, but I liked this book more than I expected to. Its strengths are readability and concreteness. I think many students would acquire a good intuition for and working knowledge of linear algebra, at least in 2 and 3 dimensions, by following it. For some people, much of the beauty of the subject stems from the ease with which the basic ideas extend to more than three dimensions; if you are one of them, this is not the right book for you. But if you want to give a course that emphasizes geometry and is not too abstract, and if you don't mind supplementing the exercises, then you might really like it.


Warren Johnson (warren.johnson@conncoll.edu) is visiting assistant professor of mathematics at Connecticut College.
