Recent years have seen the development of many open-source mathematical texts. The American Institute of Mathematics, for instance, maintains a list of high-quality open textbooks on its website. These open textbooks have several advantages over traditionally published texts. First and foremost, they are free. Second, while most are available as downloadable PDF files, many also have very inexpensive print versions that can be purchased by students who want a physical copy. On the other hand, many open textbooks lack the polish of traditionally published texts and are more likely to contain errors or typesetting issues.

*Tea Time Linear Algebra* is Leon Brin's most recent open text. (He is also the author of *Tea Time Numerical Analysis*; see our review.) A PDF version can be downloaded for free from the author's website, while a printed copy can be purchased from the print-on-demand publisher Lulu for only $22, nearly a tenth the price of some better-known linear algebra textbooks. Additionally, Brin has integrated his textbook with the MyOpenMath course management system, allowing instructors using the book to easily assign and collect homework that is synchronized with the text. The words *tea time* are meant to convey the text's informal, conversational style. The book is meant for classes that are not proof based. Indeed, relatively few proofs are given in the text at all. In fact, Brin eschews not only the traditional *theorem, proof, theorem, proof,* ... style of writing, but also the highlighted, boxed-off definitions followed by streams of examples that one expects to find in a textbook. Instead, definitions and statements of theorems are woven into the informal discussion that makes up each section.

Although the topics covered by *Tea Time Linear Algebra* are fairly typical, the book's organization is not. The first chapter of the book is a very fast-paced introduction to matrix computations that covers matrix arithmetic, inverses, determinants, orthogonality, and eigenvalues and eigenvectors. All in less than 50 pages! The sections in the first chapter tend to be very short and focus on a single computational method, giving few examples and essentially none of the accompanying theory that one would normally expect to find. Later chapters return to these topics in order to amplify the discussion and fill in some of the gaps in the original treatment. As an example, eigenvalues and eigenvectors first appear at the end of Chapter 1. They reappear at the beginning of Chapter 3, where it is shown that if \(v\) is an eigenvector of a matrix \(A\) with eigenvalue \(\lambda\), then so is any scalar multiple of \(v\). They appear yet again in Chapter 5 during the discussion of diagonalization, and a handful of times in the final two chapters on applications.
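The Chapter 3 fact mentioned above is the kind of result the book states without proof, though the verification is a one-line computation: for any nonzero scalar \(c\),

\[
A(cv) = c(Av) = c(\lambda v) = \lambda(cv),
\]

so \(cv\) is again an eigenvector of \(A\) with the same eigenvalue \(\lambda\).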

After the fast-paced first chapter comes a chapter on row operations, row reduction, and echelon forms. This chapter is very readable, and it is here that the author's conversational style is most effective. Chapter 3, entitled Matrix Algebra, covers matrix equations, linear independence, and revisits the concepts that were described in the computational first chapter in order to add more depth. These three chapters form the book's first part, *Matrix Mechanics*.

Chapters 4 and 5 form the second part of the book, *Matrix Abstraction*, and concern vector spaces and inner product spaces. The topics covered here are standard, and while some results are stated for general vector spaces, most concern \(\mathbf{R}^n\) or perhaps the vector space of all real polynomials of degree \(n\) or less. Abstract vector spaces are difficult for many students when they first encounter them, and it is here that I think the author's informal style of writing is least effective. As an example, the author states (in Section 4.1) that "Any subset of a vector space that is itself a vector space is called a **subspace**." Because formal definitions are not given, this is the definition of a subspace that the reader is left with. The problem is that it is never stated that a subspace must use the same scalar multiplication and vector addition as the ambient vector space. (The positive real numbers, for example, form a vector space under exotic operations, yet they are not a subspace of \(\mathbf{R}\).) This is not mere pedantry but rather illustrates the types of important nuances that can get lost in such an informal exposition.

*Applications* is the final part of the book and consists of two chapters. Chapter 6 covers mathematical applications (LU factorization, the power method, the geometry of determinants and eigenvalues, and approximation), while Chapter 7 covers further applications (e.g., linear regression, Markov chains, discrete dynamical systems). In the table of contents, the titles of the sections covering applications very usefully include any previous sections that will be referred to. For example, the section on the power method is listed as "The Power Method [3.5]" because the discussion refers to material that was covered in Section 3.5 (on determinants). Unfortunately, the running header in the section itself is "The Power Method [??]". All of the other sections whose titles include section numbers are similarly garbled.
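For readers unfamiliar with the power method the book covers, the idea fits in a few lines of code: repeatedly multiply a vector by the matrix and normalize, and the vector settles toward a dominant eigenvector. This is a generic sketch of the algorithm, not Brin's code:

```python
def power_method(A, x, iterations=50):
    """Estimate the dominant eigenvalue and an eigenvector of a square
    matrix A (a list of row lists) by repeated multiplication and
    normalization. Generic illustration, not the book's implementation."""
    n = len(A)
    for _ in range(iterations):
        # Multiply: y = A x
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Normalize by the largest-magnitude entry; that entry
        # converges to the dominant eigenvalue.
        lam = max(y, key=abs)
        x = [yi / lam for yi in y]
    return lam, x

# The matrix [[2, 1], [1, 2]] has eigenvalues 3 and 1,
# so the estimate converges to 3 with eigenvector (1, 1).
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The normalization step is what keeps the iterates from overflowing or underflowing; without it, the vector's entries grow by a factor of roughly \(\lambda\) each step.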

Throughout the text Brin does a wonderful job of incorporating the SAGE computer algebra system into his exposition and into the exercises. The PDF of the textbook contains hundreds of useful SageMathCell links; clicking one takes the reader to a website preloaded with the SAGE code needed to perform a particular computation. This is an extraordinary feature of the textbook and saves readers from the tedium of data entry that so often comes with matrix computation exercises.

*Tea Time Linear Algebra* is an interesting book. I could see it being used as the textbook in a class meant to introduce the fundamentals of matrix algebra to students interested in learning computational techniques without getting too bogged down in the associated theory. The book is completely free and has an exposition that wonderfully integrates a useful computer algebra system. For my part, while reading the text I often found myself looking for more formal, detailed explanations of concepts than were being offered. Additionally, I was never able to get fully behind the book's quirky structure. To return to my eigenvalue/eigenvector example: if eigenvalues and eigenvectors are first defined in Chapter 1, the reader shouldn't have to wait until the end of Chapter 5 (in the section on Similarity and Diagonalization) to begin to learn why they are important and worth studying.