This book gives the first comprehensive introduction to lattice algebra from the point of view of applications in image analysis and artificial neural networks.

Roughly half of the book is devoted to a detailed mathematical description of lattice semi-rings and lattice semi-fields, which form the foundation of lattice-based vector spaces. The second half is then devoted to applications of this toolbox in artificial intelligence, with a focus on pattern recognition. Many examples and exercises are given throughout the book, and solutions to the exercises are provided on an associated website.

The authors give a self-contained account of the algebraic concepts, which allows researchers and students with a computer science background to learn the necessary mathematics. With respect to applications, however, the reader needs some prior knowledge of neural networks and artificial intelligence: the goal is to show how lattice algebra is applied there, not to introduce those topics to a novice.

The book comprises nine chapters. After the first two introductory chapters comes Chapter 3, which is devoted to lattice theory; it gives the minimal account needed to work with the key example of the real numbers \( R \) under their natural order, possibly extended by \( -\infty \) and \( +\infty \). Overall, lattice theory is used mostly in the form of complete lattice operations acting componentwise on \( R^{n} \).

Chapter 4 is the mathematical core of the book. Its approach is to replace some ring operations by the lattice operations of join and meet, which on the real numbers amount to max and min. A linear algebra is then developed that mimics matrix operations, with join or meet replacing standard addition. The main goal is to describe the minimax span of a compact subset of \( R^{n} \) and to understand its geometric shape as a special type of polyhedron known as a polytopic beam.
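To make this replacement of operations concrete, here is a minimal sketch (not code from the book, and with function names of my own choosing) of the "join" matrix product and its "meet" dual, in which the usual sum over an index is replaced by max or min while scalar multiplication is replaced by addition:

```python
import numpy as np

def maxplus_product(A, B):
    """C[i, j] = max_k (A[i, k] + B[k, j]) -- the 'join' matrix product."""
    # Broadcast rows of A against columns of B, then take the max over k.
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def minplus_product(A, B):
    """C[i, j] = min_k (A[i, k] + B[k, j]) -- the dual 'meet' product."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [4.0, 2.0]])
print(maxplus_product(A, B))  # entry (0,0) is max(0+1, 2+4) = 6
```

These two products are the lattice-algebra analogues of ordinary matrix multiplication and underlie the minimax span constructions described above.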

In Chapter 5 these ideas are applied to associative memories, such as those developed in image processing. The basis and dual basis of a minimax span are generated in order to recognize patterns corrupted by different types of noise, such as erosive or dilative noise. The authors pay considerable attention to the kernel method, in which two bases are used in succession; this allows one to recall a pattern from only a small portion of it.
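To convey the flavor of such memories, here is a hedged sketch of a lattice auto-associative memory in the min-of-differences form common in this literature; it is an illustrative assumption, not necessarily the book's exact construction. Stored patterns are recalled exactly by a max-plus product with the memory matrix:

```python
import numpy as np

def build_memory(X):
    """W[i, j] = min over stored patterns x of (x[i] - x[j]).
    X holds one pattern per column."""
    # X[:, None, :] - X[None, :, :] has shape (n, n, num_patterns).
    return np.min(X[:, None, :] - X[None, :, :], axis=2)

def recall(W, x):
    """Max-plus recall: y[i] = max_j (W[i, j] + x[j])."""
    return np.max(W + x[None, :], axis=1)

X = np.array([[1.0, 4.0],
              [2.0, 0.0],
              [5.0, 3.0]])  # two stored patterns (columns)
W = build_memory(X)
print(recall(W, X[:, 0]))  # recalls the first stored pattern exactly
```

Perfect recall of stored patterns follows from the construction, since the diagonal of W is zero while every off-diagonal entry is bounded by the corresponding pattern differences.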

Chapter 6 focuses on the computation of extreme points of lattice polytopes, listing several algorithms for space dimensions greater than 3. These are employed in Chapter 7, where the vectors in a data set, carrying spectral and location information, are represented as convex combinations of the extreme points of a polytope. This is the so-called WM-method for linear unmixing of hyperspectral images.
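The unmixing idea can be illustrated with a generic least-squares solve; the extreme points `E`, the vector `v`, and the solver choice below are my own illustration, not the WM-method itself:

```python
import numpy as np

# Columns of E are extreme points (endmembers) in R^2:
E = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([0.2, 0.5])  # a vector inside the triangle they span

# Solve E @ a = v together with the sum-to-one constraint 1 @ a = 1,
# so that a gives the convex-combination coefficients (abundances).
A = np.vstack([E, np.ones(3)])
b = np.append(v, 1.0)
a, *_ = np.linalg.lstsq(A, b, rcond=None)
print(a)  # abundances (0.3, 0.2, 0.5), nonnegative and summing to 1
```

In hyperspectral unmixing the coefficients are interpreted as the fractional abundances of the pure materials at a pixel, which is why nonnegativity and the sum-to-one constraint matter.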

The last two chapters, 8 and 9, are devoted to biomimetic neural networks. In Chapter 8 such networks are described as mimicking biological neuron systems; the combined values passed to neurons along a pathway may be computed by lattice operations. Chapter 9 considers training such networks, based on methods of elimination or merging. The final section focuses on multi-layer networks, where learning is based on a valuation similarity measure.
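As a minimal sketch of what "computed by lattice operations" can mean at a single neuron (an assumed simplified form, not the book's exact model), inputs combine with weights by addition and are then aggregated by a join or meet instead of a weighted sum:

```python
import numpy as np

def lattice_neuron(x, w, use_join=True):
    """Aggregate x[i] + w[i] with max (join) or min (meet),
    in place of the usual sum of products."""
    combined = x + w
    return np.max(combined) if use_join else np.min(combined)

x = np.array([0.5, 2.0, 1.0])
w = np.array([1.0, -1.5, 0.0])
print(lattice_neuron(x, w))                  # max(1.5, 0.5, 1.0) = 1.5
print(lattice_neuron(x, w, use_join=False))  # min(1.5, 0.5, 1.0) = 0.5
```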

The book provides a comprehensive list of references, most of which were written since the introduction of lattice-based algebra in AI and neural networks in the late 1990s.

Kira Adaricheva is a Professor of Mathematics at Hofstra University.