
Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms

Jan A. Snyman
Springer Verlag
Applied Optimization 97
Reviewed by Brian Borchers

Practical Mathematical Optimization is an unusual book both in terms of its content and the arrangement of the material. Rather than producing a conventional textbook on nonlinear programming, the author has combined an introductory textbook with a research monograph describing some of his own research results. Furthermore, the author has chosen an unusual structure in which the discussion of algorithms for optimization problems is separated from examples and from formal statements of theorems and their proofs. The author's choices have resulted in a book that will have a hard time finding an audience.

The book begins with a chapter on basic concepts of nonlinear programming, including a review of important concepts from vector calculus and a discussion of necessary and sufficient conditions for optimality of unconstrained problems. This chapter concludes with a small collection of exercises; however, there are no exercises for the other chapters. Traditional material on steepest descent, conjugate gradient methods, Newton's method, and quasi-Newton methods is presented in chapter two. The author does not cover several important topics, including truncated Newton methods, limited-memory quasi-Newton methods, and trust-region approaches. In chapter three, the author discusses the theory of constrained problems, including the KKT conditions and Lagrangian duality. Methods for solving constrained problems, including barrier methods, multiplier methods, and sequential quadratic programming methods, are introduced.

The presentation in the first three chapters is stripped down, with very few examples and without formal statements of theorems or proofs. Although many important ideas are present, the text reads more like a set of lecture notes than a carefully written textbook. Examples of the solution of small optimization problems are presented in chapter five, and formal statements and proofs of theorems from chapters one through three are given in chapter six. Students will have to jump back and forth between these chapters to fully understand the material. Taken together, chapters one through three, five, and six cover much of what one would expect in an introductory textbook on nonlinear programming.

Chapter four contains a discussion of the author's research into gradient-based methods for optimization that attempt to overcome some practical problems associated with the methods presented in chapters one through three. Individual sections in this chapter are based on journal papers that the author has published.

This book would be a poor textbook choice for a first course in optimization because of its unusual organization, its lack of good exercises, and its limited coverage of many topics. The material presented in chapter four is not appropriate for students in a first course in optimization. On the other hand, more advanced students and researchers who might be interested in chapter four will have little interest in the textbook material presented in the other chapters.

Brian Borchers is a professor of mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.

Preface.- Table of Notation.- Introduction.- Line Search Descent Methods for Unconstrained Minimization.- Standard Methods for Constrained Optimization.- New Gradient-Based Trajectory and Approximation Methods.- Example Problems.- Some Theorems.- The Simplex Method for Linear Programming Problems.- Bibliography.- Index.