Introduction to Nonlinear and Global Optimization

Eligius M. T. Hendrix and Boglárka G.-Tóth
Publisher: Springer
Publication Date: 2010
Number of Pages: 208
Format: Hardcover
Series: Springer Optimization and Its Applications 37
Price: $49.95
ISBN: 978-0-387-88669-5
Category: Textbook
[Reviewed by Brian Borchers, on 08/28/2010]

In nonlinear optimization we consider the problem of minimizing an objective function f(x), possibly under constraints that define a set of feasible solutions. These constraints are typically expressed as inequalities of the form g(x) ≤ 0 or equations h(x) = 0. If the objective and constraint functions are continuously differentiable, then it is relatively easy to develop necessary conditions, and under additional assumptions sufficient conditions, for a solution to be a local minimum of the constrained optimization problem. The Karush-Kuhn-Tucker conditions are the basis for this theory of optimality. On the computational front, derivative-based algorithms such as the method of steepest descent and Newton’s method have been developed to find a local minimum of a nonlinear optimization problem.
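
To make this concrete, here is a minimal Python sketch of steepest descent and Newton's method on a simple unconstrained quadratic. It is my illustration, not code from the book; the test function, step size, and iteration counts are arbitrary choices.

```python
# Two derivative-based local methods on min f(x) = x1^2 + 5*x2^2,
# whose unique (local and global) minimum is the origin.
import numpy as np

def f(x):
    return x[0]**2 + 5.0 * x[1]**2

def grad(x):
    return np.array([2.0 * x[0], 10.0 * x[1]])

hess = np.array([[2.0, 0.0], [0.0, 10.0]])  # constant Hessian of this quadratic

def steepest_descent(x, step=0.05, iters=200):
    for _ in range(iters):
        x = x - step * grad(x)               # move against the gradient
    return x

def newton(x):
    return x - np.linalg.solve(hess, grad(x))  # Newton step: x - H^{-1} grad f(x)

x0 = np.array([3.0, -2.0])
print(steepest_descent(x0))  # approaches [0, 0] linearly
print(newton(x0))            # exact in one step, since f is quadratic
```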

These algorithms and optimality conditions are inherently local in nature because they make use only of derivative information at particular points. In some cases it is possible to prove “global convergence” of an optimization algorithm, but this really means “convergence to a local minimum from an arbitrary starting point” rather than “convergence to a global minimum.” If the feasible set is convex and the objective function to be minimized is convex, then any local minimum is also a global minimum, so algorithms that find a local minimum are sufficient. However, there are many situations in which we need to find a global minimum of a nonconvex nonlinear minimization problem, and more sophisticated approaches are needed.
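
The distinction is easy to see in one variable. In the sketch below (again my own, using an arbitrarily chosen quartic), gradient descent started from two different points settles into two different local minima, and nothing in the method itself reveals which one, if either, is global.

```python
# Gradient descent on the nonconvex function f(x) = x^4 - 4x^2 + x:
# the result depends entirely on the starting point.
def f(x):
    return x**4 - 4 * x**2 + x

def fprime(x):
    return 4 * x**3 - 8 * x + 1

def descend(x, step=0.01, iters=2000):
    for _ in range(iters):
        x -= step * fprime(x)
    return x

left = descend(-2.0)    # ~ -1.47, the global minimizer of f
right = descend(2.0)    # ~ +1.35, only a local minimizer
print(left, f(left))    # noticeably lower objective value
print(right, f(right))  # local search got stuck here
```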

Methods for global optimization are typically treated as an advanced topic that is introduced to students only after an introductory course on optimality conditions and algorithms for finding local minima. The authors have written a textbook that is rather unusual in attempting to introduce both local methods and methods for global optimization in a first course on optimization.

After a brief introduction, the book begins with a chapter that gives several examples of optimization problems. The classical theory of Lagrange multipliers and the Karush-Kuhn-Tucker conditions is covered in chapter 3. Chapter 4 provides an interesting discussion of various ways of measuring the effectiveness and efficiency of optimization algorithms. Techniques for local optimization are presented in chapter 5, including the method of Nelder and Mead, steepest descent, conjugate gradients, and Newton’s method. This first half of the book provides brief coverage of the main topics that are traditionally included in a first course on nonlinear optimization.
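
As a taste of the chapter 5 material, the sketch below runs two of these methods on the classic Rosenbrock test function using SciPy; the choice of library is mine, and the book does not rely on any particular software.

```python
# Derivative-free Nelder-Mead versus a Newton-type method on the
# Rosenbrock function, whose minimizer is (1, 1).
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])

nm = minimize(rosen, x0, method='Nelder-Mead')               # simplex search, no derivatives
nt = minimize(rosen, x0, method='Newton-CG', jac=rosen_der)  # exploits gradient information

print(nm.x, nm.nfev)  # both reach (1, 1); Nelder-Mead typically needs many
print(nt.x, nt.nfev)  # more function evaluations than the Newton-type method
```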

The most interesting material appears in the final two chapters of the book. Deterministic algorithms for global optimization are presented in chapter 6. All of these methods require some special structure in the function to be minimized. For example, if we can assume Lipschitz continuity of the objective function, then branch and bound methods can be used to obtain a globally optimal solution. Stochastic algorithms for global optimization are the subject of chapter 7. These algorithms do not provide guaranteed convergence to a global minimum, but they can be very effective heuristics.
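
To illustrate the Lipschitz idea, here is a small branch-and-bound sketch of my own, in the spirit of chapter 6 rather than taken from the book. If f has Lipschitz constant L, then on an interval [a, b] it satisfies f(x) ≥ (f(a) + f(b))/2 - L(b - a)/2, so any interval whose lower bound cannot beat the best value found so far can be discarded; the test function and the constant L = 6 are illustrative choices.

```python
# Branch and bound for a univariate Lipschitz function: intervals are
# kept in a heap ordered by their lower bounds, the most promising
# interval is bisected, and the loop stops once no interval can hold
# a point more than tol below the incumbent.
import heapq
import math

def f(x):
    return math.sin(3 * x) + 0.5 * x   # nonconvex; |f'| <= 3.5, so L = 6 is valid

def bound(a, b, fa, fb, L):
    return (fa + fb) / 2 - L * (b - a) / 2   # Lipschitz lower bound on [a, b]

def lipschitz_bb(a, b, L=6.0, tol=1e-4):
    fa, fb = f(a), f(b)
    best = min(fa, fb)                        # incumbent (best value so far)
    heap = [(bound(a, b, fa, fb, L), a, b, fa, fb)]
    while heap:
        lb, a, b, fa, fb = heapq.heappop(heap)
        if lb > best - tol:                   # no interval can improve: done
            break
        m = (a + b) / 2
        fm = f(m)
        best = min(best, fm)
        heapq.heappush(heap, (bound(a, m, fa, fm, L), a, m, fa, fm))
        heapq.heappush(heap, (bound(m, b, fm, fb, L), m, b, fm, fb))
    return best

print(lipschitz_bb(-2.0, 2.0))  # ~ -1.28, the global minimum value within tol
```

A stochastic method of the kind covered in chapter 7 would instead, for example, sample starting points at random and keep the best local optimum found, giving up the guarantee in exchange for generality.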

At just over 200 pages, this book provides a concise introduction to many of the important ideas in nonlinear and global optimization. These ideas are well illustrated with many small-scale computational examples. Although the authors carefully state theorems and definitions, there are very few proofs. This textbook may be of interest to instructors who want to introduce global optimization in a first course, focusing on basic concepts and algorithms rather than a deeper theoretical treatment of a smaller set of topics.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.

Preface.- 1. Introduction.- 2. Mathematical Modeling, Cases.- 3. NLP Optimality Conditions.- 4. Goodness of Optimization Algorithms.- 5. Nonlinear Programming Algorithms.- 6. Deterministic GO Algorithms.- 7. Stochastic GO Algorithms.- References.- Index.