Dynamic Optimization for Beginners

Piermarco Cannarsa and Filippo Gazzola
Publisher: European Mathematical Society
Publication Date: 2021
Number of Pages: 360
Format: Hardcover
Price: 59.00
ISBN: 978-3985470129
Category: Textbook
[Reviewed by Bill Satzer, on 05/02/2024]
“Dynamic optimization” usually refers to the optimization of quantities in systems that change over time. The authors never define the term explicitly, but the book’s contents suggest that, for them, it means primarily a combination of the calculus of variations and control theory.
 
The authors say that they are addressing two different sets of readers: mathematics students and those who need dynamic optimization for applied sciences such as economics and data science. They describe their book as self-contained with no prerequisites, and this is consistent with the “for Beginners” part of their title. That, however, requires some interpretation and is addressed further below.
 
The book’s topics are similar to those found in many comparable texts on the calculus of variations and control theory. The text begins with three background chapters on ordinary differential equations, functional analysis, and first order partial differential equations. The first of these is intended as a refresher on material readers have seen before, while the next two are lighter treatments of subjects needed as background for the discussion of dynamic optimization.
 
The treatment of the calculus of variations focuses on the basics. It confines itself to the classical function spaces of continuous and continuously differentiable functions; no Sobolev spaces or anything more exotic show up here. The primary purpose of this part is to provide the tools needed later on. The authors then begin their discussion of the major elements of control theory, focusing first on controllability and then moving to optimal control for general nonlinear systems. They emphasize necessary conditions for optimality and use the Pontryagin minimum (or maximum) principle to determine optimal controls. Two extended application examples are noteworthy: the minimal-fuel moon lander problem and optimal lockdown strategies for pandemics.
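 
For orientation, here is a standard statement of the principle in generic notation (not necessarily the notation the authors use). For a control system $\dot{x} = f(x,u)$ with running cost $L(x,u)$, one forms the Hamiltonian $H(x,p,u) = p \cdot f(x,u) + L(x,u)$; along an optimal trajectory $x^*$ the adjoint state $p$ solves $\dot{p}(t) = -\partial_x H(x^*(t), p(t), u^*(t))$, and the optimal control minimizes the Hamiltonian pointwise:
\[
u^*(t) \in \operatorname*{arg\,min}_{u \in U} H\big(x^*(t), p(t), u\big).
\]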
 
Finally, the authors take up dynamic programming, treating it at a fairly basic level and emphasizing applications to optimal control. They connect this approach to the Pontryagin minimum principle via Bellman’s work and what they call the Hamilton-Jacobi-Bellman theorem.
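 
Concretely, in generic notation (again, not necessarily the book’s), the value function $v(t,x)$ of a finite-horizon problem with running cost $L$ and terminal cost $g$ satisfies, at least formally, the Hamilton-Jacobi-Bellman equation
\[
\partial_t v(t,x) + \min_{u \in U} \big\{ \nabla_x v(t,x) \cdot f(x,u) + L(x,u) \big\} = 0, \qquad v(T,x) = g(x),
\]
and a minimizing $u$ yields an optimal feedback control; this is the bridge between dynamic programming and the Pontryagin principle.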
 
Simple exercises, some with hints, are scattered throughout the book. The final chapter offers a more extensive collection of exercises, with solutions, organized to correspond to each of the earlier chapters. These should be especially helpful to readers who are struggling with the more difficult theoretical material.
 
Prerequisites for the book include a strong background in calculus and ordinary differential equations, at least basic linear algebra, and some comfort with the tools and arguments of real analysis. The “for Beginners” in the book’s title must refer to readers with this background who are beginning their study of dynamic optimization.
Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications. He did his PhD work in dynamical systems and celestial mechanics.