# AN OPTIMIZATION PRIMER

## Roger J-B Wets

### PREFACE

This book is designed to serve as an introduction to mathematical programming for students with a background roughly equivalent to a bachelor's degree in science, engineering, or mathematics. More specifically, it is expected that the reader has a good foundation in Differential Calculus and Linear Algebra and is familiar with the abstract notion of a function. The presentation includes a very elementary introduction to Probability that will allow the non-initiated reader to follow the sections dealing with stochastic programming. There are 16 chapters, each corresponding to what could be covered in about a week's lectures, 3 to 4 hours. Proofs have been provided so that (1) an instructor doesn't have to give them in glorious detail but can limit the discussion to the main idea(s), and (2) they can serve as a guide to solving the exercises. An appendix provides a review of some standard notation and terminology, as well as of basic results in analysis that might not be familiar to students who, after all, are bound to have somewhat different backgrounds.

The overall approach, if not innovative, is certainly non-standard in terms of present-day textbooks. The emphasis is on setting up the tools to deal with a broad class of applications that come up in engineering, ecology, management, biology, economics, and so on, and that are large scale in nature because they have either a `dynamic' or an `uncertainty' component. We don't start with linear programming and build up to global optimization, passing through quadratic programming, convex programming, integer programming, large-scale programming, etc. Moreover, the accent is on the formulation and analysis of optimization problems, and of their solutions, rather than on the analysis of the solution procedures. To achieve this, we proceed, more or less, on three parallel tracks: formulation, theory, and description of solution procedures.

A particular feature of this book is that it makes stochastic programming an integral part of the presentation. There are three basic reasons for this. First, stochastic optimization problems motivate the study of non-linear optimization, large-scale optimization, non-differentiable functions, and variational analysis. Second, any instructor attempting to teach stochastic programming quickly realizes that it's basically necessary to cover all the basics of deterministic programming. And finally, very few important decision problems that can be modeled as optimization problems do not involve some uncertainty about some of the parameters of the problem: uncertain data, future prospects, etc. It's thus imperative that, from the outset, the student be aware of the potential pitfalls of simplifying a model so as to skirt `uncertainty.' Making decisions under uncertainty is integral to life! Choosing a restaurant, buying life insurance or not, choosing a spouse, a career, a diet, an investment strategy, etc., are all decisions under uncertainty! We even make such decisions subconsciously when confronted with a `risky' environment. For example, when changing lanes in heavy traffic, we `calculate' the associated risk and decide whether or not to change lanes, without actually knowing precisely what the drivers next to us are going to do. Once the decision is made and we observe the future unfold, we usually make adjustments (new decisions) that take into account the changed environment. All such decision problems fit under the general heading of decision making under uncertainty. A number of them are quantifiable and can be formulated as stochastic programming problems. A wide range of examples will be provided in the text. However, the probabilistic content will be naive. In fact, just a modicum of probability theory is all that's needed, and it allows us to include uncertainty in the formulation of our optimization models.
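As a small illustration of how modest the required probabilistic machinery is, consider a newsvendor-style decision: commit to an order quantity before demand is known, and judge each candidate quantity by its expected profit. This sketch is not taken from the book, and all prices, costs, and demand figures in it are hypothetical; it is written in Python rather than in the Matlab environment assumed later.

```python
# Newsvendor sketch: order q units at unit cost c, sell min(q, demand) at
# price p. Demand is uncertain, described by a small discrete distribution.
# All numbers below are hypothetical, chosen only for illustration.

p, c = 1.0, 0.6                 # selling price and unit cost
demand = [20, 40, 60]           # possible demand levels
prob = [0.3, 0.5, 0.2]          # their probabilities (sum to 1)

def expected_profit(q):
    # Average the profit p*min(q, d) - c*q over the demand scenarios.
    return sum(pr * (p * min(q, d) - c * q) for d, pr in zip(demand, prob))

# Pick the order quantity with the best expected profit.
best_q = max(range(0, 61), key=expected_profit)
print(best_q, round(expected_profit(best_q), 2))   # → 40 10.0
```

Note that nothing beyond a weighted average over scenarios is needed: the `uncertainty' enters the model only through the expectation in the objective.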

Of course, this doesn't mean that the book provides a comprehensive treatment; that's not possible without relying much more extensively on variational and functional analysis, `real' probability theory, the theory and numerics of ordinary and partial differential equations, stochastic dynamics, and mathematical statistics.

Although we end up describing a significant number of algorithmic procedures, we don't concern ourselves directly with implementation issues. These are best dealt with in specialized courses and related books. The goal here is not to educate algorithm-builders but to make sure that intelligent use can be made of the existing procedures, and to build the capability to modify such procedures to fit the applications. For example, in the case of linear programming, there is eventually a description of both the simplex and the interior-point methods, but from the outset it's assumed that packages to solve mathematical programs of various types, including linear programs, are available (CPLEX, IBM Solutions, LOQO, ...). To allow for some experimentation with these solution procedures, it's assumed that the reader has access to Matlab (distributed by The MathWorks, Inc.), in particular to the functionalities found in the Matlab Optimization Toolbox. Some of the examples and exercises were solved using these Matlab functions, and whenever appropriate the corresponding `m-file' has been supplied.
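As a taste of what calling such a packaged solver looks like, the following sketch solves a toy linear program. It uses Python's SciPy (`scipy.optimize.linprog`) as a stand-in for the Matlab Optimization Toolbox mentioned above, and the problem data are made up purely for illustration.

```python
# Solve a toy linear program with an off-the-shelf solver:
#   maximize  x + 2y   subject to  x + y <= 4,  x <= 3,  x, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
from scipy.optimize import linprog

c = [-1.0, -2.0]                  # minimize -(x + 2y)
A_ub = [[1.0, 1.0],               # x + y <= 4
        [1.0, 0.0]]               # x     <= 3
b_ub = [4.0, 3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # optimum at (0, 4) with objective value -8
```

The point of such an exercise is not the solver's internals but the modeling step: translating the problem into the matrix form `(c, A_ub, b_ub, bounds)` that every linear programming package, Matlab's included, expects in some guise.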