Lecturer: Kimmo Berg
Assistant: Joonas Laihanen
Wed 15.02.2017 13-16
Thu 6.4.2017 9-12
Why this course?
Many real-life problems involve nonlinearities: many mechanical and chemical systems are nonlinear (drag force is nonlinear in the velocity), people's attitudes towards risk are nonlinear, and shipping and ticket costs are nonlinear in the weight and the length of the trip. This course presents the theoretical foundations and methods for solving nonlinear programming (NLP) problems.
Solving NLP problems is hard, and we discuss why. Linear programming (LP) problems are usually considered the "easy" problems, and they have many important real-life applications, but in this course we show that the class of easy problems (at least in theory) should really be the convex problems. We introduce the basics of convex analysis, which is a beautiful and complete field of mathematics in its own right. We develop and prove optimality conditions, starting from the geometric condition "there is no better solution nearby" and ending up with algebraic equations that determine whether a given point can or cannot be the optimum. Along the way, we learn generally useful concepts and results from linear algebra and convex analysis.
"...in fact, the great watershed in optimization isn't between linearity and nonlinearity, but convexity and nonconvexity." R. T. Rockafellar, 1993
"To solve a linear programming problem efficiently, you turn it into a nonlinear problem using a logarithmic barrier function and solve it with interior-point methods." anon.
In this course, the student will learn
- Optimization theory: necessary and sufficient optimality conditions for different problems and the assumptions behind them. There can be many local and global optima.
- Numerical algorithms for solving unconstrained and constrained problems, and how to compare algorithms in speed, convergence, and memory requirements.
- How to formulate optimization problems: how to build a model that can be solved efficiently, and which types of problems are easy and which are hard.
- Convexity and its importance to optimization: convexity guarantees that local optima are global. Basics of convex analysis, convex sets and functions, and how to prove convexity.
- Duality: for any optimization problem, there is another problem, called the dual problem, that gives some information about the original problem.
- General mathematical issues: how to handle functions that are not differentiable or continuous, axioms of choice, the implicit function theorem, the connection to the Hahn-Banach theorem, the interpretation of Lagrange multipliers, connections to Lagrangian/Hamiltonian mechanics, matrix inversion, exploiting structure in optimization and linear algebra, eigenvalues and ellipsoids, ...
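As a small taste of the convexity topic above: a twice-differentiable function is convex exactly when its Hessian is positive semidefinite everywhere. For a quadratic function the Hessian is constant, so convexity reduces to a single eigenvalue check (a sketch in Python with NumPy, not course code):

```python
import numpy as np

# Convexity of f(x, y) = x^2 + x*y + y^2 via its Hessian.
# The Hessian of this quadratic is constant, so one check suffices.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # Hessian of x^2 + x*y + y^2
eigenvalues = np.linalg.eigvalsh(H)     # eigenvalues are 1 and 3
is_convex = bool(np.all(eigenvalues >= 0))
```

Since both eigenvalues are positive, the Hessian is in fact positive definite, so this f is strictly convex and its local minimum is the unique global one.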
Teaching: Lectures (24h) and exercise sessions (24h)
Assessment methods: Exam (75%), assignments (25%), extra points from homework and exercises
Grading scale: 0-5
Study material: Lecture slides and exercises. Book: Nonlinear Programming: Theory and Algorithms by Bazaraa, Sherali, and Shetty.
Language of instruction: English
Prerequisites: MS-C2105 Optimoinnin perusteet (Introduction to Optimization), or equivalent