CS5020 - Non-linear Optimization: Theory and Algorithms

Course Data

Course Objectives

Optimisation problems arise in a variety of disciplines, e.g., machine learning, reinforcement learning, signal processing, and networks. This course covers the core concepts of continuous optimisation and includes a thorough introduction to unconstrained and constrained optimisation problems. Popular algorithms such as gradient and Newton methods will be introduced and analysed in both convex and non-convex settings.

Learning outcomes

  • Develop an understanding of the foundations of classic continuous optimisation problems, in particular identifying convexity, smoothness, feasible regions, and dual reformulations.
  • Develop familiarity with first- and second-order optimisation algorithms.
  • Gain practical knowledge by implementing the algorithms introduced in the course.

Course Contents

  • Part 0: Recap of real analysis and linear algebra (Lectures 1-3)
    • Sets, functions, sequences, continuity, differentiability, gradients, Taylor series expansion.
    • Vectors, matrices, norms, symmetric matrices, eigenvalue decomposition, positive semidefinite and positive definite matrices.
  • Part I: Convex sets and functions (Lectures 4-11)
    • Convex sets, examples and properties.
    • Convex functions, strict and strong convexity, examples, and convexity-preserving operations.
    • Equivalent definitions of convexity under differentiability assumptions.
  • Part II: Unconstrained optimisation (Lectures 12-20)
    • Maxima, minima, stationary points, saddle points, local and global maxima/minima.
    • First-order and second-order conditions for optimality (restated in the refresher after this list).
    • Linear, quadratic and convex optimisation problems, examples.
    • Benefits of convexity.
  • Part III: Constrained optimisation (Lectures 21-29)
    • Constrained optimisation problem, feasible set.
    • Lagrangian and KKT conditions.
    • Linear and quadratic optimisation.
    • Duality for convex optimisation: theorem of alternatives, Farkas' lemma.
  • Part IV: Algorithms for optimisation (Lectures 30-40)
    • Gradient descent with fixed step size, line search, and the Armijo-Goldstein rule (a sketch follows this list).
    • Newton's method and its variations.
    • Conjugate gradient and quasi-Newton methods.
    • Algorithms for constrained optimisation: projected gradient descent, dual ascent, and the alternating direction method of multipliers (see the sketch after this list).
  • Part V: Applications (Lectures 41-43)
    • Applications in statistics, machine learning and computer science.
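
As a brief refresher for Part II, the first- and second-order optimality conditions can be stated compactly. The LaTeX snippet below assumes f is twice continuously differentiable; it summarises standard results and is not course-specific notation.

```latex
% Optimality conditions for unconstrained minimisation of f : R^n -> R.
% First-order necessary condition (stationarity):
\[ \nabla f(x^\star) = 0 \]
% Second-order necessary condition (positive semidefinite Hessian):
\[ \nabla^2 f(x^\star) \succeq 0 \]
% Second-order sufficient condition for a strict local minimum:
\[ \nabla f(x^\star) = 0, \qquad \nabla^2 f(x^\star) \succ 0 \]
% For convex f, stationarity alone certifies a global minimum.
```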
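
To give a flavour of the algorithms in Part IV, here is a minimal Python sketch of gradient descent with Armijo backtracking. The function and parameter names (gradient_descent_armijo, alpha0, beta, sigma) are illustrative choices, not an interface prescribed by the course.

```python
import numpy as np

def gradient_descent_armijo(f, grad, x0, alpha0=1.0, beta=0.5, sigma=1e-4,
                            tol=1e-6, max_iter=1000):
    """Gradient descent with Armijo backtracking line search (a sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stop at (approximate) stationarity
            break
        alpha = alpha0
        # Backtrack until the sufficient-decrease (Armijo) condition holds:
        #   f(x - alpha g) <= f(x) - sigma * alpha * ||g||^2
        while f(x - alpha * g) > f(x) - sigma * alpha * g.dot(g):
            alpha *= beta
        x = x - alpha * g
    return x

# Example: a convex quadratic f(x) = 0.5 x^T A x - b^T x, whose unique
# minimiser solves A x = b, so the answer can be checked directly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])           # positive definite
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(gradient_descent_armijo(f, grad, np.zeros(2)))  # close to [0.2, 0.4]
print(np.linalg.solve(A, b))                          # exact minimiser
```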
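
In the same spirit, a minimal sketch of projected gradient descent, assuming a feasible set with an easy Euclidean projection (here a box, handled by np.clip); the fixed step size is an illustrative choice.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.1, max_iter=500):
    """Projected gradient descent (a sketch): take a gradient step,
    then project back onto the feasible set."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x = project(x - step * grad(x))
    return x

# Example: minimise ||x - c||^2 over the box [0, 1]^2 with c outside it;
# the solution is the projection of c onto the box, i.e. [1, 0].
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)   # Euclidean projection onto box
print(projected_gradient_descent(grad, project, np.zeros(2)))
```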

Text Books

  • Boyd, Stephen, and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
  • Luenberger, David G., and Yinyu Ye. Linear and nonlinear programming. 4th edition. Springer, 2015.

References

  • Bertsekas, Dimitri P. Nonlinear programming. Athena Scientific, 1999.
  • Nocedal, Jorge, and Stephen Wright. Numerical optimization. Springer, 1999.

Pre-Requisites

    None

Parameters

Credits: 3-1-0-0-0-8-12
Type: Elective
Date of Introduction: Mar 2018
