Optimization is a useful paradigm for modeling data science problems and solving them with advanced algorithms. Conversely, data science has brought new paradigms to various areas of computational science, such as linear algebra and partial differential equations. This course will review recent results on exploiting learning techniques as tools for solving difficult optimization problems. The first part of the course will discuss how machine learning can be used in the context of combinatorial optimization. We will then shift our focus to continuous relaxations of combinatorial problems and explain how learning tools can be integrated with solvers in that setting. Finally, we will present regression techniques and bandit approaches used in derivative-free optimization.
References:

  • Y. Bengio, A. Lodi, A. Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon, European Journal of Operational Research (2021)
  • S. Jegelka. Theory of Graph Neural Networks: Representation and Learning (2022)
  • J. Larson, M. Menickelly, S. M. Wild. Derivative-free optimization methods, Acta Numerica (2019)
  • P.-W. Wang, P. Donti, B. Wilder, J. Z. Kolter. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver, ICML (2019)