Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates

About this Digital Document

The primary focus of this dissertation is the design, analysis, and implementation of numerical methods that enhance Sequential Quadratic Optimization (SQO) methods for solving nonlinear constrained optimization problems. These enhancements address issues that limit the practical performance of SQO methods.

The first part of this dissertation presents a penalty SQO algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, with the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first-phase subproblem determines the reduction that can be obtained in a local model of constraint violation. The second-phase subproblem seeks to minimize a local model of a penalty function. The solutions of the two subproblems are then combined to form the search direction in such a way that it yields a reduction in the local model of constraint violation proportional to the reduction attained in the first phase. The subproblem formulations and parameter updates ensure that, near an optimal solution, the algorithm reduces to a classical SQO method for constrained optimization, while near an infeasible stationary point it reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under reasonable assumptions, and numerical results are presented for a large set of test problems.

The second part of this dissertation presents two matrix-free methods for approximately solving large-scale exact penalty subproblems. The first is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while simultaneously updating a relaxation vector. The second recasts the subproblem as a linearly constrained nonsmooth optimization problem and applies alternating direction augmented Lagrangian (ADAL) technology to solve it. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. Both algorithms are proved to be globally convergent under loose assumptions, and each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments demonstrate the ability of both algorithms to efficiently find inexact solutions; moreover, in certain cases, IRWA is shown to be more reliable than ADAL.
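To give a rough sense of the re-weighting idea, the following Python sketch shows a generic IRLS-style loop for an $\ell_1$ exact penalty subproblem. It is only an illustration under assumed data (the inputs H, g, A, b, mu and the relaxation update are hypothetical), not the dissertation's IRWA algorithm: the actual method is matrix-free, handles more general penalty terms, and updates the relaxation vector under explicit conditions.

```python
import numpy as np

def irwa_sketch(H, g, A, b, mu, eps0=1.0, tol=1e-6, max_iter=100):
    """Illustrative iterative re-weighting loop (a sketch, not the thesis's exact IRWA).

    Approximately minimizes the exact penalty subproblem
        q(d) = g^T d + 0.5 d^T H d + mu * || A d + b ||_1
    by repeatedly minimizing a weighted quadratic model and shrinking a
    relaxation vector eps that smooths the absolute values.
    """
    n, m = g.size, b.size
    d = np.zeros(n)
    eps = eps0 * np.ones(m)
    for _ in range(max_iter):
        r = A @ d + b
        # Weights from the smoothed absolute value sqrt(r_i^2 + eps_i^2).
        w = mu / np.sqrt(r**2 + eps**2)
        # Weighted quadratic model; formed densely here for clarity.  In a
        # matrix-free setting this system would be solved iteratively (e.g.,
        # by CG) using only products with H and A.
        M = H + A.T @ (w[:, None] * A)
        rhs = -(g + A.T @ (w * b))
        d_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(d_new - d) <= tol * max(1.0, np.linalg.norm(d)):
            d = d_new
            break
        d = d_new
        eps *= 0.1  # drive the relaxation vector toward zero (simplified update)
    return d
```

The key point the sketch conveys is that each iteration only requires minimizing a convex quadratic, which is what makes a matrix-free implementation possible for large-scale subproblems.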
In the final part of this dissertation, we focus on the design of the penalty parameter updating strategy in penalty SQO methods for solving large-scale nonlinear optimization problems. Since the most computationally demanding aspect of such an approach is the computation of the search direction during each iteration, we consider matrix-free methods for solving the direction-finding subproblems within SQO methods. This allows the acceptance of inexact subproblem solutions, which can significantly reduce overall computational costs. However, such a method can be plagued by poor behavior of the global convergence mechanism, for which we consider the use of an exact penalty function. To confront this issue, we propose a dynamic penalty parameter updating strategy to be employed within the subproblem solver, in such a way that the resulting search direction predicts progress toward both feasibility and optimality. We present our penalty parameter updating strategy and prove that it does not decrease the penalty parameter unnecessarily in the neighborhood of points satisfying certain common assumptions. We also discuss two matrix-free subproblem solvers into which our updating strategy can be readily incorporated.
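As a rough indication of the kind of criterion such a strategy enforces, the Python sketch below shows a conventional "steering"-style outer loop: the penalty parameter is reduced until the trial step predicts a proportional amount of the attainable progress toward feasibility. The interface (solve_subproblem, the constants beta and decrease_factor) is hypothetical; the dissertation's contribution is to perform this update dynamically inside the matrix-free subproblem solver rather than by re-solving the subproblem for each trial parameter value, as this sketch does.

```python
def steer_penalty(solve_subproblem, feas_step_reduction,
                  rho0=1.0, beta=0.1, decrease_factor=0.1, rho_min=1e-8):
    """Illustrative outer loop for a penalty parameter rho (a sketch only).

    solve_subproblem(rho) -> (d, viol_red): a trial step for the penalty
        subproblem weighted by rho, and the reduction it achieves in the
        local model of constraint violation.
    feas_step_reduction: the reduction attainable by a pure feasibility step.
    """
    rho = rho0
    while rho > rho_min:
        d, viol_red = solve_subproblem(rho)
        # Accept rho once the step predicts at least a fraction beta of the
        # attainable progress toward feasibility.
        if viol_red >= beta * feas_step_reduction:
            return d, rho
        rho *= decrease_factor  # emphasize feasibility in the next subproblem
    d, _ = solve_subproblem(rho)
    return d, rho
```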

Full Title
Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates
Publisher
Lehigh University
Date Issued
2015-05
Language
English
Form
electronic documents
Department name
Industrial Engineering
Digital Format
electronic documents
Creator role
Graduate Student
Identifier
923791750
https://asa.lib.lehigh.edu/Record/10612975
Wang, H. (2015). Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates. https://preserve.lehigh.edu/lehigh-scholarship/graduate-publications-theses-dissertations/theses-dissertations/practical-1
Wang, Hao. 2015. “Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates”. https://preserve.lehigh.edu/lehigh-scholarship/graduate-publications-theses-dissertations/theses-dissertations/practical-1.
Wang, Hao. Practical Enhancements in Sequential Quadratic Optimization: Infeasibility Detection, Subproblem Solvers, and Penalty Parameter Updates. May 2015, https://preserve.lehigh.edu/lehigh-scholarship/graduate-publications-theses-dissertations/theses-dissertations/practical-1.