Date

9-1-2018

Document Type

Dissertation

Degree

Doctor of Philosophy

Department

Information and Systems Engineering

First Adviser

Takac, Martin

Abstract

The scale of modern datasets necessitates the development of efficient distributed optimization methods for composite problems, which have numerous applications in the field of machine learning. A critical challenge in realizing this promise of scalability is to develop efficient methods for communicating and coordinating information between distributed machines, taking into account the specific needs of machine learning algorithms. Recent work in this area has been limited by focusing heavily on developing highly specific methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single machine counterparts. Further, they are unable to easily integrate improvements that continue to be made to single machine methods. To this end, we present a framework for distributed optimization in Chapter 2, and its accelerated version in Chapter 3, that allows the flexibility of arbitrary solvers to be used on each machine locally, yet maintains competitive performance against other distributed methods. We give strong primal-dual convergence rate guarantees for our framework that hold for arbitrary local solvers. We demonstrate the impact of local solver selection both theoretically and in an extensive experimental comparison. Further, in Chapter 4 we propose algorithmic modifications to an existing distributed inexact damped Newton method, which lead to fewer rounds of communication and better load balancing. In Chapter 5, we introduce the concept of an Underestimate Sequence (UES), which is a natural extension of Nesterov’s estimate sequence. Our definition of a UES utilizes three sequences, one of which is a lower bound of the objective function.
The question of how to construct an appropriate sequence of lower bounds is also addressed, and we present lower bounds for strongly convex smooth functions and for strongly convex composite functions, which adhere to the UES framework. Further, we propose several first order methods for minimizing strongly convex functions in both the smooth and composite cases. The algorithms, based on efficiently updating lower bounds on the objective functions, have natural stopping conditions, which provide the user with a certificate of optimality. Convergence of all algorithms is guaranteed through the UES framework, and we show that all presented algorithms converge linearly, with the accelerated variants enjoying the optimal linear rate of convergence.
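To illustrate the certificate-of-optimality idea in the spirit of the abstract, the sketch below runs plain gradient descent on a strongly convex quadratic while maintaining a running lower bound on the optimal value, and stops once the certified gap falls below a tolerance. This is a minimal illustration only, not the dissertation's UES algorithms; the function name and the specific lower bound used (the standard strong-convexity bound f* ≥ f(x) − ‖∇f(x)‖²/(2μ)) are assumptions chosen for simplicity.

```python
import numpy as np

def solve_with_certificate(A, b, mu, L, tol=1e-10, max_iter=10_000):
    """Minimize f(x) = 0.5 x'Ax - b'x, where mu*I <= A <= L*I.

    Hypothetical sketch: gradient descent with a stopping rule based on
    a running lower bound on f* (the flavor of a UES-style certificate,
    not the dissertation's exact method).
    """
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b

    x = np.zeros_like(b)
    lower = -np.inf  # best lower bound on f* seen so far
    for k in range(max_iter):
        g = grad(x)
        fx = f(x)
        # Strong convexity implies f* >= f(x) - ||g||^2 / (2*mu).
        lower = max(lower, fx - g @ g / (2.0 * mu))
        if fx - lower <= tol:  # certified optimality gap
            return x, fx, lower, k
        x = x - g / L  # standard gradient step with step size 1/L
    return x, f(x), lower, max_iter
```

Because the lower bound sandwiches f* between `lower` and `fx`, the gap `fx - lower` is a genuine optimality certificate the user can check, unlike a heuristic gradient-norm threshold.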
