Accelerated Primal-Dual Methods for Saddle Point Problems
In this work, we consider a convex-concave saddle point problem that subsumes convex constrained optimization as a special case. I will present an accelerated primal-dual (APD) algorithm with a novel momentum term that achieves the optimal convergence rate for this class of problems. In first-order methods, the step sizes typically depend on quantities that are unknown in practice or hard to estimate; to facilitate the practical implementation of the algorithm, I will discuss a new backtracking technique with theoretical guarantees. The significance of this work lies mainly in 1) the simplicity of the proposed method, and 2) its ability to use larger step sizes than related methods.

In the second part of the talk, I will present a distributed variant of the APD algorithm for solving distributed convex resource allocation problems in a decentralized manner over a communication network of collaborative agents. In this setup, we assume that each agent can communicate only with its neighbors over a time-varying directed network. The proposed algorithm, which involves multiple rounds of communication at each iteration, enables the agents to find the optimal solution at an optimal rate of convergence. Finally, I will conclude with some future directions on handling nonconvexity and relaxing the need for synchronous updates of the iterates in a distributed setting.
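To fix ideas, the primal-dual-with-momentum template the abstract refers to can be illustrated on a toy bilinear saddle point problem. The sketch below is a generic Chambolle-Pock-style iteration with an extrapolation (momentum) step, not the specific APD method of the talk; the problem instance, step-size choices, and variable names are all illustrative assumptions.

```python
# A minimal sketch of a primal-dual iteration with an extrapolation
# (momentum) term, illustrated on the bilinear saddle point problem
#   min_x max_y  f(x) + <Kx, y> - h(y),
# with f(x) = 0.5*||x - a||^2 and h(y) = 0.5*||y||^2, so that both
# proximal maps have closed forms.  This is a generic sketch, not the
# APD algorithm from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4
K = rng.standard_normal((m, n))
a = rng.standard_normal(n)

L = np.linalg.norm(K, 2)      # spectral norm of the coupling operator K
tau = sigma = 0.9 / L         # primal/dual step sizes with tau*sigma*L**2 < 1
theta = 1.0                   # extrapolation (momentum-like) parameter

x = np.zeros(n)
y = np.zeros(m)
x_bar = x.copy()
for _ in range(2000):
    # dual ascent step: prox of sigma*h with h(y) = 0.5*||y||^2
    y = (y + sigma * K @ x_bar) / (1.0 + sigma)
    # primal descent step: prox of tau*f with f(x) = 0.5*||x - a||^2
    x_new = (x - tau * K.T @ y + tau * a) / (1.0 + tau)
    # extrapolation: the momentum-type step that enables acceleration
    x_bar = x_new + theta * (x_new - x)
    x = x_new

# Optimality conditions give y* = K x* and (I + K^T K) x* = a,
# so the iterates can be checked against the closed-form solution.
x_star = np.linalg.solve(np.eye(n) + K.T @ K, a)
```

In a backtracking variant such as the one mentioned in the abstract, `tau` and `sigma` would be adapted on the fly instead of being set from the (often unknown) norm `L`.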