Filtered Anderson acceleration for nonlinear PDEs
Anderson acceleration (AA) is a popular extrapolation technique for accelerating the convergence of fixed-point iterations. It requires the storage of a (usually) small number of solution and update vectors, together with the solution of an optimization problem that is generally posed as a least-squares problem and solved efficiently by a thin QR decomposition. First developed in 1965 in the context of integral equations, the method has recently grown in popularity as a Jacobian-free approach to computing solutions of discretized nonlinear PDEs, along with applications in optimization and machine learning. The convergence behavior of AA is still not fully understood, and its dependence on the selection of parameters, including the algorithmic depth, remains an active area of research. In this talk we will discuss understanding and improving the behavior of the algorithm using standard tools and techniques from numerical linear algebra. We will also numerically demonstrate how the filtering and dynamic depth selection procedures developed concurrently with recent theory can be used to control the condition number of the least-squares problem and to improve both the efficiency and robustness of accelerated solves.
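For orientation, a minimal sketch of the standard (unfiltered, fixed-depth) Anderson acceleration loop described above. This is a generic textbook formulation, not the filtered variant discussed in the talk; the least-squares step uses NumPy's `lstsq` as a stand-in for an efficiently updated thin QR factorization, and the function names and parameters are illustrative.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Anderson acceleration of depth m for the fixed-point iteration x = g(x).

    Stores a short history of residuals f_k = g(x_k) - x_k and evaluations
    g(x_k), and extrapolates using a small least-squares problem on the
    residual differences.
    """
    x = np.asarray(x0, dtype=float)
    F_hist = []  # residual history f_k = g(x_k) - x_k
    G_hist = []  # evaluation history g(x_k)
    for _ in range(maxit):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx
        F_hist.append(f)
        G_hist.append(gx)
        if len(F_hist) > m + 1:  # keep at most m+1 history entries
            F_hist.pop(0)
            G_hist.pop(0)
        if len(F_hist) == 1:
            x = gx  # first step: plain fixed-point (Picard) update
        else:
            # Columns are consecutive differences of residuals / evaluations.
            dF = np.diff(np.asarray(F_hist), axis=0).T
            dG = np.diff(np.asarray(G_hist), axis=0).T
            # Small least-squares solve; conditioning of dF is exactly what
            # the filtering procedures in the talk aim to control.
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
    return x
```

For a contractive linear map g(x) = Ax + b, the iteration converges to the fixed point (I - A)^{-1} b, typically in far fewer steps than the plain Picard iteration.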
Place: Math, 402