Solving Systems of Linear Equations: Locally and Asynchronously

Year
2014
Author(s)
C.E. Lee, A. Ozdaglar, D. Shah
Source
Computing Research Repository, 2014
Url
https://pdfs.semanticscholar.org/b4c6/ac60094d3987ef26080533eea48acb821bf4.pdf

We consider approximating a single component of the solution to a system of linear equations Ax = b, where A is an invertible real matrix and b ∈ ℝⁿ. If A is either diagonally dominant or positive definite, we can equivalently solve for x_i in x = Gx + z for some G and z such that the spectral radius ρ(G) < 1. For example, if A is symmetric positive definite, there is a transformation such that ρ(G) = (κ(A) − 1)/(κ(A) + 1), where κ(A) is the condition number of A. Existing algorithms either focus on computing the full vector x or use Monte Carlo methods to estimate a component x_i under the condition ‖G‖_∞ < 1. We consider the setting where n is large, yet G is sparse, i.e., each row has at most d nonzero entries. We present synchronous and asynchronous randomized variants of a local algorithm which relies on the Neumann series characterization of the component x_i = e_i^T Σ_{k=0}^∞ G^k z, and allows us to limit the sparsity of the vectors involved in the computation, leading to improved convergence rates. Both variants of our algorithm produce an estimate x̂_i such that |x̂_i − x_i| ≤ ε‖x‖_2, and we provide convergence guarantees when ‖G‖_2 < 1, thus encompassing a larger class of systems. We prove that the synchronous local algorithm uses at most O(min(d·ε^(ln(d)/ln(‖G‖_2)), dn·ln(ε)/ln(‖G‖_2))) multiplications. The asynchronous local algorithm adaptively samples one coordinate to update among the nonzero coordinates of the current iterate in each time step. We prove that, with high probability, the error contracts by a time-varying factor in each step, guaranteeing that the algorithm converges to the correct solution. With probability at least 1 − δ, the asynchronous randomized algorithm uses at most O(min(d·(ε√(δ/5))^(−d/(1−‖G‖_2)), −dn·ln(ε√δ)/(1 − ‖G‖_2))) multiplications. Thus our algorithms obtain an approximation for x_i in constant time with respect to the size of the matrix when d = O(1) and 1/(1 − ‖G‖_2) = O(1) as a function of n, which holds for sparse expanders.
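
To make the Neumann-series idea concrete, here is a minimal Python sketch of the synchronous local estimator described in the abstract; it is an illustration of the technique, not the authors' exact procedure. It maintains the (ideally sparse) row vector p_k^T = e_i^T G^k, accumulates its inner product with z, and truncates once the residual weight is small. The function name local_estimate_xi, the tolerance eps, the iteration cap max_iter, and the toy example at the bottom are illustrative assumptions.

import numpy as np
from scipy.sparse import csr_matrix


def local_estimate_xi(G, z, i, eps=1e-6, max_iter=1000):
    """Sketch of the local Neumann-series idea: estimate x_i for x = Gx + z
    by accumulating e_i^T G^k z for k = 0, 1, 2, ..., touching only the
    coordinates reachable from i, and truncating once the row vector
    p_k^T = e_i^T G^k has small norm. Assumes ||G||_2 < 1."""
    n = G.shape[0]
    p = np.zeros(n)
    p[i] = 1.0                       # p_0^T = e_i^T
    estimate = 0.0
    for _ in range(max_iter):
        estimate += p @ z            # add the k-th Neumann term e_i^T G^k z
        p = G.T @ p                  # advance: p_{k+1}^T = p_k^T G
        if np.linalg.norm(p) < eps:  # remaining tail of the series is small
            break
    return estimate


if __name__ == "__main__":
    # Toy check on a small system x = Gx + z with ||G||_2 < 1 (illustrative only).
    rng = np.random.default_rng(0)
    n = 50
    G = csr_matrix(0.3 * rng.random((n, n)) / n)   # ||G||_F <= 0.3 < 1
    z = rng.random(n)
    x_true = np.linalg.solve(np.eye(n) - G.toarray(), z)
    print(local_estimate_xi(G, z, 0), x_true[0])

The asynchronous variant described in the abstract differs from this sketch in that it does not apply a full sparse multiplication per step; instead it adaptively samples a single nonzero coordinate of the current iterate to update at each time step, which is what yields the probabilistic contraction guarantee stated above.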