# Computing the Stationary Distribution Locally

Year: 2013
Author(s): C.E. Lee, A. Ozdaglar, D. Shah
Source: Advances in Neural Information Processing Systems, pp. 1376–1384, 2013
URL: http://papers.nips.cc/paper/5009-computing-the-stationary-distribution-locally.pdf

Computing the stationary distribution of a large finite or countably infinite state space Markov Chain (MC) has become central to many problems such as statistical inference and network analysis. Standard methods involve large matrix multiplications, as in power iteration, or simulations of long random walks to sample states from the stationary distribution, as in Markov Chain Monte Carlo (MCMC). However, these methods are computationally costly: either they involve operations at every state, or their computation time scales at least linearly in the size of the state space. In this paper, we provide a novel algorithm that answers whether a chosen state in a MC has stationary probability larger than some Δ ∈ (0, 1). If so, it estimates the stationary probability. Our algorithm uses information from a local neighborhood of the state on the graph induced by the MC, which has constant size relative to the state space. We provide correctness and convergence guarantees that depend on the algorithm parameters and mixing properties of the MC. Simulation results show MCs for which this method gives tight estimates.
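The flavor of such a local algorithm can be illustrated with a simple Monte Carlo sketch (this is an illustration, not the paper's actual procedure or its guarantees). It rests on the standard identity π(i) = 1 / E[T_i] for a positive recurrent chain, where T_i is the return time to state i, and it keeps the computation local by truncating each sampled walk at a maximum length. The function name, parameters, and the example chain below are all hypothetical choices for the sketch:

```python
import random


def estimate_stationary_prob(transition, state, num_samples=2000,
                             max_len=10_000, seed=0):
    """Estimate pi(state) via truncated return-time sampling.

    Uses pi(i) = 1 / E[T_i], where T_i is the first return time to i.
    `transition(s, rng)` samples one step of the chain from state s.
    Walks longer than `max_len` are truncated, which shortens the
    recorded return times and therefore biases the estimate upward;
    tight control of this truncation bias is exactly what a rigorous
    local algorithm must provide.
    """
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(num_samples):
        cur = transition(state, rng)  # take the first step away from `state`
        steps = 1
        # Walk until we return to `state` or hit the truncation limit.
        while cur != state and steps < max_len:
            cur = transition(cur, rng)
            steps += 1
        total_steps += steps
    # Sample-average return time ~ E[T_i]; invert to estimate pi(i).
    return num_samples / total_steps


# Example: symmetric random walk on a cycle of 4 states,
# whose stationary distribution is uniform (pi = 1/4 everywhere).
def cycle_step(s, rng):
    return (s + rng.choice([-1, 1])) % 4
```

Calling `estimate_stationary_prob(cycle_step, 0)` returns a value close to the true stationary probability 0.25. Note that each sample touches only states reachable within `max_len` steps of the query state, so the work is local in the sense described above, independent of how large the full state space is.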