Approximating the Stationary Probability of a Single State in a Markov chain

Year
2013
Type(s)
Preprint
Author(s)
C.E. Lee, A. Ozdaglar, D. Shah
Source
arXiv preprint arXiv:1312.1986
Url
https://arxiv.org/pdf/1312.1986.pdf

In this paper, we present a novel iterative Monte Carlo method for approximating the stationary probability of a single state of a positive recurrent Markov chain. We utilize the characterization that the stationary probability of a state $i$ is inversely proportional to the expected return time of a random walk beginning at $i$. Our method obtains an $\epsilon$-multiplicative approximation with probability greater than $1 - \alpha$ using at most $\tilde{O}\!\left(t_{\mathrm{mix}} \ln(1/\alpha) / (\pi_i \epsilon^2)\right)$ simulated random walk steps on the Markov chain across all iterations, where $t_{\mathrm{mix}}$ is the standard mixing time and $\pi_i$ is the stationary probability of state $i$. In addition, the estimate at each iteration is guaranteed to be an upper bound with high probability and is decreasing in expectation with the iteration count, allowing us to monitor the progress of the algorithm and to design effective termination criteria. We propose a termination criterion which guarantees an $\epsilon(1 + 4\ln(2)\, t_{\mathrm{mix}})$ multiplicative error for states with stationary probability larger than $\Delta$, while providing an additive error for states with stationary probability less than $\Delta$, for any $\Delta \in (0,1)$. The algorithm, together with this termination criterion, uses at most $\tilde{O}\!\left(\frac{\ln(1/\alpha)}{\epsilon^2} \min\!\left(\frac{t_{\mathrm{mix}}}{\pi_i}, \frac{1}{\epsilon\Delta}\right)\right)$ simulated random walk steps, which is bounded by a constant with respect to the Markov chain. We provide a tight analysis of our algorithm based on a locally weighted variant of the mixing time. Our results naturally extend to countably infinite state space Markov chains via Lyapunov function analysis.
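
The key identity behind the method is $\pi_i = 1/\mathbb{E}_i[T_i]$, where $T_i$ is the time for a walk started at $i$ to return to $i$. The Python sketch below illustrates the truncated return-time sampling idea in miniature; the doubling truncation schedule, the sample counts, and the two-state toy chain are illustrative assumptions, not the paper's exact algorithm or parameter choices.

```python
import random

def estimate_pi_i(step, i, num_iters=10, samples_per_iter=1000):
    """Return successive estimates of pi_i from truncated return-time sampling.

    `step(state)` samples one transition of the Markov chain. At iteration k
    the return time is truncated at theta = 2**k, which biases the average
    return time downward, so 1/average is an upper bound on pi_i with high
    probability and decreases in expectation as k grows.
    """
    estimates = []
    for k in range(1, num_iters + 1):
        theta = 2 ** k          # truncation horizon (illustrative doubling schedule)
        total_time = 0
        for _ in range(samples_per_iter):
            state, t = i, 0
            while True:         # walk until return to i, or give up at theta steps
                state = step(state)
                t += 1
                if state == i or t >= theta:
                    break
            total_time += t
        # pi_i = 1 / E[return time to i]; invert the empirical mean of truncated times
        estimates.append(samples_per_iter / total_time)
    return estimates

# Toy example: two-state chain with P(0->1) = 0.3 and P(1->0) = 0.6,
# whose stationary probability of state 0 is pi_0 = 0.6 / 0.9 = 2/3.
def step(s):
    if s == 0:
        return 1 if random.random() < 0.3 else 0
    return 0 if random.random() < 0.6 else 1

print(estimate_pi_i(step, i=0))  # early iterations overestimate; later ones approach 2/3
```

Because truncation can only shorten the observed return times, each successive estimate overestimates $\pi_i$ with high probability, matching the upper-bound and monotone-in-expectation properties described in the abstract.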