

Poster

Fast Bidirectional Probability Estimation in Markov Models

Siddhartha Banerjee · Peter Lofgren

210 C #67

Abstract: We develop a new bidirectional algorithm for estimating Markov chain multi-step transition probabilities: given a Markov chain, we want to estimate the probability of hitting a given target state in a given number of steps after starting from a given source distribution. Given the target state t, we use a (reverse) local power iteration to construct an 'expanded target distribution', which has the same mean as the quantity we want to estimate, but a smaller variance -- this can then be sampled efficiently by a Monte Carlo algorithm. Our method extends to any Markov chain on a discrete (finite or countable) state-space, and can be extended to compute functions of multi-step transition probabilities such as PageRank, graph diffusions, hitting/return times, etc. Our main result is that in 'sparse' Markov Chains -- wherein the number of transitions between states is comparable to the number of states -- the running time of our algorithm for a uniform-random target node is order-wise smaller than Monte Carlo and power iteration based algorithms; in particular, our method can estimate a probability p using only O(1/√p) running time.
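The bidirectional scheme the abstract describes can be sketched in code: a reverse phase does a truncated local power iteration from the target t, splitting the answer into an exact part (estimates q) plus residuals r, and a forward phase samples Monte Carlo walks from the source and averages the residuals they encounter. This is a minimal illustrative sketch under our own assumptions, not the paper's implementation; the function names, the push threshold `eps`, and the small example chain are ours.

```python
import random
from collections import defaultdict

def reverse_push(P_in, t, ell, eps):
    """Truncated reverse local power iteration from target t.

    P_in[w] lists (u, P(u, w)) for every in-neighbor u of w.
    Returns per-step estimates q[i] and residuals r[i] satisfying,
    for every source s:
      P(X_ell = t | X_0 = s)
        = q[ell][s] + sum_{i, w} r[i][w] * P(X_{ell - i} = w | X_0 = s).
    """
    q = [defaultdict(float) for _ in range(ell + 1)]
    r = [defaultdict(float) for _ in range(ell + 1)]
    r[0][t] = 1.0  # "reach t in 0 remaining steps" starts at t itself
    for i in range(ell + 1):
        # Push every state whose residual at level i exceeds eps;
        # pushes at level i only ever feed residuals at level i + 1.
        for w in [w for w, mass in r[i].items() if mass > eps]:
            mass = r[i][w]
            r[i][w] = 0.0
            q[i][w] += mass
            if i < ell:
                for u, p_uw in P_in.get(w, []):
                    r[i + 1][u] += mass * p_uw
    return q, r

def estimate_probability(P_out, P_in, s, t, ell, eps=1e-4, n_walks=1000, seed=0):
    """Bidirectional estimate of P(X_ell = t | X_0 = s)."""
    q, r = reverse_push(P_in, t, ell, eps)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        v, contrib = s, 0.0
        for j in range(ell + 1):
            # A walk at state v after j steps picks up residual r[ell - j][v].
            contrib += r[ell - j].get(v, 0.0)
            if j < ell:
                states, probs = zip(*P_out[v])
                v = rng.choices(states, weights=probs)[0]
        total += contrib
    return q[ell].get(s, 0.0) + total / n_walks

# Demo on a hypothetical 4-state sparse chain.
P_out = {0: [(1, 0.5), (2, 0.5)], 1: [(2, 0.7), (3, 0.3)],
         2: [(0, 0.2), (3, 0.8)], 3: [(0, 1.0)]}
P_in = defaultdict(list)
for u, nbrs in P_out.items():
    for w, p in nbrs:
        P_in[w].append((u, p))

# Exact value of P(X_5 = 3 | X_0 = 0) by full forward power iteration.
dist = {0: 1.0}
for _ in range(5):
    nxt = defaultdict(float)
    for v, mass in dist.items():
        for w, p in P_out[v]:
            nxt[w] += mass * p
    dist = nxt

est = estimate_probability(P_out, P_in, s=0, t=3, ell=5, eps=1e-6)
print(est, dist[3])
```

With a very small `eps` the reverse phase does nearly all the work and the estimate matches the exact power-iteration value closely; with a larger `eps` the reverse phase stops early and the forward walks absorb the remaining residual mass, which is the variance-reduction trade-off the abstract points to.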
