Deep Learning for Solving Linear Integral Equations Associated with Markov Chains
Abstract
Linear integral equations are central to the analysis of general state-space Markov chains; solving them yields Lyapunov functions (the drift equation), central limit theorems (Poisson's equation), and stationary distributions (the global balance equation). This paper develops a simple, simulator-based procedure that solves such equations by training a neural network to minimize a squared residual estimated via an unbiased "double-sample" loss built from pairs of independent one-step transitions. The method does not require access to stationary distributions or long trajectories, and it extends to non-compact state spaces through a first-return decomposition that localizes training. A queueing case study demonstrates accuracy and robustness relative to Monte Carlo baselines.
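To make the loss concrete, the following is a minimal sketch of the double-sample residual loss described above, taking Poisson's equation h = Ph + g as the target equation and a toy AR(1) chain as the simulator. The network architecture, kernel, and forcing function are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def double_sample_loss(h, x, sample_next, g):
    """Unbiased estimate of E[(g(X) + (P h)(X) - h(X))^2].

    Two conditionally independent next states y1, y2 ~ P(x, .) give two
    independent estimates of the residual at x; their product is an
    unbiased estimator of the squared residual (the "double-sample" trick).
    """
    y1 = sample_next(x)        # first one-step transition from x
    y2 = sample_next(x)        # second, independent of the first given x
    r1 = g(x) + h(y1) - h(x)   # residual estimate from sample 1
    r2 = g(x) + h(y2) - h(x)   # residual estimate from sample 2
    return (r1 * r2).mean()

# Small MLP for the unknown solution h_theta (architecture is illustrative).
h = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

# Toy AR(1) kernel X' = a X + noise; with g(x) = x the Poisson solution
# is h(x) = x / (1 - a), so the learned network can be checked against it.
a = 0.5
sample_next = lambda x: a * x + 0.1 * torch.randn_like(x)
g = lambda x: x

for _ in range(200):
    x = torch.randn(256, 1)    # training states drawn from a sampling design
    loss = double_sample_loss(h, x, sample_next, g)
    opt.zero_grad()
    loss.backward()
    opt.step()
```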