
Towards Safe Policy Improvement for Non-Stationary MDPs
Yash Chandak · Scott Jordan · Georgios Theocharous · Martha White · Philip Thomas

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1073

Many real-world sequential decision-making problems involve critical systems with financial and human-life risks. While several works in the past have proposed methods that are safe for deployment, they assume that the underlying problem is stationary. However, many real-world problems of interest exhibit non-stationarity, and when stakes are high, the cost associated with a false stationarity assumption may be unacceptable. We take the first steps towards ensuring safety, with high confidence, for smoothly-varying non-stationary decision problems. Our proposed method extends a type of safe algorithm, called a Seldonian algorithm, through a synthesis of model-free reinforcement learning with time-series analysis. Safety is ensured using sequential hypothesis testing of a policy's forecasted performance, and confidence intervals are obtained using the wild bootstrap.
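To illustrate the core idea of the safety test described above, the following is a minimal sketch (not the authors' implementation): past per-deployment returns are regressed on time, the next deployment's performance is forecast by extrapolating the trend, and a lower confidence bound on that forecast is obtained with the wild bootstrap (Rademacher-weighted residual resampling). The function name `safe_to_deploy`, the linear trend model, and all parameters are illustrative assumptions.

```python
import numpy as np

def safe_to_deploy(past_returns, baseline, delta=0.05, n_boot=2000, seed=0):
    """Sketch of a wild-bootstrap safety test (illustrative, not the paper's code).

    Fits a linear trend to past performance, forecasts the next time step,
    and returns (deploy?, lower confidence bound) where deploy? is True only
    if the one-sided (1 - delta) lower bound exceeds the baseline.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(past_returns, dtype=float)
    k = len(y)
    t = np.arange(1, k + 1)
    X = np.column_stack([np.ones(k), t])           # design: intercept + time
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares trend fit
    residuals = y - X @ beta
    x_next = np.array([1.0, k + 1])                # covariates for the next step

    # Wild bootstrap: flip residual signs with Rademacher noise, refit the
    # trend on each pseudo-sample, and collect the resulting forecasts.
    forecasts = np.empty(n_boot)
    for b in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=k)
        y_star = X @ beta + residuals * signs
        beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        forecasts[b] = x_next @ beta_star

    lower_bound = np.quantile(forecasts, delta)    # empirical one-sided bound
    return lower_bound >= baseline, lower_bound
```

The paper's actual procedure is more involved (e.g., it uses importance-sampled performance estimates and sequential hypothesis testing rather than a single threshold check), but this captures the forecast-then-bound structure.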

Author Information

Yash Chandak (University of Massachusetts Amherst)
Scott Jordan (University of Massachusetts Amherst)
Georgios Theocharous (Adobe Research)
Martha White (University of Alberta)
Philip Thomas (University of Massachusetts Amherst)
