This paper proposes a novel non-parametric multidimensional convex regression estimator which is designed to be robust to adversarial perturbations in the empirical measure. We minimize over convex functions the maximum (over Wasserstein perturbations of the empirical measure) of the absolute regression errors. The inner maximization is solved in closed form, resulting in a regularization penalty that involves the norm of the gradient. We show consistency of our estimator and a rate of convergence of order $ \widetilde{O}\left( n^{-1/d}\right) $, matching the bounds of alternative estimators based on square-loss minimization. In contrast to existing results, our convergence rates hold without imposing compactness on the underlying domain and with no a priori bounds on the underlying convex function or its gradient norm.
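To make the reduced problem concrete: in one dimension, minimizing an empirical absolute-error loss plus a gradient-norm penalty over convex functions can be written as a linear program in the fitted values and subgradients, with pairwise convexity constraints. The sketch below is illustrative only, not the paper's algorithm; the L1 form of the penalty and the weight `lam` are assumptions made for this toy example.

```python
import numpy as np
from scipy.optimize import linprog

def convex_regression_l1(x, y, lam=1e-3):
    """Fit a 1-D convex function to (x, y) by minimizing the mean absolute
    error plus lam times the mean subgradient magnitude (an L1 stand-in for
    the gradient-norm penalty; the paper's exact penalty may differ).
    Returns fitted values theta and subgradients g at the data points."""
    n = len(x)
    # Decision variables, stacked: [theta (n), g (n), t (n), s (n)],
    # where t_i >= |y_i - theta_i| and s_i >= |g_i|.
    c = np.concatenate([np.zeros(2 * n),
                        np.full(n, 1.0 / n),
                        np.full(n, lam / n)])
    rows, b = [], []

    def row(entries):
        r = np.zeros(4 * n)
        for idx, v in entries:
            r[idx] = v
        return r

    for i in range(n):
        # |y_i - theta_i| <= t_i  (two one-sided constraints)
        rows.append(row([(i, -1.0), (2 * n + i, -1.0)])); b.append(-y[i])
        rows.append(row([(i, 1.0), (2 * n + i, -1.0)])); b.append(y[i])
        # |g_i| <= s_i
        rows.append(row([(n + i, 1.0), (3 * n + i, -1.0)])); b.append(0.0)
        rows.append(row([(n + i, -1.0), (3 * n + i, -1.0)])); b.append(0.0)
    # Convexity: theta_j >= theta_i + g_i * (x_j - x_i) for all pairs i != j.
    for i in range(n):
        for j in range(n):
            if i != j:
                rows.append(row([(i, 1.0), (j, -1.0), (n + i, x[j] - x[i])]))
                b.append(0.0)

    bounds = [(None, None)] * (2 * n) + [(0, None)] * (2 * n)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b),
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[n:2 * n]

# Toy usage: fit noiseless samples of the convex function y = x^2.
x = np.linspace(-1.0, 1.0, 9)
y = x ** 2
theta, g = convex_regression_l1(x, y)
```

With a small penalty weight the fitted values track the convex data closely, and the returned subgradients are nondecreasing in `x`, as the pairwise constraints force in one dimension.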
Author Information
Jose Blanchet (Stanford University)
Peter W Glynn (Stanford University)
Peter W. Glynn is the Thomas Ford Professor in the Department of Management Science and Engineering (MS&E) at Stanford University, and also holds a courtesy appointment in the Department of Electrical Engineering. He received his Ph.D. in Operations Research from Stanford University in 1982. He then joined the faculty of the University of Wisconsin at Madison, where he held a joint appointment between the Industrial Engineering Department and Mathematics Research Center, and courtesy appointments in Computer Science and Mathematics. In 1987, he returned to Stanford, where he joined the Department of Operations Research. He was Director of Stanford's Institute for Computational and Mathematical Engineering from 2006 until 2010 and served as Chair of MS&E from 2011 through 2015. He is a Fellow of INFORMS and a Fellow of the Institute of Mathematical Statistics, and was an IMS Medallion Lecturer in 1995 and INFORMS Markov Lecturer in 2014. He was co-winner of the Outstanding Publication Awards from the INFORMS Simulation Society in 1993, 2008, and 2016, was a co-winner of the Best (Biannual) Publication Award from the INFORMS Applied Probability Society in 2009, and was the co-winner of the John von Neumann Theory Prize from INFORMS in 2010. In 2012, he was elected to the National Academy of Engineering. He was Founding Editor-in-Chief of Stochastic Systems and is currently Editor-in-Chief of Journal of Applied Probability and Advances in Applied Probability. His research interests lie in simulation, computational probability, queueing theory, statistical inference for stochastic processes, and stochastic modeling.
Jun Yan (Stanford)
Zhengqing Zhou (Stanford University)
More from the Same Authors
- 2019 Poster: Batched Multi-armed Bandits Problem »
  Zijun Gao · Yanjun Han · Zhimei Ren · Zhengqing Zhou
- 2019 Poster: Learning in Generalized Linear Contextual Bandits with Stochastic Delays »
  Zhengyuan Zhou · Renyuan Xu · Jose Blanchet
- 2019 Spotlight: Learning in Generalized Linear Contextual Bandits with Stochastic Delays »
  Zhengyuan Zhou · Renyuan Xu · Jose Blanchet
- 2019 Oral: Batched Multi-armed Bandits Problem »
  Zijun Gao · Yanjun Han · Zhimei Ren · Zhengqing Zhou
- 2019 Poster: Online EXP3 Learning in Adversarial Bandits with Delayed Feedback »
  Ilai Bistritz · Zhengyuan Zhou · Xi Chen · Nicholas Bambos · Jose Blanchet
- 2019 Poster: Semi-Parametric Dynamic Contextual Pricing »
  Virag Shah · Ramesh Johari · Jose Blanchet
- 2018 Poster: Learning in Games with Lossy Feedback »
  Zhengyuan Zhou · Panayotis Mertikopoulos · Susan Athey · Nicholas Bambos · Peter W Glynn · Yinyu Ye
- 2017 Poster: Countering Feedback Delays in Multi-Agent Learning »
  Zhengyuan Zhou · Panayotis Mertikopoulos · Nicholas Bambos · Peter W Glynn · Claire Tomlin
- 2017 Poster: Stochastic Mirror Descent in Variationally Coherent Optimization Problems »
  Zhengyuan Zhou · Panayotis Mertikopoulos · Nicholas Bambos · Stephen Boyd · Peter W Glynn