Poster
Diminishing Returns Shape Constraints for Interpretability and Regularization
Maya Gupta · Dara Bahri · Andrew Cotter · Kevin Canini

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #134

We investigate machine learning models that can provide diminishing-returns and accelerating-returns guarantees to capture prior knowledge or policies about how outputs should depend on inputs. We show that one can build flexible, nonlinear, multi-dimensional models using lattice functions with any combination of concavity/convexity and monotonicity constraints on any subsets of features, and compare them to new shape-constrained neural networks. We demonstrate on real-world examples that these shape-constrained models can provide tuning-free regularization and improve model understandability.
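The paper's own models are built on lattice functions; as a much simpler illustration of the core idea, the sketch below fits a one-dimensional piecewise-linear function that is guaranteed concave and nondecreasing (a diminishing-returns constraint) by construction. It relies on the fact that each basis function min(x, t) is concave and nondecreasing, so any nonnegative combination of them (plus a free intercept) is too, and a nonnegative least-squares solve enforces this. The function name, knot placement, and data are hypothetical, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls


def fit_concave_monotone(x, y, knots):
    """Fit a concave, nondecreasing piecewise-linear function to (x, y).

    Each column min(x, t_k) is concave and nondecreasing in x, so a
    nonnegative combination of them is as well; the intercept is made
    free-sign by including both +1 and -1 columns.
    """
    def design(xs):
        return np.column_stack(
            [np.minimum(xs[:, None], knots[None, :]),  # concave hinge basis
             np.ones_like(xs), -np.ones_like(xs)]      # free-sign intercept
        )

    coef, _ = nnls(design(x), y)  # nonnegative coefficients -> shape guarantee

    def predict(xq):
        return design(np.asarray(xq, dtype=float)) @ coef

    return predict


# Hypothetical usage: noisy samples from sqrt(x), itself concave increasing.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 4.0, 200))
y = np.sqrt(x) + 0.05 * rng.normal(size=x.size)

f = fit_concave_monotone(x, y, knots=np.linspace(0.2, 3.8, 10))

grid = np.linspace(0.0, 4.0, 50)
slopes = np.diff(f(grid)) / np.diff(grid)
# slopes are nonnegative (monotone) and nonincreasing (diminishing returns)
```

Because the constraint is enforced by the parameterization rather than by a penalty, no regularization strength needs to be tuned, which mirrors the "tuning-free regularization" benefit the abstract claims for shape-constrained models.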

Author Information

Maya Gupta (Google)
Dara Bahri (Google AI)
Andy Cotter (Google)
Kevin Canini (Google)
