Poster

Diminishing Returns Shape Constraints for Interpretability and Regularization

Maya Gupta · Dara Bahri · Andrew Cotter · Kevin Canini

Room 517 AB #134

Keywords: [ Denoising ] [ Regularization ] [ Fairness, Accountability, and Transparency ] [ Regression ] [ Optimization ]


Abstract:

We investigate machine learning models that can provide diminishing-returns and accelerating-returns guarantees to capture prior knowledge or policies about how outputs should depend on inputs. We show that one can build flexible, nonlinear, multi-dimensional models using lattice functions with any combination of concavity/convexity and monotonicity constraints on any subset of features, and we compare these to new shape-constrained neural networks. We demonstrate on real-world examples that these shape-constrained models can provide tuning-free regularization and improve model understandability.
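To make the "diminishing returns" shape constraint concrete, here is a minimal one-dimensional sketch, not the authors' multi-dimensional lattice models: any nondecreasing concave piecewise-linear function can be written as an intercept plus a nonnegative combination of the basis functions min(x, t_j), so enforcing the constraint exactly reduces to a bounded least-squares fit. The knot placement and synthetic data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=200)
# Noisy target with diminishing returns (illustrative, not from the paper).
y = np.log1p(x) + rng.normal(scale=0.1, size=x.shape)

# Each basis function min(x, t_j) is nondecreasing and concave in x, so any
# nonnegative combination of them (plus a free intercept) is too.
knots = np.linspace(0.0, 10.0, 11)[1:]          # knots t_1..t_K (assumed grid)
A = np.minimum(x[:, None], knots[None, :])      # basis columns: min(x, t_j)
A = np.column_stack([np.ones_like(x), A])       # unconstrained intercept column

lb = np.r_[-np.inf, np.zeros(len(knots))]       # intercept free, weights >= 0
res = lsq_linear(A, y, bounds=(lb, np.inf))     # bounded least squares

def f(x_new):
    """Fitted model: guaranteed nondecreasing and concave in x."""
    B = np.column_stack([np.ones_like(x_new),
                         np.minimum(x_new[:, None], knots[None, :])])
    return B @ res.x

print(f(np.array([1.0, 5.0, 9.0])))             # increasing, flattening outputs
```

Flipping the construction (e.g., using max(x, t_j) style convex bases, or negating) yields the accelerating-returns and decreasing variants; the paper's lattice models extend this idea to constraints on arbitrary subsets of features in multiple dimensions.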
