Poster
Estimating Learnability in the Sublinear Data Regime
Weihao Kong · Gregory Valiant
We consider the problem of estimating how well a model class is capable of fitting a distribution of labeled data. We show that it is often possible to accurately estimate this ``learnability'' even when given an amount of data that is too small to reliably learn any accurate model. Our first result applies to the setting where the data is drawn from a $d$-dimensional distribution with isotropic covariance, and the label of each datapoint is an arbitrary noisy function of the datapoint. In this setting, we show that with $O(\sqrt{d})$ samples, one can accurately estimate the fraction of the variance of the label that can be explained via the best linear function of the data.
We extend these techniques to the binary classification setting, and show that the prediction error of the best linear classifier can be accurately estimated given $O(\sqrt{d})$ labeled samples. For comparison, in both the linear regression and binary classification settings, even if there is no noise in the labels, a sample size linear in the dimension, $d$, is required to \emph{learn} any function correlated with the underlying model. We further extend our estimation approach to the setting where the data distribution has an (unknown) arbitrary covariance matrix, allowing these techniques to be applied to settings where the model class consists of a linear function applied to a nonlinear embedding of the data. We demonstrate the practical viability of our approaches on synthetic and real data. This ability to estimate the explanatory value of a set of features (or dataset), even in the regime in which there is too little data to realize that explanatory value, may be relevant in scientific and industrial settings where data collection is expensive and there are many potentially relevant feature sets that could be collected.
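To make the flavor of the result concrete, here is a minimal sketch (in Python, with illustrative names; not the authors' released code) of one natural estimator behind the isotropic-covariance regression result: when the covariance is the identity, the variance of the label explained by the best linear fit equals $\|\mathbb{E}[xy]\|^2$, and averaging $y_i y_j \langle x_i, x_j \rangle$ over distinct pairs of samples gives an unbiased estimate of that quantity without ever learning the linear function itself.

import numpy as np


def estimate_explained_variance_fraction(X, y):
    """Estimate the fraction of Var(y) explained by the best linear fit.

    Assumes the rows of X are drawn from a distribution with
    (approximately) isotropic covariance.  With identity covariance the
    explained variance equals ||E[x y]||^2, which the pairwise U-statistic
    sum_{i != j} y_i y_j <x_i, x_j> / (n (n - 1)) estimates without bias.
    """
    n = X.shape[0]
    z = X * y[:, None]                 # z_i = y_i * x_i, so E[z_i] = E[x y]
    s = z.sum(axis=0)
    # Sum of <z_i, z_j> over ordered pairs i != j:
    # ||sum_i z_i||^2 minus the diagonal terms ||z_i||^2.
    pair_sum = s @ s - np.einsum("ij,ij->", z, z)
    beta_norm_sq = pair_sum / (n * (n - 1))
    var_y = y.var(ddof=1)
    # The raw U-statistic can fall outside the valid range by chance; clip.
    return float(np.clip(beta_norm_sq / var_y, 0.0, 1.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 10_000, 2_000               # n is well below the dimension d
    beta = rng.normal(size=d)
    beta *= np.sqrt(0.5) / np.linalg.norm(beta)   # signal variance 0.5
    X = rng.normal(size=(n, d))                   # isotropic covariance
    y = X @ beta + rng.normal(scale=np.sqrt(0.5), size=n)
    # True explained-variance fraction is 0.5; the estimate lands near it
    # even though n = 2000 samples are too few to learn beta in d = 10000.
    print(estimate_explained_variance_fraction(X, y))

The sublinear sample complexity is visible in this sketch: the variance of the pairwise statistic scales roughly like $d/n^2$, so $n = O(\sqrt{d})$ samples already suffice for constant additive error, whereas recovering $\beta$ itself would require $n = \Omega(d)$.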
Author Information
Weihao Kong (Stanford University)
Gregory Valiant (Stanford University)
More from the Same Authors
- 2022 Poster: Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model
  Pranjal Awasthi · Abhimanyu Das · Weihao Kong · Rajat Sen
- 2022 Poster: DP-PCA: Statistically Optimal and Differentially Private PCA
  Xiyang Liu · Weihao Kong · Prateek Jain · Sewoong Oh
- 2021 Poster: Robust and differentially private mean estimation
  Xiyang Liu · Weihao Kong · Sham Kakade · Sewoong Oh
- 2020 Poster: Robust Meta-learning for Mixed Linear Regression with Small Batches
  Weihao Kong · Raghav Somani · Sham Kakade · Sewoong Oh
- 2020 Poster: Worst-Case Analysis for Randomly Collected Data
  Justin Chen · Gregory Valiant · Paul Valiant
- 2020 Oral: Worst-Case Analysis for Randomly Collected Data
  Justin Chen · Gregory Valiant · Paul Valiant
- 2019 Poster: A Polynomial Time Algorithm for Log-Concave Maximum Likelihood via Locally Exponential Families
  Brian Axelrod · Ilias Diakonikolas · Alistair Stewart · Anastasios Sidiropoulos · Gregory Valiant
- 2017 Poster: Learning Populations of Parameters
  Kevin Tian · Weihao Kong · Gregory Valiant