Poster
Computing Approximate $\ell_p$ Sensitivities
Swati Padmanabhan · David Woodruff · Richard Zhang
Great Hall & Hall B1+B2 (level 1) #1224
Abstract:
Recent works in dimensionality reduction for regression tasks have introduced the notion of sensitivity, an estimate of the importance of a specific datapoint in a dataset, offering provable guarantees on the quality of the approximation after removing low-sensitivity datapoints via subsampling. However, fast algorithms for approximating $\ell_p$ sensitivities, which we show is equivalent to approximate $\ell_p$ regression, are known only for the $\ell_2$ setting, in which they are popularly termed leverage scores. In this work, we provide the first efficient algorithms for approximating $\ell_p$ sensitivities and other summary statistics of a given matrix. In particular, for a given $n \times d$ matrix, we compute an $\alpha$-approximation to its $\ell_1$ sensitivities at the cost of $O(n/\alpha)$ sensitivity computations. For estimating the total $\ell_p$ sensitivity (i.e., the sum of $\ell_p$ sensitivities), we provide an algorithm based on importance sampling of $\ell_p$ Lewis weights, which computes a constant-factor approximation at the cost of roughly $O(\sqrt{d})$ sensitivity computations, with no polynomial dependence on $n$. Furthermore, we estimate the maximum $\ell_1$ sensitivity up to a $\sqrt{d}$ factor in $O(\sqrt{d})$ sensitivity computations. We also generalize these results to $\ell_p$ norms. Lastly, we experimentally show that for a wide class of structured matrices in real-world datasets, the total sensitivity can be quickly approximated and is significantly smaller than the theoretical prediction, demonstrating that real-world datasets have, on average, low intrinsic effective dimensionality.
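As a concrete illustration of the quantities discussed above (not the paper's fast algorithms), the sketch below computes exact $\ell_2$ sensitivities, i.e. leverage scores, via a thin SVD, and computes a single $\ell_1$ sensitivity by solving the equivalent constrained $\ell_1$-regression problem as a linear program. The NumPy/SciPy usage, matrix sizes, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the definitions, assuming a dense full-column-rank matrix A.
# For p = 2 the sensitivities are leverage scores; each l1 sensitivity reduces to
# one constrained l1 regression (an LP), mirroring the "sensitivity computation is
# equivalent to approximate regression" statement in the abstract.
import numpy as np
from scipy.optimize import linprog

def l2_sensitivities(A):
    """Leverage scores: squared row norms of U from a thin SVD of A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return (U ** 2).sum(axis=1)

def l1_sensitivity(A, i):
    """l1 sensitivity of row i: max_x |a_i . x| / ||Ax||_1.
    After rescaling so that a_i . x = 1, this equals 1 / min_{a_i . x = 1} ||Ax||_1,
    solved here as an LP with auxiliary variables t >= |Ax| entrywise."""
    n, d = A.shape
    a_i = A[i]
    if not np.any(a_i):
        return 0.0
    # Variables z = [x (length d), t (length n)]; minimize sum(t)
    # subject to -t <= Ax <= t and a_i . x = 1.
    c = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.concatenate([a_i, np.zeros(n)])[None, :]
    b_eq = np.array([1.0])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return 1.0 / res.fun  # min ||Ax||_1 >= 1 here, so this is well defined

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 5))
    lev = l2_sensitivities(A)
    s1 = np.array([l1_sensitivity(A, i) for i in range(A.shape[0])])
    print("total l2 sensitivity (equals rank d):", lev.sum())
    print("total l1 sensitivity:", s1.sum())
```

Note that this brute-force route costs one regression solve per row; the point of the paper is to approximate these quantities (and their sum and maximum) with far fewer such sensitivity computations.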