Poster
Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise
Shuyao Li · Sushrut Karmalkar · Ilias Diakonikolas · Jelena Diakonikolas
West Ballroom A-D #5605
Abstract:
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial distribution shifts, where the labels can be arbitrary, and the goal is to find a "best-fit" function. More precisely, given training samples from a reference distribution $p_0$, the goal is to approximate the vector $\mathbf{w}^*$ which minimizes the squared loss with respect to the worst-case distribution that is close in $\chi^2$-divergence to $p_0$. We design a computationally efficient algorithm that recovers a vector $\hat{\mathbf{w}}$ satisfying $\mathbb{E}_{p^*}[(\sigma(\hat{\mathbf{w}} \cdot \mathbf{x}) - y)^2] \leq C \, \mathbb{E}_{p^*}[(\sigma(\mathbf{w}^* \cdot \mathbf{x}) - y)^2] + \epsilon$, where $C$ is a dimension-independent constant and $(\mathbf{w}^*, p^*)$ is the witness attaining the min-max risk $\min_{\mathbf{w}} \max_{p} \mathbb{E}_{p}[(\sigma(\mathbf{w} \cdot \mathbf{x}) - y)^2] - \nu \chi^2(p, p_0)$. Our algorithm follows the primal-dual framework and is designed by directly bounding the risk with respect to the original, nonconvex $L_2^2$ loss. From an optimization standpoint, our work opens new avenues for the design of primal-dual algorithms under structured nonconvexity.
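To make the min-max risk concrete, the following is a minimal NumPy sketch (not the paper's algorithm) of the inner maximization over distributions $p$ penalized by $\chi^2(p, p_0)$, taking $\sigma$ to be the ReLU and $p_0$ the uniform empirical distribution over the sample. It uses the standard fact that, when the penalty weight $\nu$ is large enough for the optimal weights to stay nonnegative, the $\chi^2$-penalized worst-case loss has the closed form $\mathrm{mean}(\ell) + \mathrm{var}(\ell)/(4\nu)$. All function names here are illustrative.

```python
import numpy as np

def relu(z):
    # sigma(z) = max(z, 0), the activation assumed in this sketch
    return np.maximum(z, 0.0)

def chi2_robust_risk(w, X, y, nu):
    """Closed form of  max_p  E_p[(sigma(w.x) - y)^2] - nu * chi2(p, uniform):
    mean(loss) + var(loss) / (4 * nu), valid when nu keeps all weights p_i >= 0."""
    losses = (relu(X @ w) - y) ** 2
    return losses.mean() + losses.var() / (4.0 * nu)

def chi2_robust_risk_explicit(w, X, y, nu):
    """Same quantity via the explicit optimal reweighting
    p_i = 1/n + (loss_i - mean(loss)) / (2 * nu * n)."""
    losses = (relu(X @ w) - y) ** 2
    n = len(losses)
    p = 1.0 / n + (losses - losses.mean()) / (2.0 * nu * n)
    assert np.all(p >= 0), "nu too small: closed form requires p_i >= 0"
    # chi^2 divergence of p from the uniform distribution over n points
    chi2 = np.sum((n * p - 1.0) ** 2) / n
    return p @ losses - nu * chi2

# Example: both routes agree on random ReLU data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w = rng.normal(size=3)
y = relu(X @ w) + 0.1 * rng.normal(size=50)
nu = 100.0
print(chi2_robust_risk(w, X, y, nu), chi2_robust_risk_explicit(w, X, y, nu))
```

The outer minimization over $\mathbf{w}$ is the hard, nonconvex part that the paper's primal-dual algorithm addresses; this sketch only evaluates the adversarial reweighting for a fixed $\mathbf{w}$.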