

Contributed talks in Workshop: OPT 2023: Optimization for Machine Learning

Contributed Talks 3: *Dueling Optimization with a Monotone Adversary* and *High-Dimensional Prediction for Sequential Decision Making*

Naren Manoj · Georgy Noarov · Cristóbal Guzmán

Fri 15 Dec 12:30 p.m. PST — 1 p.m. PST

Abstract:

Two 15-minute talks:

  1. Dueling Optimization with a Monotone Adversary, Naren Sarayu Manoj
  • We introduce and study the problem of *dueling optimization with a monotone adversary*, a generalization of (noiseless) dueling convex optimization. The goal is to design an online algorithm that finds a minimizer $\mathbf{x}^{\star}$ of a function $f\colon \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^d$. In each round, the algorithm submits a pair of guesses $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$, and the adversary responds with *any* point in the space that is at least as good as both guesses. The cost of each query is the suboptimality of the worse of the two guesses, i.e., $\max\left( f(\mathbf{x}^{(1)}), f(\mathbf{x}^{(2)}) \right) - f(\mathbf{x}^{\star})$. The goal is to minimize the number of iterations required to find an $\varepsilon$-optimal point and to minimize the total cost (regret) of the guesses over many rounds. Our main result is an efficient randomized algorithm for several natural choices of the function $f$ and set $\mathcal{X}$ that incurs cost $O(d)$ and iteration complexity $O(d\log(1/\varepsilon)^2)$. Moreover, our dependence on $d$ is asymptotically optimal: we show examples in which any randomized algorithm for this problem must incur $\Omega(d)$ cost and iteration complexity. (A toy simulation of this query protocol appears after this list.)
  2. High-Dimensional Prediction for Sequential Decision Making, Georgy Noarov
  • We study the problem of making predictions of an adversarially chosen high-dimensional state that are *unbiased* subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers. We give efficient algorithms for solving this problem, as well as a number of applications that stem from choosing the set of conditioning events appropriately. For example, we can efficiently produce predictions targeted at any polynomial number of decision makers such that, if they best respond to our predictions, each of them has diminishing swap regret at the optimal rate. We then generalize this to online combinatorial optimization, where the decision makers have large action spaces corresponding to structured subsets of a set of base actions: we give the first algorithms that can guarantee, to any polynomial number of decision makers, regret to the best fixed action not just overall but on any polynomial number of subsequences that can depend on the actions chosen as well as on any external context. We show how playing an extensive-form game can be cast into this framework, and we use these results to give efficient algorithms for obtaining subsequence regret in extensive-form games. This yields a new family of efficiently obtainable regret guarantees that captures and generalizes previously studied notions, such as regret to informed causal deviations, and that is in general incomparable to other known families of efficiently obtainable guarantees. We then turn to uncertainty quantification in machine learning and consider the problem of producing *prediction sets* for multiclass and multilabel classification. We show how to produce class scores with *transparent coverage guarantees*: they can be used to produce prediction sets that cover the true labels at the rate they would have had the scores been true conditional probabilities. Moreover, we show how to do this so that the scores have improved Brier score (or cross-entropy loss) compared to any collection of benchmark models. Compared to conformal prediction techniques, this both gives increased flexibility and eliminates the need to choose a non-conformity score. (A toy illustration of transparent coverage appears in the second sketch below.)
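To make the first talk's query protocol concrete, here is a minimal simulation sketch. The quadratic objective, the coordinate-perturbation guessing rule, and the adversary that simply returns the better of the two guesses are all illustrative assumptions; the paper's actual algorithm and guarantees are not reproduced here.

```python
import numpy as np

# Toy version of the dueling-optimization protocol (illustrative only).
# f is a quadratic on R^d, and the adversary is the weakest monotone one:
# it responds with the better of the two guesses.

rng = np.random.default_rng(0)
d = 8
x_star = rng.standard_normal(d)              # unknown minimizer

def f(x):
    return float(np.sum((x - x_star) ** 2))  # f(x_star) = 0

def monotone_adversary(x1, x2):
    # Any response at least as good as both guesses is allowed; returning
    # the better guess is one valid (and minimally helpful) choice.
    return x1 if f(x1) <= f(x2) else x2

x = np.zeros(d)
total_cost = 0.0
rounds = 40 * d
for t in range(rounds):
    # Hypothetical guessing rule: perturb one coordinate in both directions,
    # halving the step after every full sweep through the coordinates.
    i = t % d
    step = 2.0 * 0.5 ** (t // d)
    e = np.zeros(d)
    e[i] = step
    x1, x2 = x + e, x - e
    total_cost += max(f(x1), f(x2)) - f(x_star)  # per-round cost from the abstract
    x = monotone_adversary(x1, x2)               # observe the adversary's response

print(f"suboptimality {f(x):.2e} after {rounds} rounds, total cost {total_cost:.1f}")
```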

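And a toy illustration of the transparent-coverage idea from the second talk: when class scores really are the conditional label probabilities, greedily including labels until the cumulative score mass reaches $1-\alpha$ yields prediction sets that cover the true label at rate at least $1-\alpha$. The synthetic score model below is an assumption for illustration; the talk's contribution is producing scores with this property for real models and data.

```python
import numpy as np

# Toy check of transparent coverage (illustrative only): labels are drawn
# from the same distribution as the scores, so prefix sets with mass
# >= 1 - alpha must cover at rate >= 1 - alpha.

rng = np.random.default_rng(0)
K, N, alpha = 10, 20000, 0.1
covered, set_sizes = 0, 0

for _ in range(N):
    logits = rng.normal(size=K)
    probs = np.exp(logits) / np.exp(logits).sum()  # "true" conditional probabilities
    y = rng.choice(K, p=probs)                     # label drawn from those probabilities
    order = np.argsort(-probs)                     # include labels by decreasing score
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, 1 - alpha) + 1   # smallest prefix with mass >= 1 - alpha
    pred_set = set(order[:cutoff])
    covered += y in pred_set
    set_sizes += len(pred_set)

print(f"empirical coverage {covered / N:.3f} (target {1 - alpha}), "
      f"avg set size {set_sizes / N:.2f}")
```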