Workshop
Relations between machine learning problems - an approach to unify the field
Robert Williamson · John Langford · Ulrike von Luxburg · Mark Reid · Jennifer Wortman Vaughan
Melia Sierra Nevada: Dilar
Thu 15 Dec, 10:30 p.m. PST
What:
The workshop proposes to focus on relations between machine learning problems. We use “relation” quite generally to include (but not limit ourselves to) notions such as: one type of problem being viewed as a special case of another (e.g., classification as thresholded probability estimation); reductions between learning problems (e.g., transforming ranking problems into classification problems); and the use of surrogate losses (e.g., replacing misclassification loss with some other, convex loss). We also include relations between sets of learning problems, such as those studied in the (old) theory of “comparison of experiments”, as well as recent connections between machine learning problems and what could be construed as “economic learning problems”, such as prediction markets and forecast elicitation.
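To make the first of these notions concrete, here is a minimal sketch of classification as thresholded probability estimation: any class probability estimator yields a binary classifier by thresholding. The estimator interface (a callable returning an estimate of P(Y=1|x)) and the function names are illustrative assumptions, not notation from the workshop or the cited papers.

    import math

    # Sketch: a binary classifier obtained from a class probability
    # estimator by thresholding -- the "special case" relation above.
    # For cost-sensitive losses the threshold moves from 1/2 to the
    # false-positive cost c.
    def classifier_from_estimator(eta_hat, threshold=0.5):
        def classify(x):
            return 1 if eta_hat(x) > threshold else 0
        return classify

    # Usage: a toy logistic probability estimate yields a classifier.
    eta_hat = lambda x: 1.0 / (1.0 + math.exp(-x))
    classify = classifier_from_estimator(eta_hat)
    print(classify(0.3))  # eta_hat(0.3) ~ 0.57 > 0.5, so predicts 1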
Why: The point of studying relations between machine learning problems is that it offers a realistic route to understanding the field of machine learning as a whole. It could serve to prevent re-invention and facilitate the rapid growth of new methods. The motivation is not dissimilar to Hal Varian’s notion of combinatorial innovation. Another analogy is the development of function theory in the 19th century: rapid advances were made possible by functional analysis, which, rather than studying individual functions, studied operators that transform one function into another.
Much recent work in machine learning can be interpreted as establishing relations between problems. For example:
• Surrogate regret bounds, which bound the performance attained on one learning problem in terms of that obtained on another [Bartlett et al., 2007] (a worked form of such a bound appears after this list)
• Relationships between binary classification problems and distances between probability distributions [Reid and Williamson 2011]
• Reductions from class probability estimation to classification, and from reinforcement learning to classification [Langford et al., 2005–]
More recently there have been connections to problems that do not even seem to be about machine learning, such as the equivalence between:
• Cost-function-based prediction markets and no-regret learning [Chen and Wortman-Vaughan, 2010] (a short numerical sketch appears after this list)
• Elicitability of properties of distributions and proper losses [Lambert 2011]
In fact, some older work in machine learning can be viewed as relations between problems:
• Learning with real-valued functions in the presence of noise can be reduced to multiclass classification [Bartlett, Long & Williamson 1996]
• Comparison of Experiments [Blackwell 1955]
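A worked form of the first item above, stated as a hedged illustration of the general shape of surrogate regret bounds rather than as the cited paper’s exact theorem: for a classification-calibrated convex surrogate loss $\phi$ there is a non-decreasing function $\psi$ with $\psi(0)=0$ such that

    \psi\bigl( R(f) - R^{*} \bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*},

where $R$ is the misclassification risk, $R_{\phi}$ the surrogate risk, and $R^{*}$, $R_{\phi}^{*}$ the corresponding Bayes risks. For the hinge loss $\psi$ is linear, so low surrogate regret directly forces low classification regret; for other surrogates (e.g., exponential loss) $\psi$ is quadratic near zero and the guarantee is correspondingly weaker.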
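The prediction-market equivalence can also be made tangible. The sketch below (our illustration, with assumed function names, in the spirit of the Chen and Wortman-Vaughan correspondence) shows that the instantaneous prices of a logarithmic market scoring rule (LMSR) market are the softmax of the outstanding share quantities, which is exactly the exponential-weights (Hedge) distribution over experts with learning rate 1/b.

    import math

    # LMSR instantaneous prices: softmax of share quantities q with
    # liquidity parameter b.
    def lmsr_prices(q, b=1.0):
        m = max(x / b for x in q)              # stabilise the exponentials
        w = [math.exp(x / b - m) for x in q]
        z = sum(w)
        return [x / z for x in w]

    # Hedge (exponential weights) over cumulative gains with rate eta:
    # the same formula with b = 1/eta, illustrating the equivalence.
    def hedge_weights(cumulative_gains, eta=1.0):
        return lmsr_prices(cumulative_gains, b=1.0 / eta)

    print(lmsr_prices([2.0, 1.0, 0.0], b=1.0))      # market prices
    print(hedge_weights([2.0, 1.0, 0.0], eta=1.0))  # identical distribution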
If one attempts to construct a catalogue of machine learning problems at present, one is rapidly overwhelmed by the complexity. And it is not at all clear (on the basis of their usual descriptions) whether or not two problems with different names are really different. (If the reader is unconvinced, consider the following partial list: batch, online, transductive, off-training set, semi-supervised, noisy (label noise, attribute noise, constant or variable noise, data of variable quality), data of different costs, weighted loss functions, active, distributed, classification (binary, weighted binary, multi-class), structured output, probabilistic concepts / scoring rules, class probability estimation, learning with statistical queries, Neyman-Pearson classification, regression, ordinal regression, ranked regression, ranking, ranking the best, optimising the ROC curve, optimising the AUC, selection, novelty detection, multi-instance learning, minimum volume sets, density level sets, regression level sets, sets of quantiles, quantile regression, density estimation, data segmentation, clustering, co-training, co-validation, learning with constraints, conditional estimators, estimated loss, confidence / hedging estimators, hypothesis testing, distributional distance estimation, learning relations, learning total orders, learning causal relationships, and estimating performance (cross-validation)!)
Specific topics: We solicit contributions on novel relations between machine learning problems, as well as theoretical and practical frameworks for constructing such relations. We are not restricting the workshop to pure theory, although it seems natural that it will have a theoretical bent.
Who: We believe the workshop will be of considerable interest to theoretically inclined machine learning researchers, as it offers a new view of how to situate one’s work. We also believe it should be of interest to practitioners, because being able to relate a new problem to an old one can save the substantial work of constructing a new solution.
Outcomes:
• New relations between learning problems, rather than individual solutions to individual problems;
• Visibility and promulgation of the “meme” of relating problems;
• Publication of workshop proceedings, which we believe the nature of the workshop would suit;
• Potential agreement on a shared community effort to build a comprehensive map of the relations between machine learning problems.