Poster
Learning to Approximate a Bregman Divergence
Ali Siahkamari · Xide Xia · Venkatesh Saligrama · David Castañón · Brian Kulis

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1727
Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and they arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error of $O_p(m^{-1/2})$ for metric learning using our framework matches the known generalization error in the strictly less general Mahalanobis metric learning setting. We further demonstrate empirically that our method performs well in comparison to existing metric learning methods, particularly for clustering and ranking problems.
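To make the construction concrete, the following is a minimal sketch (in NumPy, not the authors' released code) of the Bregman divergence induced by a max-affine, i.e. piecewise linear, convex generating function. The slope matrix A and offsets b are hypothetical stand-in parameters; in the paper's framework they would be learned from supervision.

```python
import numpy as np

def phi(x, A, b):
    """Max-affine convex function: phi(x) = max_k (a_k . x + b_k)."""
    return np.max(A @ x + b)

def bregman_divergence(x, y, A, b):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.
    For a max-affine phi, a subgradient at y is the slope a_{k*} of the
    active affine piece k* = argmax_k (a_k . y + b_k)."""
    k_star = np.argmax(A @ y + b)
    grad_y = A[k_star]
    return phi(x, A, b) - phi(y, A, b) - grad_y @ (x - y)

# Toy usage with random stand-in parameters: K = 5 affine pieces in d = 3 dims.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
b = rng.normal(size=5)
x, y = rng.normal(size=3), rng.normal(size=3)
print(bregman_divergence(x, y, A, b))  # nonnegative, by convexity of phi
```

Because phi is convex, the resulting divergence is always nonnegative; choosing phi to approximate the squared norm recovers (approximately) the squared Euclidean distance, which is the sense in which this family generalizes familiar metrics.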

Author Information

Ali Siahkamari (Boston University)

I am a PhD candidate in the EE department at Boston University, graduating in May 2021. My research focuses on statistical machine learning and algorithms. I have proposed a general piecewise linear modeling framework that can be implemented efficiently in high dimensions via GPUs and that enjoys statistical guarantees. Applications include regression, classification, anomaly detection, and active learning.

Xide Xia (Boston University)
Venkatesh Saligrama (Boston University)
David Castañón (Boston University)
Brian Kulis (Boston University and Amazon)
