
Differentiable Top-k with Optimal Transport
Yujia Xie · Hanjun Dai · Minshuo Chen · Bo Dai · Tuo Zhao · Hongyuan Zha · Wei Wei · Tomas Pfister

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1128

Finding the k largest or smallest elements from a collection of scores, i.e., the top-k operation, is an important model component widely used in information retrieval, machine learning, and data mining. However, if the top-k operation is implemented algorithmically, e.g., using bubble sort, the resulting model cannot be trained end-to-end with prevalent gradient descent algorithms. This is because such implementations typically involve swapping indices, whose gradient cannot be computed. Moreover, the corresponding mapping from the input scores to the indicator vector of whether each element belongs to the top-k set is essentially discontinuous. To address this issue, we propose a smoothed approximation, namely the SOFT (Scalable Optimal transport-based diFferenTiable) top-k operator. Specifically, our SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem. The gradient of the SOFT operator can then be efficiently approximated based on the optimality conditions of the EOT problem. We then apply the proposed operator to the k-nearest neighbors and beam search algorithms. Numerical experiments demonstrate that they achieve improved performance.
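To make the idea concrete, here is a minimal sketch of an EOT-based soft top-k in NumPy, not the authors' implementation: the n scores are transported onto two anchor values (one representing "selected", one "not selected") with masses k/n and (n-k)/n, the entropic problem is solved with plain Sinkhorn iterations, and the rescaled transport plan column serves as a smoothed top-k indicator. The anchor choice (min/max of the scores) and the regularization `eps` are illustrative assumptions.

```python
import numpy as np

def soft_topk(scores, k, eps=0.1, n_iters=200):
    # Illustrative sketch: transport n scores onto two anchors,
    # one near the max ("selected") and one near the min ("not selected").
    x = np.asarray(scores, dtype=float)
    n = len(x)
    anchors = np.array([x.min(), x.max()])         # assumed anchor choice
    C = (x[:, None] - anchors[None, :]) ** 2       # squared-distance cost
    mu = np.full(n, 1.0 / n)                       # uniform source masses
    nu = np.array([(n - k) / n, k / n])            # target masses
    K = np.exp(-C / eps)                           # Gibbs kernel
    u, v = np.ones(n), np.ones(2)
    for _ in range(n_iters):                       # Sinkhorn iterations
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    gamma = u[:, None] * K * v[None, :]            # entropic transport plan
    # Rescaled mass sent to the "selected" anchor: a smooth indicator
    # that sums to k and is differentiable in the scores.
    return n * gamma[:, 1]

print(soft_topk([3.0, 1.0, 2.0, 0.5], k=2))  # roughly [1, 0, 1, 0]
```

Smaller `eps` makes the output closer to the hard 0/1 indicator at the cost of slower, less stable Sinkhorn convergence; in practice the paper's approach further exploits the EOT optimality conditions for an efficient backward pass rather than differentiating through the iterations.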

Author Information

Yujia Xie (Georgia Institute of Technology)
Hanjun Dai (Google Brain)
Minshuo Chen (Georgia Tech)
Bo Dai (Google Brain)
Tuo Zhao (Georgia Tech)
Hongyuan Zha (Georgia Tech)
Wei Wei (Google Inc.)
Tomas Pfister (Google)
