The computational cost of training with the softmax cross-entropy loss grows linearly with the number of classes. In settings that involve a large number of classes, a common way to speed up training is to sample a subset of classes and use an estimate of the loss gradient based on these classes; this is known as the sampled softmax method. However, sampled softmax yields a biased gradient estimate unless the samples are drawn from the exact softmax distribution, which is itself expensive to compute. A widely used practical approach is therefore to sample from a simpler distribution in the hope of approximating the exact softmax distribution. In this paper, we develop the first theoretical understanding of the role that different sampling distributions play in determining the quality of sampled softmax. Motivated by our analysis and by prior work on kernel-based sampling, we propose the Random Fourier Softmax (RF-softmax) method, which uses Random Fourier Features to enable more efficient and accurate sampling from an approximate softmax distribution. We show that RF-softmax has low bias with respect to both the full softmax distribution and the full softmax gradient. Furthermore, the cost of RF-softmax scales only logarithmically with the number of classes.
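The two ingredients described above can be illustrated with a minimal NumPy sketch: a random-Fourier-feature map that approximates a Gaussian kernel, used to build a cheap sampling distribution over classes, followed by the standard importance-corrected sampled-softmax gradient over the logits. All names here (`rff_map`, `sampled_softmax_grad`, the clipping constant) are illustrative assumptions for exposition, not the paper's implementation, and this naive version still scores all classes; the paper's logarithmic cost comes from structured sampling on top of the kernel approximation.

```python
import numpy as np

def rff_map(X, Omega, b):
    """Random Fourier features z(.) with z(x).z(y) ~ Gaussian kernel exp(-||x-y||^2 / 2)."""
    D = Omega.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ Omega.T + b)

def sampled_softmax_grad(h, W, y, m, rng):
    """One sampled-softmax gradient estimate over the logits of the true class and m negatives.

    h: (d,) input embedding; W: (n, d) class embeddings; y: true class index.
    """
    n, d = W.shape
    D = 64                                    # number of random features (assumed)
    Omega = rng.standard_normal((D, d))       # Gaussian frequencies
    b = rng.uniform(0.0, 2 * np.pi, size=D)   # random phases
    # kernel-based sampling distribution q_i proportional to z(w_i).z(h),
    # clipped to stay positive (a simplifying assumption for this sketch)
    scores = np.maximum(rff_map(W, Omega, b) @ rff_map(h[None, :], Omega, b)[0], 1e-6)
    q = scores / scores.sum()
    neg = rng.choice(n, size=m, replace=True, p=q)   # sampled negative classes
    idx = np.concatenate(([y], neg))                 # true class first
    logits = W[idx] @ h
    logits[1:] -= np.log(m * q[neg])                 # importance correction for sampled classes
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = p.copy()
    grad[0] -= 1.0                                   # subtract one-hot for the true class
    return idx, grad
```

Because the corrected probabilities sum to one, the gradient entries over the sampled logits always sum to zero, mirroring the full-softmax gradient; the quality of the estimate then hinges on how closely `q` tracks the exact softmax distribution, which is precisely what the kernel approximation controls.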
Author Information
Ankit Singh Rawat (Google Research)
Jiecao Chen (Google Research)
Felix Xinnan Yu (Google Research)
Ananda Theertha Suresh (Google)
Sanjiv Kumar (Google Research)
More from the Same Authors
- 2020 Poster: Why are Adaptive Methods Good for Attention Models?
  Jingzhao Zhang · Sai Praneeth Karimireddy · Andreas Veit · Seungyeon Kim · Sashank Reddi · Sanjiv Kumar · Suvrit Sra
- 2020 Poster: Multi-Stage Influence Function
  Hongge Chen · Si Si · Yang Li · Ciprian Chelba · Sanjiv Kumar · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: O(n) Connections are Expressive Enough: Universal Approximability of Sparse Transformers
  Chulhee Yun · Yin-Wen Chang · Srinadh Bhojanapalli · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2020 Poster: Robust large-margin learning in hyperbolic space
  Melanie Weber · Manzil Zaheer · Ankit Singh Rawat · Aditya Menon · Sanjiv Kumar
- 2020 Poster: Adversarial robustness via robust low rank representations
  Pranjal Awasthi · Himanshu Jain · Ankit Singh Rawat · Aravindan Vijayaraghavan
- 2020 Poster: Learning discrete distributions: user vs item-level privacy
  Yuhan Liu · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Michael D Riley
- 2019 Poster: Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces
  Chuan Guo · Ali Mousavi · Xiang Wu · Daniel Holtmann-Rice · Satyen Kale · Sashank Reddi · Sanjiv Kumar
- 2019 Poster: Multilabel reductions: what is my loss optimising?
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Spotlight: Multilabel reductions: what is my loss optimising?
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Poster: Differentially Private Anonymized Histograms
  Ananda Theertha Suresh
- 2018 Poster: A Practical Algorithm for Distributed Clustering and Outlier Detection
  Jiecao Chen · Erfan Sadeqi Azer · Qin Zhang
- 2018 Poster: Adaptive Methods for Nonconvex Optimization
  Manzil Zaheer · Sashank Reddi · Devendra Sachan · Satyen Kale · Sanjiv Kumar
- 2018 Poster: Tight Bounds for Collaborative PAC Learning via Multiplicative Weights
  Jiecao Chen · Qin Zhang · Yuan Zhou
- 2018 Poster: Data Amplification: A Unified and Competitive Approach to Property Estimation
  Yi Hao · Alon Orlitsky · Ananda Theertha Suresh · Yihong Wu
- 2018 Poster: cpSGD: Communication-efficient and differentially-private distributed SGD
  Naman Agarwal · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Brendan McMahan
- 2018 Spotlight: cpSGD: Communication-efficient and differentially-private distributed SGD
  Naman Agarwal · Ananda Theertha Suresh · Felix Xinnan Yu · Sanjiv Kumar · Brendan McMahan
- 2017 Poster: Multiscale Quantization for Fast Similarity Search
  Xiang Wu · Ruiqi Guo · Ananda Theertha Suresh · Sanjiv Kumar · Daniel Holtmann-Rice · David Simcha · Felix Yu
- 2017 Poster: Model-Powered Conditional Independence Test
  Rajat Sen · Ananda Theertha Suresh · Karthikeyan Shanmugam · Alexandros Dimakis · Sanjay Shakkottai
- 2016 Poster: Orthogonal Random Features
  Felix Xinnan Yu · Ananda Theertha Suresh · Krzysztof M Choromanski · Daniel Holtmann-Rice · Sanjiv Kumar
- 2016 Oral: Orthogonal Random Features
  Felix Xinnan Yu · Ananda Theertha Suresh · Krzysztof M Choromanski · Daniel Holtmann-Rice · Sanjiv Kumar
- 2016 Poster: Communication-Optimal Distributed Clustering
  Jiecao Chen · He Sun · David Woodruff · Qin Zhang
- 2015 Workshop: Learning and privacy with incomplete data and weak supervision
  Giorgio Patrini · Tony Jebara · Richard Nock · Dimitrios Kotzias · Felix Xinnan Yu
- 2015 Workshop: The 1st International Workshop "Feature Extraction: Modern Questions and Challenges"
  Dmitry Storcheus · Sanjiv Kumar · Afshin Rostamizadeh
- 2015 Poster: Spherical Random Features for Polynomial Kernels
  Jeffrey Pennington · Felix Yu · Sanjiv Kumar
- 2015 Spotlight: Spherical Random Features for Polynomial Kernels
  Jeffrey Pennington · Felix Yu · Sanjiv Kumar
- 2015 Poster: Structured Transforms for Small-Footprint Deep Learning
  Vikas Sindhwani · Tara Sainath · Sanjiv Kumar
- 2015 Spotlight: Structured Transforms for Small-Footprint Deep Learning
  Vikas Sindhwani · Tara Sainath · Sanjiv Kumar
- 2014 Session: Oral Session 8
  Sanjiv Kumar
- 2014 Poster: Discrete Graph Hashing
  Wei Liu · Cun Mu · Sanjiv Kumar · Shih-Fu Chang
- 2014 Spotlight: Discrete Graph Hashing
  Wei Liu · Cun Mu · Sanjiv Kumar · Shih-Fu Chang
- 2012 Poster: Angular Quantization based Binary Codes for Fast Similarity Search
  Yunchao Gong · Sanjiv Kumar · Vishal Verma · Svetlana Lazebnik
- 2009 Poster: Ensemble Nystrom Method
  Sanjiv Kumar · Mehryar Mohri · Ameet S Talwalkar