Poster
Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products
Tharun Kumar Reddy Medini · Qixuan Huang · Yiqiu Wang · Vijai Mohan · Anshumali Shrivastava

Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #37
In the last decade, it has been shown that many hard AI tasks, especially in NLP, can be naturally modeled as extreme classification problems, leading to improved precision. However, such models are prohibitively expensive to train due to the memory bottleneck in the last layer. For example, a reasonable softmax layer for the dataset of interest in this paper can easily reach well beyond 100 billion parameters (> 400 GB memory). To alleviate this problem, we present Merged-Average Classifiers via Hashing (MACH), a generic $K$-classification algorithm where memory provably scales as $O(\log K)$ without any assumption on the relation between classes. MACH is subtly a count-min sketch structure in disguise, which uses universal hashing to reduce classification over a large number of classes to a few embarrassingly parallel and independent classification tasks, each over a small (constant) number of classes. MACH naturally provides a technique for zero-communication model parallelism. We experiment with 6 datasets, some multiclass and some multilabel, and show consistent improvement in precision and recall metrics compared to respective baselines. In particular, we train an end-to-end deep classifier on a private product search dataset sampled from Amazon Search Engine with 70 million queries and 49.46 million documents. MACH outperforms, by a significant margin, the state-of-the-art extreme classification models deployed on commercial search engines: Parabel and dense embedding models. Our largest model has 6.4 billion parameters and trains in less than 35 hrs on a single p3.16x machine. Our training times are 7-10x faster, and our memory footprints are 2-4x smaller than the best baselines. This training time is also significantly lower than the one reported by Google’s mixture of experts (MoE) language model on a comparable model size and hardware.
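The hashing-and-averaging idea can be illustrated with a minimal toy sketch. The Python/NumPy snippet below (class count, bucket count, repetition count, and the plain averaging estimator are all illustrative assumptions, not the paper's exact implementation) shows the count-min-sketch-style aggregation: each of the K classes is hashed into one of B buckets, R independent times; R small B-way classifiers produce bucket probabilities; and a class's score is recovered by averaging the probabilities of its assigned buckets across the R repetitions.

```python
import numpy as np

K = 1000   # total number of classes (toy size; the paper's dataset has ~50M)
B = 32     # buckets per repetition (small, constant)
R = 4      # number of independent repetitions / small classifiers

rng = np.random.default_rng(0)

# Map each of the K classes to one of B buckets, R independent times.
# A random lookup table stands in here for a 2-universal hash family.
bucket_of = rng.integers(0, B, size=(R, K))   # bucket_of[r, k] = h_r(k)

def mach_scores(bucket_probs):
    """Recover approximate per-class scores from R small-classifier outputs.

    bucket_probs: array of shape (R, B); row r is the softmax output of the
    r-th B-way classifier for a single input. Each class k is scored by
    averaging the probability of its bucket h_r(k) over the R repetitions.
    """
    scores = np.zeros(K)
    for r in range(R):
        scores += bucket_probs[r, bucket_of[r]]   # gather prob of h_r(k) for every k
    return scores / R

# Example with fake outputs of the R independent B-way classifiers for one input.
bucket_probs = rng.random((R, B))
bucket_probs /= bucket_probs.sum(axis=1, keepdims=True)
print(mach_scores(bucket_probs).argmax())   # predicted class id
```

Because the R small classifiers never share parameters, each one can be trained on a separate device with no communication, which is the zero-communication model parallelism the abstract refers to.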

Author Information

Tharun Medini (Rice University)

I'm a 3rd-year PhD student at Rice University working with Prof. Anshumali Shrivastava. I primarily work on scaling up Deep Learning using Hashing techniques. I'm currently interning at Amazon Search in Palo Alto.

Qixuan Huang (Rice University)
Yiqiu Wang (Massachusetts Institute of Technology)
Vijai Mohan (www.amazon.com)
Anshumali Shrivastava (Rice University)