Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products

Neural Information Processing Systems

In the last decade, it has been shown that many hard AI tasks, especially in NLP, can be naturally modeled as extreme classification problems, leading to improved precision. However, such models are prohibitively expensive to train due to the memory bottleneck in the last layer. For example, a reasonable softmax layer for the dataset of interest in this paper can easily reach well beyond 100 billion parameters (> 400 GB memory). To alleviate this problem, we present Merged-Average Classifiers via Hashing (MACH), a generic $K$-classification algorithm where memory provably scales at $O(\log K)$ without any assumption on the relation between classes. MACH is subtly a count-min sketch structure in disguise, which uses universal hashing to reduce classification with a large number of classes to a few embarrassingly parallel and independent classification tasks with a small (constant) number of classes.
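The reduction the abstract describes can be sketched in a few lines: hash the $K$ classes into $B \ll K$ buckets with $R$ independent hash maps, train one small classifier per map, and decode by averaging each class's bucket probability across the $R$ classifiers. The sizes `K`, `B`, `R` and the random hash maps below are illustrative stand-ins, not the paper's configuration, and the random integer maps only approximate a true universal hash family.

```python
import numpy as np

# Hypothetical, tiny sizes for illustration (the paper works with ~50M classes).
K = 50   # original number of classes
B = 7    # buckets per meta-classifier, B << K
R = 4    # number of independent hash maps / meta-classifiers
rng = np.random.default_rng(0)

# R independent maps from the K classes into B buckets (random stand-ins
# for a 2-universal hash family). Memory per classifier is O(B), not O(K).
hash_maps = rng.integers(0, B, size=(R, K))

def decode(bucket_probs):
    """Merged-average decoding: for each original class, average the
    probability of its bucket across the R meta-classifiers, then argmax.
    bucket_probs has shape (R, B): one distribution per meta-classifier."""
    scores = bucket_probs[np.arange(R)[:, None], hash_maps].mean(axis=0)  # (K,)
    return int(np.argmax(scores))

# Toy check: build bucket distributions that concentrate mass on the
# buckets that class 42 hashes to in every table.
true_class = 42
probs = np.full((R, B), 1.0 / B)
for r in range(R):
    probs[r, hash_maps[r, true_class]] += 1.0
probs /= probs.sum(axis=1, keepdims=True)
print(decode(probs))
```

Because the true class lands in the most-probable bucket of every table, its merged-average score is maximal; another class can tie only if it collides with it in all $R$ tables, which is exponentially unlikely in $R$.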



Reviews: Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products

Neural Information Processing Systems

This paper studies the task of extreme classification with a very large number of target classes. It develops a hashing-based algorithm, MACH: for each hash mapping, a classifier is trained and applied on the reduced problem with a much smaller number of target classes. The predictions of the sub-classifiers are then combined to reconstruct the final output. The proposed method is demonstrated to be both efficient and effective on multiple datasets.


Reviews: Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products

Neural Information Processing Systems

The paper presents a method for scaling up classifiers to tasks with an extremely large number of classes, with memory requirements scaling as O(log K) for K classes. The proposed model uses a count-min sketch to transform a very large classification problem into a small number of classification tasks, each with a fixed, small number of classes. Each of these models can be trained independently and in parallel. Experimental results on a number of multi-class and multi-label classification tasks show that it either performs as well as other, more resource-demanding approaches or outperforms them. The methodological contribution is significant, and it could serve as a baseline for future studies.
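For context on the structure the review refers to, a classic count-min sketch is a small 2-D counter array: each of R hash functions maps an item to one bucket per row, increments count there, and a query takes the minimum across rows (an overestimate, never an underestimate). A minimal sketch follows; the hash coefficients and table sizes are arbitrary choices for illustration, not anything from the paper.

```python
import numpy as np

R, B = 3, 16                      # rows (hash functions) and buckets per row
table = np.zeros((R, B), dtype=int)
P = 2_147_483_647                 # a large prime for the linear hash family
a = [3, 5, 7]                     # per-row hash coefficients (assumed)
b = [11, 13, 17]

def h(r, x):
    # Row r's hash of integer item x into one of B buckets.
    return ((a[r] * x + b[r]) % P) % B

def add(x):
    # Increment x's bucket in every row.
    for r in range(R):
        table[r, h(r, x)] += 1

def estimate(x):
    # Count-min estimate: minimum over rows; collisions can only inflate it.
    return min(table[r, h(r, x)] for r in range(R))

for item in [1, 1, 1, 2, 2, 9]:
    add(item)
print(estimate(1), estimate(2))   # → 3 2
```

MACH reuses this idea for class scores rather than frequencies: the R rows become R small classifiers, and the min is replaced by an average over rows.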



Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products

Medini, Tharun Kumar Reddy, Huang, Qixuan, Wang, Yiqiu, Mohan, Vijai, Shrivastava, Anshumali

Neural Information Processing Systems



Amazon SEO: How to Rank Highly for Amazon Searches

#artificialintelligence

All too often, when we think of SEO, we only think of Google. And of course you want great rankings in the search engines. However, your website isn't the only place on the web where you may be selling your product. If you have a product page on Amazon, you want it to be found by customers just as you would want your site to show up on the first search engine results page (SERP) for your industry keywords. Failure to do Amazon SEO right, just like with regular SEO, will result in less traffic and fewer sales.