Learning to rank via combining representations
Helm, Hayden S., Basu, Amitabh, Athreya, Avanti, Park, Youngser, Vogelstein, Joshua T., Winding, Michael, Zlatic, Marta, Cardona, Albert, Bourke, Patrick, Larson, Jonathan, White, Chris, Priebe, Carey E.
Learning to rank - producing a ranked list of items specific to a query and with respect to a set of supervisory items - is a problem of general interest. The setting we consider is one in which no analytic description of what constitutes a good ranking is available. Instead, we have a collection of representations and supervisory information consisting of a (target item, interesting items set) pair. We demonstrate - analytically, in simulation, and in real-data examples - that learning to rank via combining representations using an integer linear program is effective even when the supervision is as light as "these few items are similar to your item of interest." While this nomination task is of general interest, for specificity we present our methodology from the perspective of vertex nomination in graphs. The methodology described herein is model agnostic.

Introduction

Given a query, a collection of items, and supervisory information, producing a ranked list of the items relative to the query is a problem of general interest. In particular, learning-to-rank algorithms [1] and algorithms from related problem settings [2] have been used to improve popular search engines and recommender systems and, impressively, to aid in the identification of human traffickers [3]. When learning to rank, for each training query researchers typically have access to (feature vector, ordinal) pairs that are used to learn an ordinal regressor, either by fitting a model under a set of probabilistic assumptions [4] or via deep learning techniques [5], that generalizes to ranking items for never-before-seen queries.
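To make the "combining representations" idea concrete, the following is a minimal toy sketch, not the paper's method: each representation yields a vector of query-to-item distances, and a convex combination weight is chosen so that the known items of interest rank as highly as possible. The paper solves a related optimization with an integer linear program; here the weight is simply brute-forced over a grid, and the distance vectors `d1`, `d2` and the index set `interesting` are hypothetical toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                      # number of candidate items
d1 = rng.random(n)           # query-to-item distances under representation 1 (toy)
d2 = rng.random(n)           # query-to-item distances under representation 2 (toy)
interesting = [3, 7, 11]     # indices of supervised "items of interest" (toy)

def mean_rank(dist, items):
    # Rank each item by distance (0 = closest to the query) and
    # return the average rank of the supervised items.
    order = np.argsort(dist)
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(n)
    return ranks[list(items)].mean()

# Brute-force a convex weight w on [0, 1]; the combined distance is
# w * d1 + (1 - w) * d2, and we keep the w that pushes the items of
# interest toward the top of the list.
best_w, best_score = None, np.inf
for w in np.linspace(0.0, 1.0, 101):
    score = mean_rank(w * d1 + (1 - w) * d2, interesting)
    if score < best_score:
        best_w, best_score = w, score

combined = best_w * d1 + (1 - best_w) * d2
ranked_list = np.argsort(combined)   # final query-specific ranking
```

Because w = 0 and w = 1 are in the grid, the combined ranking is never worse (on the training supervision) than either representation alone; the paper's contribution is doing this principled combination at scale with much lighter supervision.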
Aug-25-2020