Learning to Order Things
There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u, v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation.
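The second stage described above can be sketched in a few lines: repeatedly pick the instance whose learned preference weight for preceding the others most exceeds its weight for following them, place it next in the ranking, and remove it. This is a minimal illustration of the greedy approximation, not the paper's exact implementation; the names and the toy preference function are assumptions for the example.

```python
def greedy_order(instances, pref):
    """Order `instances` to approximately maximize agreement with `pref`,
    where pref(u, v) in [0, 1] is the learned degree to which u should
    be ranked before v."""
    remaining = set(instances)
    ordering = []
    # potential(v) = weight of v preceding others minus others preceding v
    potential = {
        v: sum(pref(v, u) - pref(u, v) for u in remaining if u != v)
        for v in remaining
    }
    while remaining:
        best = max(remaining, key=lambda v: potential[v])
        ordering.append(best)
        remaining.remove(best)
        for v in remaining:
            # placing `best` removes its edges to and from each remaining v
            potential[v] += pref(best, v) - pref(v, best)
    return ordering

# Toy preference function that always prefers smaller numbers first.
print(greedy_order([3, 1, 2], lambda u, v: 1.0 if u < v else 0.0))
# → [1, 2, 3]
```

Each iteration costs one pass over the remaining instances, so the whole procedure runs in quadratic time, which matters since finding the exactly optimal ordering is NP-complete.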
How to stop Google Home or Amazon Echo from making unwanted online purchases
There's no denying that Google Home and Amazon Echo (or the less-expensive Echo Dot, if you're not using it for music) have changed the way we interact with our homes. Turning on the lights has never been easier, nor has it been simpler to field the latest traffic report or order delivery for dinner. The future is here, and we're reveling in it! But the proliferation of these devices around our homes leaves room for error. Google's and Amazon's connected speakers must always listen for us to utter their magic "wake" words ("OK Google" and "Alexa," respectively) in order to perform their tasks.
- Information Technology (0.74)
- Consumer Products & Services (0.74)
- Retail > Online (0.50)
Learning to Order Things
Cohen, W. W., Schapire, R. E., Singer, Y.
There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order instances given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a binary preference function indicating whether it is advisable to rank one instance before another. Here we consider an on-line algorithm for learning preference functions that is based on Freund and Schapire's 'Hedge' algorithm. In the second stage, new instances are ordered so as to maximize agreement with the learned preference function. We show that the problem of finding the ordering that agrees best with a learned preference function is NP-complete. Nevertheless, we describe simple greedy algorithms that are guaranteed to find a good approximation. Finally, we show how metasearch can be formulated as an ordering problem, and present experimental results on learning a combination of 'search experts', each of which is a domain-specific query expansion strategy for a web search engine.
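The on-line learning step mentioned above rests on a simple multiplicative update: after each round, every expert's weight is scaled by beta raised to that expert's loss and the weights are renormalized, so experts that disagree with the preference feedback lose influence. The sketch below shows one Hedge-style round under illustrative assumptions; the loss values are placeholders, not the paper's actual ranking loss.

```python
def hedge_update(weights, losses, beta=0.5):
    """One round of a Hedge-style update (Freund and Schapire):
    each loss is in [0, 1], beta in (0, 1) controls how aggressively
    poorly performing experts are downweighted."""
    assert len(weights) == len(losses)
    updated = [w * (beta ** loss) for w, loss in zip(weights, losses)]
    total = sum(updated)
    return [w / total for w in updated]  # renormalize to a distribution

# Three 'search experts' start with equal weight; the second incurs
# the largest loss this round and is downweighted the most.
w = hedge_update([1/3, 1/3, 1/3], [0.0, 1.0, 0.5], beta=0.5)
```

After this round the first expert carries the most weight and the second the least, matching their losses; iterating the update over many rounds is what lets the combination track whichever search experts best agree with the observed preference judgments.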