Ranking a set of classifiers based on metrics with differing units • /r/MachineLearning
Note: I posted this question to Stack Overflow as well.

I have a program that trains a number of different classifiers (Support Vector Machines, k-Neighbors Classifiers, Neural Networks, Decision Trees, ...) on the same training set and collects a bunch of performance metrics for each model. Most of these are your standard run-of-the-mill metrics like precision, recall, and overall accuracy, but some are more complex (or should I say "different"?), for example the number of preprocessing steps a model requires.

I want to find a good way of ranking these models based on user-specified weights for a subset of the aforementioned performance metrics. If a user's goal was to find the model that is least "complex" while still achieving reasonable precision, they would likely assign a higher weight to the "no. of preprocessing steps" attribute and see which model gets ranked highest (probably model 2, but it really depends on the concrete values of the weights, of course).

So, in short, I am faced with a so-called multiple-criteria decision-making (MCDM) problem, and I need to solve it.
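One common baseline for this kind of weighted ranking over metrics with differing units is to min-max normalize each metric across the models first, so everything lands on a comparable 0-1 scale, and then take a weighted sum. Below is a minimal sketch of that idea; the function name `rank_models`, the example metric names, and the example values are my own illustration, not anything from the original setup. Metrics where lower is better (like the number of preprocessing steps) get flipped after normalization:

```python
def rank_models(scores, weights, lower_is_better=()):
    """Rank models by a weighted sum of min-max normalised metrics.

    scores  : {model_name: {metric_name: value}}
    weights : {metric_name: weight}, only these metrics are used
    lower_is_better : metric names where a smaller raw value is preferable
    """
    metrics = list(weights)
    norm = {model: {} for model in scores}
    for m in metrics:
        vals = [scores[model][m] for model in scores]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero when all models tie
        for model in scores:
            v = (scores[model][m] - lo) / span
            if m in lower_is_better:
                v = 1.0 - v  # flip so that "better" is always closer to 1
            norm[model][m] = v
    totals = {model: sum(weights[m] * norm[model][m] for m in metrics)
              for model in scores}
    # Best (highest weighted score) first
    return sorted(totals, key=totals.get, reverse=True)


# Hypothetical example: model2 is slightly less precise but far simpler,
# so a user who weights simplicity heavily sees it ranked first.
scores = {
    "model1": {"precision": 0.90, "preprocessing_steps": 5},
    "model2": {"precision": 0.85, "preprocessing_steps": 1},
}
weights = {"precision": 0.3, "preprocessing_steps": 0.7}
print(rank_models(scores, weights, lower_is_better=("preprocessing_steps",)))
# → ['model2', 'model1']
```

Note that min-max normalization makes each model's score depend on the other models in the pool, which is a known caveat (rank reversal) in MCDM methods; more principled alternatives like TOPSIS or weighted product models address the same problem with different trade-offs.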
Jul-12-2016, 15:22:27 GMT