How to test whether a classifier performs better than chance using k-fold cross-validation? • /r/MachineLearning
I have 400 units and 10 groups, and I'm classifying the units' group membership using a discriminant function analysis / linear discriminant analysis. During cross-validation, I want to test whether my solution classifies them better than chance (10%). I can get an error rate, but I don't know how to compare it against chance statistically. With the hold-out approach, I can test it using Press' Q statistic or the Maximum Chance Criterion, but with k-fold I don't think I can use that approach.
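For context, here is roughly what I do with the hold-out split. This is only a minimal sketch with placeholder data, assuming scikit-learn's LinearDiscriminantAnalysis; Press' Q is computed as (N − nK)² / (N(K − 1)) with N hold-out cases, n correct classifications, and K groups, and is compared against a chi-square with 1 degree of freedom.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Placeholder data just to make the sketch runnable:
# X is a (400, n_features) feature matrix, y holds labels for 10 groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = rng.integers(0, 10, size=400)

# Hold-out split (stratified so each group appears in the test set)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Fit the LDA classifier and count correct hold-out classifications
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
n_correct = int((lda.predict(X_test) == y_test).sum())

# Press' Q = (N - n*K)^2 / (N * (K - 1)):
# N = hold-out sample size, n = number correctly classified, K = number of groups
N = len(y_test)
K = 10
press_q = (N - n_correct * K) ** 2 / (N * (K - 1))

# Compare against a chi-square distribution with 1 degree of freedom
p_value = chi2.sf(press_q, df=1)
print(f"Press' Q = {press_q:.2f}, p = {p_value:.4f}")
```

With k-fold cross-validation there is no single hold-out sample, so I'm not sure what the equivalent test would be.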
May-15-2016, 21:10:46 GMT