Evaluating a Binary Classifier
The following discusses using cross-validation to evaluate the classifier we built in the previous post, which classifies images from the MNIST dataset as either "five" or "not five". Before diving in, let's take a brief look at the problem cross-validation solves. When we evaluate different hyperparameter settings against the test set, we risk overfitting to that test set, because we can keep tweaking the hyperparameters until the model performs well on those particular examples. Knowledge about the test set "leaks" into the model, and the evaluation metrics no longer tell us how well the model generalizes to unseen data.
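
As a concrete illustration, here is a minimal sketch of k-fold cross-validation with scikit-learn. It assumes the "5-detector" from the previous post is an SGDClassifier and that MNIST is loaded via fetch_openml; both are assumptions for this sketch, so swap in your own estimator and data split as needed.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

# Load MNIST: 70,000 28x28 images flattened into 784-feature vectors.
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X, y = mnist.data, mnist.target

# Keep the conventional first 60,000 images as the training set;
# the held-out test set is never touched during tuning.
X_train, y_train = X[:60000], y[:60000]

# Binary target: True for images of a 5, False for everything else.
y_train_5 = (y_train == "5")

# Hypothetical classifier for this sketch; any scikit-learn estimator works.
sgd_clf = SGDClassifier(random_state=42)

# 3-fold cross-validation on the training set only: the data is split into
# 3 folds, and each fold is scored by a model trained on the other two.
scores = cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
print(scores)
```

Because the folds come from the training data alone, we can compare hyperparameter settings by their cross-validation scores and reserve the test set for a single, final estimate of generalization.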
Jun-13-2022, 01:15:19 GMT
- Technology