Making Predictive Models Robust: Holdout vs Cross-Validation


When evaluating machine learning models, the validation step helps you find the best parameters for your model while also preventing it from overfitting. Two of the most popular strategies for performing validation are the hold-out strategy and the k-fold strategy.

In the hold-out strategy, a portion of the data is set aside as a validation set and never used for training.

Pros of the hold-out strategy: the validation data is fully independent of the training data, and the model only needs to be trained once, so the computational cost is low.

Cons of the hold-out strategy: the performance estimate is subject to higher variance because it is based on a smaller amount of data.

K-fold validation evaluates the model across the entire training set. It divides the training set into K folds, or subsections (where K is a positive integer), and then trains the model K times, each time leaving a different fold out of the training data and using it instead as the validation set.
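To make the mechanics concrete, here is a minimal sketch of both strategies, assuming scikit-learn is available; the iris dataset and logistic regression model are stand-ins chosen purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Placeholder data and model for illustration only.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Hold-out: a single split into training and validation sets;
# the model is trained once and scored on the held-out portion.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_val, y_val):.3f}")

# K-fold: the model is trained K times, each time holding out a
# different fold as the validation set; the K scores are averaged.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
cv_scores = cross_val_score(model, X, y, cv=kfold)
print(f"5-fold accuracy: {cv_scores.mean():.3f} (+/- {cv_scores.std():.3f})")
```

Note that the k-fold run reports both a mean and a spread across folds, which is exactly the lower-variance estimate the hold-out split cannot provide from a single evaluation.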