Cross-Validation in Machine Learning
A model's performance is typically estimated by dividing the known data into two parts: one to train the model and the other to test its predictions, yielding an accuracy score that can guide further tuning. However, that accuracy depends on how we split the data, and an unlucky split can bias the estimate so that it does not generalize to unseen data. Cross-validation is used to counter the randomness of a single split. It tests the performance of a predictive machine learning model on the same principle as the train-test split technique, with the difference that the split is performed k times, recording the accuracy of each attempt. This technique is known as k-fold cross-validation, where each fold is a distinct partition of the data that serves once as the test set while the remaining folds are used for training; the k accuracy scores are then averaged into a single, more stable estimate.
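The k-fold procedure described above can be sketched in plain Python. This is a minimal illustration, not from the original article; the names `k_fold_indices`, `cross_validate`, and `majority_model` are hypothetical, and the "model" is a trivial majority-class predictor used only to show where a real learner would plug in.

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices once, then carve them into k folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0.
        end = start + fold_size if i < k - 1 else n_samples
        folds.append(idx[start:end])
    return folds

def cross_validate(X, y, k, train_and_score):
    """Run k rounds: each fold serves exactly once as the test set,
    and the average of the k accuracy scores is returned."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for i in range(k):
        test_idx = set(folds[i])
        train = [j for j in range(len(X)) if j not in test_idx]
        test = folds[i]
        scores.append(train_and_score(
            [X[j] for j in train], [y[j] for j in train],
            [X[j] for j in test], [y[j] for j in test]))
    return sum(scores) / k

def majority_model(X_tr, y_tr, X_te, y_te):
    """Placeholder learner: predict the most common training label
    for every test point and return the resulting accuracy."""
    pred = max(set(y_tr), key=y_tr.count)
    return y_te.count(pred) / len(y_te)
```

In practice the hand-rolled split would be replaced by a library utility (for example, scikit-learn's `KFold`), but the control flow is the same: partition once, rotate the held-out fold, and average the per-fold scores.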
Aug-13-2022, 00:45:08 GMT