Understanding the Bias-Variance Tradeoff: An Overview
This post serves as an overview of Scott Fortmann-Roe's essay, which you can read for further detail and mathematical insight. We start with Fortmann-Roe's verbatim definition, which is central to the piece:

"Error due to Bias: The error due to bias is taken as the difference between the expected (or average) prediction of our model and the correct value which we are trying to predict. Again, imagine you can repeat the entire model building process multiple times."

Fortmann-Roe ends the section on over- and under-fitting by pointing to another of his great essays, Accurately Measuring Model Prediction Error, and then moves on to the highly agreeable recommendation that "resampling based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion."

I recommend reading Scott Fortmann-Roe's entire bias-variance tradeoff essay, as well as his piece on measuring model prediction error.
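The definition above asks you to imagine repeating the entire model-building process many times. A minimal sketch of that thought experiment (all names and parameters here are our own illustration, not Fortmann-Roe's code): fit a deliberately too-simple straight-line model to noisy samples of a quadratic function over and over, then take the gap between the average prediction and the true value as the bias, and the spread of the predictions as the variance.

```python
import random

def true_f(x):
    # the "correct value" we are trying to predict
    return x * x

def fit_line(xs, ys):
    # ordinary least-squares fit of y = a + b*x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def estimate_bias_variance(x0, repeats=2000, n=30, noise=0.1, seed=0):
    # Repeat the entire model-building process `repeats` times:
    # draw a fresh training set, refit the model, predict at x0.
    rng = random.Random(seed)
    preds = []
    for _ in range(repeats):
        xs = [rng.uniform(0.0, 1.0) for _ in range(n)]
        ys = [true_f(x) + rng.gauss(0.0, noise) for x in xs]
        a, b = fit_line(xs, ys)
        preds.append(a + b * x0)
    mean_pred = sum(preds) / len(preds)
    # bias: average prediction minus the correct value
    bias = mean_pred - true_f(x0)
    # variance: variability of the prediction across rebuilds
    variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)
    return bias, variance
```

Because a straight line cannot represent the quadratic, the estimated bias at any fixed point is systematically nonzero no matter how many times the process is repeated, while the variance shrinks as the training-set size grows; a more flexible model would reverse that balance.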
May-18-2017, 18:05:33 GMT