Machine Learning Theory - Part 2: Generalization Bounds


Last time we concluded by noticing that minimizing the empirical risk (or the training error) is not in itself a solution to the learning problem. It can only be considered a solution if we can guarantee that the difference between the training error and the generalization error (also called the generalization gap) is small enough. That is, if the probability that this gap exceeds some small tolerance is itself small, we can guarantee that the two errors are close, and hence the learning problem can be solved. In this part we'll start investigating that probability in depth and see whether it can indeed be made small. Before starting, you should note that I have skipped many of the mathematical proofs here; you'll often see phrases like "It can be proved that …", "One can prove …", or "It can be shown that …" without the actual proof being given. This is to make the post easier to read and to focus the effort on a conceptual understanding of the subject.
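To make the quantity we are about to study concrete, here is one standard way to write it. The notation is an assumption of this sketch rather than something fixed in the paragraph above: R(h) denotes the generalization (true) error of a hypothesis h, R_emp(h) its empirical (training) error, and H the hypothesis set searched by the learning algorithm. The probability we want to be small is

\[
\mathbb{P}\!\left[\, \sup_{h \in \mathcal{H}} \big|\, R(h) - R_{\mathrm{emp}}(h) \,\big| > \epsilon \,\right] \;\le\; \delta ,
\]

where \(\epsilon\) is the tolerance we allow on the generalization gap and \(\delta\) is the (hopefully small) probability that some hypothesis in \(\mathcal{H}\) exceeds it.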
