Geometric Priors I
In the last post on high-dimensional learning, we saw that learning in high dimensions is impossible without assumptions, due to the curse of dimensionality: the number of samples our learning problem requires grows exponentially with the dimension. We also introduced the main geometric function spaces, in which points in high-dimensional space are viewed as signals over a low-dimensional geometric domain. Building on this assumption, and to make learning tractable, I will present symmetry in this post and scale separation in the next one.

We also discussed the three kinds of errors we need to be aware of: approximation error, statistical error, and optimization error. The approximation error increases as our function class shrinks (the true function we are trying to estimate may fall far outside this class), which suggests choosing a large function class. In contrast, the statistical error reflects how unlikely we are to identify the true function from a finite number of data points; this error grows as the function class grows.
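This tension between approximation and statistical error can be seen in a toy experiment (a minimal sketch, not from the original lectures): fitting polynomials of increasing degree to a few noisy samples of a smooth signal. A small class (degree 1) cannot represent the signal at all, while a very large class (degree 15) fits the noise in the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" function we are trying to estimate.
f = lambda x: np.sin(2 * np.pi * x)

# A small, noisy training set -- the finite-sample regime.
n = 20
x_train = rng.uniform(0, 1, n)
y_train = f(x_train) + 0.1 * rng.normal(size=n)

# A dense grid of clean targets, standing in for the population error.
x_test = np.linspace(0, 1, 500)
y_test = f(x_test)

def poly_fit_mse(degree):
    """Least-squares polynomial fit; returns (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# Degree 1: small class, large approximation error (underfitting).
# Degree 15: large class, large statistical error (overfitting).
# Degree 5: enough capacity for a sine without chasing the noise.
for d in (1, 5, 15):
    tr, te = poly_fit_mse(d)
    print(f"degree {d:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

Training error always decreases as the class grows, but the test error is minimized at an intermediate degree, which is precisely the trade-off between the two error terms described above.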
Mar-28-2022, 16:00:12 GMT