Reviews: Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels
Neural Information Processing Systems
In particular, [1, Algorithm 3] proposes an approach for minimizing the expected loss of a linear predictor that aims at finding a 'good' sparse solution. The main idea of the algorithm from [1] is to iteratively add features by picking a previously unselected feature that yields the largest reduction in the expected risk. A linear model is then trained on the extended feature representation, and the whole process is repeated. The authors follow essentially the same idea: they take a large dictionary of features to represent the data and then run Algorithm 3 from [1] to pick 'informative' features and generate a sparse feature representation.
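The greedy procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `greedy_feature_selection`, the use of squared loss as the empirical risk, and the toy data are all assumptions made for the example.

```python
import numpy as np

def greedy_feature_selection(X, y, k):
    """Greedy forward selection: at each step, add the previously
    unselected feature (column of X) whose inclusion gives the
    largest reduction in empirical squared loss of a linear model.
    (Illustrative sketch; the paper works with the expected risk.)"""
    n, d = X.shape
    selected = []
    remaining = set(range(d))
    for _ in range(k):
        best_j, best_loss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            # retrain the linear model on the extended representation
            w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            loss = np.mean((X[:, cols] @ w - y) ** 2)
            if loss < best_loss:
                best_j, best_loss = j, loss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# toy dictionary: 20 random features; the target depends only on features 3 and 7
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7]
print(greedy_feature_selection(X, y, 2))  # picks features 3 and 7
```

Note that the inner loop refits the model once per candidate feature, which mirrors the "train, extend, repeat" structure described in the review but is quadratic in the dictionary size; practical implementations would use cheaper per-candidate scores.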
Oct-7-2024, 15:45:58 GMT