I'm definitely interested in this question too, as someone who is self-taught in machine learning and skipped much of the mathematical theory. I know that certain areas are very important: matrix factorisation methods are huge, and algorithms such as neural networks rely heavily on many aspects of linear algebra (see https://www.utdallas.edu/). The kernel trick in SVMs is also founded in linear algebra (the dot product). It'd be interesting to hear an expert elaborate on this.
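To make the kernel-trick point concrete, here's a small sketch (my own illustrative example, not from any particular library): for 2-D inputs, the degree-2 polynomial kernel k(x, y) = (x · y)^2 gives the same value as an ordinary dot product computed in an explicit 3-D feature space, so the SVM never needs to build that feature space.

```python
import numpy as np

def phi(v):
    """Explicit feature map for the degree-2 polynomial kernel (no bias term)."""
    x1, x2 = v
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """Kernel computed directly from the dot product in input space."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# Both routes agree; the kernel side never materialises phi(x) or phi(y).
print(poly_kernel(x, y))        # 121.0
print(np.dot(phi(x), phi(y)))   # 121.0
```

That equivalence is the whole trick: swap the dot product for a kernel function and you implicitly work in a higher-dimensional space at no extra cost.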
Mar-31-2016, 09:41:09 GMT