It is well known that the precision of the data, hyperparameters, and internal representations employed in learning systems directly impacts their energy, throughput, and latency. The precision requirements of the training algorithm are also important for systems that learn on-the-fly. Prior work has shown that data and hyperparameters can be quantized heavily without incurring much penalty in classification accuracy compared to floating-point implementations. Such works suffer from two key limitations. First, they assume uniform precision for the classifier and the training algorithm and thus miss the opportunity to reduce precision further. Second, they are purely empirical studies. In this article, we overcome both limitations by deriving analytical lower bounds on the precision requirements of the widely used stochastic gradient descent (SGD) on-line learning algorithm in the specific context of a support vector machine (SVM). Lower bounds on the data precision are derived in terms of the desired classification accuracy and the precision of the hyperparameters used in the classifier. Additionally, lower bounds on the hyperparameter precision in the SGD training algorithm are obtained. These bounds are validated using both synthetic data and the UCI Breast Cancer dataset. Finally, the impact of these precisions on the energy consumption of a fixed-point SVM with on-line training is studied.
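To make the setting concrete, the following is a minimal sketch (not the paper's actual implementation) of on-line SGD training of a linear SVM in which both the input data and the weights are uniformly quantized to a configurable number of fixed-point bits. The function names, bit widths, and dynamic range `[-1, 1]` are illustrative assumptions; the paper's analytical bounds would dictate how small `w_bits` and `x_bits` can be made for a target accuracy.

```python
import numpy as np

def quantize(x, n_bits, x_max=1.0):
    """Uniform fixed-point quantization of x to n_bits over [-x_max, x_max].
    (Illustrative quantizer; the paper's exact number format may differ.)"""
    step = x_max / (2 ** (n_bits - 1))
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def sgd_svm_quantized(X, y, n_epochs=20, lr=0.1, lam=0.01,
                      w_bits=8, x_bits=8):
    """On-line hinge-loss SGD for a linear SVM with quantized data/weights."""
    Xq = quantize(X, x_bits)                  # reduced-precision input data
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for xi, yi in zip(Xq, y):
            if yi * np.dot(w, xi) < 1:        # margin violated: hinge subgradient
                w += lr * (yi * xi - lam * w)
            else:                             # only regularizer is active
                w -= lr * lam * w
            w = quantize(w, w_bits)           # reduced-precision weight update
    return w
```

Separating the weight precision (`w_bits`) from the data precision (`x_bits`) reflects the abstract's point that the classifier and the training algorithm need not share a single uniform precision.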
In his 10-week course, Ng takes an engineering-oriented approach to Machine Learning that concentrates on statistical models. If you are looking for an alternative, Coursera also offers Neural Networks for Machine Learning, a class taught by University of Toronto professor Geoffrey Hinton, a leading figure in the field who approaches it from a cognitive science perspective. His eight-week course sets out to teach students artificial neural networks and how they are used in machine learning, as applied to speech and object recognition, image segmentation, and modeling of language and human motion. Its prerequisites are programming proficiency in Matlab, Octave, or Python, plus knowledge of calculus, linear algebra, and probability theory.