Statistical Learning: Instructional Materials


ML for Business Managers: Build Regression model in R Studio

#artificialintelligence

In this section we will learn what Machine Learning means and the different terms associated with it. You will see some examples so that you understand what machine learning actually is. The section also covers the steps involved in building a machine learning model: not just linear models, but any machine learning model.
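
The course builds its models in R Studio; purely as a rough illustration of the same generic workflow (prepare data, split, fit, evaluate), here is a minimal sketch in Python with scikit-learn. The data and variable names are made up for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# made-up data: advertising spend (in $1000s) vs. sales
spend = np.array([[10], [15], [20], [25], [30], [35], [40], [45]])
sales = np.array([25, 31, 38, 45, 49, 58, 62, 70])

# the generic model-building steps: split, fit, evaluate on held-out data
X_train, X_test, y_train, y_test = train_test_split(
    spend, sales, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```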


Logistic Regression in SPSS for Social Science Research

#artificialintelligence

Logistic Regression in SPSS for Social Science Research: a complete, step-by-step guide to logistic regression in SPSS, including interpretation and visualization. What you'll learn: the only course on Udemy that shows you how to perform, interpret and visualize logistic regression in SPSS, using a real-world example and following the quantitative research process. Follow along with me as I talk you through everything you need to know to become confident in using regression analysis in your quantitative research report, dissertation or thesis. Perfect for those studying social science subjects or anyone who wants to increase their statistical confidence and literacy. Don't fall for other courses that are over-technical, maths-based and heavy on statistics; this course cuts all that out and explains everything in a way that is easy to understand. Course outcomes: on completion of the course you will fully understand that logistic regression is a statistical model used to predict the probability of a certain outcome or event occurring, when that outcome or event is binary (such as pass/fail, true/false, healthy/sick).
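
The course itself works in SPSS's point-and-click interface; purely to illustrate the model described in the last sentence (predicting the probability of a binary outcome), here is a minimal sketch in Python with scikit-learn, using made-up pass/fail data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# made-up example: predict a binary pass/fail outcome from hours studied
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
passed = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# note: scikit-learn applies mild L2 regularization by default,
# unlike a plain maximum-likelihood fit
model = LogisticRegression().fit(hours, passed)
print(model.predict_proba([[4.5]])[:, 1])  # predicted probability of passing
```

The fitted model maps each input to a probability between 0 and 1, which is exactly the kind of quantity the course teaches you to interpret.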


Complete Machine Learning and Data Science: Zero to Mastery

#artificialintelligence

This is a brand-new Machine Learning and Data Science course, launched in January 2020 and updated this month with the latest trends and skills! Become a complete Data Scientist and Machine Learning engineer! Join a live online community of 270,000 engineers and a course taught by industry experts who have actually worked for large companies in places like Silicon Valley and Toronto. Graduates of Andrei's courses are now working at Google, Tesla, Amazon, Apple, IBM, JP Morgan, Facebook, and other top tech companies. Learn Data Science and Machine Learning from scratch, get hired, and have fun along the way with the most modern, up-to-date Data Science course on Udemy (we use the latest version of Python, TensorFlow 2.0 and other libraries).


A Tutorial on Sparse Gaussian Processes and Variational Inference

arXiv.org Machine Learning

Gaussian processes (GPs) provide a framework for Bayesian inference that can offer principled uncertainty estimates for a large range of problems. For example, if we consider regression problems with Gaussian likelihoods, a GP model enjoys a posterior in closed form. However, identifying the posterior GP scales cubically with the number of training examples and requires storing all examples in memory. In order to overcome these obstacles, sparse GPs have been proposed that approximate the true posterior GP with pseudo-training examples. Importantly, the number of pseudo-training examples is user-defined and enables control over computational and memory complexity. In the general case, sparse GPs do not enjoy closed-form solutions and one has to resort to approximate inference. In this context, a convenient choice for approximate inference is variational inference (VI), where the problem of Bayesian inference is cast as an optimization problem, namely to maximize a lower bound of the log marginal likelihood. This paves the way for a powerful and versatile framework, where pseudo-training examples are treated as optimization arguments of the approximate posterior that are jointly identified together with hyperparameters of the generative model (i.e. prior and likelihood). The framework can naturally handle a wide scope of supervised learning problems, ranging from regression with heteroscedastic and non-Gaussian likelihoods to classification problems with discrete labels, as well as multilabel problems. The purpose of this tutorial is to provide access to the basic matter for readers without prior knowledge of either GPs or VI. A proper exposition of the subject also enables access to more recent advances (like importance-weighted VI as well as interdomain, multioutput and deep GPs) that can serve as an inspiration for new research ideas.
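
To make the "lower bound of the log marginal likelihood" concrete, here is a minimal NumPy sketch of one classic such bound for Gaussian-likelihood regression, the collapsed variational bound of Titsias (2009), where Z holds the user-defined pseudo-inputs. The kernel, jitter value and toy data are my own choices for illustration, not the tutorial's.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def collapsed_elbo(X, y, Z, noise=0.1):
    """Titsias' bound: log N(y | 0, Q + s^2 I) - tr(K_nn - Q) / (2 s^2)."""
    n, m = X.shape[0], Z.shape[0]
    Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)      # jitter for numerical stability
    Knm = rbf(X, Z)
    Q = Knm @ np.linalg.solve(Kmm, Knm.T)   # Nystrom approximation of K_nn
    ll = multivariate_normal.logpdf(
        y, mean=np.zeros(n), cov=Q + noise**2 * np.eye(n))
    # diag(K_nn) = 1 everywhere for this unit-variance kernel
    return ll - (1.0 - np.diag(Q)).sum() / (2 * noise**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 7)[:, None]          # 7 pseudo-inputs, user-chosen
print(collapsed_elbo(X, y, Z))              # lower-bounds the log marginal likelihood
```

In practice one maximizes this bound jointly over Z and the kernel and noise hyperparameters with a gradient-based optimizer, which is exactly the optimization problem VI sets up.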


Code Adam Gradient Descent Optimization From Scratch

#artificialintelligence

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A limitation of gradient descent is that a single step size (learning rate) is used for all input variables. Extensions to gradient descent like AdaGrad and RMSProp update the algorithm to use a separate step size for each input variable, but may result in a step size that rapidly decreases to very small values. The Adaptive Moment Estimation algorithm, or Adam for short, is an extension to gradient descent and a natural successor to techniques like AdaGrad and RMSProp. It automatically adapts a learning rate for each input variable of the objective function and further smooths the search by using an exponentially decaying moving average of the gradient to make updates to the variables. In this tutorial, you will discover how to develop gradient descent with the Adam optimization algorithm from scratch.
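
As a preview of what "from scratch" looks like, here is a minimal NumPy sketch of the Adam update just described: per-variable moment estimates with bias correction. The decay rates follow the common defaults from the original paper; the step size and test function are my own choices.

```python
import numpy as np

def adam(grad, x0, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8, n_iter=100):
    """Minimize an objective given its gradient function, using Adam."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # moving average of the gradient (first moment)
    v = np.zeros_like(x)  # moving average of the squared gradient (second moment)
    for t in range(1, n_iter + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g**2
        m_hat = m / (1.0 - beta1**t)  # bias-corrected estimates
        v_hat = v / (1.0 - beta2**t)
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)  # per-variable step size
    return x

# toy objective f(x) = sum(x^2), whose gradient is 2x
print(adam(lambda x: 2.0 * x, x0=[1.5, -0.8], alpha=0.05, n_iter=500))
```

Note how the division by sqrt(v_hat) gives each input variable its own effective step size, while the moving average m smooths the search: the two properties the paragraph attributes to Adam.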


A Unified Framework for Online Trip Destination Prediction

arXiv.org Artificial Intelligence

Trip destination prediction is an area of increasing importance in many applications, such as trip planning, autonomous driving and electric vehicles. Even though this problem could naturally be addressed in an online learning paradigm, where data arrives sequentially, the majority of research has instead considered the offline setting. In this paper, we present a unified framework for trip destination prediction in an online setting, which is suitable for both online training and online prediction. For this purpose, we develop two clustering algorithms and integrate them within two online prediction models for this problem. We investigate the different configurations of clustering algorithms and prediction models on a real-world dataset. Using traditional clustering metrics and accuracy, we demonstrate that both the clustering and the entire framework yield results consistent with the offline setting. Finally, we propose a novel regret metric for evaluating the entire online framework against its offline counterpart. This metric makes it possible to attribute erroneous predictions to either the clustering or the prediction model. Using this metric, we show that the proposed methods converge to a probability distribution resembling the true underlying distribution and enjoy a lower regret than all of the baselines.
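
The paper's clustering algorithms and regret metric are not spelled out in this abstract, so the following is only a generic toy illustration of the online-versus-offline comparison it describes: an online predictor that guesses the most frequent destination seen so far for each origin, measured against the same rule trained offline on the full trip log. All names and data are made up.

```python
from collections import defaultdict, Counter

# hypothetical stream of (origin_cluster, destination_cluster) trips
trips = [("home", "work"), ("home", "work"), ("work", "gym"),
         ("home", "gym"), ("work", "home"), ("home", "work")]

counts = defaultdict(Counter)      # online per-origin destination counts
online_errors = 0
for origin, dest in trips:
    guess = counts[origin].most_common(1)
    if not guess or guess[0][0] != dest:
        online_errors += 1         # online model predicts before seeing the label
    counts[origin][dest] += 1      # then updates

# offline counterpart: train on the full log, then replay the predictions
offline = {o: c.most_common(1)[0][0] for o, c in counts.items()}
offline_errors = sum(offline[o] != d for o, d in trips)

print("regret (online minus offline errors):", online_errors - offline_errors)
```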


Educational Content Linking for Enhancing Learning Need Remediation in MOOCs

arXiv.org Artificial Intelligence

Since MOOCs were introduced in 2011, over 4,000 of them have been offered on the Web on various subjects, serving over 35 million learners. MOOCs have shown the ability to democratize knowledge dissemination and bring the best education in the world to every learner. However, the physical distances between participants, the size of the learner population, and the heterogeneity of the learners' backgrounds make it extremely difficult for instructors to interact with the learners in a timely manner, which adversely affects the learning experience. To address these challenges, in this thesis we propose a framework: educational content linking. By linking and organizing pieces of learning content scattered across various course materials into an easily accessible structure, we hypothesize that this framework can provide learners with guidance and improve content navigation. Since most instruction and knowledge acquisition in MOOCs takes place while learners are surveying course materials, better content navigation may help learners find supporting information to resolve their confusion, and thus improve learning outcome and experience. To support our conjecture, we present end-to-end studies that investigate our framework around two research questions: 1) can manually generated linking improve learning? and 2) can such linking be generated with machine learning methods? For the first question, we built an interface that presents learning materials and visualizes the linking among them simultaneously. We found that the interface enables users to search for desired course materials more efficiently and to retain more concepts more readily. For the second question, we propose an automatic content linking algorithm based on conditional random fields. We demonstrate that automatically generated linking can still lead to better learning, although the magnitude of the improvement over the unlinked interface is smaller.
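
As a rough sketch of what CRF-based linking could look like, here is a toy sequence-labeling setup using the sklearn-crfsuite package: each token of a lecture sentence gets hand-crafted features, and the CRF tags tokens with the course-material section they link to. The features, labels and data are invented for illustration and are not the thesis' actual pipeline.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

# toy task: label each transcript token with the textbook section it links to
def feats(sentence):
    return [{"word": w.lower(), "is_term": w.lower() in {"gradient", "prior"}}
            for w in sentence.split()]

X = [feats("the gradient points downhill"), feats("a prior encodes belief")]
y = [["sec1"] * 4, ["sec2"] * 4]   # one hypothetical section label per token

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict([feats("gradient descent uses the gradient")]))
```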


Machine Learning Towards Intelligent Systems: Applications, Challenges, and Opportunities

arXiv.org Artificial Intelligence

The emergence of, and continued reliance on, the Internet and related technologies has resulted in the generation of large amounts of data that can be made available for analysis. However, humans do not possess the cognitive capability to understand such large amounts of data. Machine learning (ML) provides a mechanism for humans to process large amounts of data, gain insights about the behavior of the data, and make more informed decisions based on the resulting analysis. ML has applications in various fields. This review focuses on some of these fields and applications, such as education, healthcare, network security, banking and finance, and social media. Within these fields there are multiple unique challenges. However, ML can provide solutions to these challenges, as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields and presents some of the previous works in the literature that tackled them. Moreover, it suggests several research opportunities that would benefit from the use of ML to address these challenges.


Residual Energy-Based Models for Text

arXiv.org Machine Learning

Current large-scale auto-regressive language models (Radford et al., 2019; Liu et al., 2018; Graves, 2013) display impressive fluency and can generate convincing text. In this work we start by asking the question: Can the generations of these models be reliably distinguished from real text by statistical discriminators? We find experimentally that the answer is affirmative when we have access to the training data for the model, and guardedly affirmative even if we do not. This suggests that the auto-regressive models can be improved by incorporating the (globally normalized) discriminators into the generative process. We give a formalism for this using the Energy-Based Model framework, and show that it indeed improves the results of the generative models, measured both in terms of perplexity and in terms of human evaluation.
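
The abstract's idea of "incorporating the (globally normalized) discriminators into the generative process" is conventionally formalized as a residual energy-based model; in notation of my choosing (the paper's exact symbols may differ):

$$p_\theta(x) \;=\; \frac{p_{\mathrm{LM}}(x)\, e^{-E_\theta(x)}}{Z_\theta}, \qquad Z_\theta \;=\; \sum_{x'} p_{\mathrm{LM}}(x')\, e^{-E_\theta(x')},$$

where $p_{\mathrm{LM}}$ is the locally normalized auto-regressive model, $E_\theta$ is an energy function derived from the discriminator, and $Z_\theta$ is the global normalizer. With $E_\theta \equiv 0$ the model reduces to the base language model, so the discriminator only has to correct the base model's residual errors.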


Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

arXiv.org Machine Learning

We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that the differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to an $f$-divergence known as the $E_\gamma$-divergence. The $E_\gamma$-divergence, in turn, is equivalent to approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. Moreover, we also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
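
For readers unfamiliar with the $E_\gamma$-divergence, its standard definition and its well-known link to approximate differential privacy (which the abstract relies on) can be stated as follows; this is textbook material, not a restatement of the paper's proofs:

$$E_\gamma(P \,\|\, Q) \;=\; \sup_{A}\bigl(P(A) - \gamma\, Q(A)\bigr), \qquad \gamma \ge 1,$$

and a randomized mechanism $M$ satisfies $(\varepsilon, \delta)$-differential privacy if and only if $E_{e^{\varepsilon}}\bigl(M(D) \,\|\, M(D')\bigr) \le \delta$ for every pair of neighboring datasets $D, D'$. Taking $\gamma = 1$ recovers the total variation distance, which is how the generalization of Dobrushin's contraction coefficient enters.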