Neural Educational Recommendation Engine (NERE)

arXiv.org Machine Learning

Quizlet is the most popular online learning tool in the United States, used by more than two-thirds of high school students and half of college students. With over 95% of Quizlet users reporting improved grades as a result, the platform has become the de facto study tool in millions of classrooms. In this paper, we explore the task of recommending suitable content for a student to study, given their prior interests as well as what their peers are studying. We propose a novel approach, the Neural Educational Recommendation Engine (NERE), which recommends educational content by leveraging student behaviors rather than ratings. We have found that this approach better captures social factors that are more aligned with learning. NERE is based on a recurrent neural network that combines collaborative and content-based approaches to recommendation, and takes into account a particular student's speed, mastery, and experience to recommend the appropriate task. We train NERE by jointly learning the user embeddings and content embeddings, and predict the content embedding at the final timestamp. We also develop a confidence estimator for our neural network, a crucial requirement for productionizing this model. We apply NERE to Quizlet's proprietary dataset and present our results. We achieve an R^2 score of 0.81 in the content embedding space and a recall of 54% among the 100 nearest neighbors, far exceeding the recall@100 of 12% that a standard matrix-factorization approach provides. We conclude with a discussion of how NERE will be deployed, and position our work as one of the first educational recommender systems for the K-12 space.
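The abstract describes the architecture only at a high level. The following is a minimal sketch, assuming a GRU and a mean-squared-error objective in embedding space (the recurrent cell type, all names, and all dimensions are our assumptions, not the authors' code), of how jointly learned user and content embeddings can feed a recurrent network that predicts the content embedding at the final timestamp:

```python
# Sketch of the NERE idea (not the authors' implementation): a recurrent
# network reads a student's sequence of studied-content embeddings,
# conditioned on a learned user embedding, and regresses the content
# embedding expected at the final timestamp.
import torch
import torch.nn as nn

class NERESketch(nn.Module):
    def __init__(self, num_users, num_items, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, embed_dim)   # jointly learned
        self.item_emb = nn.Embedding(num_items, embed_dim)   # jointly learned
        self.rnn = nn.GRU(embed_dim * 2, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, embed_dim)         # predicted embedding

    def forward(self, user_ids, item_seq):
        # item_seq: (batch, seq_len) indices of previously studied content
        items = self.item_emb(item_seq)                       # (B, T, D)
        users = self.user_emb(user_ids).unsqueeze(1)          # (B, 1, D)
        users = users.expand(-1, items.size(1), -1)           # tile over time
        out, _ = self.rnn(torch.cat([items, users], dim=-1))
        return self.head(out[:, -1])                          # final-step prediction

# Training target: the embedding of the item actually studied next; at
# serving time one would return the k nearest item embeddings to the
# prediction, which is what the recall@100 metric above measures.
model = NERESketch(num_users=1000, num_items=5000)
pred = model(torch.tensor([3, 7]), torch.randint(0, 5000, (2, 10)))
loss = nn.functional.mse_loss(pred, model.item_emb(torch.tensor([42, 99])))
```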


AI Nanodegree Program Syllabus: Term 2 (Deep Learning), In Depth

#artificialintelligence

Here at Udacity, we are tremendously excited to announce the kick-off of the second term of our Artificial Intelligence Nanodegree program. We are excited because we are able to provide a depth of education commensurate with university education; because we are bridging the gap between universities and industry by providing hands-on projects and partnering with top companies in the field; and, last but certainly not least, because we are able to bring this education to many more people across the globe, at a cost that makes a top-notch AI education realistic for all aspiring learners. During the first term, you enjoyed learning about Game Playing Agents, Simulated Annealing, Constraint Satisfaction, Logic and Planning, and Probabilistic AI from some of the biggest names in the field: Sebastian Thrun, Peter Norvig, and Thad Starner. Term 2 focuses on one of the cutting-edge advancements of AI: Deep Learning. In this term, you will learn the foundations of neural networks, understand how to train them with techniques such as gradient descent and backpropagation, and learn the different types of architectures that make neural networks work for a variety of applications.
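As a concrete illustration of the training techniques mentioned above, here is a minimal sketch, assuming a single-layer logistic model on toy data (all names and data are illustrative, not course material), of gradient descent where the gradient is obtained by backpropagating through the loss:

```python
import numpy as np

# Minimal sketch: fit p = sigmoid(x @ w + b) on synthetic labels with
# gradient descent; gradients are derived by backpropagation through the
# cross-entropy loss, for which dLoss/dlogit = (p - y) / n.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = (x @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # synthetic labels

w, b, lr = np.zeros(3), 0.0, 0.1
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # forward pass
    grad = p - y                             # backprop: gradient at the logit
    w -= lr * x.T @ grad / len(x)            # gradient-descent step on weights
    b -= lr * grad.mean()                    # gradient-descent step on bias
```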


Coursera Coupons: Min 10% Off, 100% Free Courses, Student Offer

#artificialintelligence

Learn machine learning from a Stanford University professor and earn a certificate to future-proof your career. Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.


Data Mining with Rattle Udemy

@machinelearnbot

Data Mining with Rattle is a unique course that teaches both the concepts of data mining and the hands-on use of a popular, contemporary data mining software tool, "Data Miner," also known as the 'Rattle' package in R. Rattle is a popular GUI-based software tool that 'fits on top of' R. The course focuses on the life-cycle issues, processes, and tasks involved in supporting a 'cradle-to-grave' data mining project. These include: data exploration and visualization; testing data for random-variable family characteristics and distributional assumptions; transforming data by scale or by data type; performing cluster analyses; creating, analyzing, and interpreting association rules; and creating and evaluating predictive models that may utilize regression, generalized linear models (GLMs), decision trees, recursive partitioning, random forests, boosting, and/or support vector machine (SVM) paradigms. It is both a conceptual and a practical course: it teaches the ideas of data mining and provides ample demonstrations of conducting data mining tasks using the Rattle R package.
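Rattle itself is an R package; purely as a language-neutral illustration of the workflow the course lists (explore, transform by scale, cluster, then fit and evaluate a predictive model), here is an analogous sketch in Python with scikit-learn, using a standard toy dataset:

```python
# Illustrative stand-in for the Rattle workflow, not course material.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
print(X.mean(axis=0), X.std(axis=0))             # data exploration

X_scaled = StandardScaler().fit_transform(X)     # transform by scale
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X_scaled)  # clustering

# predictive modeling: a decision tree, evaluated on held-out data
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```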


The Many Faces of Exponential Weights in Online Learning

arXiv.org Machine Learning

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods. Here we explore the alternative approach of putting exponential weights (EW) first. We show that many standard methods and their regret bounds then follow as a special case by plugging in suitable surrogate losses and playing the EW posterior mean. For instance, we easily recover Online Gradient Descent by using EW with a Gaussian prior on linearized losses, and, more generally, all instances of Online Mirror Descent based on regular Bregman divergences also correspond to EW with a prior that depends on the mirror map. Furthermore, appropriate quadratic surrogate losses naturally give rise to Online Gradient Descent for strongly convex losses and to Online Newton Step. We further interpret several recent adaptive methods (iProd, Squint, and a variation of Coin Betting for experts) as a series of closely related reductions to exp-concave surrogate losses that are then handled by Exponential Weights. Finally, a benefit of our EW interpretation is that it opens up the possibility of sampling from the EW posterior distribution instead of playing the mean. As already observed by Bubeck and Eldan, this recovers the best-known rate in Online Bandit Linear Optimization.
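To make the central reduction concrete, here is a short worked derivation (in our own notation, following the standard EW setup; it is a sketch of the claim above, not the paper's exposition) of how playing the EW posterior mean under a Gaussian prior on linearized losses recovers Online Gradient Descent:

```latex
% Sketch: EW posterior mean with a Gaussian prior on linearized losses.
With prior $\pi = \mathcal{N}(w_1, \sigma^2 I)$, learning rate $\eta$, and
linearized losses $\ell_s(u) = \langle g_s, u \rangle$, the EW posterior at
round $t$ is
\[
  d\pi_t(u) \;\propto\; \exp\Big(-\eta \sum_{s<t} \langle g_s, u \rangle\Big)\, d\pi(u)
  \;\propto\; \exp\Big(-\tfrac{1}{2\sigma^2}\,
    \big\| u - \big(w_1 - \eta\sigma^2 \textstyle\sum_{s<t} g_s\big) \big\|^2\Big),
\]
by completing the square, so playing the posterior mean gives
\[
  w_t \;=\; w_1 - \eta\sigma^2 \sum_{s<t} g_s,
\]
i.e.\ Online Gradient Descent in its lazy (dual-averaging) form with step
size $\eta\sigma^2$.
```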