
Artificial Intelligence for Social Good: A Survey

arXiv.org Artificial Intelligence

Its impact is drastic and real: YouTube's AI-driven recommendation system would present sports videos for days if one happens to watch a live baseball game on the platform [1]; email writing becomes much faster with machine learning (ML) based auto-completion [2]; many businesses have adopted natural language processing based chatbots as part of their customer services [3]. AI has also greatly advanced human capabilities in complex decision-making processes, ranging from determining how to allocate security resources to protect airports [4] to games such as poker [5] and Go [6]. All such tangible and stunning progress suggests that an "AI summer" is happening. As some put it, "AI is the new electricity" [7]. Meanwhile, in the past decade, an emerging theme in the AI research community is the so-called "AI for social good" (AI4SG): researchers aim at developing AI methods and tools to address problems at the societal level and improve the wellbeing of society.


What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning

arXiv.org Machine Learning

Recent years have witnessed the rising popularity of Natural Language Processing (NLP) and related fields such as Artificial Intelligence (AI) and Machine Learning (ML). Many online courses and resources are available even for those without a strong background in the field. Often the student is curious about a specific topic but does not quite know where to begin studying. To answer the question of "what should one learn first," we apply an embedding-based method to learn prerequisite relations for course concepts in the domain of NLP. We introduce LectureBank, a publicly available dataset containing 1,352 English lecture files collected from university courses, each classified according to an existing taxonomy, together with 208 manually labeled prerequisite relations among topics. The dataset will be useful for educational purposes such as lecture preparation and organization, as well as applications such as reading list generation. Additionally, we experiment with neural graph-based networks and non-neural classifiers to learn these prerequisite relations from our dataset.
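The abstract does not give implementation details, so the following is only a minimal sketch of the general setup it describes: treating prerequisite-relation learning as binary classification over pairs of topic embeddings. The topic vectors, pair labels, feature construction (simple concatenation), and the scikit-learn classifier are all placeholder assumptions, not LectureBank's actual features or models.

```python
# Hypothetical sketch: classify whether topic A is a prerequisite of topic B
# from embedding features. All data below is synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

num_topics, dim = 50, 32
topic_vecs = rng.normal(size=(num_topics, dim))        # stand-in topic embeddings

# Synthetic labeled pairs: (topic_a, topic_b) with a placeholder 0/1 label
pairs = [(rng.integers(num_topics), rng.integers(num_topics)) for _ in range(400)]
labels = rng.integers(0, 2, size=len(pairs))

# Feature for a pair: concatenation of the two topic vectors
X = np.array([np.concatenate([topic_vecs[a], topic_vecs[b]]) for a, b in pairs])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

With real topic embeddings and the manually labeled relations, the same pairwise setup would let a classifier (neural or not) score candidate prerequisite edges between concepts.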


Deep Learning Prerequisites: Logistic Regression in Python

#artificialintelligence

Online Courses Udemy | Deep Learning Prerequisites: Logistic Regression in Python. Data science techniques for professionals and students: learn the theory behind logistic regression and code it in Python. BESTSELLER. Created by Lazy Programmer Inc. English [Auto-generated], Portuguese [Auto-generated], 1 more. Students also bought: Natural Language Processing with Deep Learning in Python; Data Science: Natural Language Processing (NLP) in Python; Deep Learning: Advanced Computer Vision (GANs, SSD, +More!); Unsupervised Machine Learning Hidden Markov Models in Python; Modern Deep Learning in Python.
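For readers wondering what "logistic regression in Python" amounts to in practice, here is a generic, minimal sketch of binary logistic regression trained with batch gradient descent on toy data. It is not material from the course; the learning rate, iteration count, and synthetic data are arbitrary choices for illustration.

```python
# Minimal logistic regression via batch gradient descent (NumPy only).
# Illustrative sketch, not course material.
import numpy as np

rng = np.random.default_rng(1)

# Toy binary-classification data: two Gaussian blobs in 2D
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print("training accuracy:", acc)
```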


15 Best Machine Learning Courses in 2019 – MLAIT

#artificialintelligence

Below are the 15 best machine learning courses to accelerate your ML journey this year. The holy grail of online machine learning courses, Machine Learning by Stanford is considered by many to be the best machine learning course. The course is prepared and maintained by Andrew Ng, a pioneering machine learning scientist who has led ML research projects at both Google and the Chinese giant Baidu. Although the course requires a paid subscription, you can ask for financial aid if you're a student. This online machine learning course from DataCamp is the best machine learning course with a primary emphasis on statistics, the de facto requirement for effective data science projects.


Time-varying Learning and Content Analytics via Sparse Factor Analysis

arXiv.org Machine Learning

We propose SPARFA-Trace, a new machine learning-based framework for time-varying learning and content analytics for education applications. We develop a novel message passing-based, blind, approximate Kalman filter for sparse factor analysis (SPARFA) that jointly (i) traces learner concept knowledge over time, (ii) analyzes learner concept knowledge state transitions (induced by interacting with learning resources, such as textbook sections, lecture videos, etc., or by the forgetting effect), and (iii) estimates the content organization and intrinsic difficulty of the assessment questions. These quantities are estimated solely from binary-valued (correct/incorrect) graded learner response data and a summary of the specific actions each learner performs (e.g., answering a question or studying a learning resource) at each time instance. Experimental results on two online course datasets demonstrate that SPARFA-Trace is capable of tracing each learner's concept knowledge evolution over time, as well as analyzing the quality and content organization of learning resources, the question-concept associations, and the intrinsic question difficulties. Moreover, we show that SPARFA-Trace achieves comparable or better performance in predicting unobserved learner responses than existing collaborative filtering and knowledge tracing approaches for personalized education.
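As a rough illustration of the kind of observation model underlying SPARFA-style analytics, the sketch below simulates graded responses as P(correct) = sigmoid(w_q · c_t + mu_q), with sparse nonnegative question-concept loadings w_q, time-varying learner concept knowledge c_t, and an intrinsic difficulty offset mu_q. This is only a forward simulation under assumed names and dimensions; the actual SPARFA-Trace estimator (blind approximate Kalman filtering via message passing) is considerably more involved and is not reproduced here.

```python
# Illustrative SPARFA-style observation model (not the authors' code).
# All sizes, priors, and drift parameters are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(2)
num_questions, num_concepts, T = 20, 4, 10

W = rng.exponential(1.0, (num_questions, num_concepts))
W[rng.random(W.shape) < 0.7] = 0.0             # sparse, nonnegative question-concept loadings
mu = -rng.normal(1.0, 0.5, num_questions)      # intrinsic difficulty offsets

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

c = np.zeros(num_concepts)                     # initial learner concept knowledge
for t in range(T):
    c = c + rng.normal(0.1, 0.05, num_concepts)    # knowledge drift from studying over time
    q = rng.integers(num_questions)                # question answered at time t
    p_correct = sigmoid(W[q] @ c + mu[q])
    graded = rng.random() < p_correct              # simulated binary (correct/incorrect) response
    print(f"t={t} q={q} P(correct)={p_correct:.2f} observed={int(graded)}")
```

Inverting this generative picture, i.e. estimating W, mu, and the trajectory of c from observed binary responses alone, is the inference problem that SPARFA-Trace addresses.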