Artificial Intelligence (AI) technology is increasingly prevalent in our everyday lives. It is used in industries ranging from gaming, journalism, and media to finance, as well as in state-of-the-art research fields such as robotics, medical diagnosis, and quantum science. In this course you'll learn the basics and applications of AI, including machine learning, probabilistic reasoning, robotics, computer vision, and natural language processing.
When you ask Siri for directions, peruse Netflix's recommendations, or get a fraud alert from your bank, these interactions are driven by computer systems that use large amounts of data to predict your needs. The market is only going to grow: the research firm IDC predicts that by 2020, AI will help drive worldwide revenues to over $47 billion, up from $8 billion in 2016. Still, Coursera co-founder Andrew Ng, adjunct professor of computer science, says fears that AI will replace humans are misplaced: "Despite all the hype and excitement about AI, it's still extremely limited today relative to what human intelligence is." Ng, who is chief scientist at Baidu Research, spoke to the Graduate School of Business community as part of a series presented by the Stanford MSx Program, which offers experienced leaders a one-year, full-time learning experience.
WWTS (What Would Turing Say?) Turing's Imitation Game was a brilliant proposal, and Turing himself was heavily influenced by the World War II-era imitation "game" on which it was modeled. If Turing were alive today, what sort of test might he propose? In his original formulation, if a machine could fool interrogators as often as a typical man could, one would have to conclude that the machine, as programmed, was as intelligent as a person (well, as intelligent as men). As Judy Genova (1994) puts it, Turing's originally proposed game involves not a question of species but one of gender. The current version, in which the interrogator is told to distinguish a person from a machine, is (1) much more difficult for a program to pass, and (2) the added difficulties are largely irrelevant to intelligence. The waters can be muddied further still by programs that appear to do well through various tricks, such as having the interviewee program claim to be a 13-year-old Ukrainian who does not speak English well (University of Reading 2014), so that all of its wrong or bizarre responses are excused as cultural, age, or language issues.
Tomar, Gaurav Singh (Carnegie Mellon University) | Sankaranarayanan, Sreecharan (Carnegie Mellon University) | Rosé, Carolyn Penstein (Carnegie Mellon University)
Artificially intelligent conversational agents have been demonstrated to positively impact team-based learning in classrooms, and they hold even greater potential for impact in the now widespread Massive Open Online Courses (MOOCs) if certain challenges can be overcome. These challenges include team formation as well as the coordination and management of group processes in teams whose members are distributed in both time and space. Our work begins with Bazaar, an architecture for orchestrating conversational-agent support for group learning that has facilitated numerous successful studies of learning in the past, including some early investigations in MOOC contexts. In this paper, we briefly describe our experience in designing, developing, and deploying agent-supported collaborative learning activities in three different MOOCs across three iterations. Findings from this iterative design process provide an empirical foundation for a reusable framework for facilitating similar activities in future MOOCs.
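The abstract describes Bazaar only at the architectural level. As a rough, language-neutral illustration of the orchestration pattern, here is a minimal Python sketch of an agent that watches a group chat and fires scripted facilitation moves when trigger conditions over the transcript are met; all class and method names are hypothetical and do not reflect Bazaar's actual API.

```python
# Hypothetical sketch of a conversational-agent orchestrator in the spirit
# of Bazaar: the agent watches a group chat and posts scripted facilitation
# prompts when trigger conditions over the transcript are met. All names
# here are illustrative, not Bazaar's actual API.
import time


class FacilitationMove:
    def __init__(self, name, trigger, prompt):
        self.name = name        # label for logging
        self.trigger = trigger  # predicate over the transcript so far
        self.prompt = prompt    # text the agent posts when triggered
        self.fired = False      # each move fires at most once


class GroupChatAgent:
    def __init__(self, moves):
        self.moves = moves
        self.transcript = []    # (timestamp, speaker, text) tuples

    def on_message(self, speaker, text):
        """Record an incoming chat message, then check every pending move."""
        self.transcript.append((time.time(), speaker, text))
        for move in self.moves:
            if not move.fired and move.trigger(self.transcript):
                move.fired = True
                self.post(move.prompt)

    def post(self, text):
        print(f"[agent] {text}")  # stand-in for sending to the chat room


def everyone_has_spoken(transcript, team=("alice", "bob", "carol")):
    """Trigger once every team member has contributed at least once."""
    return set(team) <= {speaker for _, speaker, _ in transcript}


agent = GroupChatAgent([
    FacilitationMove(
        name="revoice",
        trigger=everyone_has_spoken,
        prompt="Everyone has weighed in. Can someone summarize the "
               "group's current answer before we move on?",
    ),
])
agent.on_message("alice", "I think the answer is B.")
agent.on_message("bob", "Agreed, because of the second constraint.")
agent.on_message("carol", "B works for me too.")
```

The design point the sketch tries to capture is that facilitation behavior is decoupled from the chat transport: moves are declarative (trigger, prompt) pairs, so the same agent logic can be redeployed across courses, in line with the reusability goal the paper describes.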
Lan, Andrew S., Vats, Divyanshu, Waters, Andrew E., Baraniuk, Richard G.
While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open-response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors. MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open-response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
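To make the three-step pipeline concrete, here is a hedged Python sketch of the cluster-then-grade idea: convert each solution to numerical features, cluster the features with a generic algorithm, and propagate one instructor-provided grade per cluster. The featurization below, which evaluates each solution's final expression at random probe points, is an illustrative stand-in, not the paper's exact feature construction.

```python
# A hedged sketch of the cluster-then-grade idea behind MLP: featurize each
# submitted solution, cluster the feature vectors with a generic algorithm,
# then propagate one instructor-assigned grade to every solution in the
# cluster. Evaluating each solution's final expression at random probe
# points is an illustrative stand-in for the paper's feature construction.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
PROBE_POINTS = rng.uniform(-2.0, 2.0, size=8)  # random x values for probing


def featurize(expr):
    """Map a one-variable expression string to its values at probe points."""
    return np.array([eval(expr, {"x": x}) for x in PROBE_POINTS])


# Student answers to "expand (x + 1)^2": algebraically equivalent answers
# produce identical feature vectors, so they land in the same cluster.
solutions = [
    "(x + 1)**2", "x**2 + 2*x + 1",  # correct, written two ways
    "x**2 + 1", "x**2 + 1",          # a common error (cross term dropped)
    "x**2 + 2*x",                    # a different error (constant dropped)
]
X = np.stack([featurize(s) for s in solutions])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# The instructor grades one representative per cluster; every other
# solution in that cluster inherits the same (partial-credit) grade.
instructor_grade = {labels[0]: 1.0, labels[2]: 0.5, labels[4]: 0.25}
for sol, lab in zip(solutions, labels):
    print(f"{sol:18s} -> cluster {lab}, grade {instructor_grade[lab]}")
```

Because algebraically equivalent expressions evaluate identically at the probe points, correct solutions written in different forms collapse into one cluster, which is what lets a single instructor grade cover many submissions.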
Maddison, Chris J., Tarlow, Daniel
We study the problem of building generative models of natural source code (NSC), that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties. First, they incorporate both sequential and hierarchical structure. Second, they learn a distributed representation of source code elements. Third, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on which variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, as measured by the probability of generating test programs.
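As a baseline for the sequential-structure ingredient alone, the following minimal sketch trains a bigram language model over Python tokens by counting, then scores a held-out program by log-probability, the same kind of measure the abstract mentions. The paper's models go well beyond this, adding hierarchical syntax-tree structure, learned distributed representations, and compiler-provided scope information; the corpus here is a toy placeholder.

```python
# A minimal baseline capturing only sequential structure: an add-alpha
# smoothed bigram model over Python tokens, trained by counting and scored
# by the log-probability it assigns to a held-out program. The corpus is a
# toy placeholder.
import math
from collections import Counter, defaultdict
from io import BytesIO
from tokenize import tokenize


def code_tokens(src):
    """Lex Python source into token strings (empty tokens become markers)."""
    return [t.string or "<end>"
            for t in tokenize(BytesIO(src.encode()).readline)]


corpus = [
    "def add(a, b):\n    return a + b\n",
    "def mul(a, b):\n    return a * b\n",
]
test_program = "def sub(a, b):\n    return a - b\n"

counts = defaultdict(Counter)  # counts[prev][cur] = bigram frequency
vocab = set()
for src in corpus:
    toks = code_tokens(src)
    vocab.update(toks)
    for prev, cur in zip(toks, toks[1:]):
        counts[prev][cur] += 1


def log_prob(src, alpha=1.0):
    """Smoothed bigram log-probability of generating a program."""
    toks = code_tokens(src)
    V = len(vocab) + 1  # +1 slot for unseen tokens
    lp = 0.0
    for prev, cur in zip(toks, toks[1:]):
        c = counts[prev]
        lp += math.log((c[cur] + alpha) / (sum(c.values()) + alpha * V))
    return lp


print(f"log p(test program) = {log_prob(test_program):.2f}")
```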
Singh, Rishabh, Gulwani, Sumit, Solar-Lezama, Armando
We present a new method for automatically providing feedback for introductory programming problems. In order to use this method, we need a reference implementation of the assignment and an error model consisting of potential corrections to errors that students might make. Using this information, the system automatically derives minimal corrections to students' incorrect solutions, providing them with a quantifiable measure of exactly how incorrect a given solution was, as well as feedback about what they did wrong. We introduce a simple language for describing error models in terms of correction rules, and formally define a rule-directed translation strategy that reduces the problem of finding minimal corrections in an incorrect program to the problem of synthesizing a correct program from a sketch. We have evaluated our system on thousands of real student attempts obtained from 6.00 and 6.00x. Our results show that relatively simple error models can correct on average 65% of all incorrect submissions.
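The following toy sketch illustrates the error-model idea under heavy simplification: each correction rule is a textual rewrite, and a brute-force search looks for the smallest set of rewrites that makes the student's program agree with the reference on test inputs. The paper instead reduces this search to sketch-based program synthesis, which scales far better; the rules, programs, and function names below are hypothetical examples.

```python
# Toy illustration of the error-model idea: each correction rule is a
# (buggy fragment, corrected fragment) rewrite, and a brute-force search
# finds the smallest set of rewrites that makes the student's program agree
# with the reference implementation on test inputs. The paper reduces this
# search to sketch-based program synthesis; everything below is hypothetical.
from itertools import combinations

RULES = [
    ("range(1, n)", "range(1, n + 1)"),  # off-by-one in loop bound
    ("i - 1", "i + 1"),                  # wrong increment direction
    (" < ", " <= "),                     # strict vs. non-strict comparison
]


def reference(n):
    """Instructor's implementation: 1 + 2 + ... + n."""
    return sum(range(1, n + 1))


def matches_reference(src, tests=range(1, 6)):
    """Run the candidate program and compare it with the reference."""
    env = {}
    exec(src, env)  # defines student_sum
    return all(env["student_sum"](n) == reference(n) for n in tests)


def minimal_correction(src):
    """Try rule subsets in order of size; return the first that works."""
    for size in range(len(RULES) + 1):
        for subset in combinations(RULES, size):
            fixed = src
            for wrong, right in subset:
                fixed = fixed.replace(wrong, right)
            if matches_reference(fixed):
                return size, fixed
    return None


student = "def student_sum(n):\n    return sum(range(1, n))\n"
size, fixed = minimal_correction(student)
print(f"{size} correction(s) needed:\n{fixed}")
```

The number of rewrites applied gives the "quantifiable measure of exactly how incorrect" a submission is, and the specific rules that fired tell the student what they did wrong.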
Hoffman, Matthew, Bach, Francis R., Blei, David M.
We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good as or better than those found with batch VB, and in a fraction of the time.
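For concreteness, here is a condensed Python sketch of the stochastic update the abstract describes: fit per-document variational parameters on a minibatch, form the minibatch's noisy estimate of the topic parameters lambda, and blend it in along the natural gradient with step size rho_t = (tau0 + t)^(-kappa). The corpus size, vocabulary, and hyperparameters below are toy placeholders, not the paper's experimental settings.

```python
# Condensed sketch of online variational Bayes for LDA: fit per-document
# variational parameters on a minibatch, form the minibatch's estimate of
# the topic parameters lambda, and blend it in with step size
# rho_t = (tau0 + t)^(-kappa). Sizes and hyperparameters are toy values.
import numpy as np
from scipy.special import digamma

K, V, D = 5, 1000, 100_000  # topics, vocabulary size, (assumed) corpus size
alpha, eta = 0.1, 0.01      # Dirichlet hyperparameters
tau0, kappa = 1.0, 0.7      # learning-rate schedule
rng = np.random.default_rng(0)
lam = rng.gamma(100.0, 0.01, size=(K, V))  # global topic parameters


def e_step(word_ids, word_cts, lam, n_iter=50):
    """Fit gamma (doc-topic) and phi (token-topic) for one document."""
    gamma = np.ones(K)
    exp_elog_beta = np.exp(digamma(lam[:, word_ids])
                           - digamma(lam.sum(1))[:, None])  # K x n_words
    for _ in range(n_iter):
        exp_elog_theta = np.exp(digamma(gamma) - digamma(gamma.sum()))
        phi = exp_elog_theta[:, None] * exp_elog_beta       # K x n_words
        phi /= phi.sum(0, keepdims=True)
        gamma = alpha + phi @ word_cts
    return gamma, phi


def online_update(minibatch, lam, t):
    """One stochastic natural-gradient step on lambda."""
    rho = (tau0 + t) ** (-kappa)
    lam_hat = np.full_like(lam, eta)
    for word_ids, word_cts in minibatch:
        _, phi = e_step(word_ids, word_cts, lam)
        lam_hat[:, word_ids] += (D / len(minibatch)) * phi * word_cts
    return (1.0 - rho) * lam + rho * lam_hat


# Toy minibatch of two documents, each given as (word ids, word counts).
doc1 = (np.array([3, 17, 512]), np.array([2.0, 1.0, 4.0]))
doc2 = (np.array([3, 999]), np.array([1.0, 3.0]))
lam = online_update([doc1, doc2], lam, t=0)
```

Because each update touches only the words that occur in the minibatch, a single pass over a large collection (such as the 3.3M Wikipedia articles in the paper's experiment) suffices, which is the source of the speedup over batch VB.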