
Collaborating Authors: Massachusetts Institute of Technology


Expressive Recommender Systems through Normalized Nonnegative Models

AAAI Conferences

We introduce normalized nonnegative models (NNMs) for explorative data analysis. NNMs are partial convexifications of models from probability theory. We demonstrate their value using item recommendation as an example. We show that NNM-based recommender systems satisfy three criteria that all recommender systems should ideally satisfy: high predictive power, computational tractability, and expressive representations of users and items. Expressive user and item representations are important in practice to succinctly summarize the pool of customers and the pool of items. In NNMs, user representations are expressive because each user's preference can be regarded as a normalized mixture of the preferences of stereotypical users. The interpretability of item and user representations allows us to arrange properties of items (e.g., genres of movies or topics of documents) or users (e.g., personality traits) hierarchically.
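As a rough sketch of the mixture idea described in this abstract (not the paper's actual model or data; the stereotypes, weights, and item counts below are invented), a user's preference distribution can be formed as a convex combination of stereotype distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_stereotypes = 6, 3
# Each row: one stereotypical user's probability distribution over items.
stereotypes = rng.dirichlet(np.ones(n_items), size=n_stereotypes)

# A user's mixture weights over stereotypes (nonnegative, summing to 1).
weights = np.array([0.6, 0.3, 0.1])

# Convex combination of distributions is again a distribution.
user_pref = weights @ stereotypes
assert np.isclose(user_pref.sum(), 1.0)

# Recommend the item the mixed distribution ranks highest.
print(int(np.argmax(user_pref)))
```

Because both the weights and the stereotype rows are normalized, the user representation stays on the probability simplex, which is what makes the weights readable as "how much of each stereotype this user is."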



Turing++ Questions: A Test for the Science of (Human) Intelligence

AI Magazine

It is becoming increasingly clear that there is an infinite number of definitions of intelligence. Machines that are intelligent in different narrow ways have been built since the 1950s. We are now entering a golden age for the engineering of intelligence and the development of many different kinds of intelligent machines. At the same time, there is widespread interest among scientists in understanding a specific and well-defined form of intelligence: human intelligence. For this reason we propose a stronger version of the original Turing test. In particular, we describe here an open-ended set of Turing++ Questions that we are developing at the Center for Brains, Minds and Machines at MIT, that is, questions about an image. Questions may range from what is there to who is there, what is this person doing, what is this girl thinking about this boy, and so on. The plural in Questions is to emphasize that there are many different intelligent abilities in humans that have to be characterized, and possibly replicated in a machine, from basic visual recognition of objects, to the identification of faces, to gauging emotions, to social intelligence, to language, and much more. The term Turing++ is to emphasize that our goal is understanding human intelligence at all of Marr's levels, from the level of the computations to the level of the underlying circuits. Answers to the Turing++ Questions should thus be given in terms of models that match human behavior and human physiology, the mind and the brain. These requirements are thus well beyond the original Turing test. A whole scientific field, which we call the science of (human) intelligence, is required to make progress in answering our Turing++ Questions. It is connected to neuroscience and to the engineering of intelligence but also separate from both of them.


Combining Human and Artificial Intelligence for Analyzing Health Data

AAAI Conferences

Artificial intelligence (AI) systems are increasingly capable of analyzing health data such as medical images (e.g., skin lesions) and test results (e.g., ECGs). However, because it can be difficult to determine when an AI-generated diagnosis should be trusted and acted upon, especially when it conflicts with a human-generated one, many AI systems are not utilized effectively, if at all. Similarly, advances in information technology have made it possible to quickly solicit multiple diagnoses from diverse groups of people throughout the world, but these technologies are underutilized because it is difficult to determine which of multiple diagnoses should be trusted and acted upon. Here, I propose a method of soliciting and combining multiple diagnoses that will harness the collective intelligence of both human and artificial intelligence for analyzing health data.
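The abstract does not specify how diagnoses are combined; as one hedged illustration (not the paper's method), a common baseline is a weighted vote in which each source, human or AI, contributes the log-odds of its estimated accuracy. All source names and accuracy figures below are invented:

```python
from collections import defaultdict
from math import log

def combine_diagnoses(diagnoses, accuracies):
    """Weighted vote: each source's vote counts in proportion to the
    log-odds of its estimated accuracy, treating human and AI
    diagnosticians identically."""
    scores = defaultdict(float)
    for source, label in diagnoses.items():
        p = accuracies[source]
        scores[label] += log(p / (1 - p))  # more accurate -> larger weight
    return max(scores, key=scores.get)

# Two humans and one AI system weigh in; the more reliable sources prevail.
diagnoses  = {"dr_a": "melanoma", "dr_b": "benign", "ai": "melanoma"}
accuracies = {"dr_a": 0.85, "dr_b": 0.70, "ai": 0.90}
print(combine_diagnoses(diagnoses, accuracies))  # melanoma
```

The log-odds weighting is the Bayes-optimal rule when sources err independently with known accuracies, which is why it is a standard reference point for this kind of aggregation.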


Towards Interpretable Explanations for Transfer Learning in Sequential Tasks

AAAI Conferences

People increasingly rely on machine learning (ML) to make intelligent decisions. However, the ML results are often difficult to interpret and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user's ability to trust and use the system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will build the foundation for better human-machine systems in applications for elder care, education, and health care.



Research Priorities for Robust and Beneficial Artificial Intelligence

AI Magazine

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.


A Data-Driven Approach for Computationally Modeling Players' Avatar Customization Behaviors

AAAI Conferences

Avatar customization systems enable players to represent themselves virtually in many ways. Research has shown that players exhibit different preferences and motivations in how they customize their avatars. In this paper, we present a data-driven analytical approach to modeling player behavioral patterns exhibited during the avatar customization process. We used our data mining tool AIRvatar to analyze telemetry data obtained from 190 players using an avatar creator of our own design. Using non-negative matrix factorization (NMF) and N-gram models, we demonstrate how our approach computationally models behavioral patterns exhibited by players, such as "regular shopping," "engaged shopping," or "bored browsing." Our models obtained significant effect sizes (0.12 <= R^2 <= 0.54) when validated with multiple linear regressions for players' time spent engaging in activities within the avatar creator. The NMF model offered high performance and ease of interpretation compared to the control models.
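To illustrate the kind of factorization involved, here is a generic multiplicative-update NMF (in the style of Lee and Seung) applied to an invented player-by-action count matrix; this is not the paper's AIRvatar data or code, and the two recovered components merely stand in for behavior patterns like "shopping" vs. "browsing":

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, rank, iters=1000, eps=1e-9):
    """Factor a nonnegative matrix V (players x actions) as W @ H
    with W, H >= 0, via multiplicative updates."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update action loadings
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update player weights
    return W, H

# Rows: players; columns: counts of actions in an avatar creator
# (e.g., item previews, purchases, idle periods) -- purely synthetic.
V = np.array([[9., 9., 0.],
              [9., 9., 0.],
              [0., 0., 9.],
              [0., 0., 9.]])

W, H = nmf(V, rank=2)
print(np.round(W @ H, 1))  # reconstruction close to V
assert np.linalg.norm(V - W @ H) < 0.05 * np.linalg.norm(V)
```

Each row of H is a nonnegative "behavior" profile over actions, and each row of W says how strongly a player exhibits each behavior, which is what makes NMF components easy to read off compared to unconstrained factorizations.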


Learning Supervised Topic Models from Crowds

AAAI Conferences

The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, which are prone to ambiguity and noise and often involve high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this paper, we propose a supervised topic model that accounts for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.


Exploring the Use of Role Model Avatars in Educational Games

AAAI Conferences

Research has indicated that role models have the potential to boost academic performance. In this paper, we describe an experiment exploring role models as game avatars in an educational game. Of particular interest are the effects of these avatars on players' performance and engagement. Participants were randomly assigned to one of two conditions: (a) a user-selected role model avatar, or (b) a user-selected shape avatar. Results suggest that role model avatars are heavily preferred. African American participants had higher game affect in the role model condition. South Asian participants had higher self-reported engagement in the role model condition. Participants who completed at most one level had higher performance in the role model condition. General trends suggest that the role model's gender and racial closeness to the player could play a role in player performance and self-reported engagement, consistent with the social science literature.