If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We present a brief overview of learning dynamics for anti-coordination in ad hoc scenarios. Specifically, we consider multi-armed bandit algorithms, reinforcement learning, and symmetric strategies for the repeated resource allocation game. In a multi-agent system with a dynamic population, where every agent is able to learn, the anti-coordination problem exhibits unique challenges. It is therefore essential for the success of a joint plan that the agents learn their optimal behavior quickly and robustly. In this work we focus on convergence rate, efficiency, and fairness of the final outcome.
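The anti-coordination setting can be illustrated with a minimal sketch: independent epsilon-greedy bandit learners in a repeated resource allocation game where an agent is rewarded only if it occupies a resource alone. This is an illustrative toy, not any of the specific algorithms surveyed; all names and parameter values are assumptions.

```python
import random

def anti_coordination_bandits(n_agents=2, n_arms=2, rounds=2000,
                              eps=0.1, lr=0.1, seed=0):
    """Independent epsilon-greedy learners in a repeated resource
    allocation game: an agent earns reward 1 only when it is the
    sole agent on its chosen arm (resource)."""
    random.seed(seed)
    Q = [[0.0] * n_arms for _ in range(n_agents)]  # per-agent value estimates
    total = 0.0
    for _ in range(rounds):
        # each agent explores with probability eps, otherwise acts greedily
        choices = [random.randrange(n_arms) if random.random() < eps
                   else max(range(n_arms), key=Q[a].__getitem__)
                   for a in range(n_agents)]
        for a, arm in enumerate(choices):
            reward = 1.0 if choices.count(arm) == 1 else 0.0  # alone -> 1
            Q[a][arm] += lr * (reward - Q[a][arm])  # incremental value update
            total += reward
    return Q, total / (rounds * n_agents)
```

Once exploration breaks the initial symmetry, the anti-coordinated assignment is self-reinforcing, which is why even this naive dynamic typically settles into an efficient outcome; fairness, by contrast, is not addressed by this sketch.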
Zhou, Yiren (Singapore University of Technology and Design) | Moosavi-Dezfooli, Seyed-Mohsen (École Polytechnique Fédérale de Lausanne) | Cheung, Ngai-Man (Singapore University of Technology and Design) | Frossard, Pascal (École Polytechnique Fédérale de Lausanne)
In recent years, Deep Neural Networks (DNNs) have developed rapidly across various applications, with increasingly complex architectures. The performance gains of these DNNs generally come with high computational costs and large memory consumption, which may not be affordable on mobile platforms. Deep model quantization can reduce the computation and memory costs of DNNs and enable the deployment of complex DNNs on mobile devices. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement that estimates the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding the optimal quantization bit-width for each layer. This is the first work to theoretically analyze the relationship between the parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, achieving a 20-40% higher compression rate than equal bit-width quantization at the same model prediction accuracy.
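The general shape of per-layer bit-width optimization can be sketched as follows. This toy allocator uses plain quantization MSE as the layer sensitivity signal and a greedy bit budget; the paper's actual accuracy-based measurement and optimization process are different, and all function names here are invented for illustration.

```python
def quantize(weights, bits):
    """Uniform quantization of a list of floats to 2**bits levels."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)
    step = (hi - lo) / (2 ** bits - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

def quant_error(weights, bits):
    """Mean squared error introduced by quantizing to the given bit-width."""
    q = quantize(weights, bits)
    return sum((w - x) ** 2 for w, x in zip(weights, q)) / len(weights)

def allocate_bits(layers, total_bits, min_bits=2, max_bits=8):
    """Greedily grant one extra bit to whichever layer's error drops most,
    until the total bit budget is spent."""
    bits = {name: min_bits for name in layers}
    for _ in range(total_bits - min_bits * len(layers)):
        best = max(
            (n for n in layers if bits[n] < max_bits),
            key=lambda n: quant_error(layers[n], bits[n])
                          - quant_error(layers[n], bits[n] + 1),
            default=None,
        )
        if best is None:  # every layer is already at max_bits
            break
        bits[best] += 1
    return bits
```

Layers whose weights are more sensitive to coarse quantization absorb more of the budget, which is the intuition behind unequal bit-width assignment outperforming equal bit-width quantization.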
Wars and conflicts have constituted major events throughout history. Despite their importance, the general public typically learns about such events only indirectly, through the lens of news media, which necessarily select and distort events before relaying them to readers. Quantifying these processes is important, as they are fundamental to how we see the world, but the task is difficult, as it requires working with large and representative datasets of unstructured news text in many languages. To address these issues, we propose a set of unsupervised methods for compiling and analyzing a multilingual corpus of millions of online news documents about armed conflicts. We then apply our methods to answer a number of research questions: First, how widely are armed conflicts covered by online news media in various languages, and how does this change as conflicts progress? Second, what role does the level of violence of a conflict play? And third, how well informed is a reader when following a limited number of online news sources? We find that coverage levels are different across conflicts, but similar across languages for a given conflict; that Middle Eastern conflicts receive more attention than African conflicts, even when controlling for the level of violence; and that for most languages and conflicts, following very few sources is enough to stay continuously informed. Finally, given the prominence of conflicts in the Middle East, we further analyze them in a detailed case study.
One of the key pieces of information for improving the energy efficiency of buildings is the appliance-level breakdown of energy consumption. Energy disaggregation is the process of obtaining this breakdown from building-level aggregate data using computational techniques. Most current research focuses on residential buildings, obtaining this information from a single smart meter and often relying on high-frequency data. This work is directed at commercial buildings equipped with building management and automation systems that provide low-frequency operational and contextual data. This paper presents a machine learning method that disaggregates the building's energy consumption using this operational data as input features. Experimental results on two publicly available datasets demonstrate the effectiveness of the approach, which surpasses existing methods. For all but one appliance in House 2 of the publicly available REDD dataset, improvements in normalized error in assigned power range between 20% (Lighting) and 220% (Stove). For another dataset, from an educational facility in Singapore, a disaggregation accuracy of 92% is reported for the facility's cooling system.
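Framed as supervised learning, disaggregation maps a feature vector (aggregate reading plus operational/contextual signals) to per-appliance power. A minimal nearest-neighbour sketch, assuming toy data and standing in for the paper's actual model:

```python
def knn_disaggregate(train_X, train_y, x, k=3):
    """Predict per-appliance power for feature vector x by averaging the
    per-appliance labels of the k nearest training examples (squared
    Euclidean distance). train_y rows are per-appliance power vectors."""
    nearest = sorted(
        range(len(train_X)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)),
    )[:k]
    n_appliances = len(train_y[0])
    return [sum(train_y[i][j] for i in nearest) / k
            for j in range(n_appliances)]
```

In the commercial-building setting described above, the feature vector would be built from low-frequency building management system channels rather than high-frequency smart meter traces.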
Over the last five years, while developing an architecture for autonomous service robots in human environments, we have identified several key decisional issues that must be tackled for a cognitive robot to share space and tasks with a human. We introduce some of them here: situation assessment and mutual modelling, management and exploitation of each agent's (human and robot) knowledge in separate cognitive models, natural multi-modal communication, "human-aware" task planning, and interleaved human-robot plan achievement. As a general "take home" message, it appears that explicit knowledge management, both symbolic and geometric, proves key to addressing these challenges, as it pushes for a different, more semantic way of framing decision making in human-robot interaction.
Many Artificial Intelligence tasks need large amounts of commonsense knowledge. Because obtaining this knowledge through machine learning would require a huge amount of data, a better alternative is to elicit it from people through human computation. We consider the sentiment classification task, where knowledge about the contexts that impact word polarities is crucial, but hard to acquire from data. We describe a novel task design that allows us to crowdsource this knowledge through Amazon Mechanical Turk with high quality. We show that the commonsense knowledge acquired in this way dramatically improves the performance of established sentiment classification methods.
Reviews play an increasingly important role in the decision process of buying products and booking hotels. However, the large amount of available information can be confusing to users. A more succinct interface, gathering only the most helpful reviews, can reduce information processing time and save effort. To create such an interface in real time, we need reliable prediction algorithms to classify and rank new reviews that have not yet been voted on but are potentially helpful. So far, such helpfulness prediction algorithms have relied on structural aspects, such as length and readability score. Since emotional words are at the heart of our written communication and are powerful triggers of listeners' attention, we believe that emotional words can serve as important features for predicting the helpfulness of review text. Using GALC, a general lexicon of emotional words associated with a model representing 20 different categories, we extract the emotionality of the review text and apply a supervised classification method to derive emotion-based helpful review prediction. As a second contribution, we propose an evaluation framework comparing three real-world datasets extracted from the most well-known product review websites. This framework shows that emotion-based methods outperform the structure-based approach by up to 9%.
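The feature extraction step can be sketched as follows. The tiny lexicon below is an invented stand-in for GALC (whose real category inventory and word lists differ), and the per-category normalized counts would then feed a standard supervised classifier:

```python
# Toy stand-in for an emotion lexicon in the spirit of GALC;
# the real lexicon's categories and word lists are different.
EMOTION_LEXICON = {
    "joy": {"great", "love", "happy", "excellent"},
    "anger": {"terrible", "hate", "awful"},
    "surprise": {"unexpected", "wow", "amazing"},
}

def emotion_features(text):
    """Per-category emotional word counts, normalized by review length,
    usable as input features for a helpfulness classifier."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)  # guard against empty reviews
    return {cat: sum(t.strip(".,!?") in words for t in tokens) / n
            for cat, words in EMOTION_LEXICON.items()}
```

Normalizing by review length keeps the emotionality features comparable across short and long reviews, so they complement rather than duplicate the structural length feature.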
This paper proposes a simple linear Bayesian approach to reinforcement learning. We show that with an appropriate basis, a Bayesian linear Gaussian model is sufficient for accurately estimating the system dynamics, in particular when we allow for correlated noise. Policies are estimated by first sampling a transition model from the current posterior, and then performing approximate dynamic programming on the sampled model. This form of approximate Thompson sampling results in good exploration in unknown environments. The approach can also be seen as a Bayesian generalisation of least-squares policy iteration, where the empirical transition matrix is replaced with a sample from the posterior.
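The sample-then-plan idea can be illustrated in one dimension: maintain a conjugate Gaussian posterior over a scalar dynamics coefficient and, at each Thompson step, draw a plausible coefficient from it. This is a minimal sketch under assumed scalar dynamics s' = theta*s + a + noise with known noise variance, not the paper's full model with basis functions and correlated noise.

```python
import random

class BayesLinearDynamics:
    """Conjugate Gaussian posterior over a scalar coefficient theta in the
    assumed model s' = theta * s + a + eps, with eps ~ N(0, noise_var)."""

    def __init__(self, prior_mean=0.0, prior_var=1.0, noise_var=0.1):
        self.noise_var = noise_var
        # posterior kept in information form (precision, precision * mean)
        self.precision = 1.0 / prior_var
        self.info = prior_mean / prior_var

    def update(self, s, a, s_next):
        """Standard Bayesian linear regression update for one transition."""
        y = s_next - a  # residual to be explained by theta * s
        self.precision += s * s / self.noise_var
        self.info += s * y / self.noise_var

    def mean(self):
        return self.info / self.precision

    def sample(self):
        """Thompson step: draw a plausible dynamics model from the posterior;
        planning would then run approximate DP on the sampled model."""
        var = 1.0 / self.precision
        return random.gauss(self.mean(), var ** 0.5)
```

Early on, the wide posterior yields diverse sampled models and hence exploratory policies; as transitions accumulate, the posterior concentrates and the sampled model approaches the least-squares estimate, mirroring the stated connection to least-squares policy iteration.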