If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Querying large datasets with incomplete and vague data remains a challenge. Ontology-based query answering extends standard database query answering with background knowledge from an ontology to compensate for incomplete data. We focus on ontologies written in rough description logics (DLs), which make it possible to represent vague knowledge by partitioning the domain of discourse into classes of indiscernible elements. In this paper, we extend the combined approach for ontology-based query answering to a variant of the DL EL augmented with rough concept constructors. We show that this extension preserves the good computational properties of classical EL and can be implemented using standard database systems.
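As a hedged illustration of the kind of constructor the abstract refers to: rough DLs are commonly presented with an upper approximation (elements indiscernible from some instance of a concept) and a lower approximation (elements whose entire indiscernibility class falls inside the concept). The concept names below are invented for illustration and are not taken from the paper.

```latex
% Rough approximations bracket the concept itself:
\underline{C} \;\sqsubseteq\; C \;\sqsubseteq\; \overline{C}
% An illustrative EL axiom using an upper approximation
% (concept and role names are hypothetical):
\mathsf{Patient} \sqcap \exists\,\mathsf{hasSymptom}.\overline{\mathsf{Flu}}
  \;\sqsubseteq\; \mathsf{SuspectedFluCase}
```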
Thost, Veronika (IBM Research)
The DL-Lite description logics allow for modeling domain knowledge on top of databases and for efficient reasoning. We focus on metric temporal extensions of DL-Lite_bool and its fragments, and study the complexity of satisfiability. In particular, we investigate the influence of rigid and interval-rigid symbols, which allow for modeling knowledge that remains valid over (some) time. We show that the latter in particular add considerable expressive power in many logics, although they do not always increase complexity.
Salvetti, Matteo (Università degli Studi di Brescia) | Botea, Adi (IBM Research) | Gerevini, Alfonso Emilio (Università degli Studi di Brescia) | Harabor, Daniel (Monash University) | Saetti, Alessandro (Università degli Studi di Brescia)
Path planning on grid maps has progressed significantly in recent years, partly due to the Grid-based Path Planning Competition (GPPC). In this work we present an optimal approach that combines features from two modern path planning systems, SRC and JPS+, both of which were among the strongest entrants in the 2014 edition of the competition. Given a current state s and a target state t, SRC is used as an oracle that provides an optimal move from s toward t. Once a direction is available, we invoke a second, JPS-based oracle to tell us for how many steps that move can be repeated, with no need to query the oracles between those steps. Experiments on a range of grid maps demonstrate a strong improvement from our combined approach. Against SRC, which remains an optimal solver with state-of-the-art speed, the performance improvement of our new system ranges from comparable to more than an order of magnitude faster.
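The two-oracle loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `first_move_oracle` stands in for SRC (it returns an optimal move direction) and `run_length_oracle` stands in for the JPS-based oracle (it returns how many times that move can safely be repeated). Both functions below are toy stand-ins for an obstacle-free grid; their names and signatures are invented for this sketch.

```python
def first_move_oracle(s, t):
    """Toy SRC stand-in: step greedily toward t on an obstacle-free grid."""
    (sx, sy), (tx, ty) = s, t
    dx = (tx > sx) - (tx < sx)   # -1, 0, or +1
    dy = (ty > sy) - (ty < sy)
    return dx, dy

def run_length_oracle(s, t, move):
    """Toy JPS stand-in: how many times the move can be repeated while
    still heading toward t along every axis the move uses."""
    (sx, sy), (tx, ty) = s, t
    dx, dy = move
    limits = []
    if dx:
        limits.append(abs(tx - sx))
    if dy:
        limits.append(abs(ty - sy))
    return min(limits) if limits else 0

def plan(s, t):
    """Query the oracles only at turning points, never between repeated steps."""
    path = [s]
    while s != t:
        move = first_move_oracle(s, t)
        k = run_length_oracle(s, t, move)
        s = (s[0] + move[0] * k, s[1] + move[1] * k)
        path.append(s)
    return path
```

The payoff mirrors the abstract's point: the path is built from a handful of oracle queries (one pair per turning point) rather than one query per grid cell.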
User-generated content online is shaped by many factors, including endogenous elements such as platform affordances and norms, as well as exogenous elements, in particular significant events. These impact what users say, how they say it, and when they say it. In this paper, we focus on quantifying the impact of violent events on various types of hate speech, from offensive and derogatory to intimidation and explicit calls for violence. We anchor this study in a series of attacks involving Arabs and Muslims as perpetrators or victims, occurring in Western countries, that have been covered extensively by news media. These attacks have fueled intense policy debates around immigration in various fora, including online media, which have been marred by racist prejudice and hateful speech. The focus of our research is to model the effect of the attacks on the volume and type of hateful speech on two social media platforms, Twitter and Reddit. Among other findings, we observe that extremist violence tends to lead to an increase in online hate speech, particularly in messages directly advocating violence. Our research has implications for the way in which hate speech online is monitored and suggests ways in which it could be countered.
An "elevator pitch" is a brief, persuasive speech that an experienced seller can use to capture the attention of a prospective client. Unfortunately, when selling complex enterprise products and solutions, there is no one pitch that works for all customers. To craft a good pitch, a seller must study a large amount of documentation, including product descriptions, client references, and use cases. Leveraging experience developed over the years, sellers then determine which marketing message will work best with a client. The goal of our research is to automatically create knowledge snippets from a large set of enterprise documents that can be used in elevator pitches. We refer to these snippets of text as points of view (POVs). Our method is based on natural language understanding (NLU), clustering, and ranking techniques, where the most relevant and informative content is selected as POVs for a given product. In addition, our approach is tailored to create POVs for a given aspect of the product, such as the business challenges or the benefits of deploying the product. In this paper, we present our initial results in analyzing thousands of client references and programmatically creating POVs for hundreds of IBM solutions. Our tool has been deployed and is being tested by a group of IBM sellers. While specifically built for IBM sellers and business partners, our solution has broad applicability in the delivery of marketing messages for complex enterprise solutions.
Data-intensive solutions, such as solutions that include machine learning components, are becoming more and more prevalent. The standard way of developing such solutions is to train machine learning models with manually annotated or labeled data for a given task. This methodology assumes the existence of ample human-annotated data. Unfortunately, this is often not the case, due to imbalanced distribution of classes and a lack of human annotation resources. This challenge is exacerbated when thousands of hierarchical classes are introduced. Therefore, it is critical to quantify the sufficiency of the data for a given task before applying standard machine learning algorithms. Moreover, it may be the case that there is ample labeled training data to solve only a sub-problem. In particular, in the hierarchical classification problem, the sufficiency level of training data can vary significantly depending on the granularity level of the hierarchy we use for classification. We identify a need to decompose the given problem into sub-problems for which there is ample training data. In this paper we propose a methodology to decompose a hierarchical classification problem considering the characteristics of a given dataset. We define an optimization problem of adaptive node collapse that identifies an appropriate hierarchy decomposition based on a trade-off between multiple goals. In our experiments, we consider the trade-off between the learning accuracy and the hierarchy abstraction level.
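The idea of collapsing nodes until each sub-problem has sufficient data can be sketched as follows. This is an illustrative simplification, not the paper's optimization formulation: the tree shape, the `min_examples` threshold, and the greedy "descend only while every child has enough data" rule are all assumptions made for this sketch.

```python
def collapse(tree, counts, min_examples):
    """Return the hierarchy nodes at which classification will be attempted.

    tree:   {node: [children]} adjacency map rooted at "root"
    counts: labeled-example counts per node (leaves carry the data here)
    A node's children are entered only if every child's subtree has at
    least min_examples labeled examples; otherwise the node is collapsed
    and prediction happens at that coarser level.
    """
    def subtree_count(node):
        return counts.get(node, 0) + sum(
            subtree_count(c) for c in tree.get(node, []))

    frontier = []

    def visit(node):
        children = tree.get(node, [])
        if children and all(subtree_count(c) >= min_examples
                            for c in children):
            for c in children:
                visit(c)            # data supports the finer granularity
        else:
            frontier.append(node)   # collapse: too little data below here

    visit("root")
    return frontier
```

For example, with 500 "cats" examples but only 30 "dogs" examples, the "animals" node is collapsed and classification stops at the coarser animals-vs-plants level, matching the abstract's accuracy-versus-abstraction trade-off.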
Existing chatbot engines do not properly handle group chats with many users and many chatbots, which prevents chatbots from developing their full potential as social participants. This happens because there is a lack of methods and tools for designing and engineering conversation rules. The work presented in this paper makes two major contributions: the presentation of a finite-state-automata-based DSL (domain-specific language), called DSL-CR, for engineering multi-party conversation rules for inter-message coherence, to be used by chatbot engines; and its use in a real-world dialogue problem with four bots and humans. With this tool, the amount of domain and programming expertise needed to create conversation rules is reduced, and a larger group of people, such as linguists, can specify the conversation rules.
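To make the finite-state-automaton idea concrete, here is a minimal sketch of the kind of rule table a DSL like DSL-CR might compile down to: states track whose turn it is, and transitions fire on (speaker kind, message type) events. The states, event names, and transitions below are invented for illustration and are not taken from the paper.

```python
# (current_state, speaker_kind, msg_type) -> next_state
RULES = {
    ("idle",          "human", "question"):      "awaiting_bot",
    ("awaiting_bot",  "bot",   "answer"):        "idle",
    ("awaiting_bot",  "bot",   "clarification"): "awaiting_human",
    ("awaiting_human","human", "answer"):        "awaiting_bot",
}

def step(state, speaker_kind, msg_type):
    """Advance the automaton one message; None means the message violates
    the conversation rules (e.g. a human interrupting while a bot owes
    an answer), so the engine can suppress or defer it."""
    return RULES.get((state, speaker_kind, msg_type))
```

Encoding multi-party coherence as explicit transitions like these is what lets non-programmers such as linguists author rules: they edit the transition table, not engine code.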
Due to the popularity of texting and messaging and recent advances in deep learning technologies, conversation-based interaction is becoming an emerging user interface. While today's conversation platforms offer basic conversation capabilities such as natural language understanding, entity extraction, and simple dialogue management, there are still challenges in developing practical applications to support complex use cases using a dialogue system. In this paper, we highlight such challenges and share practical knowledge learned from our experiences developing a leisure travel shopping application that combines a personalized recommendation system and a conversation system. These efforts include conversation design, extraction of user intents, communication of variables between a dialogue system and analytics engines, and dynamic user interface designs. In particular, we introduce our approach to overcoming the unique challenges of understanding a user's intent that arise when a dialogue system meets a personalized recommendation system. Furthermore, we propose semantic mapping as a novel method to utilize a user's undefined preferences when producing recommended items. Finally, examples of recommendations based on natural language conversations are provided in order to exhibit how components in the overall architecture are seamlessly orchestrated. In general, our framework provides guiding principles and best practices for implementing a task-oriented dialogue system connected with other components in the overall architecture.
Freedman, Richard G. (University of Massachusetts Amherst) | Chakraborti, Tathagata (Arizona State University) | Talamadupula, Kartik (IBM Research) | Magazzeni, Daniele (King's College London) | Frank, Jeremy D. (NASA Ames Research Center)
The User Interfaces and Scheduling and Planning (UISP) Workshop had its inaugural meeting at the 2017 International Conference on Automated Planning and Scheduling (ICAPS). The UISP community focuses on bridging the gap between automated planning and scheduling technologies and user interface (UI) technologies. Planning and scheduling systems need UIs, and UIs can be designed and built using planning and scheduling systems. The workshop participants included representatives from government organizations, industry, and academia with various insights and novel challenges. We summarize the discussions from the workshop as well as outline challenges related to this area of research, introducing the now formally established field to the broader user experience and artificial intelligence communities.
Srivastava, Biplav (IBM Research)
Conversation interfaces (CIs), or chatbots, are a popular form of intelligent agents that engage humans in task-oriented or informal conversation. In this position paper and demonstration, we argue that chatbots working in dynamic environments, such as with sensor data, can not only serve as a promising platform to research issues at the intersection of learning, reasoning, representation, and execution for goal-directed autonomy, but can also handle non-trivial business applications. We explore the underlying issues in the context of Water Advisor, a preliminary multi-modal conversation system that can access and explain water quality data.