If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A general perceptual model is proposed for an Eldercare Robot implementation, comprising audition functionality interconnected with a feedback-driven perceptual reasoning agent. Multistage signal analysis feeds temporally tiered learning/recognition modules, providing concurrent access to sound-event localization, classification, and context. Patterns leading to the quantification of patient emotion/well-being can be inferred by the perceptual reasoning agent. The system is prototyped on a Nao H-25 humanoid robot with an online processor running the NAOqi SDK and the Max/MSP environment with the FTM and GF libraries.
In recent years, online chat has become a dominant mode of communication. This text-based medium has the potential to improve information awareness within an organization, but only if the critical information within messages can be identified and directed to where it is most needed. Such a goal has many challenges that traditional Information Extraction (IE) approaches have rarely addressed: the text is “dirty” (containing typos, misspellings, sparse punctuation, etc.), messages are fragmented and refer implicitly to previous messages and shared knowledge, messages from multiple topics are interleaved, etc. Past work in conversation analysis has included in-depth discussions of dialog acts, i.e., the individual utterances that comprise conversations. This paper describes how dialog acts within online chat differ from those within two-person voice conversations. It then presents methods for identifying dialog acts and the role that dialog acts play in identifying individual conversations within a chat stream. Identifying conversations is a necessary step for extracting actionable information, such as identifying individuals with specific expertise, recognizing reports of offline activities, and alerting decision makers to critical developments. Finally, we describe Chat-IE, a prototype software system that performs live dialog identification on chat streams.
This position paper describes the Activity-Based Computing (ABC) project, which has been ongoing in Denmark since 2003. Originally, the project originated in the design of a pervasive computing platform suited for the mobile, collaborative, and time-critical work of clinicians in a hospital setting. Out of this grew a conceptual framework, a set of six ABC principles, and a programming and runtime framework for the development of activity-based computing infrastructures and applications. Lately, these principles and technologies have been successfully moved to other application areas, and are now used to design and implement activity-based computing support for work in a biology laboratory and for global software development.
Jain, Manish (University of Southern California) | Yin, Zhengyu (University of Southern California) | Tambe, Milind (University of Southern California) | Ordóñez, Fernando (University of Southern California and University of Chile (Santiago))
Attacker-defender Stackelberg games have become a popular game-theoretic approach for security with deployments for LAX Police, the FAMS and the TSA. Unfortunately, most of the existing solution approaches do not model two key uncertainties of the real-world: there may be noise in the defender’s execution of the suggested mixed strategy and/or the observations made by an attacker can be noisy. In this paper, we analyze a framework to model these uncertainties, and demonstrate that previous strategies perform poorly in such uncertain settings. We also analyze RECON, a novel algorithm that computes strategies for the defender that are robust to such uncertainties, and explore heuristics that further improve RECON’s efficiency.
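The execution-noise sensitivity that motivates RECON can be illustrated with a toy security game (a hedged sketch, not the paper's model: the two-target setup, payoff numbers, and function names are all illustrative assumptions). The attacker best-responds to the coverage vector it observes, so even a small deviation between the defender's planned and executed mixed strategy can lower the defender's realized utility.

```python
def attacker_target(x, att_cov, att_unc):
    """Attacker best response: attack the target with the highest expected
    attacker payoff, given defender coverage probabilities x."""
    payoff = [x[t] * att_cov[t] + (1 - x[t]) * att_unc[t] for t in range(len(x))]
    return max(range(len(x)), key=lambda t: payoff[t])

def defender_utility(x, att_cov, att_unc, def_cov, def_unc):
    """Defender's expected utility at the attacker's chosen target."""
    t = attacker_target(x, att_cov, att_unc)
    return x[t] * def_cov[t] + (1 - x[t]) * def_unc[t]

# Two targets; the attacker gains from uncovered targets, the defender loses.
att_cov, att_unc = [-1, -1], [2, 1]
def_cov, def_unc = [1, 1], [-2, -1]

planned = [0.5, 0.5]
noisy = [0.4, 0.6]  # execution noise shifts coverage away from target 0
print(defender_utility(planned, att_cov, att_unc, def_cov, def_unc))  # -0.5
print(defender_utility(noisy, att_cov, att_unc, def_cov, def_unc))    # -0.8
```

Here the noisy execution leaves target 0 under-covered, and the defender's utility drops accordingly; a robust strategy in the spirit of RECON would instead optimize against the worst case within the noise bounds.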
While microblogging has gained popularity on the Internet, analyzing and processing short messages has become a challenging task in natural language processing. This paper analyzes the differences between Internet short messages (or “microtext”) and general articles by comparing the Plurk Corpus and the Sinica Balanced Corpus. Likelihood ratios and the Tongyici Cilin thesaurus are adopted to analyze the lexical semantics of frequent terms in each corpus. Furthermore, the NTUSD sentiment dictionary is used to compare the sentiment distribution of the two corpora. The result is also applied to sentiment transition analysis.
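One standard way to realize a likelihood-ratio comparison of frequent terms across two corpora is Dunning's log-likelihood ratio over the term counts (a minimal sketch; the function name and example counts are illustrative, and the paper's exact statistic may differ):

```python
import math

def log_likelihood_ratio(a, b, c, d):
    """Log-likelihood ratio for a term occurring a times in corpus 1
    (total size c) and b times in corpus 2 (total size d). Larger values
    mean the term's frequency differs more between the corpora."""
    e1 = c * (a + b) / (c + d)  # expected count in corpus 1 under one shared rate
    e2 = d * (a + b) / (c + d)  # expected count in corpus 2
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2.0 * ll

# A term overrepresented in the first (e.g., microtext) corpus scores high;
# a term with identical relative frequency scores zero.
print(log_likelihood_ratio(150, 50, 100_000, 100_000))
print(log_likelihood_ratio(10, 10, 1_000, 1_000))
```

Ranking terms by this score surfaces the vocabulary most characteristic of microtext relative to the balanced corpus.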
Rebguns, Antons (Department of Computer Science, School of Information: Science, Technology, and Arts, University of Arizona) | Ford, Daniel (Department of Electrical and Computer Engineering, University of Arizona) | Fasel, Ian R. (School of Information: Science, Technology, and Arts, University of Arizona)
Recently, information gain has been proposed as a candidate intrinsic motivation for lifelong learning agents that may not always have a specific task. In the InfoMax control framework, reinforcement learning is used to find a control policy for a POMDP in which movement and sensing actions are selected to reduce Shannon entropy as quickly as possible. In this study, we implement InfoMax control on a robot which can move between objects and perform sound-producing manipulations on them. We formulate a novel latent variable mixture model for acoustic similarities and learn InfoMax policies that allow the robot to rapidly reduce uncertainty about the categories of the objects in a room. We find that InfoMax with our improved acoustic model yields policies that achieve high classification accuracy. Interestingly, we also find that with an insufficient model, the InfoMax policy eventually learns to "bury its head in the sand" to avoid getting additional evidence that might increase uncertainty. We discuss the implications of this finding for InfoMax as a principle of intrinsic motivation in lifelong learning agents.
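The entropy-reduction objective at the heart of InfoMax can be written down directly (a minimal sketch assuming a discrete belief over object categories; the function names are illustrative, and the paper's full formulation operates over POMDP belief updates): the agent is rewarded for actions whose observations shrink the Shannon entropy of its belief.

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief over object categories."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def infomax_reward(belief_before, belief_after):
    """InfoMax-style reward: information gained, i.e., reduction in entropy."""
    return entropy(belief_before) - entropy(belief_after)

# A manipulation whose sound sharpens the belief earns positive reward;
# one that leaves the belief unchanged earns zero.
uniform = [0.25, 0.25, 0.25, 0.25]
sharpened = [0.7, 0.1, 0.1, 0.1]
print(infomax_reward(uniform, sharpened))
print(infomax_reward(uniform, uniform))
```

The "bury its head in the sand" behavior follows from this objective: under a poor acoustic model, new evidence can raise belief entropy, so avoiding observations becomes the reward-maximizing policy.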
There has been significant recent interest in computing effective practical strategies for playing large games. Most prior work involves computing an approximate equilibrium strategy in a smaller abstract game, then playing this strategy in the full game. In this paper, we present a modification of this approach that works by constructing a deterministic strategy in the full game from the solution to the abstract game; we refer to this procedure as purification. We show that purification, and its generalization which we call thresholding, lead to significantly stronger play than the standard approach in a wide variety of experimental domains. First, we show that purification improves performance in random 4x4 matrix games using random 3x3 abstractions. We observe that whether or not purification helps in this setting depends crucially on the support of the equilibrium in the full game, and we precisely specify the supports for which purification helps. Next we consider a simplified version of poker called Leduc Hold'em; again we show that purification leads to a significant performance improvement over the standard approach, and furthermore that whenever thresholding improves a strategy, the biggest improvement is often achieved using full purification. Finally, we consider actual strategies that used our algorithms in the 2010 AAAI Computer Poker Competition. One of our programs, which uses purification, won the two-player no-limit Texas Hold'em bankroll division. Furthermore, experiments in two-player limit Texas Hold'em show that these performance gains do not necessarily come at the expense of worst-case exploitability and that our algorithms can actually produce strategies with lower exploitabilities than the standard approach.
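Purification and thresholding are simple enough to sketch directly over a mixed strategy represented as a probability vector (a hedged sketch: the tie-breaking rule and the fallback when every action falls below the threshold are our assumptions, not specified here).

```python
def purify(mixed):
    """Purification: play the highest-probability action deterministically.
    Ties are broken by lowest index (an assumption for this sketch)."""
    best = max(range(len(mixed)), key=lambda i: mixed[i])
    return [1.0 if i == best else 0.0 for i in range(len(mixed))]

def threshold(mixed, tau):
    """Thresholding: zero out actions with probability below tau, then
    renormalize the survivors to sum to 1."""
    kept = [p if p >= tau else 0.0 for p in mixed]
    total = sum(kept)
    if total == 0.0:  # every action fell below tau: fall back to purification
        return purify(mixed)
    return [p / total for p in kept]

strategy = [0.6, 0.3, 0.1]
print(purify(strategy))         # [1.0, 0.0, 0.0]
print(threshold(strategy, 0.2))  # low-probability third action is pruned
```

Full purification is the limiting case of thresholding: raising the cutoff until only the most probable action survives collapses the renormalized strategy to the purified one.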
Learning about users' preferences allows agents to make intelligent decisions on behalf of users. When we are eliciting preferences from a group of users, we can use the preferences of the users we have already processed to increase the efficiency of the elicitation process for the remaining users. However, current methods either require strong prior knowledge about the users' preferences or can be overly cautious and inefficient. Our method, based on standard techniques from non-parametric statistics, allows the controller to choose a balance between prior knowledge and efficiency. This balance is investigated through experimental results.
Ramachandran, Sowmya (Stottler Henke Associates Inc.) | Jensen, Randy (Stottler Henke Associates Inc.) | Bascara, Oscar (Stottler Henke Associates Inc.) | Carpenter, Tamitha (Stottler Henke Associates Inc.) | Denning, Todd (US Air Force) | Sucillon, Lt. Shaun (US Air Force Research Laboratory)
Analyzing chat traffic has important applications for both the military and the civilian world. This paper presents a case study of a real-world application of chat analysis in support of a team training exercise in the military. It compares the results of an unsupervised learning approach with those of a supervised classification approach. The paper also discusses some of the specific challenges presented by this domain.