If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
It is well known that human preferences in decisions under risk do not always comply with expected utility theory (EUT). In fact, there are several effects that are inconsistent with basic tenets of EUT. Alternative theories have been proposed, and perhaps the most well studied is Prospect Theory (PT). Recent work has presented experimental results supporting the idea that financial professionals may behave according to PT and violate EUT. Meanwhile, some argue that economics needs agent-based modeling, because it may be a better way to help guide financial policies than mathematical models. If financial professionals behave according to PT in markets, then agent-based modeling needs PT-based agents. Our idea is to create trading agents based on PT to simulate a market. However, creating an artificial agent based on PT as originally proposed is very hard and is limited to two-outcome prospects. We propose an agent model based on an extension of PT called Smooth Prospect Theory (SPT). We used this model to create agents to populate an artificial market with SPT and EUT agents, which was then used to predict real market behavior over short periods. SPT agents provided more accurate predictions in crisis periods than EUT agents.
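The original two-outcome PT that the abstract calls hard to generalize can be sketched as follows. This is a minimal illustration using the standard Tversky–Kahneman functional forms and their commonly cited parameter estimates (α = 0.88, λ = 2.25, γ = 0.61); these are assumptions for illustration, not the paper's SPT extension.

```python
# Minimal sketch of classic two-outcome Prospect Theory (assumed
# Tversky-Kahneman forms and parameters; not the paper's SPT model).

ALPHA = 0.88   # diminishing sensitivity for gains and losses (assumed)
LAMBDA = 2.25  # loss-aversion coefficient (assumed)
GAMMA = 0.61   # probability-weighting curvature (assumed)

def value(x: float) -> float:
    """PT value function: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

def weight(p: float) -> float:
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect_value(outcomes) -> float:
    """Value of a simple prospect given as [(outcome, probability), ...]."""
    return sum(weight(p) * value(x) for x, p in outcomes)

# A PT agent prefers a sure moderate gain over a risky larger one:
sure = prospect_value([(50.0, 1.0)])
risky = prospect_value([(100.0, 0.5), (0.0, 0.5)])
```

Note that this simple form is only well behaved for prospects with at most two nonzero outcomes; richer prospects require rank-dependent (cumulative) weighting, which is one reason the abstract describes the original PT as hard to use for market agents.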
Attributing a cyber-operation through the use of multiple pieces of technical evidence (i.e., malware reverse-engineering and source tracking) and conventional intelligence sources (i.e., human or signals intelligence) is a difficult problem, not only due to the effort required to obtain evidence, but also due to the ease with which an adversary can plant false evidence. In this paper, we introduce a formal reasoning system called the InCA (Intelligent Cyber Attribution) framework that is designed to aid an analyst in the attribution of a cyber-operation even when the available information is conflicting and/or uncertain. Our approach combines argumentation-based reasoning, logic programming, and probabilistic models to not only attribute an operation but also explain to the analyst why the system reaches its conclusions.
To provide insight into patient-level disease dynamics from data collected at irregular time intervals, this work extends applications of semi-parametric clustering for temporal mining. In the semi-parametric clustering framework, Markovian models provide useful parametric assumptions for modeling temporal dynamics, and a non-parametric method is used to cluster the temporal abstractions instead of operating on the original data. Our contribution extends the abstraction to continuous-time Markov models and the clustering component to the non-parametric Bayesian setting, which does not require the number of clusters to be indicated a priori.
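A key property of the continuous-time Markov abstraction mentioned above is that irregular sampling intervals are handled directly: the transition probability is a closed-form function of the elapsed time. The sketch below shows this for the simplest case, a two-state chain, by solving the Kolmogorov forward equations analytically; the rates and times are illustrative assumptions, not values from the paper.

```python
import math

def two_state_ctmc_transition(a: float, b: float, t: float):
    """Transition matrix P(t) for a two-state continuous-time Markov chain
    with rate a (state 0 -> 1) and rate b (state 1 -> 0). Closed form from
    solving the Kolmogorov forward equations dP/dt = P Q."""
    s = a + b
    decay = math.exp(-s * t)
    p01 = (a / s) * (1.0 - decay)
    p10 = (b / s) * (1.0 - decay)
    return [[1.0 - p01, p01],
            [p10, 1.0 - p10]]

# Irregular sampling is handled naturally: the likelihood of observing
# state j at elapsed time t after state i is simply P(t)[i][j].
P = two_state_ctmc_transition(a=0.2, b=0.5, t=3.0)
```

For more than two states the same quantity is the matrix exponential of Q·t, which is typically computed numerically rather than in closed form.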
Parsons, Simon (CUNY Brooklyn College) | Sklar, Elizabeth (CUNY Brooklyn College) | Singh, Munindar (North Carolina State University) | Levitt, Karl (University of California, Davis) | Rowe, Jeff (University of California, Davis)
Our work aims to support decision making in situations where the source of the information on which decisions are based is of varying trustworthiness. Our approach uses formal argumentation to capture the relationships between such information sources and conclusions drawn from them. This allows the decision maker to explore how information from particular sources impacts the decisions they have to make. We describe the formal system that underlies our work, and a prototype implementation of that system, applied to a problem from military decision making.
Parsons, Simon, Mamdani, E. H.
In this paper some initial work towards a new approach to qualitative reasoning under uncertainty is presented. This method is not only applicable to qualitative probabilistic reasoning, as is the case with other methods, but also allows the qualitative propagation within networks of values based upon possibility theory and Dempster-Shafer evidence theory. The method is applied to two simple networks from which a large class of directed graphs may be constructed. The results of this analysis are used to compare the qualitative behaviour of the three major quantitative uncertainty handling formalisms, and to demonstrate that the qualitative integration of the formalisms is possible under certain assumptions.
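The flavor of qualitative propagation described above can be sketched with the standard sign-combination tables used in qualitative probabilistic networks: one operator chains influences along a path, the other merges parallel influences on a node. These tables are an illustrative assumption in the Wellman style, not the paper's exact calculus (which also covers possibility theory and Dempster-Shafer values).

```python
# Illustrative sign algebra for qualitative propagation (assumed standard
# QPN-style tables, not the paper's exact three-formalism calculus).
SIGNS = ('+', '-', '0', '?')

def sign_product(a: str, b: str) -> str:
    """Chain two influences along a path: a zero link blocks the path,
    an unknown link makes the result unknown, like signs reinforce."""
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

def sign_sum(a: str, b: str) -> str:
    """Merge two parallel influences on the same node: agreement is kept,
    a zero influence is neutral, conflict yields ambiguity."""
    if a == b:
        return a
    if a == '0':
        return b
    if b == '0':
        return a
    return '?'
```

The `'?'` result of `sign_sum('+', '-')` is exactly the loss of information that motivates comparing how the different uncertainty formalisms resolve such conflicts.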
McBurney, Peter, Parsons, Simon
We propose a formal treatment of scenarios in the context of a dialectical argumentation formalism for qualitative reasoning about uncertain propositions. Our formalism extends prior work in which arguments for and against uncertain propositions were presented and compared in interaction spaces called Agoras. We now define the notion of a scenario in this framework and use it to define a set of qualitative uncertainty labels for propositions across a collection of scenarios. This work is intended to lead to a formal theory of scenarios and scenario analysis.
Sklar, Elizabeth (Brooklyn College, City University of New York) | Parsons, Simon (Brooklyn College, City University of New York) | Epstein, Susan L. (Hunter College, City University of New York) | Ozgelen, Arif Tuna (The Graduate Center, City University of New York) | Munoz, Juan Pablo (The Graduate Center, City University of New York) | Abbasi, Farah (College of Staten Island, City University of New York) | Schneider, Eric (Hunter College, City University of New York) | Costantino, Michael (College of Staten Island, City University of New York)
Members of a multi-robot team, operating within close quarters, need to avoid crashing into each other. Simple collision avoidance methods can be used to prevent such collisions, typically by computing the distance to other robots and stopping, perhaps moving away, when this distance falls below a certain threshold. While this approach may avoid disaster, it may also reduce the team's efficiency if robots halt for a long time to let others pass by or if they travel further to move around one another. This paper reports on experiments where a human operator, through a graphical user interface, watches robots perform an exploration task. The operator can manually suspend robots' movements before they crash into each other, and then resume their movements when their paths are clear. Experiment logs record the robots' states when they are paused and resumed. A behavior pattern for collision avoidance is learned, by classifying the states of the robots' environment when the human operator issues "wait" and "resume" commands. Preliminary results indicate that it is possible to learn a classifier which models these behavior patterns, and that different human operators consider different factors when making decisions about stopping and starting robots.
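The learning step described above can be sketched in a deliberately minimal form: a one-feature decision stump that predicts "wait" or "resume" from inter-robot distance, trained on logged operator commands. Both the feature choice and the learning method are assumptions for illustration; the paper's actual state representation and classifier are not specified here.

```python
# Illustrative sketch only: learn a "wait"/"resume" rule from logged operator
# commands with a one-feature decision stump on inter-robot distance.
# Feature choice and method are assumptions, not the paper's actual setup.

def train_stump(examples):
    """examples: list of (distance, label) with label 'wait' or 'resume'.
    Returns the distance threshold minimizing training error, predicting
    'wait' below the threshold and 'resume' at or above it."""
    candidates = sorted({d for d, _ in examples})
    best_t, best_err = None, len(examples) + 1
    for t in candidates:
        err = sum(1 for d, lab in examples if (lab == 'wait') != (d < t))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(threshold: float, distance: float) -> str:
    return 'wait' if distance < threshold else 'resume'

# Hypothetical log: the operator paused robots whenever they got close.
log = [(0.2, 'wait'), (0.3, 'wait'), (0.5, 'wait'),
       (0.9, 'resume'), (1.2, 'resume'), (1.5, 'resume')]
t = train_stump(log)
```

The abstract's observation that different operators weigh different factors would show up here as different learned thresholds (or, with richer features, different decision boundaries) per operator.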
Ozgelen, Arif T. (The Graduate Center, City University of New York) | Costantino, Michael (College of Staten Island, City University of New York) | Ishak, Adiba (Brooklyn College, City University of New York) | Kingston, Moses (Brooklyn College, City University of New York) | Moore, Diquan (Lehman College, City University of New York) | Sanchez, Samuel (Queens College, City University of New York) | Munoz, J. Pablo (Brooklyn College, City University of New York) | Parsons, Simon (Brooklyn College, City University of New York) | Sklar, Elizabeth (Brooklyn College, City University of New York)
Sklar, Elizabeth (Brooklyn College, City University of New York) | Epstein, Susan L. (Hunter College, City University of New York) | Parsons, Simon (Brooklyn College, City University of New York) | Ozgelen, Arif T. (The Graduate Center, City University of New York) | Munoz, Juan Pablo (Brooklyn College, City University of New York) | Gonzalez, Joel (City College, City University of New York)
Within the context of human/multi-robot teams, the "help me help you" paradigm offers opportunities in both directions. A team of robots can help a human operator accomplish a goal, and a human operator can help a team of robots accomplish the same, or a different, goal. Two scenarios are examined here. First, a team of robots helps a human operator search a remote facility by recognizing objects of interest. Second, the human operator helps the robots improve their position (localization) information by providing quality-control feedback.
Trust is an approach to managing the uncertainty about autonomous entities and the information they store, and so can play an important role in any decentralized system. As a result, trust has been widely studied in multiagent systems and related fields such as the semantic web. Here we introduce a simple approach to reasoning about trust with logic.