If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We present a partial-order probabilistic planning algorithm that adapts the plan-graph-based heuristics implemented in Repop. We describe our implemented planner, Reburidan, named after its predecessors Repop and Buridan. Reburidan uses plan-graph-based heuristics to first generate a base plan. It then improves this plan using plan-refinement heuristics based on the success probability of subgoals. Our initial experiments show that these heuristics significantly improve on Buridan.
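The idea of refining a base plan by strengthening its weakest subgoal can be sketched as follows. This is a hypothetical toy model, not Reburidan's actual algorithm: the plan representation, the independence assumption, and the "add a redundant support" refinement are all illustrative assumptions.

```python
def success_probability(plan):
    # Plan succeeds only if every subgoal does (independence assumed).
    p = 1.0
    for sg in plan["subgoals"]:
        p *= sg["prob"]
    return p

def refine(plan, threshold=0.9, max_rounds=10):
    """Greedily strengthen the subgoal with the lowest success probability
    until the plan's overall success probability reaches the threshold."""
    for _ in range(max_rounds):
        if success_probability(plan) >= threshold:
            break
        weakest = min(plan["subgoals"], key=lambda sg: sg["prob"])
        # Adding a redundant supporting action: the subgoal now fails only
        # if both the original support and the backup support fail.
        weakest["prob"] = 1.0 - (1.0 - weakest["prob"]) ** 2
    return plan

plan = {"subgoals": [{"name": "have-key", "prob": 0.7},
                     {"name": "door-open", "prob": 0.95}]}
refine(plan)
```

Under these assumptions, two refinement rounds on the weak "have-key" subgoal lift the plan's success probability above the threshold.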
The paper is based on agent plan computing, in which heterogeneous computing resources interact via objects, multiagent AI, and agent intelligent languages. Modeling, objectives, and planning issues are examined at the level of agent planning, and a basis for modeling discovery and prediction planning is stated. The agent computing theories the author has defined since 1994 can be applied to present precise decision strategies for multiplayer games with only perfect information between agent pairs. Game trees are applied to train models. The computing model is based on a novel form of competitive learning with agent multiplayer game-tree planning. Specific agents are assigned to transform the models to reach goal plans in which goals are satisfied on the basis of competitive game-tree learning. The planning applications include Operations Research (OR) as goal satisfiability and micro-managing decision support with means-ends analysis.
Trust, from the perspective of traditional social theory, is a function of the cooperation promoted across a system of multiple human or artificial agents, assuring that conflict ends with a consensus on the facts drawn from reality, R. Overlooked are the downside of cooperation (e.g., invisibility of corruption, terrorist sleeper cells) and the reduction in computational power from the cost of communicating with an increasing number, N, of agents cooperating in an interaction, which makes the traditional model impractical for a large system of computational agents solving difficult problems. In contrast to logical positivist models, quantizing the pro and con positions in decision-making may produce a robust model of argumentation whose computational power increases with N. Previously, optimum solutions of ill-defined problems (IDPs) were found to occur when incommensurable beliefs interacting before neutral decision makers generated sufficient emotion to process information, I, but insufficient emotion to impair the interaction, unexpectedly producing more trust than cooperation does. We extend this model to the first information density functional theory (IDFT) of groups.
An adaptive agent-based simulation modeling technology has been developed that allows us to build, for example, simulated decision makers representing defenders and attackers of a computer system engaged in cyberwarfare in their simulated microworld. The adaptive adversaries coevolve: attackers evolve new attack patterns that overcome cyber defenses, and defenders subsequently evolve new defensive patterns in response to the attacks. When we run these adaptive decision-maker models, we see what looks like human adversarial behavior. These simulated attackers learn to time their attacks just as real-world hackers do with virus attacks. Simulated defenders soon catch on and resynchronize their defenses to match the timing of these attacks. This adaptive simulation modeling can automatically discover new behaviors beyond those that were initially built into the models, providing a more realistic simulation of intelligent behavior. Such models provide both an opportunity to discover novel adversarial behavior and a testbed for other adversary course-of-action prediction models.
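The timing arms race described above can be illustrated with a deliberately tiny coevolution loop. This is a hypothetical sketch, not the authors' simulation technology: the attack/defense "hours," the mutation step, and the fitness rules (attacker evades, defender resynchronizes) are all illustrative assumptions.

```python
import random

def circ_dist(a, b, period=24):
    """Distance between two hours on a 24-hour clock."""
    d = abs(a - b) % period
    return min(d, period - d)

def coevolve(rounds=300, seed=0):
    """Toy coevolution of an attack hour and a defense hour: each side
    mutates its timing by one hour and keeps the mutation only if it
    improves its own fitness (attacker: evade the defense; defender:
    match the timing of the attack)."""
    rng = random.Random(seed)
    attack, defend = 3, 15
    for _ in range(rounds):
        cand = (attack + rng.choice([-1, 1])) % 24
        if circ_dist(cand, defend) > circ_dist(attack, defend):
            attack = cand  # attacker drifts away from the current defense
        cand = (defend + rng.choice([-1, 1])) % 24
        if circ_dist(cand, attack) < circ_dist(defend, attack):
            defend = cand  # defender resynchronizes toward the attack
    return attack, defend

attack, defend = coevolve()
```

Running the loop produces the chase dynamic the abstract describes in miniature: the defender tracks the attacker's timing while the attacker keeps shifting away.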
The main focus of this research is to establish techniques for, and prove the feasibility of, case-based keyhole plan recognition with incomplete plan libraries. Most traditional plan recognition systems operate with complete plan libraries that contain all of the possible plans the planner may pursue. However, enumerating all possible plans may be difficult (or impossible) in some complex planning domains. Furthermore, the completeness of the library may force the inclusion of extraneous plans that reduce the recognizer's efficiency (Lesh and Etzioni, 1994). The main difficulty in dealing with incomplete plan libraries is the recognizer's inability to reason about planner intentions that are not contained in the plan library.
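The core difficulty, recognizing when the planner's intention falls outside the library, can be sketched with a minimal similarity-based matcher. This is an illustrative toy, not the paper's method: the overlap score, the threshold, and the example plans are all assumptions.

```python
def recognize(observed, library, threshold=0.5):
    """Keyhole recognition with an incomplete library: score each known
    plan by the fraction of observed actions it explains; if no plan
    scores above the threshold, report that the planner's intention is
    not covered by the (incomplete) library."""
    def score(plan):
        return sum(a in plan["steps"] for a in observed) / len(observed)
    best = max(library, key=score)
    if score(best) >= threshold:
        return best["name"]
    return None  # intention outside the plan library

library = [{"name": "make-coffee",
            "steps": ["boil-water", "grind-beans", "pour"]},
           {"name": "make-tea",
            "steps": ["boil-water", "steep-bag", "pour"]}]

hit = recognize(["grind-beans", "boil-water"], library)
miss = recognize(["solder-joint", "strip-wire"], library)
```

In the first call the observations match a library plan; in the second, no plan explains the observations, which is exactly the case a complete-library recognizer cannot represent.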
Computer programs for the game of Go play only at the level of an advanced beginner. The standard approach of constructing a program based on brute-force game-tree search does not work well because of the size of the game tree and, more significantly, the difficulty of constructing fast, accurate heuristic evaluation functions. In this paper, we consider the use of intent inference in a Go program. In particular, we discuss how models of an opponent's long-term playing style and short-term intentions can direct the exploration of candidate moves and influence the evaluation of game positions. We propose a probabilistic approach to user modeling and intent inference, and we note key issues relevant to the implementation of an intent inference agent.
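A probabilistic model of long-term playing style can be sketched as a Bayesian update over style hypotheses. This is a hypothetical illustration, not the paper's model: the two styles, the move-type likelihoods, and the numbers are all assumptions.

```python
def update_style(prior, likelihoods, observed_moves):
    """Bayesian update of a belief over opponent styles, given observed
    move types (toy model: each style assigns a probability to each
    move type it might play)."""
    belief = dict(prior)
    for move in observed_moves:
        for style in belief:
            belief[style] *= likelihoods[style][move]
        total = sum(belief.values())
        belief = {s: p / total for s, p in belief.items()}
    return belief

prior = {"aggressive": 0.5, "territorial": 0.5}
likelihoods = {
    "aggressive":  {"contact": 0.7, "extension": 0.3},
    "territorial": {"contact": 0.2, "extension": 0.8},
}
belief = update_style(prior, likelihoods,
                      ["contact", "contact", "extension"])
```

After two contact plays and one extension, the belief shifts toward the aggressive style; a move generator could then weight candidate moves and position evaluations by this belief.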
This paper describes ongoing efforts to extend our work (Geib & Goldman 2001b; 2001a) (G&G) on the Probabilistic Hostile Agent Task Tracker (PHATT) to handle the problem of goal abandonment. As such, we will be discussing both probabilistic intent inference and the PHATT system, and we assume the reader is familiar with these areas. We refer the interested reader to the complete paper and to our earlier papers for further discussion of these issues.
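One way to make the goal-abandonment problem concrete is a toy Bayesian estimate: as more observed actions fail to contribute to a goal, the posterior probability that the goal has been abandoned rises. This is an illustrative sketch, not PHATT's actual model; the contribution probability and prior are assumed numbers.

```python
def abandonment_probability(misses, p_contribute=0.4, prior_abandon=0.1):
    """Toy Bayesian estimate that a goal has been abandoned, given
    `misses` consecutive observed actions that do not contribute to it.
    If the goal is still pursued, each observed action contributes with
    probability p_contribute; if abandoned, it never contributes."""
    like_active = (1.0 - p_contribute) ** misses  # miss streak if pursued
    like_abandoned = 1.0                          # miss streak if abandoned
    return (prior_abandon * like_abandoned) / (
        prior_abandon * like_abandoned
        + (1.0 - prior_abandon) * like_active)

p0 = abandonment_probability(0)
p1 = abandonment_probability(1)
p5 = abandonment_probability(5)
```

With no misses the posterior equals the prior; with a growing miss streak it climbs toward certainty, which is the signal an intent tracker could use to prune abandoned goals.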
Multitasking complicates the problem of intent inference by requiring intent inference mechanisms to distinguish multiple streams of behavior and to recognize overt task-coordination actions. Given knowledge of an agent's problem representation and architecture, it may be possible to disambiguate the agent's actions and better infer its intent.
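Separating interleaved streams of behavior can be sketched with a greedy matcher that assigns each observed action to the known plan it best continues. This is an illustrative toy under strong assumptions (ordered plan steps, exact action matches), not a proposed mechanism.

```python
def separate_streams(observations, plans):
    """Greedily separate interleaved behavior: assign each observed
    action to the first known plan whose next expected step it matches;
    anything unmatched is left unexplained."""
    progress = {name: 0 for name in plans}
    streams = {name: [] for name in plans}
    unexplained = []
    for act in observations:
        for name, steps in plans.items():
            if progress[name] < len(steps) and steps[progress[name]] == act:
                streams[name].append(act)
                progress[name] += 1
                break
        else:
            unexplained.append(act)
    return streams, unexplained

plans = {"cook": ["chop", "fry", "plate"],
         "email": ["open", "type", "send"]}
streams, extra = separate_streams(
    ["chop", "open", "fry", "type", "plate", "send"], plans)
```

The interleaved sequence is split back into its two task streams, illustrating why an intent inferrer for a multitasking agent must untangle observations before reasoning about either task.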
Counterterrorism specialists and law enforcement agencies are interested in the long-term intent or plans of the terrorists and organized crime members that they oppose. They often get only sporadic, incomplete, or seemingly unrelated secondhand information upon which to base their reasoning. Some aspects of terrorist behavior are quite repetitive and regular, while terrorists go to great lengths to change and/or hide other aspects of their activity. In order to discover terrorist plans early enough to disrupt them, counterterrorism professionals must both understand terrorist patterns of behavior and have enough evidence to begin to detect these patterns. The question addressed here is how an automated process can support plan discovery.
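Detecting the repetitive, regular aspects of behavior from sparse event data can be sketched with a crude periodicity detector. This is a hypothetical illustration only, not the plan-discovery process the abstract refers to; the scoring rule and the example timestamps are assumptions.

```python
def best_period(event_days, candidates=range(2, 15)):
    """Score candidate periods by how concentrated the event timestamps
    are modulo each period: a crude periodicity detector for sparse,
    noisy event data."""
    def concentration(period):
        residues = [d % period for d in event_days]
        # Fraction of events sharing the most common residue.
        return max(residues.count(r) for r in set(residues)) / len(event_days)
    return max(candidates, key=concentration)

# Events roughly every 7 days, with one off-schedule outlier:
period = best_period([0, 7, 14, 21, 29])
```

Even with an outlier, the 7-day rhythm dominates, illustrating how a regular pattern can surface from incomplete evidence.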