If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
These features of active logic provide mechanisms to deal with various forms of uncertainty arising in computation. A computational process P can be said to be uncertain about a proposition (or datum) if (i) it explici… Uncertainties of type (i) above lend themselves to representation by probabilistic reasoning, which involves the representation of explicit confidence levels for beliefs, as in Bayesian networks, for example; somewhat less so for type (ii); and even less for types (iii) and (iv). On the other hand, a suitably configured default reasoner (a nonmonotonic approach) can represent all of these, and without special ad hoc tools; that is, active logic already has, in its time-sensitive inference architecture, the means for performing default reasoning in an appropriately expressive manner. It is the purpose of this paper to elaborate on that claim; the format consists of an initial primer on uncertainty in active logic, then its current implementation (Alma/Carne), existing applications, and finally a discussion of potential future applications.

However, in a Bayesian net, for instance, because the probabilities have a somewhat holistic character, with the probability of a given proposition depending not just on direct but also on indirect connections, adding new propositions or rules (connections between nodes) is likely to be expensive and may require recalculation of all connection weights. If one's world-model is well specified enough that reasoning about and interacting with the world is primarily a matter of coming to trust or distrust propositions already present in that model, a Bayesian net may provide a good engine for reasoning.
However, if one's world-model is itself expected to be subject to frequent change, as novel propositions and rules are added to (or removed from) one's KB, we think that a reasoning engine based on active logic will prove a better candidate. In addition, and partly because a Bayesian net deals so smoothly with inconsistent incoming data, it can operate on the assumption that incoming data is accurate and can be taken at face value. We have two related concerns about this: first, an abnormally long string of inaccurate data, as might be expected from a faulty sensor or a deliberate attempt at deceit, would obviously reduce the probability of certain beliefs that, were the data known to be inaccurate, would have retained their original strengths. It has been suggested to us that one could model inaccurate incoming information by coding child nodes that contain information regarding the expected accuracy of the incoming information from a given evidence node.
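The suggestion just mentioned, attaching an accuracy estimate to an evidence node, can be sketched with a small Bayesian update. This is a hypothetical illustration (not the Alma/Carne implementation): a binary belief H is updated by a sensor reading whose reliability is itself a parameter, so a sensor believed to be no better than chance leaves the belief unchanged, while a trusted sensor shifts it strongly.

```python
def posterior(prior, accuracy, reading):
    """Posterior P(H | reading) for a binary hypothesis H, given a sensor
    that reports the true state with probability `accuracy`."""
    # Likelihoods of the observed reading under H and under not-H.
    like_h = accuracy if reading else (1 - accuracy)
    like_not_h = (1 - accuracy) if reading else accuracy
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

# A reliable sensor (accuracy 0.9) reporting "true" strengthens the belief:
p = posterior(0.5, 0.9, True)   # 0.9
# A sensor believed to be at chance level leaves the prior untouched:
q = posterior(0.5, 0.5, True)   # 0.5
# A long run of consistent readings from the reliable sensor compounds:
belief = 0.5
for _ in range(3):
    belief = posterior(belief, 0.9, True)
```

This makes the concern above concrete: if the accuracy parameter is wrongly held high for a faulty sensor, each inaccurate reading still compounds, dragging the belief away from its original strength.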
Overview

To reason about complex computational systems, researchers are starting to borrow techniques from the field of uncertainty reasoning. In some cases, this is because the algorithms contain stochastic components. For example, Markov decision processes are now being used to model the trajectory of stochastic local search procedures. In other cases, uncertainty is used to help model and cope with the stochastic nature of inputs to (possibly deterministic) algorithms. For example, Monte Carlo sampling is used to deal with uncertainty in game-playing programs, whilst probability distributions are used to model variations in runtime performance. Uncertainty and randomness have also been found to be a useful addition to many deterministic algorithms. And a number of areas, such as planning, constraint satisfaction, and inductive logic programming, which have traditionally ignored uncertainty in their computations, are waking up to the possibility of incorporating uncertainty into their formalisms. The goal of this workshop is to encourage symbiosis between these different areas.

Topics

The aim is to bring together researchers from a number of different areas of AI, including (but not limited to) agents, constraint programming, decision theory, game playing, knowledge representation and reasoning, learning, planning, probabilistic reasoning, qualitative reasoning, reasoning under uncertainty, and search. Possible topics include (but are not limited to):

- Incorporating uncertainty into existing frameworks
- Modelling uncertainty in computation
- Monte Carlo sampling
- Probabilistic analysis and evaluation of algorithms
- Randomization of algorithms
- Stochastic vs. systematic algorithms
- Utility and computation

Handling Uncertainty with Active Logic

Introduction

Reasoning in a complex and dynamic world requires considerable flexibility on the part of the reasoner; flexibility to apply, in the right circumstances, the right tools (e.g. …).
A formalism that has been developed with this purpose in mind is that of active logic. Active logic combines inference rules with a constantly evolving measure of time (a 'now') that itself can be referenced in those rules. As an example, Now(6) [the time is now 6] is inferred from Now(5), since the fact of such inference implies that (at least one 'step' of) time has passed. Default conclusions can be characterized in terms of lookups to see whether one has information (directly) contrary to the default.
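The two mechanisms just described, an evolving Now and defaults-by-lookup, can be sketched in a few lines of Python. This is a toy illustration with assumed predicate names, not the Alma/Carne system:

```python
def step(kb):
    """One active-logic inference step over a KB of predicate tuples:
    advance Now(t) to Now(t+1) and apply a default that fires only if
    no directly contrary belief is present."""
    new_kb = set(kb)
    # Advance the clock: Now(t+1) follows from Now(t), because drawing
    # the inference itself takes one step of time.
    for fact in kb:
        if fact[0] == "Now":
            new_kb.discard(fact)
            new_kb.add(("Now", fact[1] + 1))
    # Default rule (hypothetical): birds fly unless we believe otherwise.
    for fact in kb:
        if fact[0] == "Bird" and ("NotFlies", fact[1]) not in kb:
            new_kb.add(("Flies", fact[1]))
    return new_kb

kb = step({("Now", 5), ("Bird", "tweety")})
# ("Now", 6) holds, and ("Flies", "tweety") is concluded by default.
kb2 = step({("Now", 5), ("Bird", "opus"), ("NotFlies", "opus")})
# The lookup finds contrary information, so the default is blocked.
```

The default is implemented exactly as a lookup: the rule consults the current KB for directly contrary information before concluding.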
A key research issue in agent and multiagent research is to develop negotiation procedures by which agents can efficiently and effectively negotiate solutions to their conflicts (Rosenschein & Zlotkin 1994). In this paper, we focus on the problem of agents vying for portions of a good. The negotiation process will produce a partition and allocation of the goods among the agents (Huhns & Malhotra 1999; Robertson & Webb 1998). We are interested both in protocols by which agents interact and in appropriate decision procedures to adopt given a particular protocol. In an envy-free division, each agent believes that it received, by its own estimate, at least as much as the share received by any other agent (Brams & Taylor 1996; Stewart 1999).
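The envy-free condition can be stated directly in code: an allocation is envy-free iff every agent values its own share at least as highly as it values anyone else's. The sketch below uses made-up valuations, purely for illustration; V[i][j] is agent i's estimate of the share given to agent j.

```python
def envy_free(valuations):
    """valuations[i][j]: agent i's value for the share allocated to agent j.
    Envy-free iff each agent weakly prefers its own share: V[i][i] >= V[i][j]."""
    n = len(valuations)
    return all(valuations[i][i] >= valuations[i][j]
               for i in range(n) for j in range(n))

# Two agents splitting a good they value differently:
envy_free([[0.6, 0.4],
           [0.5, 0.5]])     # True: neither agent envies the other
envy_free([[0.4, 0.6],
           [0.5, 0.5]])     # False: agent 0 prefers agent 1's share
```

Note that the check uses each agent's own estimates, so an allocation can be envy-free even when the agents disagree about what the shares are worth.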
Work on computational models of negotiation has focused almost exclusively on defining contracts consisting of one or a few independent issues (Faratin, Sierra et al. 2000; Ehtamo, Ketteunen et al. 2001). These models work as follows: each point on the X axis represents a candidate contract. For simplicity of exposition we show only one dimension in these figures, but in general there will be one dimension for every issue negotiated over. The Y axis represents the utility of each contract to each agent. Both agents have a reservation utility value: only contracts whose utility is above that agent's reservation value will be accepted.
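Under this picture, the contracts that can actually be agreed upon are those lying above both agents' reservation values. A minimal sketch, with made-up linear utility functions for a single issue (illustrative only):

```python
def acceptable(contracts, utilities, reservations):
    """Contracts acceptable to every agent: u_i(x) >= r_i for all agents i."""
    return [x for x in contracts
            if all(u(x) >= r for u, r in zip(utilities, reservations))]

# One issue, candidate contracts x in [0, 1]; a seller prefers high x,
# a buyer low x (hypothetical utilities and reservation values):
contracts = [i / 10 for i in range(11)]
seller = lambda x: x          # seller's utility rises with price
buyer = lambda x: 1.0 - x     # buyer's utility falls with price
zone = acceptable(contracts, [seller, buyer], [0.3, 0.3])
# zone of agreement: contracts with 0.3 <= x <= 0.7
```

With more issues, each contract x becomes a vector and the same acceptability test applies pointwise over the multi-dimensional contract space.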
Software agents will personalize smart devices to act autonomously on behalf of their human owners. In dynamic electronic-commerce environments, these agents could buy and sell goods and services from each other, without using a central market maker. From a system perspective, this creates a decentralized and continuously changing multi-agent system with a need for coordination of supply and demand. In this article we show how such a multi-agent system may be coordinated in a decentralized fashion while software agents bargain with each other under the constraints of incomplete information, non-equilibrium, and time pressure. The agents adapt to a changing environment with an evolutionary learning mechanism. It can be shown that the multi-agent system as a whole exhibits emergent coordination in the absence of a centralized coordination institution.
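The evolutionary learning mechanism mentioned above can be sketched as a simple generational loop. This is an illustrative stand-in with assumed fitness and mutation rules, not the authors' actual mechanism: each strategy is a numeric bargaining parameter, fitness is the profit earned in (here, simulated) bargains, and better-performing strategies are kept, with small mutations, for the next round.

```python
import random

def evolve(fitness, population, generations=30, seed=0):
    """Generational evolution: keep the better half (the best always
    survives), refill with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = [s + rng.gauss(0.0, 0.05) for s in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical bargaining profit: asking too little earns little, asking
# too much kills the deal, with an (assumed) optimum markup around 0.4.
profit = lambda markup: markup * max(0.0, 1.0 - markup / 0.8)
best = evolve(profit, population=[0.05, 0.1, 0.9, 0.95])
```

Because selection acts only on locally observed profit, no agent needs a global view; coordination of the population emerges from repeated local adaptation.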
This paper is primarily centered on the study of negotiation as a technique needed to support the cooperative activity within this kind of system. Our contribution is mainly the definition of formal models for negotiation and for the negotiating agent. These models make it possible to specify the relations between the concepts of plans, plan proposals, and resource allocations, on the one hand, and the concepts of roles, knowledge, beliefs, and capabilities, on the other hand.

Introduction

Multi-Agent Systems (MAS) are designed and implemented with a set of agents which interact according to diversified cooperation modes, in order to enrich the collective behaviors (Wooldridge & Jennings 1995; Smith & Davis 1980). Negotiation plays a fundamental role in cooperative activity by enabling the agents to solve conflicts which could otherwise obstruct such behaviors.
We are interested in how cooperation can arise in types of environments, such as open systems, where little or nothing is known about the other agents. We view the negotiation problem as a strategic and communication-rich process between different local preference/decision models. This contrasts with the classical cooperative game-theoretic (axiomatic) view of the negotiation process as a centralized and linear optimization problem. Although unconcerned with the process of negotiation, such axiomatic models (in particular those in the mechanism-design tradition) have assumed that optimality can be achieved through the design of normative rules of interaction that give agents an incentive to act rationally (von Neumann & Morgenstern 1944; Rosenschein & Zlotkin 1994; Binmore 1990; Shehory & Kraus 1995; Sandholm 1999). Likewise, in operations research the focus is the design of optimal solution algorithms based on mathematical programming techniques (Kraus 1997; Ehtamo, Ketteunen, & Hamalainen 2001; Heiskanen 1999; Teich et al. 1996).
In this paper we summarize satisficing decision theory, which provides a mechanism for determining decision options that are "good enough" as a tradeoff between a selectability function and a rejectability function, with an index of caution as a decision-control parameter. Single-agent satisficing is extended to multi-agent satisficing, by which group rationality can be represented; option vectors for the entire group are obtained as a result of this decision process. Multi-agent satisficing provides the stage upon which negotiation takes place.
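In the single-agent case, the satisficing tradeoff can be read directly as a set: the options u whose selectability p_S(u) is at least the caution-weighted rejectability b * p_R(u). A minimal sketch with made-up mass functions (illustrative only; the paper's multi-agent extension works with option vectors for the whole group):

```python
def satisficing_set(p_s, p_r, b=1.0):
    """Options u with p_S(u) >= b * p_R(u): 'good enough' at caution index b."""
    return {u for u in p_s if p_s[u] >= b * p_r[u]}

# Hypothetical selectability (benefit) and rejectability (cost) masses:
p_s = {"a": 0.5, "b": 0.3, "c": 0.2}
p_r = {"a": 0.2, "b": 0.3, "c": 0.5}
satisficing_set(p_s, p_r, b=1.0)   # {'a', 'b'}
# Raising the index of caution prunes the set:
satisficing_set(p_s, p_r, b=2.0)   # {'a'}
```

The caution index b thus acts as the decision-control parameter described above: larger b demands that benefit outweigh cost by a wider margin, shrinking the set of "good enough" options.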