If you are looking for an answer to the question "What is Artificial Intelligence?" and you have only a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
When medicinal chemistry was born a hundred years ago, drug design methodology was expected to rest on knowledge of the relations among chemistry, biology, and medicine. Originally, chemists believed that a drug molecule consists of a scaffold carrying several substituents. When a substituent was replaced by an alternative functional group (also called a substructure), the activity of the molecule changed accordingly. This is termed the structure–activity relationship (SAR), which can guide chemists in chemically modifying a molecule to improve its druggability. With the progress of computing technology, SAR evolved into QSAR (quantitative SAR). The QSAR method prevailed in the era when determinism dominated the scientific community.
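In its simplest form, a QSAR model is just a regression of an activity value against a molecular descriptor. The sketch below fits a one-descriptor linear model with least squares; the descriptor (logP) and all data points are made up for illustration, and real QSAR work uses curated descriptors and measured assays.

```python
# Illustrative one-descriptor "QSAR" as simple linear regression.
# All numbers below are hypothetical, not experimental data.
logP     = [1.0, 1.5, 2.0, 2.5, 3.0]   # assumed hydrophobicity descriptor
activity = [4.1, 4.6, 5.0, 5.6, 6.0]   # assumed log(1/C) activity values

n = len(logP)
mean_x = sum(logP) / n
mean_y = sum(activity) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(logP, activity)) / \
        sum((x - mean_x) ** 2 for x in logP)
intercept = mean_y - slope * mean_x

print(f"log(1/C) = {slope:.2f} * logP + {intercept:.2f}")
```

The fitted slope then quantifies how strongly the chosen substituent property drives activity, which is exactly the step from qualitative SAR intuition to a quantitative model.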
In his book Outliers, Malcolm Gladwell unveils the "10,000-Hour Rule," which postulates that the key to achieving world-class mastery of a skill is a matter of 10,000 hours of practice or learning. And while there may be disagreement on the actual number of hours (though I did hear my basketball coaches yell that at me about 10,000 times), let's say we can accept that mastering a skill requires roughly 10,000 hours of practice and learning: exploring, trying, failing, learning, exploring again, trying again, failing again, learning again. If that is truly the case, then dang, we humans are doomed. Think about 1,000,000 Tesla cars, each with its Full Self-Driving (FSD) autonomous driving module practicing and learning every hour it is driving. In a single hour of the day, Tesla's FSD module accumulates 100x the practice that Malcolm Gladwell postulates is necessary to master a task.
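The arithmetic behind that "100x" claim can be spelled out, assuming (as the text does) a fleet of 1,000,000 cars each logging one driving hour:

```python
# Fleet-learning arithmetic from the passage above; fleet size is the
# text's assumption, not a verified figure.
mastery_hours = 10_000       # Gladwell's "10,000-Hour Rule"
fleet_size = 1_000_000       # assumed number of FSD-equipped cars
hours_per_car = 1            # one hour of driving each

fleet_hours = fleet_size * hours_per_car   # practice hours logged in one hour
ratio = fleet_hours / mastery_hours

print(f"The fleet logs {ratio:.0f}x the 10,000-hour mastery budget per hour.")
```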
In this paper, we investigate inductive inference with system W from conditional belief bases with respect to syntax splitting. The concept of syntax splitting for inductive inference states that inferences about independent parts of the signature should not affect each other. This was captured in work by Kern-Isberner, Beierle, and Brewka in the form of postulates for inductive inference operators expressing syntax splitting as a combination of relevance and independence; it was also shown that c-inference fulfils syntax splitting, while system P inference and system Z both fail to satisfy it. System W is a recently introduced inference system for nonmonotonic reasoning that captures and properly extends system Z as well as c-inference. We show that system W fulfils the syntax splitting postulates for inductive inference operators by showing that it satisfies the required properties of relevance and independence. This makes system W another inference operator besides c-inference that fully complies with syntax splitting; in contrast to c-inference, however, it also extends rational closure.
A central result in the AGM framework for belief revision is the construction of revision functions in terms of total preorders on possible worlds. These preorders encode comparative plausibility: r ⪯ r' states that the world r is at least as plausible as r'. Indifference in the plausibility of two worlds r, r', denoted r ∼ r', is defined as the absence of a strict preference between r and r'. Herein we take a closer look at plausibility indifference. We contend that the transitivity of indifference assumed in the AGM framework is not always a desirable property for comparative plausibility. Our argument originates from similar concerns in preference modelling, where a structure weaker than a total preorder, called a semiorder, is widely considered to be a more adequate model of preference.
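The failure of transitive indifference is easy to see in a threshold-based semiorder, the classic "just-noticeable-difference" construction. The sketch below is my own illustration, not the paper's formalism: worlds carry hypothetical plausibility scores, and two worlds are indifferent when their scores differ by at most a threshold.

```python
# Threshold-based semiorder on assumed "plausibility scores", showing
# that indifference need not be transitive. Illustrative sketch only.
EPS = 1.0  # discrimination threshold (assumed)

def strictly_more_plausible(x, y, score):
    # x is strictly more plausible than y when its score clearly exceeds y's
    return score[x] > score[y] + EPS

def indifferent(x, y, score):
    # indifference: neither world is strictly more plausible than the other
    return (not strictly_more_plausible(x, y, score)
            and not strictly_more_plausible(y, x, score))

score = {"r1": 0.0, "r2": 0.6, "r3": 1.2}  # hypothetical worlds

print(indifferent("r1", "r2", score))  # True:  |0.0 - 0.6| <= EPS
print(indifferent("r2", "r3", score))  # True:  |0.6 - 1.2| <= EPS
print(indifferent("r1", "r3", score))  # False: |0.0 - 1.2| >  EPS
```

Here r1 ∼ r2 and r2 ∼ r3, yet r3 is strictly more plausible than r1, so ∼ is not transitive even though the strict relation is perfectly well-behaved.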
In this paper, we investigate the revision of argumentation systems à la Dung. We focus on revision as minimal change of the arguments' statuses. Contrary to most previous work on the topic, the addition of new arguments is not allowed in the revision process, so the revised system has to be obtained by modifying the attack relation only. We introduce a language of revision formulae that is expressive enough to represent complex conditions on the acceptability of arguments in the revised system. We show how the AGM belief revision postulates can be translated to the case of argumentation systems. We provide a corresponding representation theorem in terms of minimal change of the arguments' statuses. Several distance-based revision operators satisfying the postulates are also pointed out, along with some methods to build revised argumentation systems. We also discuss some computational aspects of those methods.
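To fix terminology, here is a minimal Dung argumentation framework and the standard fixpoint computation of its grounded extension; the three-argument framework is an assumed example of mine, not one from the paper.

```python
# A tiny Dung AF: arguments plus an attack relation, with the grounded
# extension computed by iterating the characteristic function.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c

def defended(x, S):
    # x is defended by S if every attacker of x is itself attacked by S
    attackers = [y for (y, z) in attacks if z == x]
    return all(any((d, y) in attacks for d in S) for y in attackers)

def grounded_extension():
    S = set()
    while True:  # least fixpoint of the (monotone) characteristic function
        new = {x for x in args if defended(x, S)}
        if new == S:
            return S
        S = new

print(sorted(grounded_extension()))  # ['a', 'c']
```

Revising such a system, in the paper's sense, means changing `attacks` (never `args`) so that a prescribed status for some arguments is reached with minimal disturbance to the rest.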
In many scenarios where the integration of information into a knowledge base (KB) leads to inconsistencies, there is a need to change the KB minimally. In belief revision, relevance postulates meet the minimality requirement by restricting the elimination of KB elements to those that are relevant for the incoming information. This paper focuses on two minimality postulates in an ontology revision scenario in which conflicts are caused by ambiguous use of symbols: a relevance postulate and a generalized inclusion postulate which limits the creativity of the operators. Both postulates exploit the (satisfiably) equivalent representation of a first-order logic KB by its prime implicates, which, intuitively, represent the most atomic logical components of the KB. The paper shows that reinterpretation operators (which are ontology revision operators) fulfill both postulates.
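The notion of a prime implicate can be made concrete in the propositional case (the paper's setting is first-order, where dedicated algorithms are needed; this brute-force toy example is mine): a prime implicate is a clause entailed by the KB such that no proper subclause is entailed.

```python
from itertools import combinations, product

# Brute-force prime implicates of a tiny propositional KB. Illustrative only.
VARS = ["p", "q"]

def entails(kb, clause):
    """KB entails clause iff every model of the KB satisfies some literal."""
    for vals in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, vals))
        if kb(m) and not any(m[v] == sign for v, sign in clause):
            return False
    return True

def prime_implicates(kb):
    literals = [(v, s) for v in VARS for s in (True, False)]
    primes = []
    for size in range(1, len(VARS) + 1):           # smaller clauses first
        for combo in combinations(literals, size):
            clause = set(combo)
            if len({v for v, _ in clause}) < size:
                continue                           # skip p-and-not-p clauses
            if entails(kb, clause) and not any(p < clause for p in primes):
                primes.append(clause)              # no entailed proper subclause
    return primes

# KB: (p or q) and (not p or q); its single prime implicate is q
kb = lambda m: (m["p"] or m["q"]) and (not m["p"] or m["q"])
print(prime_implicates(kb))  # [{('q', True)}]
```

Intuitively, the two clauses of the KB both collapse into the single atomic component q, and it is these atomic components that the paper's postulates are formulated over.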
Belief merging is a central operation within the field of belief change and addresses the problem of combining multiple, possibly mutually inconsistent knowledge bases into a single, consistent one. A current research trend in belief change is concerned with tailored representation theorems for fragments of logic, in particular Horn logic. The goal here is to guarantee that the result of the change operations stays within the fragment under consideration. While several such results have been obtained for Horn revision and Horn contraction, merging of Horn theories has been neglected so far. In this paper, we provide a novel representation theorem for Horn merging by strengthening the standard merging postulates. Moreover, we present a concrete Horn merging operator satisfying all postulates.
In this paper we combine two of the most important areas of knowledge representation, namely belief revision and (abstract) argumentation. More precisely, we show how AGM-style expansion and revision operators can be defined for Dung's abstract argumentation frameworks (AFs). Our approach is based on a reformulation of the original AGM postulates for revision in terms of monotonic consequence relations for AFs. The latter are defined via a new family of logics, called Dung logics, which satisfy the important property that ordinary equivalence in these logics coincides with strong equivalence for the respective argumentation semantics. Based on these logics we define expansion as usual via intersection of models. We show the existence of such operators. This is far from trivial and requires studying realizability in the context of Dung logics. We then study revision operators. We show why standard approaches based on a distance measure on models do not work for AFs and present an operator satisfying all postulates for a specific Dung logic.
Logical argumentation is a well-known approach to modelling nonmonotonic reasoning with conflicting information. In this paper we provide a proof-theoretic study of properties of logical argumentation frameworks. Given some desiderata in terms of rationality postulates, we consider the conditions that an argumentation framework should fulfill for the desiderata to hold. The rationale behind this approach is to assist designers in "plugging in" pre-defined formalisms according to actual needs. This work extends related research on the subject in several ways: more postulates are characterized, a more abstract notion of arguments is considered, and it is shown how the nature of the attack rules (subset attacks versus direct attacks) affects the properties of the whole setting.
In this article, we consider iteration principles for contraction, with the goal of identifying properties for contractions that respect conditional beliefs. To this end, we investigate and evaluate four groups of iteration principles for contraction which consider the dynamics of conditional beliefs. For all these principles, we provide semantic characterization theorems and formulations by postulates which highlight how the change of beliefs and of conditional beliefs is constrained, whenever that is possible. The first group is similar to the syntactic Darwiche-Pearl postulates. As a second group, we consider semantic postulates for iteration of contraction by Chopra, Ghose, Meyer and Wong, and by Konieczny and Pino Pérez, respectively, and we provide novel syntactic counterparts. Third, we propose a contraction analogue of the independence condition by Jin and Thielscher. For the fourth group, we consider natural and moderate contraction by Nayak. Methodically, we make use of conditionals for contraction, so-called contractionals, and furthermore we propose and employ the novel notion of α-equivalence for formulating some of the new postulates.