
Two Cases of Deduction with Non-referring Descriptions

Raclavský, Jiří

arXiv.org Artificial Intelligence

Formal reasoning with non-denoting terms, especially non-referring descriptions such as "the King of France", is still an under-investigated area; a recent exception is a series of papers by, e.g., Indrzejczak, Zawidzki and K\"urbis. The present paper offers an alternative to their approach: instead of free logic and sequent calculus, it is framed in partial type theory with natural deduction in sequent style. Using a Montague- and Tich\'y-style formalization of natural language, the paper successfully handles deduction with intensional transitives whose complements are non-referring descriptions, and derives Strawsonian rules for the existential presuppositions of sentences with such descriptions.


TaBIIC: Taxonomy Building through Iterative and Interactive Clustering

d'Aquin, Mathieu

arXiv.org Artificial Intelligence

Building taxonomies is often a significant part of building an ontology, and many attempts have been made to automate the creation of such taxonomies from relevant data. The idea in such approaches is either that relevant definitions of the intension of concepts can be extracted as patterns in the data (e.g. in formal concept analysis) or that their extension can be built by grouping data objects based on similarity (clustering). In both cases, the process leads to an automatically constructed structure, which can either be too coarse and lacking in definition, or too fine-grained and detailed, and therefore requires refinement into the desired taxonomy. In this paper, we explore a method that takes inspiration from both approaches in an iterative and interactive process, so that refinement and definition of the concepts in the taxonomy occur at the time of identifying those concepts in the data. We show that this method is applicable to a variety of data sources and leads to taxonomies that can be more directly integrated into ontologies.


Computable Artificial General Intelligence

Bennett, Michael Timothy

arXiv.org Artificial Intelligence

Artificial general intelligence (AGI) may herald our extinction, according to AI safety research. Yet claims regarding AGI must rely upon mathematical formalisms -- theoretical agents we may analyse or attempt to build. AIXI appears to be the only such formalism supported by proof that its behaviour is optimal, a consequence of its use of compression as a proxy for intelligence. Unfortunately, AIXI is incomputable, and claims regarding its behaviour are highly subjective. We argue that this is because AIXI formalises cognition as taking place in isolation from the environment in which goals are pursued (Cartesian dualism). We propose an alternative, supported by proof and experiment, which overcomes these problems. Integrating research from cognitive science with AI, we formalise an enactive model of learning and reasoning to address the problem of subjectivity. This allows us to formulate a different proxy for intelligence, called weakness, which addresses the problem of incomputability. We prove optimal behaviour is attained when weakness is maximised. This proof is supplemented by experimental results comparing weakness and description length (the closest analogue to compression possible without reintroducing subjectivity). Weakness outperforms description length, suggesting it is a better proxy. Furthermore, we show that, if cognition is enactive, then minimisation of description length is neither necessary nor sufficient to attain optimal performance, undermining the notion that compression is closely related to intelligence. However, there remain open questions regarding the implementation of scalable AGI. In the short term, these results may be best utilised to improve the performance of existing systems. For example, our results explain why DeepMind's Apperception Engine is able to generalise effectively, and how to replicate that performance by maximising weakness.


Quantification and aggregation over concepts of the ontology

Carbonnelle, Pierre, Van der Hallen, Matthias, Denecker, Marc

arXiv.org Artificial Intelligence

The first phase of developing an intelligent system is the selection of an ontology of symbols representing relevant concepts of the application domain. These symbols are then used to represent the knowledge of the domain. This representation should be \emph{elaboration tolerant}, in the sense that it should be convenient to modify it to take into account new knowledge or requirements. Unfortunately, current formalisms require a significant rewrite of that representation when the new knowledge is about the \emph{concepts} themselves: the developer needs to "\emph{reify}" them. This happens, for example, when the new knowledge is about the number of concepts that satisfy some conditions. The value of expressing knowledge about concepts, or "intensions", has been well-established in \emph{modal logic}. However, the formalism of modal logic cannot represent the quantifications and aggregates over concepts that some applications need. To address this problem, we developed an extension of first order logic that allows referring to the \emph{intension} of a symbol, i.e., to the concept it represents. We implemented this extension in IDP-Z3, a reasoning engine for FO($\cdot$) (aka FO-dot), a logic-based knowledge representation language. This extension makes the formalism more elaboration tolerant, but also introduces the possibility of syntactically incorrect formulas. Hence, we developed a guarding mechanism to make formulas syntactically correct, and a method to verify correctness. The complexity of this method is linear in the length of the formula. This paper describes these extensions, how they relate to intensions in modal logic and other formalisms, and how they allowed us to represent the knowledge of four different problem domains in an elaboration tolerant way.


The Sigma-Max System Induced from Randomness and Fuzziness

Mei, Wei, Li, Ming, Cheng, Yuanzeng, Liu, Limin

arXiv.org Artificial Intelligence

This paper induces probability theory (the sigma system) and possibility theory (the max system) from randomness and fuzziness respectively, through which the premature theory of possibility is expected to be put on a sound footing. This objective is achieved by addressing three open key issues: a) the lack of clear mathematical definitions of randomness and fuzziness; b) the lack of an intuitive mathematical definition of possibility; c) the lack of an abstraction procedure deriving the axiomatic definitions of probability/possibility from their intuitive definitions. The last issue in particular involves the question of why the key axiom of "maxitivity" is adopted for the possibility measure. By taking advantage of the properties of the well-defined randomness and fuzziness, we derive the important conclusion that "max" is the only, albeit non-strict, disjunctive operator applicable across the fuzzy event space, and is an exact operator for fuzzy feature extraction, which assures that max inference is an exact mechanism. It is fair to claim that the long-standing lack of consensus on the foundation of possibility theory is thus resolved, which should facilitate wider adoption of possibility theory in practice and promote the cross-prosperity of the two uncertainty theories of probability and possibility. Randomness and fuzziness are well recognized as two kinds of fundamental uncertainty in this world. How to correctly comprehend these uncertainties and effectively handle them in practice remains an open topic. For modeling random uncertainty, probability theory and the derivative subjects of statistics and stochastic processes are no doubt the classic tool set. Probability theory, which satisfies the key axiom of "additivity" [18,23], has grown mature; nearly the whole edifice of the information sciences is built upon it, and its applications can be found across a great diversity of communities [22, 29,41,42,52,53].
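The contrast between the "additivity" axiom of probability and the "maxitivity" axiom of possibility mentioned in this abstract can be illustrated with a small sketch. All numeric degrees below are hypothetical, chosen only to show the two axioms side by side; they are not taken from the paper.

```python
# Hypothetical possibility degrees of elementary events.
pos = {"cold": 0.3, "mild": 1.0, "hot": 0.6}

def possibility(event):
    """Pos(A) = max of the elementary possibilities in A (maxitivity)."""
    return max(pos[e] for e in event)

# Maxitivity: Pos(A ∪ B) = max(Pos(A), Pos(B)), even when A and B overlap.
a = {"cold", "mild"}
b = {"mild", "hot"}
assert possibility(a | b) == max(possibility(a), possibility(b))

# Probability of the same union needs inclusion-exclusion, not a plain max:
prob = {"cold": 0.2, "mild": 0.5, "hot": 0.3}

def probability(event):
    """P(A) = sum of elementary probabilities in A (additivity)."""
    return sum(prob[e] for e in event)

assert probability(a | b) == probability(a) + probability(b) - probability(a & b)
```

The key point the sketch makes is that "max" absorbs overlapping events without double counting, which is why it can serve as the disjunctive operator over the fuzzy event space.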


Machine Learning Won't Solve Natural Language Understanding

#artificialintelligence

In the early 1990s a statistical revolution took artificial intelligence (AI) by storm – a revolution that culminated in the 2000s in the triumphant return of neural networks with their modern-day deep learning (DL) reincarnation. This empiricist turn engulfed all subfields of AI, although the most controversial employment of this technology has been in natural language processing (NLP) – a subfield of AI that has proven to be a lot more difficult than any of the AI pioneers had imagined. The widespread use of data-driven empirical methods in NLP has the following genesis: the failure of symbolic and logical methods to produce scalable NLP systems after three decades of supremacy led to the rise of what are called empirical methods in NLP (EMNLP) – a phrase that I use here to collectively refer to data-driven, corpus-based, statistical and machine learning (ML) methods. The motivation behind this shift to empiricism was quite simple: until we gain some insight into how language works and how language is related to the knowledge of the world we talk about in ordinary spoken language, empirical and data-driven methods might be useful in building some practical text processing applications. As Kenneth Church, one of the pioneers of EMNLP, explains, the advocates of the data-driven and statistical approaches to NLP were interested in solving simple language tasks – the motivation was never to suggest that this is how language works, but that "it is better to do something simple than nothing at all".


Rational coordination with no communication or conventions

Goranko, Valentin, Kuusisto, Antti, Rönnholm, Raine

arXiv.org Artificial Intelligence

Coordination games ([16]) are games in strategic form with several pure-strategy Nash equilibria with the same or comparable payoffs for every player. In these games, all players have a mutual interest in selecting one of these equilibria. In pure coordination games ([16]), aka games of common payoffs ([17]), all players in the game receive the same payoffs, and thus the players have fully aligned preferences to coordinate with each other in order to reach the best possible outcome for everyone. In this paper we study one-step pure win-lose coordination games (WLC games), in which all payoffs are either 1 (i.e., win) or 0 (i.e., lose). Clearly, if players can communicate when playing a pure coordination game with at least one winning outcome, then they can simply agree on a winning strategy profile, so the game is trivialised.
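The difficulty the abstract alludes to can be made concrete with a toy sketch. The game below is hypothetical (it is not an example from the paper): two players pick an action simultaneously, the shared payoff is 1 exactly when the profile is a winning one, and the existence of two symmetric winning equilibria is what makes coordination without communication or conventions non-trivial.

```python
from itertools import product

# Hypothetical one-step WLC game: both players share this action set.
actions = ["a", "b", "c"]
# Winning profiles: the players win iff they coordinate on "a" or on "b".
winning = {("a", "a"), ("b", "b")}

def payoff(profile):
    """Common payoff: 1 (win) if the joint profile is winning, else 0 (lose)."""
    return 1 if profile in winning else 0

# Both winning profiles are pure Nash equilibria with the same payoff 1,
# and nothing in the game itself singles one out over the other.
equilibria = [p for p in product(actions, repeat=2) if payoff(p) == 1]
assert equilibria == [("a", "a"), ("b", "b")]
```

With communication the players would simply agree on, say, ("a", "a"); without it, neither equilibrium is distinguished, which is the setting the paper studies.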


Proceedings of the 2018 XCSP3 Competition

Lecoutre, Christophe, Roussel, Olivier

arXiv.org Artificial Intelligence

This document represents the proceedings of the 2018 XCSP3 Competition. The results of this competition of constraint solvers were presented at CP'18, the 24th International Conference on Principles and Practice of Constraint Programming, held in Lille, France, from 27th August to 31st August, 2018.


Expressive Description Logic with Instantiation Metamodelling

Kubincová, Petra (Comenius University in Bratislava) | Kľuka, Ján (Comenius University in Bratislava) | Homola, Martin (Comenius University in Bratislava)

AAAI Conferences

We investigate a higher-order extension of the description logic (DL) SROIQ that provides a fixedly interpreted role semantically coupled with instantiation. It is useful for expressing interesting meta-level constraints on the modelled ontology. We provide a model-theoretic characterization of the semantics, and we show decidability by means of a reduction.