arXiv.org Artificial Intelligence
Le terme et le concept : fondements d'une ontoterminologie (The Term and the Concept: Foundations of an Ontoterminology)
Most definitions of ontology, viewed as a "specification of a conceptualization", agree that while an ontology can take different forms, it necessarily includes a vocabulary of terms and some specification of their meaning in relation to the domain's conceptualization. And since domain knowledge is mainly conveyed through scientific and technical texts, we can hope to extract useful information from them for building ontologies. But is it as simple as that? In this article we shall see that the lexical structure, i.e. the network of words linked by linguistic relationships, does not necessarily match the domain conceptualization. We have to bear in mind that writing documents is the concern of textual linguistics, one of whose principles is the incompleteness of text, whereas building an ontology, viewed as task-independent knowledge, is concerned with conceptualization based on formal rather than natural languages. Nevertheless, the famous Sapir-Whorf hypothesis, concerning the interdependence of thought and language, is also applicable to formal languages. This means that the way an ontology is built and a concept is defined depends directly on the formal language which is used, and the results will not be the same. The introduction of the notion of ontoterminology makes it possible to take epistemological principles into account in formal ontology building.
Toward a statistical mechanics of four letter words
Stephens, Greg J., Bialek, William
Princeton Center for Theoretical Physics, Princeton University, Princeton, New Jersey 08544 USA
We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial (and arbitrary), we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of four letter words, capturing 92% of the multi-information among letters and even 'discovering' real words that were not represented in the data from which the pairwise correlations were estimated. The maximum entropy model defines an energy landscape on the space of possible words, and local minima in this landscape account for nearly two-thirds of words used in written English.
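The abstract's central object, a pairwise maximum entropy model over letters, is easy to make concrete. The sketch below is our illustration, not the authors' code: it defines an energy E(w) = -sum_i h_i(w_i) - sum_{i<j} J_ij(w_i, w_j) over four-letter strings and tests whether a word is a local minimum under single-letter substitutions. In the paper the fields h and couplings J are fit to the letter statistics of written English; here they are random placeholders, just to show the machinery.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
L, A = 4, 26                              # word length, alphabet size
h = rng.normal(size=(L, A))               # single-site fields (placeholder values)
J = rng.normal(size=(L, L, A, A)) * 0.1   # pairwise couplings (placeholder values)

def energy(word):
    """E(w) = -sum of fields - sum of pairwise couplings over letter positions."""
    idx = [ord(c) - ord('a') for c in word]
    e = -sum(h[i, idx[i]] for i in range(L))
    e -= sum(J[i, j, idx[i], idx[j]] for i, j in combinations(range(L), 2))
    return e

def is_local_minimum(word):
    """True if no single-letter substitution lowers the energy."""
    e0 = energy(word)
    for i in range(L):
        for c in 'abcdefghijklmnopqrstuvwxyz':
            if c != word[i] and energy(word[:i] + c + word[i+1:]) < e0:
                return False
    return True

print(energy('word'), is_local_minimum('word'))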
TRUST-TECH based Methods for Optimization and Learning
Many problems that arise in the machine learning domain deal with nonlinearity and quite often demand that users obtain global optimal solutions rather than local optimal ones. Optimization problems are inherent in machine learning algorithms, and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the quality of the solution obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are alternated in the solution space to achieve improvements in the quality of the solutions. Our methods were tested on both synthetic and real datasets, and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows practitioners the flexibility to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.
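The two-stage alternation the abstract describes can be caricatured in a few lines. The sketch below is a deliberately simplified illustration of ours, not TRUST-TECH itself: a local stage (gradient descent to a minimum) alternates with a neighborhood-search stage that probes outward along several directions and descends again, collecting points that land in different basins. Real TRUST-TECH characterizes stability regions and exit points; this sketch only mimics the local/neighborhood rhythm on a toy function f(x) = cos(3x) + 0.1 x^2.

import numpy as np

def local_stage(grad, x, lr=0.01, steps=2000):
    """Plain gradient descent to a nearby local minimum."""
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def neighborhood_stage(grad, x_min, radius=2.0, n_dirs=12, seed=1):
    """Probe outward from a known minimum and descend; keep new basins found."""
    rng = np.random.default_rng(seed)
    new_minima = []
    for _ in range(n_dirs):
        d = rng.normal(size=x_min.shape)
        d /= np.linalg.norm(d)
        y = local_stage(grad, x_min + radius * d)
        if np.linalg.norm(y - x_min) > 1e-3:   # landed in a different basin
            new_minima.append(y)
    return new_minima

# Gradient of the toy multi-well function f(x) = cos(3x) + 0.1 x^2.
grad = lambda x: np.array([-3.0 * np.sin(3.0 * x[0]) + 0.2 * x[0]])
x0 = local_stage(grad, np.array([0.5]))
tier1 = {round(float(y[0]), 2) for y in neighborhood_stage(grad, x0)}
print(float(x0[0]), tier1)   # initial minimum, plus neighboring minima found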
Tests of Machine Intelligence
Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed, few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed.
Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations
Chinea Manrique de Lara, Alejandro, Moreno Arostegui, Juan Manuel, Madrenas, Jordi, Cabestany, Joan
In this paper we shall review the common problems associated with Piecewise Linear Separation incremental algorithms. This kind of neural model yields poor performance when dealing with some classification problems, due to the evolving schemes used to construct the resulting networks. To avoid this undesirable behavior, we shall propose a modification criterion. It is based upon the definition of a function which provides information about the quality of the network growth process during the learning phase. This function is evaluated periodically as the network structure evolves and, as we shall show through exhaustive benchmarks, permits a considerable improvement in the performance (measured in terms of network complexity and generalization capabilities) offered by the networks generated by these incremental models.
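The abstract does not give the criterion's actual form, so the following is a purely hypothetical sketch of the general idea: periodically score how much quality each increment of network growth buys, and veto further growth when added complexity stops paying off. The function names, the accuracy-per-unit score, and the threshold are all our inventions for illustration.

def growth_quality(acc_history, units_history):
    """Hypothetical score: validation accuracy gained per unit added
    since the last periodic evaluation."""
    d_acc = acc_history[-1] - acc_history[-2]
    d_units = max(units_history[-1] - units_history[-2], 1)
    return d_acc / d_units

def should_grow(acc_history, units_history, threshold=0.005):
    """Allow the incremental model to add units only while growth pays off."""
    return growth_quality(acc_history, units_history) > threshold

print(should_grow([0.80, 0.84], [10, 12]))  # 0.02 accuracy/unit -> True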
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
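The measure the paper arrives at can be stated compactly (our transcription of the paper's definition): the universal intelligence Upsilon of an agent pi is its expected performance across the space E of computable, reward-summable environments mu, with each environment weighted by its simplicity through the Kolmogorov complexity K(mu):

% Universal intelligence of agent \pi: simplicity-weighted expected
% performance over all computable environments \mu in the space E.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Here V^pi_mu denotes the expected total reward the agent pi accumulates when interacting with environment mu.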
Ontology and Formal Semantics - Integration Overdue
In this note we suggest that difficulties encountered in natural language semantics are, for the most part, due to the use of mere symbol manipulation systems that are devoid of any content. In such systems, where there is hardly any link with our common-sense view of the world, it is quite difficult to envision how one can formally account for the considerable amount of content that is often implicit, but almost never explicitly stated, in our everyday discourse. The solution, in our opinion, is a compositional semantics grounded in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In the compositional logic we envision there are ontological (or first-intension) concepts and logical (or second-intension) concepts, where the ontological concepts include not only Davidsonian events but other abstract objects as well (e.g., states, processes, properties, activities, attributes, etc.). It will be demonstrated here that in such a framework a number of challenges in the semantics of natural language (e.g., metonymy, intensionality, metaphor, etc.) can be properly and uniformly addressed.
On Using Unsatisfiability for Solving Maximum Satisfiability
Marques-Silva, Joao, Planes, Jordi
Maximum Satisfiability (MaxSAT) is a well-known optimization problem with several practical applications. The most widely known MaxSAT algorithms are ineffective at solving hard problem instances from practical application domains. Recent work proposed using efficient Boolean Satisfiability (SAT) solvers for solving the MaxSAT problem, based on identifying and eliminating unsatisfiable subformulas. However, these algorithms do not scale in practice. This paper analyzes existing MaxSAT algorithms based on unsatisfiable subformula identification. Moreover, the paper proposes a number of key optimizations to these MaxSAT algorithms and a new alternative algorithm. The proposed optimizations and the new algorithm provide significant performance improvements on MaxSAT instances from practical applications. Moreover, the efficiency of the new generation of unsatisfiability-based MaxSAT solvers becomes effectively indexed to the ability of modern SAT solvers to prove unsatisfiability and to identify unsatisfiable subformulas.
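The family of algorithms the abstract analyzes builds on the classic Fu and Malik loop: call a SAT solver, and while the formula is unsatisfiable, extract an unsatisfiable core, relax its soft clauses with fresh variables constrained to at-most-one, and repeat; the number of iterations is the optimum cost. The toy sketch below is our construction, not the paper's solver: a brute-force enumerator and deletion-based core extraction stand in for a modern SAT solver, so it only runs on tiny instances.

from itertools import combinations, product

def solve(clauses, n_vars):
    """Brute-force SAT: a satisfying assignment, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def unsat_core(soft, hard, n_vars):
    """Deletion-based unsatisfiable core over the soft clauses."""
    kept = list(soft)
    for c in list(kept):
        rest = [x for x in kept if x is not c]
        if solve(rest + hard, n_vars) is None:   # still UNSAT without c: drop it
            kept = rest
    return kept

def fu_malik(soft, hard, n_vars):
    """Relax one core per iteration; iterations = minimum falsified clauses."""
    cost = 0
    while solve(soft + hard, n_vars) is None:
        relax = []
        for c in unsat_core(soft, hard, n_vars):
            n_vars += 1                          # fresh relaxation variable
            relax.append(n_vars)
            soft[soft.index(c)] = c + [n_vars]
        # at-most-one relaxation variable true (pairwise encoding), as hard clauses
        hard = hard + [[-a, -b] for a, b in combinations(relax, 2)]
        cost += 1
    return cost

# Three soft clauses over x1, x2 that cannot all hold: optimum cost is 1.
print(fu_malik([[1], [2], [-1, -2]], [], 2))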
Cumulative and Averaging Fission of Beliefs
Belief fusion is the principle of combining separate beliefs or bodies of evidence originating from different sources. Depending on the situation to be modelled, different belief fusion methods can be applied. Cumulative and averaging belief fusion is defined for fusing opinions in subjective logic, and for fusing belief functions in general. The principle of fission is the opposite of fusion, namely to eliminate the contribution of a specific belief from an already fused belief, with the purpose of deriving the remaining belief. This paper describes fission of cumulative belief as well as fission of averaging belief in subjective logic. These operators can, for example, be applied to belief revision in Bayesian belief networks, where the belief contribution of a given evidence source can be determined as a function of a given fused belief and its other contributing beliefs.
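For concreteness, the cumulative fusion operator for two binomial opinions omega_A = (b_A, d_A, u_A, a_A) and omega_B, as standardly defined in subjective logic (our transcription for reference, not quoted from this paper), is given below; cumulative fission, as described in the paper, is the inverse direction, recovering one contributing opinion from the fused opinion and the other contribution.

% Cumulative fusion of binomial opinions (case u_A + u_B - u_A u_B \neq 0);
% disbelief follows from d = 1 - b - u.
b^{A \diamond B} = \frac{b_A u_B + b_B u_A}{u_A + u_B - u_A u_B},
\qquad
u^{A \diamond B} = \frac{u_A u_B}{u_A + u_B - u_A u_B}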
Dimensionality Reduction and Reconstruction using Mirroring Neural Networks and Object Recognition based on Reduced Dimension Characteristic Vector
Deepthi, Dasika Ratna, Kuchibhotla, Sujeet, Eswaran, K.
In this paper, we present a Mirroring Neural Network architecture to perform non-linear dimensionality reduction and object recognition using a reduced low-dimensional characteristic vector. In addition to dimensionality reduction, the network also reconstructs (mirrors) the original high-dimensional input vector from the reduced low-dimensional data. The Mirroring Neural Network architecture has a larger number of processing elements (adalines) in its outer layers and the smallest number of elements in the central layer, forming a converging-diverging shape in its configuration. Since this network is able to reconstruct the original image from the output of the innermost layer (which contains all the information about the input pattern), these outputs can be used as an object signature to classify patterns. The network is trained to minimize the discrepancy between actual output and input by back-propagating the mean squared error from the output layer to the input layer. After successful training, the network can reduce the dimension of input vectors and mirror the patterns fed to it. The Mirroring Neural Network architecture gave very good results on various test patterns.
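The converging-diverging shape is the familiar hourglass of an autoencoder-style network. The minimal numpy sketch below is ours, not the authors' implementation: layer sizes, data, and training settings are placeholders chosen only to show an MLP trained by back-propagated mean squared error to reproduce its input, with the narrow central layer serving as the reduced-dimension signature.

import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 16, 4, 16, 64]     # converging to a 4-unit central layer, then diverging
Ws = [rng.normal(scale=0.2, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return the activations of every layer, input included."""
    acts = [x]
    for W in Ws:
        acts.append(sigmoid(acts[-1] @ W))
    return acts

def train_step(x, lr=0.5):
    """One step of MSE backpropagation: target output equals the input."""
    acts = forward(x)
    delta = (acts[-1] - x) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(Ws))):
        grad = np.outer(acts[i], delta)
        delta = (Ws[i] @ delta) * acts[i] * (1 - acts[i])
        Ws[i] -= lr * grad

X = rng.uniform(size=(200, 64))          # placeholder patterns in [0, 1]
for _ in range(50):
    for x in X:
        train_step(x)

print(forward(X[0])[2])   # 4-D central-layer code, usable as an object signature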