AceWiki: A Natural and Expressive Semantic Wiki
We present AceWiki, a prototype of a new kind of semantic wiki using the controlled natural language Attempto Controlled English (ACE) for representing its content. ACE is a subset of English with a restricted grammar and a formal semantics. The use of ACE has two important advantages over existing semantic wikis. First, we can improve the usability and achieve a shallow learning curve. Second, ACE is more expressive than the formal languages of existing semantic wikis. Our evaluation shows that people who are not familiar with the formal foundations of the Semantic Web are able to deal with AceWiki after a very short learning phase and without the help of an expert.
AceWiki: Collaborative Ontology Management in Controlled Natural Language
AceWiki is a prototype that shows how a semantic wiki using controlled natural language - Attempto Controlled English (ACE) in our case - can make ontology management easy for everybody. Sentences in ACE can automatically be translated into first-order logic, OWL, or SWRL. AceWiki integrates the OWL reasoner Pellet and ensures that the ontology is always consistent. Previous results have shown that people with no background in logic are able to add formal knowledge to AceWiki without being instructed or trained in advance.
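The kind of translation AceWiki relies on can be hinted at with a toy pattern match. Real ACE parsing (done by the Attempto tools) is far richer; this hypothetical helper handles only one sentence shape and its output format merely imitates OWL functional syntax:

```python
import re

def ace_to_axiom(sentence):
    """Toy sketch: map one ACE-style sentence pattern to an OWL-like
    subclass axiom string. Handles only 'Every <A> is a <B>.'"""
    m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
    if not m:
        return None  # sentence shape not covered by this sketch
    return f"SubClassOf({m.group(1)} {m.group(2)})"
```

For example, `ace_to_axiom("Every city is a place.")` yields `SubClassOf(city place)`, while sentences outside the covered pattern return `None`.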
An Image-Based Sensor System for Autonomous Rendez-Vous with Uncooperative Satellites
Miravet, Carlos, Pascual, Luis, Krouch, Eloise, del Cura, Juan Manuel
This paper describes the image processing algorithms developed by SENER, Ingenieria y Sistemas to address the problem of image-based, autonomous rendez-vous (RV) with an orbiting satellite. The methods developed have a direct application in the OLEV (Orbital Life Extension Vehicle) mission. OLEV is a commercial mission under development by a consortium formed by the Swedish Space Corporation, Kayser-Threde and SENER, aimed at extending the operational life of geostationary telecommunication satellites by supplying them with guidance, navigation and control services. OLEV is planned to use a set of cameras to determine the angular position of, and distance to, the client satellite during all phases of rendez-vous and docking, thus enabling operation with satellites not equipped with any specific navigational aid to support the approach. The ability to operate with unequipped client satellites significantly expands the range of applicability of the system under development, compared to other competing video technologies already tested in previous space missions, such as the ones described below.
A Distributed Process Infrastructure for a Distributed Data Structure
The Resource Description Framework (RDF) is continuing to grow outside the bounds of its initial function as a metadata framework and into the domain of general-purpose data modeling. This expansion has been facilitated by the continued increase in the capacity and speed of the RDF database repositories known as triple-stores. High-end RDF triple-stores can hold and process on the order of 10 billion triples. In an effort to provide a seamless integration of the data contained in RDF repositories, the Linked Data community is providing specifications for linking RDF data sets into a universal distributed graph that can be traversed by both man and machine. While the seamless integration of RDF data sets is important, at the scale at which these data sets currently exist, and to which they will ultimately grow, the "download and index" philosophy of the World Wide Web will not map so easily onto the Semantic Web. This essay discusses the importance of adding a distributed RDF process infrastructure to the current distributed RDF data structure.
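The triple-store data model itself is simple. A minimal in-memory sketch of triple-pattern matching, the basic query operation over (subject, predicate, object) statements, might look as follows (the triples in the usage example are illustrative):

```python
def match(triples, s=None, p=None, o=None):
    """Minimal triple-pattern match over an in-memory collection of
    RDF-style (subject, predicate, object) triples; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

For instance, `match(triples, p="knows")` returns every "knows" statement; a real triple-store indexes the three positions so such patterns resolve without a full scan.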
A new probabilistic transformation of belief mass assignment
Dezert, Jean, Smarandache, Florentin
In this paper we propose, within the Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works, and we compare it to the main transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in terms of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented.
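Restricted to the power set $2^\Theta$ and to probabilities on singletons, the DSmP transformation can be sketched as follows; the mass values, frame, and the default tuning parameter ε in the usage example are illustrative, not taken from the paper:

```python
def dsmp(masses, frame, eps=0.001):
    """Sketch of the DSmP transformation on the power set 2^Theta.

    masses: dict mapping frozenset focal elements to their mass.
    frame:  list of singleton labels (Theta).
    Each focal element Y redistributes its mass to the singletons it
    contains, proportionally to their own masses (epsilon-smoothed).
    """
    prob = {x: 0.0 for x in frame}
    for Y, mY in masses.items():
        denom = sum(masses.get(frozenset([z]), 0.0) for z in Y) + eps * len(Y)
        for x in Y:
            num = masses.get(frozenset([x]), 0.0) + eps
            prob[x] += mY * num / denom
    return prob
```

With masses m({a}) = 0.3, m({b}) = 0.1, m({a, b}) = 0.6, the mass of {a, b} is split in proportion to the singleton masses, so the resulting probability of a exceeds that of b and the probabilities sum to 1.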
Implementing general belief function framework with a practical codification for low complexity
In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to easily manipulate the focal elements. To reduce complexity, any constraints must be integrated into the codification from the beginning. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The decision step is studied in particular: indeed, when we can decide on intersections of the singletons of the discernment space, no existing decision function is easy to use. Hence, two approaches are proposed: an extension of a previous one, and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical codes of a general belief function framework for researchers and users who need belief function theory.
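For the power-set case $2^\Theta$, the codification idea can be sketched as one bit per element of the frame, so that each focal element becomes an integer and set operations become bit operations. This is a minimal illustration only; the chapter's codification of $D_r^\Theta$ (which also covers intersections) is more involved:

```python
def encode(subset, frame):
    """Encode a subset of the frame as a bitmask (one bit per singleton,
    i.e. the power-set special case of a Venn-diagram codification)."""
    return sum(1 << frame.index(x) for x in subset)

def intersect(code_a, code_b):
    """Set intersection on encoded focal elements is bitwise AND."""
    return code_a & code_b
```

With frame = ['a', 'b', 'c'], the subset {a, c} encodes to 0b101, and intersecting the codes of {a, c} and {b, c} yields the code of {c} with a single AND instruction.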
Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods
Wollbold, Johannes, Guthke, Reinhard, Ganter, Bernhard
Our aim is to build a set of rules such that reasoning over temporal dependencies within gene regulatory networks is possible. The underlying transitions may be obtained by discretizing observed time series, or they may be generated from existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. With the attribute exploration algorithm, an expert or a supporting computer program can decide on the validity of a minimal set of implications and thus construct a sound and complete knowledge base, from which all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. The formal structures, however, are exhibited in a fully general manner, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and non-biological areas.
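The elementary check behind attribute exploration, namely whether an implication A → B holds in a formal context, takes only a few lines. The gene-like attribute names and time-point objects in the usage example are purely illustrative:

```python
def holds(context, premise, conclusion):
    """Check an attribute implication A -> B in a formal context:
    every object possessing all attributes in `premise` must also
    possess all attributes in `conclusion`.

    context: dict mapping each object to its set of attributes.
    """
    return all(conclusion <= attrs
               for attrs in context.values()
               if premise <= attrs)
```

Attribute exploration repeatedly proposes such implications to an expert, who either confirms them or supplies a counterexample object, until a sound and complete implication base is obtained.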
Text Data Mining: Theory and Methods
This paper provides the reader with a very brief introduction to some of the theory and methods of text data mining. The intent of this article is to introduce the reader to some of the current methodologies employed within this discipline while also making the reader aware of some of the interesting challenges that remain to be solved in the area. Finally, the article serves as a very rudimentary tutorial on some of the techniques while providing the reader with a list of references for additional study.
CPBPV: A Constraint-Programming Framework for Bounded Program Verification
Collavizza, Hélène, Rueher, Michel, Van Hentenryck, Pascal
This paper studies how to verify the conformity of a program with its specification and proposes a novel constraint-programming framework for bounded program verification (CPBPV). The CPBPV framework uses constraint stores to represent both the specification and the program, and explores execution paths nondeterministically. The input program is partially correct if each constraint store so produced implies the post-condition. CPBPV does not explore spurious execution paths: it prunes execution paths early by incrementally detecting that the constraint store is inconsistent. CPBPV uses the rich language of constraint programming to express the constraint store. Finally, CPBPV is parametrized with a list of solvers which are tried in sequence, starting with the least expensive and least general. Experimental results often show orders-of-magnitude improvements over earlier approaches, with running times often independent of the variable domains. Moreover, CPBPV was able to detect subtle errors in some programs where other frameworks based on model checking failed.
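The path-exploration idea can be illustrated with a toy Python sketch. A brute-force enumeration over a small bounded domain stands in for CPBPV's real constraint solvers, and the absolute-value program with its post-condition is invented for illustration: each path's store is first checked for consistency (pruning spurious paths), and the path is correct when store plus negated post-condition is unsatisfiable.

```python
from itertools import product

def consistent(constraints, domain):
    """Satisfiability of a conjunction of constraints, by brute force over
    a small bounded domain (a stand-in for a real constraint solver).
    Each constraint is a (predicate, variable-names) pair."""
    vars_ = sorted({v for _, names in constraints for v in names})
    for values in product(domain, repeat=len(vars_)):
        env = dict(zip(vars_, values))
        if all(pred(env) for pred, _ in constraints):
            return True
    return False

def verify_abs(domain=range(-3, 4)):
    """Bounded verification of a toy program:
       if x >= 0: r = x else: r = -x    with post-condition r >= 0."""
    paths = [
        [(lambda e: e["x"] >= 0, ("x",)), (lambda e: e["r"] == e["x"], ("r", "x"))],
        [(lambda e: e["x"] < 0, ("x",)), (lambda e: e["r"] == -e["x"], ("r", "x"))],
    ]
    neg_post = (lambda e: not (e["r"] >= 0), ("r",))
    # spurious paths (inconsistent stores) are pruned; a remaining path is
    # correct iff its store plus the negated post-condition is unsatisfiable
    return all(not consistent(path + [neg_post], domain)
               for path in paths if consistent(path, domain))
```

Here `verify_abs()` reports partial correctness over the bounded domain; replacing `r = -x` with `r = x` on the second path would make the negated post-condition satisfiable and expose the bug.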
An Algorithm to Determine Peer-Reviewers
Rodriguez, Marko A., Bollen, Johan
The peer-review process is the most widely accepted certification mechanism for officially accepting the written results of researchers within the scientific community. An essential component of peer review is the identification of competent referees to review a submitted manuscript. This article presents an algorithm to automatically determine the most appropriate reviewers for a manuscript by way of a co-authorship network data structure and a relative-rank particle-swarm algorithm. This approach is novel in that it is not limited to a pre-selected set of referees, is computationally efficient, requires no human intervention, and, in some instances, can automatically identify conflict-of-interest situations. A useful application of this algorithm would be in open commentary peer-review systems, because it provides a weighting for each referee with respect to their expertise in the domain of a manuscript. The algorithm is validated using referee bid data from the 2005 Joint Conference on Digital Libraries.
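A naive sketch of the general idea, not the authors' algorithm: particles spread decaying "energy" from the submitting authors through a co-authorship graph, and the authors themselves are filtered out as a crude conflict-of-interest check. The graph, decay rate, particle count, and stopping threshold below are all illustrative assumptions:

```python
import random

def rank_referees(coauthors, authors, particles=2000, decay=0.85, seed=7):
    """Toy relative-rank particle sketch over a co-authorship network.

    coauthors: dict mapping each researcher to a list of co-authors
               (an undirected graph in adjacency-list form).
    authors:   the manuscript's authors, where particles start.
    Returns non-author researchers ranked by accumulated energy.
    """
    random.seed(seed)
    energy = {n: 0.0 for n in coauthors}
    for _ in range(particles):
        node, e = random.choice(authors), 1.0
        while e > 0.05 and coauthors[node]:
            node = random.choice(coauthors[node])  # random-walk step
            energy[node] += e                      # deposit current energy
            e *= decay                             # particle decays
    # exclude the submitting authors: a naive conflict-of-interest filter
    return sorted((n for n in energy if n not in authors),
                  key=lambda n: -energy[n])
```

Researchers close to the authors in the co-authorship graph accumulate more energy and rank higher, giving a relative expertise weighting for each candidate referee.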