Automatic Music Composition using Answer Set Programming
Boenn, Georg, Brain, Martin, De Vos, Marina, ffitch, John
Music composition used to be a pen and paper activity. These days music is often composed with the aid of computer software, even to the point where the computer composes parts of the score autonomously. The composition of most styles of music is governed by rules. We show that by approaching the automation, analysis and verification of composition as a knowledge representation task and formalising these rules in a suitable logical language, powerful and expressive intelligent composition tools can be easily built. This application paper describes the use of answer set programming to construct an automated system, named ANTON, that can compose melodic, harmonic and rhythmic music, diagnose errors in human compositions and serve as a computer-aided composition tool. The combination of harmonic, rhythmic and melodic composition in a single framework makes ANTON unique in the growing area of algorithmic composition. With near real-time composition, ANTON reaches the point where it not only can be used as a component in an interactive composition tool but also has the potential for live performances and concerts or automatically generated background music in a variety of applications. With the use of a fully declarative language and an "off-the-shelf" reasoning engine, ANTON provides the human composer with a tool which is significantly simpler, more compact and more versatile than other existing systems. This paper has been accepted for publication in Theory and Practice of Logic Programming (TPLP).
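As an illustration of the kind of rule such a system formalises, the Python sketch below checks one classical counterpoint constraint, parallel perfect fifths. ANTON encodes rules of this kind declaratively in ASP; the procedural form, the toy rule and the pitch representation here are illustrative only, not taken from the paper.

    # Illustrative sketch of a counterpoint rule of the kind ANTON encodes
    # declaratively in ASP, written procedurally here for clarity.

    def parallel_fifths(voice_a, voice_b):
        """Return positions where two voices move in parallel perfect fifths.

        Voices are lists of MIDI pitches; a perfect fifth is 7 semitones.
        """
        violations = []
        for i in range(len(voice_a) - 1):
            interval_now = abs(voice_a[i] - voice_b[i]) % 12
            interval_next = abs(voice_a[i + 1] - voice_b[i + 1]) % 12
            same_direction = (voice_a[i + 1] - voice_a[i]) * (voice_b[i + 1] - voice_b[i]) > 0
            if interval_now == 7 and interval_next == 7 and same_direction:
                violations.append(i)
        return violations

    # Two voices a fifth apart moving in the same direction: a violation.
    print(parallel_fifths([60, 62, 64], [53, 55, 57]))  # [0, 1]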
An Efficient Technique for Similarity Identification between Ontologies
Farooq, Amjad, Ahsan, Syed, Shah, Abad
Ontologies usually suffer from semantic heterogeneity when they are used simultaneously in information sharing, merging, integration and querying processes. Identifying the similarity between the ontologies in use therefore becomes a mandatory task for all of these processes in order to handle the problem of semantic heterogeneity. In this paper, we propose an efficient technique for measuring the similarity between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair, and can be used in different types of operations on ontologies such as merging, mapping and aligning. Analysis of its results shows a reasonable improvement in the completeness, correctness and overall quality of the output.
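As a rough illustration of candidate-pair identification (the paper's actual measure is richer), the Python sketch below generates all concept pairs whose names are lexically similar; the concept names and threshold are invented for the example.

    # Hypothetical sketch of lexical candidate-pair generation between two
    # ontologies, using only the standard library.
    from difflib import SequenceMatcher

    def candidate_pairs(concepts_a, concepts_b, threshold=0.7):
        """Return all concept pairs whose name similarity meets the threshold."""
        pairs = []
        for a in concepts_a:
            for b in concepts_b:
                score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                if score >= threshold:
                    pairs.append((a, b, round(score, 2)))
        return pairs

    print(candidate_pairs(["Author", "Publication"],
                          ["Writer", "Paper", "Publications"]))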
Vagueness of Linguistic Variable
Raheja, Supriya, Rajpal, Smita
Artificial intelligence is the area of computer science focused on creating machines that can engage in behaviours humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and fifty years of research into various programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The human ability to estimate information is shown most clearly in the use of natural language: when a person uses words of a natural language to evaluate qualitative attributes, they embed uncertainty, in the form of vagueness, into their estimates. Vague sets, vague judgments and vague conclusions arise wherever and whenever a reasoning subject exists and takes an interest in something. Vague set theory arose as an answer to this fuzziness of the language a reasoning subject speaks: that language is generated by vague events which are created by reason and operated on by the mind. The theory of vague sets is an attempt to find an approximation of vague groupings that is more convenient than classical set theory in situations where natural language plays a significant role. Such a theory was offered by the well-known American mathematicians Gau and Buehrer. In this paper we describe how the vagueness of linguistic variables can be handled using vague set theory. The paper is mainly oriented towards one direction of eventology (the theory of random vague events), which arose within probability theory and pursues the single purpose of describing the movement of reason eventologically.
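For reference, Gau and Buehrer characterise a vague set by a true-membership value t and a false-membership value f with t + f <= 1, so each element's grade of membership is the interval [t, 1 - f]. The Python sketch below encodes this standard definition; the numeric values in the usage example are illustrative.

    # Minimal sketch of Gau and Buehrer's vague set definition: each element
    # carries a true-membership t and a false-membership f with t + f <= 1,
    # so its grade of membership is the interval [t, 1 - f].

    def vague_membership(t, f):
        """Return the membership interval [t, 1 - f] of a vague set element."""
        if not (0 <= t and 0 <= f and t + f <= 1):
            raise ValueError("requires 0 <= t, 0 <= f and t + f <= 1")
        return (t, 1 - f)

    # The linguistic value "tall" for a person of 180 cm might carry evidence
    # 0.6 for and 0.2 against, leaving 0.2 undecided (values are illustrative).
    print(vague_membership(0.6, 0.2))  # (0.6, 0.8)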
Understanding Semantic Web and Ontologies: Theory and Applications
The Semantic Web is an extension of the current Web in that it represents information more meaningfully for humans and computers alike. It enables the description of contents and services in machine-readable form, and enables annotating, discovering, publishing, advertising and composing services to be automated. It was developed based on Ontology, which is considered the backbone of the Semantic Web. In other words, the current Web is transformed from being machine-readable to machine-understandable. In fact, Ontology is a key technique with which to annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Moreover, Ontology can provide a common vocabulary and a grammar for publishing data, and can supply a semantic description of data which can be used to preserve Ontologies and keep them ready for inference. This paper provides basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontology, and explains many relevant terms in order to provide a basic understanding of ontologies.
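As a minimal illustration of such machine-understandable annotation, the Python sketch below uses the rdflib library to declare a class, type a resource and attach a description; the ex: vocabulary and the literal are invented for the example.

    # Sketch of ontology-based annotation with rdflib; the ex: vocabulary
    # is made up for illustration.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/ontology#")
    g = Graph()
    g.bind("ex", EX)

    # Declare a class and a typed instance, then annotate the instance.
    g.add((EX.Book, RDF.type, RDFS.Class))
    g.add((EX.primer, RDF.type, EX.Book))
    g.add((EX.primer, EX.hasAuthor, Literal("Jane Doe")))

    print(g.serialize(format="turtle"))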
sTeX+ - a System for Flexible Formalization of Linked Data
Kohlhase, Andrea, Kohlhase, Michael, Lange, Christoph
We present the sTeX+ system, a user-driven advancement of sTeX, a semantic extension of LaTeX that allows for producing high-quality PDF documents for (proof)reading and printing, as well as semantic XML/OMDoc documents for the Web or further processing. Originally sTeX had been created as an invasive, semantic frontend for authoring XML documents. Here, we used sTeX in a Software Engineering case study as a formalization tool. In order to deal with modular pre-semantic vocabularies and relations, we upgraded it to sTeX+ in a participatory design process. We present a tool chain that starts with an sTeX+ editor and ultimately serves the generated documents as XHTML+RDFa Linked Data via an OMDoc-enabled, versioned XML database. In the final output, all structural annotations are preserved in order to enable semantic information retrieval services.
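The LaTeX fragment below sketches what semantically annotated sTeX source looks like; the macro names follow sTeX's documented style but may differ between versions, so treat it as indicative rather than exact.

    % Indicative sketch of semantically annotated sTeX source; macro names
    % may differ between sTeX versions.
    \begin{module}[id=natnums]
      \importmodule{sets}
      % \symdef introduces a semantic symbol together with its notation, so
      % the XML/OMDoc conversion can export it as structured, linkable data.
      \symdef{NaturalNumbers}{\mathbb{N}}
      The natural numbers \NaturalNumbers{} form the basis of arithmetic.
    \end{module}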
The State of the Art: Ontology Web-Based Languages: XML Based
Many formal languages have been proposed to express or represent Ontologies, including RDF, RDFS, DAML+OIL and OWL. Most of these languages are based on XML syntax, but with varying terminologies and expressiveness. Choosing a language in which to build an Ontology is therefore a key step, and the choice depends mainly on what the Ontology will represent or be used for. The language should offer a range of quality support features such as ease of use, expressive power, compatibility, sharing and versioning, and internationalisation, because different kinds of knowledge-based applications need different language features. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a good understanding of these existing languages and of how they can be used.
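To make the layering concrete, the Python sketch below (using the rdflib library, with an invented ex: vocabulary) states RDFS-level schema facts alongside an OWL-only construct and serializes them in the XML syntax these languages share.

    # Sketch contrasting the expressiveness layers the paper surveys; the
    # ex: vocabulary is invented for illustration.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    EX = Namespace("http://example.org/vocab#")
    g = Graph()
    g.bind("ex", EX)

    g.add((EX.Student, RDF.type, RDFS.Class))        # RDFS: class definition
    g.add((EX.Student, RDFS.subClassOf, EX.Person))  # RDFS: hierarchy
    g.add((EX.supervises, RDF.type, OWL.ObjectProperty))
    g.add((EX.supervises, OWL.inverseOf, EX.supervisedBy))  # OWL-only construct

    # All of the above can be serialized in the XML syntax the paper discusses.
    print(g.serialize(format="xml"))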
Human Disease Diagnosis Using a Fuzzy Expert System
Hasan, Mir Anamul, Sher-E-Alam, Khaja Md., Chowdhury, Ahsan Raja
Human disease diagnosis is a complicated process and requires a high level of expertise. Any attempt to develop a web-based expert system dealing with human disease diagnosis has to overcome various difficulties. This paper describes a project aiming to develop a web-based fuzzy expert system for diagnosing human diseases. Nowadays fuzzy systems are being used successfully in an increasing number of application areas; they use linguistic rules to describe systems. This research project focuses on the research and development of a web-based clinical tool designed to improve the quality of the exchange of health information between health care professionals and patients. Practitioners can also use this web-based tool to corroborate diagnoses. The proposed system is tested on various scenarios in order to evaluate its performance, and in all cases it exhibits satisfactory results.
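As a toy illustration of the linguistic-rule style such a system relies on, the Python sketch below evaluates one fuzzy rule; the membership functions, thresholds and the rule itself are invented, not taken from the paper's knowledge base.

    # Toy sketch of a fuzzy linguistic rule; all numbers are illustrative.

    def triangular(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fever_risk(temperature, heart_rate):
        """Rule: IF temperature is high AND heart rate is elevated THEN risk is high."""
        temp_high = triangular(temperature, 37.5, 39.0, 41.0)
        hr_elevated = triangular(heart_rate, 90, 110, 140)
        return min(temp_high, hr_elevated)  # fuzzy AND as minimum

    print(fever_risk(38.4, 104))  # degree to which the rule fires, ~0.6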
SPOT: An R Package For Automatic and Interactive Tuning of Optimization Algorithms by Sequential Parameter Optimization
The sequential parameter optimization (SPOT) package for R is a toolbox for tuning and understanding simulation and optimization algorithms. Model-based investigations are common approaches in simulation and optimization. Sequential parameter optimization has been developed, because there is a strong need for sound statistical analysis of simulation and optimization algorithms. SPOT includes methods for tuning based on classical regression and analysis of variance techniques; tree-based models such as CART and random forest; Gaussian process models (Kriging); and combinations of different meta-modeling approaches. This article exemplifies how SPOT can be used for automatic and interactive tuning.
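SPOT itself is an R package; as a language-neutral illustration of the underlying sequential parameter optimization loop, the Python sketch below uses a scikit-learn Gaussian process as the meta-model and an invented toy tuning objective. It mirrors the methodology, not SPOT's API.

    # Core sequential parameter optimization loop: fit a meta-model on the
    # evaluated design, propose a promising point, evaluate, repeat.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def algorithm_performance(param):
        """Stand-in for an expensive tuning run of the target algorithm."""
        return (param - 0.3) ** 2 + 0.05 * np.random.randn()

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(5, 1))  # initial design
    y = np.array([algorithm_performance(x[0]) for x in X])

    for _ in range(10):
        model = GaussianProcessRegressor().fit(X, y)  # fit Kriging meta-model
        candidates = rng.uniform(0, 1, size=(200, 1))
        best = candidates[np.argmin(model.predict(candidates))]  # propose a point
        X = np.vstack([X, best])                                 # evaluate, update
        y = np.append(y, algorithm_performance(best[0]))

    print("best parameter found:", X[np.argmin(y)][0])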
Heavy-Tailed Processes for Selective Shrinkage
Wauthier, Fabian L., Jordan, Michael I.
Heavy-tailed distributions are frequently used to enhance the robustness of regression and classification methods to outliers in output space. Often, however, we are confronted with "outliers" in input space, which are isolated observations in sparsely populated regions. We show that heavy-tailed stochastic processes (which we construct from Gaussian processes via a copula) can be used to improve robustness of regression and classification estimators to such outliers by selectively shrinking them more strongly in sparse regions than in dense regions. We carry out a theoretical analysis to show that selective shrinkage occurs, provided the marginals of the heavy-tailed process have sufficiently heavy tails. The analysis is complemented by experiments on biological data which indicate significant improvements of estimates in sparse regions while producing competitive results in dense regions.
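The copula construction can be sketched concretely: sample from a Gaussian process, map each marginal to a uniform through the normal CDF, then through a heavy-tailed inverse CDF. In the Python sketch below, Student-t marginals are an illustrative choice of heavy tail, and the covariance and parameters are invented for the example.

    import numpy as np
    from scipy.stats import norm, t

    def heavy_tailed_process(cov, df=2, seed=0):
        """Sample a process sharing the GP's dependence structure (its copula)
        but with heavy-tailed Student-t marginals."""
        rng = np.random.default_rng(seed)
        gp_sample = rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
        u = norm.cdf(gp_sample / np.sqrt(np.diag(cov)))  # uniform marginals
        return t.ppf(u, df=df)                           # heavy-tailed marginals

    # Squared-exponential covariance on a small grid, with jitter for stability.
    x = np.linspace(0, 1, 50)
    cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2) + 1e-9 * np.eye(x.size)
    print(heavy_tailed_process(cov)[:5])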
A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization
Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, the feature selection task is important for datasets containing a huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybridized with Bee Colony Optimization (BCO) in an attempt to combat this. The proposed work is applied in the medical domain to find minimal reducts, and is experimentally compared with Quick Reduct, Entropy Based Reduct, and other hybrid Rough Set methods based on Genetic Algorithms (GA), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO).
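As background for what a reduct search preserves, the Python sketch below computes the rough-set dependency degree, the fraction of objects whose feature values uniquely determine the decision, on an invented toy table; the search strategies the paper compares (Quick Reduct, BCO, GA, ACO, PSO) differ in how they explore feature subsets that keep this degree maximal.

    # Rough-set dependency degree: the quantity reduct searches try to
    # preserve while dropping features. Toy data invented for illustration.
    from collections import defaultdict

    def dependency(data, features, decision):
        """gamma = |positive region| / |U| for the given feature subset."""
        blocks = defaultdict(list)
        for row in data:
            blocks[tuple(row[f] for f in features)].append(row[decision])
        consistent = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
        return consistent / len(data)

    # Toy symptom table: feature 0 alone already determines the diagnosis.
    table = [(1, 0, "flu"), (1, 1, "flu"), (0, 0, "cold"), (0, 1, "cold")]
    print(dependency(table, [0], 2))  # 1.0 -> {feature 0} is a reduct
    print(dependency(table, [1], 2))  # 0.0 -> feature 1 alone predicts nothing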