Where Are the Semantics in the Semantic Web?

AI Magazine

The most widely accepted defining feature of the semantic web is machine-usable content. By this definition, the semantic web is already manifest in shopping agents that automatically access and use web content to find the lowest air fares or book prices. But where are the semantics? Most people regard the semantic web as a vision rather than a reality, so shopping agents should not "count." To use web content, a machine needs to know what to do when it encounters that content, which in turn requires knowing what the content means (that is, its semantics). The challenge of developing the semantic web is how to put this knowledge into the machine, and the manner in which that is done is at the heart of the confusion about the semantic web. The goal of this article is to clear up some of this confusion. I explain that shopping agents work in the complete absence of any explicit account of the semantics of web content: the meaning of the content the agents are expected to encounter can be determined by the human programmers who hardwire it into the web application software. I therefore regard shopping agents as a degenerate case of the semantic web. I note various shortcomings of this approach and conclude by presenting some ideas about how the semantic web is likely to evolve.


Translating OWL and Semantic Web Rules into Prolog: Moving Toward Description Logic Programs

arXiv.org Artificial Intelligence

To appear in Theory and Practice of Logic Programming (TPLP), 2008. We are investigating the interaction between the rule and ontology layers of the Semantic Web by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology/rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules by translating both into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes at run time. This work addresses several concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented three techniques: extensionalization, avoiding reanalysis, and code minimization.
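
As a rough illustration of the translation idea (in Python rather than Prolog, and not the SWORIER system itself), the sketch below compiles two OWL-style axioms, subclass transitivity and class-membership propagation, into Horn-style rules and forward-chains over a fact base to a fixed point. The facts, predicate names, and loop structure are all invented for the example; materializing every derivable fact up front is loosely analogous to the paper's extensionalization technique.

```python
# Toy stand-in for translating ontology axioms into logic-programming rules
# and saturating a fact base by naive forward chaining. Not SWORIER.

# Facts are (predicate, subject, object) triples; class membership is "isa".
facts = {
    ("isa", "fido", "Dog"),
    ("subclass", "Dog", "Mammal"),
    ("subclass", "Mammal", "Animal"),
}

def apply_rules(facts):
    """One round of rule application; returns only the newly derived facts."""
    new = set()
    for (p1, x, c1) in facts:
        for (p2, d, c2) in facts:
            if p2 != "subclass" or d != c1:
                continue
            # Membership propagation: isa(X, C) :- isa(X, D), subclass(D, C).
            if p1 == "isa":
                new.add(("isa", x, c2))
            # Transitivity: subclass(A, C) :- subclass(A, B), subclass(B, C).
            elif p1 == "subclass":
                new.add(("subclass", x, c2))
    return new - facts

# Saturate to a fixed point: all derivable memberships are materialized.
while True:
    delta = apply_rules(facts)
    if not delta:
        break
    facts |= delta

print(("isa", "fido", "Animal") in facts)  # True
```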


HodgeRank With Information Maximization for Crowdsourced Pairwise Ranking Aggregation

AAAI Conferences

Recently, crowdsourcing has emerged as an effective paradigm for human-powered, large-scale problem solving in various domains. However, a task requester usually has a limited budget, so it is desirable to have a policy that allocates the budget wisely in order to achieve better quality. In this paper, we study the principle of information maximization for active sampling strategies in the framework of HodgeRank, an approach based on the Hodge decomposition of pairwise ranking data from multiple workers. The principle yields two active sampling scenarios: Fisher information maximization, which leads to unsupervised sampling based on sequential maximization of the graph's algebraic connectivity without considering labels; and Bayesian information maximization, which selects the samples with the largest information gain from prior to posterior and thus gives a supervised sampling scheme that uses the collected labels. Experiments show that the proposed methods improve sampling efficiency over traditional sampling schemes and are thus valuable for practical crowdsourcing experiments.
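
A minimal sketch of the unsupervised (Fisher information) scenario, assuming a toy item set: at each step, greedily pick the pair of items whose comparison would most increase the algebraic connectivity (the second-smallest Laplacian eigenvalue) of the comparison graph. The item count, unit edge weights, and the weakly connected initial ring are assumptions made for illustration, not the paper's setup.

```python
# Greedy unsupervised sampling: choose the next pairwise comparison that
# maximizes the Fiedler value of the comparison graph. Illustrative only.

import itertools
import numpy as np

def fiedler_value(adj):
    """Second-smallest eigenvalue of the unnormalized graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)[1]

n = 5                       # number of items to rank (toy size)
adj = np.zeros((n, n))
for i in range(n):          # weak ring so the graph starts connected and
    j = (i + 1) % n         # the Fiedler value is informative from step one
    adj[i, j] = adj[j, i] = 0.01

for _ in range(6):          # budget: six pairwise comparisons
    best_pair, best_gain = None, -np.inf
    for i, j in itertools.combinations(range(n), 2):
        trial = adj.copy()
        trial[i, j] += 1.0  # hypothetically assign this pair to a worker
        trial[j, i] += 1.0
        gain = fiedler_value(trial)
        if gain > best_gain:
            best_pair, best_gain = (i, j), gain
    i, j = best_pair
    adj[i, j] += 1.0        # actually dispatch the chosen pair
    adj[j, i] += 1.0
    print(f"query pair {best_pair}, connectivity -> {best_gain:.3f}")
```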


Modeling a Sensor to Improve its Efficacy

arXiv.org Machine Learning

Robots rely on sensors to provide information about their surroundings. However, high-quality sensors can be extremely expensive and cost-prohibitive, so many robotic systems must make do with lower-quality sensors. Here we demonstrate, via a case study, how modeling a sensor can improve its efficacy when it is employed within a Bayesian inferential framework. As a test bed we employ a robotic arm designed to autonomously take its own measurements, using an inexpensive LEGO light sensor to estimate the position and radius of a white circle on a black field. The light sensor integrates the light arriving from a spatially distributed region within its field of view, weighted by its Spatial Sensitivity Function (SSF). We demonstrate that by incorporating an accurate model of the light sensor's SSF into the likelihood function of a Bayesian inference engine, an autonomous system can make improved inferences about its surroundings. The method presented here is data-based, fairly general, and designed with plug-and-play use in mind so that it can be applied to similar problems.
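
A minimal sketch of how an SSF might enter the likelihood, assuming a Gaussian SSF and Gaussian measurement noise (the paper calibrates the SSF from data; the functional forms and parameters below are invented): the predicted reading is the scene intensity integrated against the SSF over the field of view, and each measurement is scored against that blurred prediction.

```python
# Folding a sensor's Spatial Sensitivity Function (SSF) into a Bayesian
# likelihood. The Gaussian SSF and noise level are assumptions, not the
# paper's calibrated model.

import numpy as np

def ssf(dx, dy, sigma=2.0):
    """Toy Gaussian spatial sensitivity, centered on the sensor axis."""
    return np.exp(-(dx**2 + dy**2) / (2 * sigma**2))

def scene(x, y, cx, cy, r):
    """White circle (intensity 1) of radius r at (cx, cy) on a black field."""
    return ((x - cx)**2 + (y - cy)**2 <= r**2).astype(float)

# Discretized field of view around the sensor's position.
grid = np.linspace(-5, 5, 41)
dx, dy = np.meshgrid(grid, grid)
weights = ssf(dx, dy)
weights /= weights.sum()

def predicted_reading(sx, sy, cx, cy, r):
    """SSF-weighted average of the scene intensity seen from (sx, sy)."""
    return np.sum(weights * scene(sx + dx, sy + dy, cx, cy, r))

def log_likelihood(measured, sx, sy, cx, cy, r, noise=0.05):
    """Gaussian measurement model around the SSF-blurred prediction."""
    mu = predicted_reading(sx, sy, cx, cy, r)
    return -0.5 * ((measured - mu) / noise) ** 2

# Each circle hypothesis (cx, cy, r) is scored against every measurement;
# an inference engine sums these log-likelihoods to form the posterior.
print(log_likelihood(0.8, 0.0, 0.0, 1.0, 0.5, 3.0))
```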


A Bayesian Framework for Robust Reasoning from Sensor Networks

AAAI Conferences

The work described in this paper defines a Bayesian framework that uses noisy but redundant data from multiple sensor streams and combines it with contextual and domain knowledge, provided both by the physical constraints of the local environment in which the sensors are located and by the people involved in the surveillance tasks. The paper also presents preliminary results of applying the Bayesian framework to the people-localization problem in an indoor environment, using a sensor network that consists of video cameras, infrared tag readers, and a fingerprint reader.
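
A minimal sketch of the fusion idea, assuming a toy discretization of the environment into named cells: the posterior over a person's location is the prior (which encodes physical constraints such as an inaccessible room) multiplied by each sensor stream's likelihood and renormalized. The cells, prior, and per-sensor likelihoods below are invented for illustration.

```python
# Discrete Bayesian fusion of redundant, noisy sensor streams for indoor
# people localization. Values are illustrative, not from the paper.

import numpy as np

cells = ["lobby", "hall", "office", "lab"]

# Contextual/domain knowledge: the lab is locked, so its prior mass is zero.
prior = np.array([0.3, 0.3, 0.4, 0.0])

# Likelihood of each sensor's current observation given the person's cell.
camera = np.array([0.7, 0.2, 0.1, 0.1])        # person-shaped blob near lobby
tag_reader = np.array([0.5, 0.4, 0.05, 0.05])  # IR tag heard near lobby/hall

# Bayes' rule, discrete case: posterior proportional to prior times likelihoods.
posterior = prior * camera * tag_reader
posterior /= posterior.sum()

for cell, p in zip(cells, posterior):
    print(f"P({cell}) = {p:.3f}")
```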