Information Technology
Towards Territorial Privacy in Smart Environments
Könings, Bastian (Ulm University) | Schaub, Florian (Ulm University) | Weber, Michael (Ulm University) | Kargl, Frank (University of Twente)
Territorial privacy is an old concept for privacy of the personal space dating back to the 19th century. Despite its former relevance, territorial privacy has been neglected in recent years, while privacy research and legislation mainly focused on the issue of information privacy. However, with the prospect of smart and ubiquitous environments, territorial privacy deserves new attention. Walls, as boundaries between personal and public spaces, will be insufficient to guard territorial privacy when our environments are permeated with numerous computing and sensing devices that gather and share real-time information about us. Territorial privacy boundaries spanning both the physical and virtual world are required for the demarcation of personal spaces in smart environments. In this paper, we analyze and discuss the issue of territorial privacy in smart environments. We further propose a real-time user-centric observation model to describe multimodal observation channels of multiple physical and virtual observers. The model facilitates the definition of a territorial privacy boundary by separating desired from undesired observers, regardless of whether they are physically present in the user’s private territory or virtually participating in it. Moreover, we outline future research challenges and identify areas of work that require attention in the context of territorial privacy in smart environments.
Preprocessing Legal Text: Policy Parsing and Isomorphic Intermediate Representation
Waterman, K. Krasnow (Massachusetts Institute of Technology)
One of the most significant challenges in achieving digital privacy is incorporating privacy policy directly in computer systems. While rule systems have long existed, translating privacy laws, regulations, policies, and contracts into processor-amenable forms is slow and difficult because the legal text is scattered, run-on, and unstructured, antithetical to the lean and logical forms of computer science. We are using and developing intermediate isomorphic forms as a Rosetta Stone-like tool to accelerate the translation process and to provide support for future domain-specific Natural Language Processing technology. This report describes our experience, thoughts about how to improve the form, and discoveries about the form and logic of the legal text that will affect the successful development of a rules tool to implement real-world complex privacy policies.
Stream-Based Middleware Support for Embedded Reasoning
Heintz, Fredrik (Linköping University) | Kvarnström, Jonas (Linköping University) | Doherty, Patrick (Linköping University)
For autonomous systems such as unmanned aerial vehicles to successfully perform complex missions, a great deal of embedded reasoning is required at varying levels of abstraction. In order to make use of diverse reasoning modules in such systems, integration issues such as sensor data flow and information flow between modules have to be taken into account. The DyKnow framework is a tool with a formal basis that pragmatically deals with many of the architectural issues which arise in such systems. This includes a systematic stream-based method for handling the sense-reasoning gap, caused by the wide difference in abstraction levels between the noisy data generally available from sensors and the symbolic, semantically meaningful information required by many high-level reasoning modules. DyKnow has proven to be quite robust and widely applicable to different aspects of hybrid software architectures for robotics. In this paper, we describe the DyKnow framework and show how it is integrated and used in unmanned aerial vehicle systems developed in our group. In particular, we focus on issues pertaining to the sense-reasoning gap and the symbol grounding problem, and on the use of DyKnow as a means of generating semantic structures representing situational awareness for such systems. We also discuss the use of DyKnow in the context of automated planning, in particular execution monitoring.
Vocabulary Hosting: A Modest Proposal
Halpin, Harry R. (University of Edinburgh) | Baker, Tom (Dublin Core Metadata Initiative Ltd)
Many of the benefits of structured data come about when users can re-use existing vocabularies rather than create new ones, but it is currently difficult for users to find, create, and host new vocabularies. Moreover, the value of any given vocabulary as a foundation for applications depends on the perceived certainty that the vocabulary — both its machine-readable schemas and human-readable specification documents — will remain reliably accessible over time and that its URIs will not be sold, re-purposed, or simply forgotten. This note proposes two approaches to solving these problems: multiple Vocabulary Hosting Services, and a Vocabulary Preservation System to keep them linked together.
Predicting Positive and Negative Links in Online Social Networks
Leskovec, Jure | Huttenlocher, Daniel | Kleinberg, Jon
We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arises in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.
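The models described in the abstract combine local structural features of each edge (such as the signed degrees of its endpoints and the signed triads it participates in) with a standard classifier. A minimal illustrative sketch in Python, using synthetic data and a simplified feature set rather than the authors' actual features, might look like this:

```python
# Hypothetical sketch of edge-sign prediction from simple local features.
# The features and data below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def edge_features(pos_out_u, neg_out_u, pos_in_v, neg_in_v, common_pos, common_neg):
    """Simplified degree/triad features for a directed edge u -> v."""
    return [pos_out_u, neg_out_u, pos_in_v, neg_in_v, common_pos, common_neg]

# Synthetic training data: positive edges tend to occur in more positive contexts.
X, y = [], []
for _ in range(2000):
    sign = int(rng.integers(0, 2))      # 1 = positive edge, 0 = negative edge
    base = 3 if sign else 1
    X.append(edge_features(
        rng.poisson(base), rng.poisson(4 - base),
        rng.poisson(base), rng.poisson(4 - base),
        rng.poisson(base), rng.poisson(4 - base)))
    y.append(sign)

clf = LogisticRegression().fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```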
Indexer Based Dynamic Web Services Discovery
Bashir, Saba | Khan, Farhan Hassan | Javed, M. Younus | Khan, Aihab | Khiyal, Malik Sikandar Hayat
Recent advancements in web services play an important role in business-to-business and business-to-consumer interaction. A discovery mechanism is used not only to find a suitable service but also to provide collaboration between service providers and consumers by using standard protocols. Static web service discovery is not only time consuming but also requires continuous human interaction. This paper proposes an efficient dynamic web service discovery mechanism that can locate relevant and updated web services from service registries and repositories, using timestamps, indexing values, and categorization for faster and more efficient discovery. The proposed prototype focuses on quality-of-service issues and introduces the concepts of a local cache, categorization of services, an indexing mechanism, a CSP (Constraint Satisfaction Problem) solver, aging, and the use of a translator. The performance of the proposed framework is evaluated by implementing the algorithm, and the correctness of the method is shown. The results show that the proposed framework achieves greater performance and accuracy in dynamic web service discovery, resolves existing issues of flexibility and scalability, takes quality of service into account, and discovers updated and relevant services with ease of use.
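The abstract mentions a local cache, categorization of services, indexing, and aging, but not their concrete data structures. A minimal sketch of a category-indexed, timestamped service cache with simple age-based eviction, with all names and the aging policy assumed for illustration, could look like the following:

```python
# Illustrative sketch of a category-indexed local service cache with
# timestamp-based aging. Class and method names are assumptions, not the
# paper's actual design.
import time
from collections import defaultdict

class ServiceCache:
    def __init__(self, max_age_seconds=3600):
        self.max_age = max_age_seconds
        # category -> {service name: (endpoint, registration timestamp)}
        self.by_category = defaultdict(dict)

    def register(self, category, name, endpoint):
        """Add or refresh a service entry under a category with the current time."""
        self.by_category[category][name] = (endpoint, time.time())

    def lookup(self, category):
        """Return non-expired services for a category, evicting aged-out entries."""
        now = time.time()
        entries = self.by_category[category]
        fresh = {n: (e, ts) for n, (e, ts) in entries.items()
                 if now - ts <= self.max_age}
        self.by_category[category] = fresh
        return {n: e for n, (e, _) in fresh.items()}

cache = ServiceCache(max_age_seconds=600)
cache.register("payment", "InvoiceService", "http://example.org/invoice?wsdl")
print(cache.lookup("payment"))
```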
Integrating Innate and Adaptive Immunity for Intrusion Detection
Tedesco, Gianni | Twycross, Jamie | Aickelin, Uwe
Network Intrusion Detection Systems (NIDS) monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to address the problem of false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to an intrusion detection problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
Further Exploration of the Dendritic Cell Algorithm: Antigen Multiplier and Time Windows
Gu, Feng | Greensmith, Julie | Aickelin, Uwe
As an immune-inspired algorithm, the Dendritic Cell Algorithm (DCA) produces promising performance in the field of anomaly detection. This paper presents the application of the DCA to a standard data set, the KDD 99 data set. The results of different implementation versions of the DCA, including the antigen multiplier and moving time windows, are reported. The real-valued Negative Selection Algorithm (NSA) using constant-sized detectors and the C4.5 decision tree algorithm are used to conduct a baseline comparison. The results suggest that the DCA is applicable to the KDD 99 data set, and that the antigen multiplier and moving time windows have the same effect on the DCA for this particular data set. The real-valued NSA with constant-sized detectors is not applicable to the data set, and the C4.5 decision tree algorithm provides a benchmark of the classification performance for this data set.
Security Analysis of Online Centroid Anomaly Detection
Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). It is to be expected in such cases that learning algorithms will have to deal with manipulated data aimed at hampering decision making. Although some previous work addressed the handling of malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution we analyze the performance of a particular method -- online centroid anomaly detection -- in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of its efficiency and constraints. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: bounded and unbounded percentage of attacker-controlled traffic, and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
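As a rough intuition for the setting analyzed here, online centroid anomaly detection keeps a running centroid of traffic deemed normal and flags points that fall outside a fixed radius around it; points accepted as normal shift the centroid, which is exactly what a poisoning attacker tries to exploit. A minimal sketch, with the update rule and parameters chosen for illustration rather than taken from the paper, is shown below:

```python
# Illustrative sketch of online centroid anomaly detection: keep a centroid of
# accepted (normal) points, flag points outside a fixed radius, and let accepted
# points pull the centroid toward them. The fixed learning rate is an assumption.
import numpy as np

class OnlineCentroidDetector:
    def __init__(self, dim, radius, learning_rate=0.05):
        self.center = np.zeros(dim)
        self.radius = radius
        self.lr = learning_rate

    def is_anomalous(self, x):
        """Return True if x is rejected; otherwise absorb x into the centroid."""
        if np.linalg.norm(x - self.center) > self.radius:
            return True                  # rejected points never move the centroid
        self.center = (1 - self.lr) * self.center + self.lr * x
        return False

rng = np.random.default_rng(1)
det = OnlineCentroidDetector(dim=2, radius=3.0)
for _ in range(500):                     # benign traffic clustered around the origin
    det.is_anomalous(rng.normal(0.0, 1.0, size=2))
print("nearby point flagged?", det.is_anomalous(np.array([0.5, -0.2])))
print("distant point flagged?", det.is_anomalous(np.array([10.0, 10.0])))
```

A poisoning attacker in this model injects points just inside the radius so that each one is accepted and drags the centroid a small step toward a target region; the bounds derived in the paper quantify how effective such an attack can be under the stated constraints.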
Multibiometrics Belief Fusion
Kisku, Dakshina Ranjan | Sing, Jamuna Kanta | Gupta, Phalguni
This paper proposes a multimodal biometric system for face and ear biometrics using a Gaussian Mixture Model (GMM), with belief fusion of the estimated scores characterized by Gabor responses; the fusion is accomplished by Dempster-Shafer (DS) decision theory. Face and ear images are convolved with Gabor wavelet filters to extract spatially enhanced Gabor facial features and Gabor ear features. Further, GMM is applied to the high-dimensional Gabor face and Gabor ear responses separately for quantitative measurements. The Expectation Maximization (EM) algorithm is used to estimate the density parameters in the GMM. This produces two sets of feature vectors which are then fused using Dempster-Shafer theory. Experiments are conducted on a multimodal database containing face and ear images of 400 individuals. It is found that the use of Gabor wavelet filters along with GMM and DS theory provides a robust and efficient multimodal fusion strategy.
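The final fusion step combines the two matchers' evidence with Dempster-Shafer theory. A small illustrative sketch of Dempster's rule of combination over the frame {genuine, impostor}, where the mapping from match scores to mass functions is an assumption and not the paper's exact construction, might look like this:

```python
# Illustrative Dempster-Shafer fusion of two matcher scores over the frame
# {genuine, impostor}. The score-to-mass mapping below is an assumption.
def score_to_mass(score, uncertainty=0.2):
    """Turn a normalized match score in [0, 1] into masses (genuine, impostor, unknown)."""
    return ((1 - uncertainty) * score, (1 - uncertainty) * (1 - score), uncertainty)

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions on {genuine, impostor}."""
    g1, i1, u1 = m1
    g2, i2, u2 = m2
    conflict = g1 * i2 + i1 * g2                  # mass assigned to the empty set
    norm = 1.0 - conflict                         # renormalization constant
    genuine = (g1 * g2 + g1 * u2 + u1 * g2) / norm
    impostor = (i1 * i2 + i1 * u2 + u1 * i2) / norm
    unknown = (u1 * u2) / norm
    return genuine, impostor, unknown

face_mass = score_to_mass(0.85)   # face matcher fairly confident in a match
ear_mass = score_to_mass(0.70)    # ear matcher somewhat confident
print(dempster_combine(face_mass, ear_mass))
```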