
The IJCAI-09 Workshop on Learning Structural Knowledge From Observations (STRUCK-09)

AI Magazine

These formalisms have in common the use of certain kinds of constructs (for example, objects, goals, skills, and tasks) that represent knowledge of varying degrees of complexity and that are connected through structural relations. In recent years, we have observed increasing interest in the problem of learning such structural knowledge from observations. These observations range from traces generated by an automated planner to video feeds from a robot performing some actions. The goal of the workshop was to bring together researchers from machine learning, automated planning, case-based reasoning, cognitive science, and other communities that are looking into instances of this problem, and to share ideas and perspectives in a common forum.


A two-step fusion process for multi-criteria decision applied to natural hazards in mountains

arXiv.org Artificial Intelligence

Mountain river torrents and snow avalanches cause human and material damage with dramatic consequences. Knowledge about these natural phenomena is often lacking, and expertise is required for decision-making and risk management using multidisciplinary quantitative or qualitative approaches. Expertise is considered a decision process based on imperfect information coming from more or less reliable and conflicting sources. We describe a methodology combining the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, with information fusion based on belief function theory. Fuzzy set and possibility theories allow quantitative and qualitative criteria to be transformed into a common frame of discernment for decision-making in the Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues are the elicitation of basic belief assignments, conflict identification and management, the choice of fusion rules, and the validation of results, as well as the specific need to distinguish between importance, reliability, and uncertainty in the fusion process.
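The core fusion step the abstract refers to can be illustrated with Dempster's rule of combination from DST. The following is a minimal sketch; the frame of discernment (avalanche hazard "high" vs. "low") and the two experts' mass values are invented for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical experts assessing avalanche hazard over {high, low}
H, L = frozenset({"high"}), frozenset({"low"})
HL = H | L  # ignorance: mass committed to the whole frame
m_expert1 = {H: 0.6, HL: 0.4}
m_expert2 = {H: 0.5, L: 0.2, HL: 0.3}
fused = dempster_combine(m_expert1, m_expert2)
```

The normalisation step is exactly where DST and DSmT diverge: alternative rules redistribute the conflicting mass differently instead of discarding it, which is one reason the paper treats the choice of fusion rule as a main issue.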


The Application of a Dendritic Cell Algorithm to a Robotic Classifier

arXiv.org Artificial Intelligence

The dendritic cell algorithm is an immune-inspired technique for processing time-dependent data. Here we propose it as a possible solution to a robotic classification problem. The algorithm is implemented on a real robot, and we investigate the effects of varying the migration threshold median for the cell population. The algorithm performs well on a classification task with very little tuning. We suggest ways of extending the implementation so that it can be used as a classifier in the field of robotic security.
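The population mechanics the abstract mentions can be sketched as follows. This is a deliberately simplified model, assuming only two input signals (danger and safe) rather than the usual PAMP/danger/safe trio; the signal values, the antigen stream, and the threshold spread around the median are all illustrative, not the paper's parameters.

```python
import random

class DendriticCell:
    def __init__(self, threshold):
        self.threshold = threshold   # migration threshold for this cell
        self.csm = 0.0               # co-stimulation accumulator
        self.danger = 0.0
        self.safe = 0.0
        self.antigens = []

    def sample(self, antigen, danger_signal, safe_signal):
        self.antigens.append(antigen)
        self.csm += danger_signal + safe_signal
        self.danger += danger_signal
        self.safe += safe_signal

    def migrated(self):
        return self.csm >= self.threshold

    def context(self):
        # mature (anomalous) when danger dominates, semi-mature otherwise
        return 1 if self.danger > self.safe else 0

def classify(stream, median_threshold=10.0, population=10):
    """Return the MCAV per antigen: the fraction of presentations made
    by cells that migrated in a mature (danger) context."""
    random.seed(0)  # deterministic for the sake of the sketch

    def new_cell():
        # individual thresholds spread around the median under study
        return DendriticCell(random.uniform(0.5, 1.5) * median_threshold)

    cells = [new_cell() for _ in range(population)]
    presented = {}  # antigen -> (mature_count, total_count)
    for antigen, danger, safe in stream:
        idx = random.randrange(population)
        cell = cells[idx]
        cell.sample(antigen, danger, safe)
        if cell.migrated():
            ctx = cell.context()
            for a in cell.antigens:
                m, t = presented.get(a, (0, 0))
                presented[a] = (m + ctx, t + 1)
            cells[idx] = new_cell()  # replace the migrated cell
    return {a: m / t for a, (m, t) in presented.items()}

# toy stream: "bad" antigen co-occurs with danger, "good" with safe signals
stream = [("bad", 8.0, 1.0)] * 30 + [("good", 1.0, 8.0)] * 30
mcav = classify(stream)
```

Varying `median_threshold` shifts how long cells sample before presenting, which is the trade-off the paper investigates: low thresholds react quickly but on little evidence, high thresholds smooth over noise at the cost of latency.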


Real-Time Alert Correlation with Type Graphs

arXiv.org Artificial Intelligence

The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose that use attack graphs to model the ways in which attacks can be combined. These algorithms fall into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but it has ignored the hypothesizing of missing alerts. We present a novel type-graph algorithm that unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient under intensive alert loads and produces compact output graphs comparable to those of other techniques.
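The idea of unifying correlation with the hypothesizing of missing alerts can be sketched in a few lines. The attack-type graph, host addresses, and alert stream below are illustrative assumptions, not the paper's model or algorithm.

```python
# Abstract type graph: which attack types can causally precede which.
TYPE_GRAPH = {
    "scan": ["exploit"],
    "exploit": ["privilege-escalation"],
    "privilege-escalation": ["data-exfiltration"],
}
ALL_TYPES = set(TYPE_GRAPH) | {s for succ in TYPE_GRAPH.values() for s in succ}
PREDECESSORS = {t: [p for p, succ in TYPE_GRAPH.items() if t in succ]
                for t in ALL_TYPES}

def correlate(alerts):
    """Link each alert to a prior alert of a predecessor type on the same
    host; when no predecessor was observed, hypothesize the missing alert
    in the same pass instead of running a separate repair step."""
    seen = {}   # (host, type) -> index of most recent matching alert
    edges = []  # (cause, effect_index, kind)
    for i, (host, atype) in enumerate(alerts):
        for pred in PREDECESSORS[atype]:
            if (host, pred) in seen:
                edges.append((seen[(host, pred)], i, "observed"))
            else:
                # the IDS presumably missed this step: record a
                # hypothesized node so the scenario stays connected
                edges.append((f"hyp:{pred}@{host}", i, "hypothesized"))
        seen[(host, atype)] = i
    return edges

alerts = [("10.0.0.5", "scan"),
          ("10.0.0.5", "exploit"),
          ("10.0.0.5", "data-exfiltration")]  # escalation alert was missed
edges = correlate(alerts)
```

Because each alert is resolved against a small, fixed type graph rather than a per-host scenario graph, the per-alert work stays constant, which is the kind of efficiency under intensive alert loads the abstract claims.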


Performance Evaluation of DCA and SRC on a Single Bot Detection

arXiv.org Artificial Intelligence

Malicious users try to compromise systems using new techniques. One recent technique is to perform complex distributed attacks, such as denial of service, and to obtain sensitive data such as password information. Compromised machines are said to be infected with malicious software termed a "bot". In this paper, we investigate the correlation of behavioural attributes, such as keylogging and packet-flooding behaviour, to detect the existence of a single bot on a compromised machine by applying (1) the Spearman's rank correlation (SRC) algorithm and (2) the dendritic cell algorithm (DCA). We also compare the results the two methods produce for the detection of a single bot. The results show that the DCA performs better at detecting malicious activities.
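The SRC side of the comparison amounts to ranking two behavioural time series and correlating the ranks. A minimal self-contained sketch follows; the per-window keystroke and packet counts are invented stand-ins for the attributes the paper monitors.

```python
def rank(xs):
    """1-based ranks with average ranks assigned to tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# per-30-second-window counts on a hypothetical infected host
keystrokes = [3, 12, 8, 25, 30, 7]
packets    = [40, 150, 90, 310, 400, 80]
rho = spearman(keystrokes, packets)
```

A rho near 1 would indicate that keylogging and packet-flooding activity rise and fall together, the kind of correlated bot behaviour the paper tries to detect; on a clean machine the two attributes should be largely uncorrelated.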


Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory

arXiv.org Artificial Intelligence

The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells of the human immune system, and these similarities can be exploited to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders by sensing molecules belonging to those invaders, together with signals generated by the host indicating danger and damage. We propose incorporating this concept into a responsive intrusion detection system, where behavioural information about the system and its running processes is combined with information about individual system calls.


A Gender-Centric Analysis of Calling Behavior in a Developing Economy Using Call Detail Records

AAAI Conferences

The gender divide in access to technology in developing economies makes gender characterization and automatic gender identification two of the most critical needs for improving cell phone-based services. Gender identification has typically been solved using voice or image processing. However, such techniques cannot be applied in cell phone networks, mostly due to privacy concerns. In this paper, we present a study aimed at characterizing and automatically identifying the gender of a cell phone user in a developing economy based on behavioral, social, and mobility variables. Our contributions are twofold: (1) understanding the role that gender plays in phone usage, and (2) evaluating common machine learning approaches for gender identification. The analysis was carried out using the encrypted CDRs (Call Detail Records) of approximately 10,000 users from a developing economy whose gender was known a priori. Our results indicate that behavioral and social variables, including the number of incoming/outgoing calls and the in-degree/out-degree of the social network, reveal statistically significant differences between male and female callers. Finally, we propose a new gender identification algorithm that can achieve classification rates of up to 80% when the percentage of predicted instances is reduced.
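The final claim, higher accuracy when the classifier is allowed to abstain on a fraction of instances, can be illustrated with a confidence-thresholded classifier. Everything below is a synthetic sketch: the logistic model, the two stand-in CDR features, and the labelling rule are assumptions for illustration, not the paper's algorithm or data.

```python
import math
import random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain stochastic gradient descent on logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x, threshold=0.8):
    """Return 0/1 only when the model is confident, else abstain (None)."""
    p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, x)))
    if p >= threshold:
        return 1
    if p <= 1 - threshold:
        return 0
    return None  # instance left unpredicted

random.seed(1)
# synthetic stand-ins for CDR features: (calls per day, call-graph out-degree)
X = [(random.gauss(2, 1), random.gauss(5, 2)) for _ in range(100)]
y = [1 if x0 + 0.5 * x1 > 4.5 else 0 for x0, x1 in X]  # toy labelling rule
w, b = train_logreg(X, y)

preds = [(predict(w, b, x), yi) for x, yi in zip(X, y)]
covered = [(p, yi) for p, yi in preds if p is not None]
accuracy = sum(p == yi for p, yi in covered) / len(covered)
coverage = len(covered) / len(X)
```

Raising `threshold` shrinks `coverage` while pushing `accuracy` up, which is the trade-off behind "classification rates of up to 80% when the percentage of predicted instances is reduced".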


Privacy and Transparency

AAAI Conferences

In this essay I argue that it is logically and practically possible to secure the right to privacy under conditions of increasing social transparency. The argument is predicated on a particular analysis of the right to privacy as the right to the personal space required for the exercise of practical rationality. It also rests on the distinction between the unidirectional transparency required by repressive governments and the increasing omnidirectional transparency that liberal information societies are experiencing today. I claim that a properly administered omnidirectional transparency will not only enhance privacy and autonomy, but can also be a key development in the creation of a society that is more tolerant of harmless diversity and temperate in its punishment of anti-social behaviors.


Reasoning about the Appropriate Use of Private Data through Computational Workflows

AAAI Conferences

While there is a plethora of mechanisms to ensure lawful access to privacy-protected data, additional research is required in order to reassure individuals that their personal data is being used for the purpose that they consented to. This is particularly important in the context of new data mining approaches, as used, for instance, in biomedical research and commercial data mining. We argue for the use of computational workflows to ensure and enforce appropriate use of sensitive personal data. Computational workflows describe in a declarative manner the data processing steps and the expected results of complex data analysis processes such as data mining (Gil et al. 2007b; Taylor et al. 2006). We see workflows as an artifact that captures, among other things, how data is being used and for what purpose. Existing frameworks for computational workflows need to be extended to incorporate privacy policies that can govern the use of data.
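The proposed extension, workflow steps annotated with the purposes a data subject consented to, can be sketched as follows. The class names, the purpose labels, and the refusal-on-mismatch behaviour are illustrative assumptions about how such a framework might look, not an existing workflow system's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    purpose: str            # declared purpose of this processing step
    func: Callable          # the actual data transformation

@dataclass
class Workflow:
    steps: list
    consented_purposes: set

    def run(self, data):
        """Execute steps in order, refusing any step whose declared
        purpose is not covered by the subject's consent."""
        for step in self.steps:
            if step.purpose not in self.consented_purposes:
                raise PermissionError(
                    f"step '{step.name}' requires purpose '{step.purpose}', "
                    f"which was not consented to")
            data = step.func(data)
        return data

wf = Workflow(
    steps=[
        Step("deidentify", "biomedical-research",
             lambda rows: [{**r, "id": None} for r in rows]),
        Step("aggregate", "biomedical-research",
             lambda rows: {"n": len(rows)}),
    ],
    consented_purposes={"biomedical-research"},
)
result = wf.run([{"id": 1, "hr": 72}, {"id": 2, "hr": 66}])
```

Because the workflow is declarative, the purpose annotations double as an audit trail: one can check, before any data is touched, whether every step's declared purpose falls within the consent, which is the kind of reassurance the abstract argues individuals currently lack.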