Information Technology
A two-step fusion process for multi-criteria decision applied to natural hazards in mountains
Tacnet, Jean-Marc, Batton-Hubert, Mireille, Dezert, Jean
Mountain river torrents and snow avalanches cause human casualties and material damage with dramatic consequences. Knowledge about the natural phenomena is often lacking, and expertise is required for decision and risk management purposes, using multi-disciplinary quantitative or qualitative approaches. Expertise is considered a decision process based on imperfect information coming from more or less reliable and conflicting sources. A methodology is described that mixes the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, with information fusion based on Belief Function Theory. Fuzzy Set and Possibility theories are used to transform quantitative and qualitative criteria into a common frame of discernment for decision making in the Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues concern the elicitation of basic belief assignments, the identification and management of conflict, the choice of fusion rules, and the validation of results, as well as the specific need to distinguish between importance and reliability, and to handle uncertainty, in the fusion process.
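For illustration, the fusion step at the core of this methodology can be sketched with Dempster's rule of combination from DST. The frame of discernment and the two basic belief assignments below are invented examples, not values from the paper:

```python
# A minimal sketch of Dempster's rule of combination, the classical DST
# fusion step. The decision frame {safe, risky} and the two expert
# opinions are hypothetical illustrations.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments given as dicts mapping
    frozensets (focal elements) to masses summing to 1."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Dempster's normalisation: redistribute the non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

SAFE, RISKY = frozenset({"safe"}), frozenset({"risky"})
THETA = SAFE | RISKY  # full ignorance
m_expert1 = {SAFE: 0.6, THETA: 0.4}
m_expert2 = {RISKY: 0.3, SAFE: 0.5, THETA: 0.2}
print(dempster_combine(m_expert1, m_expert2))
```

When conflict between sources is high, or in the DSmT setting, the normalisation step above is replaced by alternative conflict-redistribution rules, which is one of the fusion-rule choices the abstract refers to.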
The Application of a Dendritic Cell Algorithm to a Robotic Classifier
Oates, Robert, Greensmith, Julie, Aickelin, Uwe, Garibaldi, Jonathan M., Kendall, Graham
The dendritic cell algorithm is an immune-inspired technique for processing time-dependent data. Here we propose it as a possible solution to a robotic classification problem. The dendritic cell algorithm is implemented on a real robot, and an investigation is performed into the effects of varying the median migration threshold of the cell population. The algorithm performs well on a classification task with very little tuning. Ways of extending the implementation so that it can be used as a classifier within the field of robotic security are suggested.
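As a rough illustration of the algorithm under investigation, here is a heavily simplified sketch of a DCA-style cell population. The signal weighting, threshold distribution, and cell-cycling policy are assumptions for illustration, not the parameters used on the robot:

```python
# A simplified dendritic-cell-algorithm sketch: cells accumulate danger
# and safe signals, migrate once costimulation passes their threshold,
# and vote on the antigens they sampled. All constants are illustrative.
import random

class DendriticCell:
    def __init__(self, migration_threshold):
        self.threshold = migration_threshold
        self.csm = 0.0   # accumulated costimulation
        self.k = 0.0     # accumulated context (danger vs. safe)
        self.antigens = []

    def sample(self, antigen, danger, safe):
        self.antigens.append(antigen)
        self.csm += danger + safe
        self.k += danger - 2.0 * safe  # safe signals suppress danger

def dca(stream, n_cells=10, median_threshold=5.0):
    """stream: iterable of (antigen_id, danger_signal, safe_signal)."""
    def new_cell():
        # Thresholds drawn around a median, the parameter varied in the paper
        return DendriticCell(random.uniform(0.5, 1.5) * median_threshold)

    cells = [new_cell() for _ in range(n_cells)]
    votes = {}  # antigen_id -> (mature presentations, total presentations)
    for antigen, danger, safe in stream:
        i = random.randrange(len(cells))
        cells[i].sample(antigen, danger, safe)
        if cells[i].csm >= cells[i].threshold:      # cell migrates
            mature = cells[i].k > 0                 # danger context dominated
            for a in cells[i].antigens:
                m, t = votes.get(a, (0, 0))
                votes[a] = (m + int(mature), t + 1)
            cells[i] = new_cell()                   # replace migrated cell
    # MCAV: fraction of presentations made in a mature (danger) context
    return {a: m / t for a, (m, t) in votes.items() if t}
```

Antigens with a high mature-context fraction (MCAV) are classified as anomalous; raising or lowering `median_threshold` changes how long cells sample before committing, which is the sensitivity the paper investigates.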
Real-Time Alert Correlation with Type Graphs
Tedesco, Gianni, Aickelin, Uwe
The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable, and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose that use attack graphs to model the ways in which attacks can be combined. These algorithms can be classified into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but it has ignored the hypothesizing of missing alerts. We present a novel type-graph algorithm which unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient in the face of intensive alert volumes and produces compact output graphs comparable to those of other techniques.
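A toy sketch of the unified correlate-and-hypothesize idea, using a hypothetical type graph; the alert format, the one-hop hypothesis limit, and the graph itself are illustrative assumptions rather than the paper's algorithm:

```python
# Correlating IDS alerts against a type graph, hypothesising at most one
# missing alert when an observed alert has no observed predecessor.
from collections import defaultdict

# Edges: attack type -> attack types it can enable (hypothetical graph)
TYPE_GRAPH = {
    "scan": {"exploit"},
    "exploit": {"privilege_escalation", "backdoor"},
    "privilege_escalation": {"backdoor"},
}

def correlate(alerts):
    """alerts: ordered list of attack-type names from the IDS.
    Returns (edge, hypothesised?) pairs forming the output graph."""
    preds = defaultdict(set)  # type -> types that can reach it in one hop
    for src, dsts in TYPE_GRAPH.items():
        for d in dsts:
            preds[d].add(src)
    seen, edges = set(), []
    for alert in alerts:
        direct = preds[alert] & seen
        if direct:
            edges.extend(((p, alert), False) for p in direct)
        else:
            # No observed predecessor: hypothesise one missing alert that
            # would connect this alert to something already seen.
            for p in preds[alert]:
                if preds[p] & seen:
                    edges.append(((p, alert), True))
                    break
        seen.add(alert)
    return edges

# The "exploit" alert is never observed, so it is hypothesised:
print(correlate(["scan", "privilege_escalation", "backdoor"]))
```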
Performance Evaluation of DCA and SRC on a Single Bot Detection
Al-Hammadi, Yousof, Aickelin, Uwe, Greensmith, Julie
Malicious users try to compromise systems using new techniques. One recent technique used by attackers is to perform complex distributed attacks, such as denial of service, and to obtain sensitive data such as password information. Machines compromised in this way are said to be infected with malicious software termed a "bot". In this paper, we investigate the correlation of behavioural attributes, such as keylogging and packet flooding behaviour, to detect the existence of a single bot on a compromised machine by applying (1) the Spearman's rank correlation (SRC) algorithm and (2) the Dendritic Cell Algorithm (DCA). We also compare the detection results produced by these two methods. The results show that the DCA performs better in detecting malicious activities.
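The SRC side of this comparison is straightforward to sketch; the sample data below is invented, assuming per-interval counts of keystrokes and outgoing packets as the behavioural attributes:

```python
# Correlating two behavioural attribute time series with Spearman's rank
# correlation. The per-second counts are illustrative placeholders.
from scipy.stats import spearmanr

keystrokes_per_sec = [0, 0, 12, 15, 11, 0, 14, 13]
packets_per_sec    = [2, 1, 40, 55, 38, 3, 47, 50]

rho, p_value = spearmanr(keystrokes_per_sec, packets_per_sec)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strong positive rho suggests keylogging and flooding activity rise
# and fall together, flagging possible bot behaviour on the host.
```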
Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory
Kim, Jungwon, Greensmith, Julie, Twycross, Jamie, Aickelin, Uwe
The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells belonging to the human immune system, and these similarities can be applied to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders by sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose the incorporation of this concept into a responsive intrusion detection system, where behavioural information about the system and its running processes is combined with information regarding individual system calls.
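A hedged sketch of how such a combination might look, assuming n-gram syscall profiling for the behavioural component and invented danger-signal names; this illustrates the danger-theory gating rather than the authors' actual design:

```python
# Danger-theory-style gating: anomalous syscall behaviour alone is only
# monitored; anomaly coinciding with host damage triggers a response.
# Thresholds, signal names, and actions are illustrative assumptions.
def syscall_anomaly_score(trace, normal_ngrams, n=3):
    """Fraction of n-grams in the syscall trace unseen in the normal profile."""
    ngrams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    if not ngrams:
        return 0.0
    unseen = sum(1 for g in ngrams if g not in normal_ngrams)
    return unseen / len(ngrams)

def respond(trace, normal_ngrams, danger_signals):
    """danger_signals: dict of host measurements in [0, 1], e.g. rate of
    file-integrity violations or process crash frequency (hypothetical)."""
    anomaly = syscall_anomaly_score(trace, normal_ngrams)
    danger = max(danger_signals.values(), default=0.0)
    if anomaly > 0.5 and danger > 0.5:
        return "kill_process"      # intruder signature plus host damage
    if anomaly > 0.5:
        return "monitor_closely"   # anomalous but apparently harmless
    return "ignore"
```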
A Gender-Centric Analysis of Calling Behavior in a Developing Economy Using Call Detail Records
Frias-Martinez, Vanessa (Telefonica Research, Madrid) | Frias-Martinez, Enrique (Telefonica Research, Madrid) | Oliver, Nuria (Telefonica Research, Madrid)
The gender divide in access to technology in developing economies makes gender characterization and automatic gender identification two of the most critical needs for improving cell phone-based services. Gender identification has typically been solved using voice or image processing. However, such techniques cannot be applied to cell phone networks, mostly due to privacy concerns. In this paper, we present a study aimed at characterizing and automatically identifying the gender of a cell phone user in a developing economy based on behavioral, social, and mobility variables. Our contributions are twofold: (1) understanding the role that gender plays in phone usage, and (2) evaluating common machine learning approaches for gender identification. The analysis was carried out using the encrypted CDRs (Call Detail Records) of approximately 10,000 users from a developing economy, whose gender was known a priori. Our results indicate that behavioral and social variables, including the number of incoming/outgoing calls and the in-degree/out-degree of the social network, reveal statistically significant differences between male and female callers. Finally, we propose a new gender identification algorithm that can achieve classification rates of up to 80% when the percentage of predicted instances is reduced.
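As a sketch of the accuracy/coverage trade-off mentioned at the end, here is a hypothetical confidence-thresholded classifier over CDR-style features; the feature names follow the abstract, while the random placeholder data and the choice of a random forest are assumptions:

```python
# Predict gender only for instances where the model is confident:
# accuracy rises as the fraction of predicted instances falls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed feature columns: calls_in, calls_out, in_degree, out_degree.
# Random placeholder data stands in for the (private) CDR features.
X, y = np.random.rand(1000, 4), np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
confidence = clf.predict_proba(X_te).max(axis=1)
preds = clf.predict(X_te)

for threshold in (0.5, 0.7, 0.9):
    mask = confidence >= threshold
    if mask.any():
        acc = (preds[mask] == y_te[mask]).mean()
        print(f"conf>={threshold}: coverage={mask.mean():.0%}, "
              f"accuracy={acc:.0%}")
```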
Privacy and Transparency
Mayes, Gregory Randolph (California State University Sacramento)
In this essay I argue that it is logically and practically possible to secure the right to privacy under conditions of increasing social transparency. The argument is predicated on a particular analysis of the right to privacy as the right to the personal space required for the exercise of practical rationality. It also rests on the distinction between the unidirectional transparency required by repressive governments and the increasing omnidirectional transparency that liberal information societies are experiencing today. I claim that a properly administered omnidirectional transparency will not only enhance privacy and autonomy, but can also be a key development in the creation of a society that is more tolerant of harmless diversity and temperate in its punishment of anti-social behaviors.
Reasoning about the Appropriate Use of Private Data through Computational Workflows
Gil, Yolanda (Information Sciences Institute, University of Southern California) | Fritz, Christian (Information Sciences Institute, University of Southern California)
While there is a plethora of mechanisms to ensure lawful access to privacy-protected data, additional research is required in order to reassure individuals that their personal data is being used for the purpose that they consented to. This is particularly important in the context of new data mining approaches, as used, for instance, in biomedical research and commercial data mining. We argue for the use of computational workflows to ensure and enforce appropriate use of sensitive personal data. Computational workflows describe in a declarative manner the data processing steps and the expected results of complex data analysis processes such as data mining (Gil et al. 2007b; Taylor et al. 2006). We see workflows as an artifact that captures, among other things, how data is being used and for what purpose. Existing frameworks for computational workflows need to be extended to incorporate privacy policies that can govern the use of data.
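A toy sketch of the proposed extension, attaching consented purposes to data and declared purposes to workflow steps; the structures and policy vocabulary are invented for illustration and are far simpler than real workflow systems such as those cited above:

```python
# Validate a declarative workflow against per-dataset consent: a step may
# only consume data whose subjects consented to the step's purpose.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    consented_purposes: set  # purposes the data subjects agreed to

@dataclass
class Step:
    name: str
    purpose: str             # declared purpose of this processing step
    inputs: list = field(default_factory=list)

def validate_workflow(steps):
    """Return a list of policy violations (empty if the workflow is clean)."""
    violations = []
    for step in steps:
        for ds in step.inputs:
            if step.purpose not in ds.consented_purposes:
                violations.append(
                    f"{step.name}: '{ds.name}' not consented for "
                    f"'{step.purpose}'")
    return violations

records = Dataset("patient_records", {"diagnosis_research"})
ok = Step("cluster_patients", "diagnosis_research", [records])
bad = Step("target_advertising", "marketing", [records])
print(validate_workflow([ok, bad]))  # flags only the marketing step
```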
Combining Privacy and Security Risk Assessment in Security Quality Requirements Engineering
Abu-Nimeh, Saeed (Websense Security Labs) | Mead, Nancy (Carnegie Mellon University)
Functional or end-user requirements are the tasks that the system under development is expected to perform, whereas nonfunctional requirements are the qualities that the system is to adhere to. Functional requirements are not as difficult to tackle, as it is easier to test their implementation in the system under development. Security and privacy requirements are considered nonfunctional requirements, although in many instances they do have functionality. To identify privacy risks early in the design process, privacy requirements engineering is used (Chiasera et al. 2008). However, unlike security requirements engineering, little attention is paid to privacy requirements engineering, and thus it is less mature. Relevant privacy concerns include:
- Protection and control of consolidated data
- Data retrieval
- Equitable treatment of users
- Data retention and disposal
- User monitoring and protection against unauthorized monitoring
Several laws and regulations provide a set of guidelines that can be used to assess privacy risks. For example, the Health Insurance Portability and Accountability Act (HIPAA) addresses privacy concerns of health information systems by enforcing data exchange standards.
Privacy Classification Systems: Recall and Precision Optimization as Enabler of Trusted Information Sharing
Hogan, Christopher (H5) | Bauer, Robert S. (H5)
Information is shared more extensively when a user can confidently classify all his information according to its desired degree of disclosure prior to transmission. While high quality classification is relatively straightforward for structured data (e.g., credit card numbers, cookies, "confidential" reports), most consumer and business information is unstructured (e.g., Facebook posts, corporate email). All current technological approaches to classifying unstructured information seek to identify only that information having the desired characteristics (i.e., to maximize the percentage of filtered content that requires privacy protection). Such focus on boosting classifier Precision (P) causes technology solutions to miss sensitive information [i.e., Recall (R) is compromised for the sake of P improvement]. Such privacy protection will fall short of user expectations no matter how "intelligent" the technology may be in extending beyond keywords to user meaning. Systems must simultaneously optimize both P and R in order to protect privacy sufficiently to encourage the free flow of personal and corporate information. This requires a socio-technical methodology wherein the user is intimately involved in iterative privacy improvement. The approach is a general one in which the classifier can be modified as necessary at any time when sampling measures of P and R deem it appropriate. Matching the ever-evolving user privacy model to the technology solution (e.g., active machine learning) affords a technique for building and maintaining user trust.
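One way to read the proposed socio-technical loop, sketched with an assumed classifier interface (`predict`/`update`) and a human review function; the sampling sizes and P/R targets are illustrative:

```python
# Iterative privacy-classifier improvement: estimate P and R from small
# human-reviewed samples and retrain until both meet their targets.
import random

def sample(items, k=20):
    """Draw a small random sample for human review."""
    return random.sample(items, min(k, len(items)))

def estimate_precision_recall(flagged_sample, unflagged_sample):
    """Each sample is a list of booleans: True if the item truly required
    privacy protection (human-reviewed ground truth)."""
    tp = sum(flagged_sample)
    fp = len(flagged_sample) - tp
    fn = sum(unflagged_sample)  # sensitive items the classifier missed
    # (a real estimator would weight by population sizes; omitted here)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def improvement_loop(classifier, corpus, review,
                     target_p=0.9, target_r=0.9, max_rounds=10):
    for _ in range(max_rounds):
        flagged = [d for d in corpus if classifier.predict(d)]
        unflagged = [d for d in corpus if not classifier.predict(d)]
        p, r = estimate_precision_recall(
            [review(d) for d in sample(flagged)],
            [review(d) for d in sample(unflagged)])
        if p >= target_p and r >= target_r:
            break  # both P and R satisfy the user's privacy model
        # User feedback drives the next round of (active) learning.
        classifier.update([(d, review(d)) for d in sample(corpus)])
    return classifier
```

The point of optimizing P and R jointly, as the abstract argues, is that the loop refuses to stop while either measure lags, rather than declaring success on precision alone.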