
Face Recognition by Fusion of Local and Global Matching Scores using DS Theory: An Evaluation with Uni-classifier and Multi-classifier Paradigm

arXiv.org Artificial Intelligence

Faces are highly deformable objects which may easily change their appearance over time. Not all face areas are subject to the same variability. Decoupling the information coming from independent areas of the face is therefore of paramount importance for improving the robustness of any face recognition technique. This paper presents a robust face recognition technique based on the extraction and matching of SIFT features related to independent face areas. Both global and local (recognition-from-parts) matching strategies are proposed. The local strategy matches individual salient facial SIFT features anchored to facial landmarks such as the eyes and the mouth. In the global matching strategy, all SIFT features are combined into a single descriptor. To reduce identification errors, Dempster-Shafer decision theory is applied to fuse the two matching techniques. The proposed algorithms are evaluated on the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed techniques, even for partially occluded faces or faces with missing information.
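As a concrete illustration of the fusion step, the sketch below applies Dempster's rule of combination to a local and a global matching score over the two-hypothesis frame {genuine, impostor}. The score-to-mass mapping and the 0.9 belief cap are illustrative assumptions, not the paper's actual parameters.

```python
# Hypothetical score-to-mass mapping: part of each normalized score is
# committed to "genuine" (G) or "impostor" (I); the remainder stays on
# the whole frame as uncertainty.
def score_to_bpa(score, max_belief=0.9):
    """Map a matching score in [0, 1] to a basic probability assignment."""
    m = {"G": max_belief * score, "I": max_belief * (1.0 - score)}
    m["GI"] = 1.0 - m["G"] - m["I"]  # mass on the whole frame {G, I}
    return m

def dempster_combine(m1, m2):
    """Dempster's rule of combination on the frame {G, I}."""
    k = m1["G"] * m2["I"] + m1["I"] * m2["G"]  # conflicting mass
    norm = 1.0 - k
    fused = {
        "G": (m1["G"] * m2["G"] + m1["G"] * m2["GI"] + m1["GI"] * m2["G"]) / norm,
        "I": (m1["I"] * m2["I"] + m1["I"] * m2["GI"] + m1["GI"] * m2["I"]) / norm,
    }
    fused["GI"] = 1.0 - fused["G"] - fused["I"]
    return fused

# Fuse a local (landmark-based) and a global SIFT matching score, then
# accept the claimed identity if belief in "genuine" dominates.
fused = dempster_combine(score_to_bpa(0.72), score_to_bpa(0.58))
accept = fused["G"] > fused["I"]
```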


Classifying Network Data with Deep Kernel Machines

arXiv.org Machine Learning

Inspired by a growing interest in analyzing network data, we study the problem of node classification on graphs, focusing on approaches based on kernel machines. Conventionally, kernel machines are linear classifiers in the implicit feature space. We argue that linear classification in the feature space of the kernels commonly used for graphs is often not enough to produce good results. When this is the case, one naturally considers nonlinear classifiers in the feature space. We show that repeating this process produces something we call "deep kernel machines." We provide examples where deep kernel machines can make a big difference in classification performance, and point out connections to recent literature on deep architectures in artificial intelligence and machine learning.
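One way to read "repeating this process" is to compose kernels: apply a nonlinear kernel in the feature space induced by the previous kernel, which is computable from the Gram matrix alone. The sketch below uses a Gaussian-RBF composition with an assumed bandwidth; the paper's exact construction may differ.

```python
import numpy as np

def deepen(K, sigma=1.0):
    """One layer of composition: a Gaussian RBF in the feature space of K."""
    diag = np.diag(K)
    # ||phi(x_i) - phi(x_j)||^2 = k(x_i, x_i) - 2 k(x_i, x_j) + k(x_j, x_j)
    sq_dist = diag[:, None] - 2.0 * K + diag[None, :]
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def deep_kernel(K, depth=3, sigma=1.0):
    """Stack `depth` layers of composition on a base Gram matrix."""
    for _ in range(depth):
        K = deepen(K, sigma)
    return K

# Example: a base linear kernel on random node features, deepened three
# times; the result can feed any off-the-shelf kernel classifier.
X = np.random.randn(50, 8)
K_deep = deep_kernel(X @ X.T, depth=3)
```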


Detecting Botnets Through Log Correlation

arXiv.org Artificial Intelligence

Botnets, which consist of thousands of compromised machines, pose significant threats to other systems by launching Distributed Denial of Service (DDoS) attacks, keylogging, and opening backdoors. In response to these threats, new effective techniques are needed to detect the presence of botnets. In this paper, we use an interception technique to monitor Windows Application Programming Interface (API) function calls made by communication applications and to store these calls with their arguments in log files. Our algorithm detects botnets by monitoring for abnormal activity, correlating the changes in log file sizes across different hosts.
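A minimal sketch of the correlation idea follows: sample each host's log file size at fixed intervals, difference the series to obtain per-interval growth, and flag host pairs whose growth patterns are strongly correlated, since coordinated bot activity tends to produce synchronized logging across infected machines. The sampling scheme and the 0.8 threshold are assumptions for illustration.

```python
import numpy as np

def growth_series(sizes):
    """Per-interval log growth from a series of observed file sizes."""
    return np.diff(np.asarray(sizes, dtype=float))

def correlated_hosts(size_series_by_host, threshold=0.8):
    """Return host pairs whose log-growth correlation exceeds the threshold."""
    hosts = list(size_series_by_host)
    growth = {h: growth_series(s) for h, s in size_series_by_host.items()}
    suspicious = []
    for i, a in enumerate(hosts):
        for b in hosts[i + 1:]:
            r = np.corrcoef(growth[a], growth[b])[0, 1]
            if r >= threshold:
                suspicious.append((a, b, r))
    return suspicious

# Example: three hosts sampled at the same five instants; hosts A and B
# show near-identical growth bursts and are flagged as a suspicious pair.
series = {
    "A": [10, 52, 53, 95, 96],
    "B": [20, 61, 63, 104, 107],
    "C": [5, 6, 30, 31, 32],
}
print(correlated_hosts(series))
```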


Dendritic Cells for Anomaly Detection

arXiv.org Artificial Intelligence

Artificial immune systems, more specifically the negative selection algorithm, have previously been applied to intrusion detection. The aim of this research is to develop an intrusion detection system based on a novel concept in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting cells and key to the activation of the human immune system; they combine signals from the host tissue and correlate these signals with proteins known as antigens. In algorithmic terms, individual DCs perform multi-sensor data fusion based on time-windows. The whole population of DCs asynchronously correlates the fused signals with a secondary data stream. The behaviour of human DCs is abstracted to form the DC Algorithm (DCA), which is implemented using an immune inspired framework, libtissue. This system is used to detect context switching for a basic machine learning dataset and to detect outgoing portscans in real-time. Experimental results show a significant difference between an outgoing portscan and normal traffic.
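The following sketch captures the core DCA step described above: each cell fuses danger and safe signals over its time window and, on migration, presents the antigens it sampled in a mature (anomalous) or semi-mature (normal) context. Signal names, weights, and the migration threshold are illustrative; libtissue's actual interfaces are not reproduced here.

```python
from collections import defaultdict

class DendriticCell:
    def __init__(self, migration_threshold=10.0):
        self.threshold = migration_threshold
        self.csm = 0.0       # cumulative co-stimulation
        self.danger = 0.0    # fused danger signal
        self.safe = 0.0      # fused safe signal
        self.antigens = []   # antigens sampled during the time window

    def sense(self, antigen, danger_signal, safe_signal):
        """Multi-sensor fusion over the cell's lifetime (time window)."""
        self.antigens.append(antigen)
        self.danger += danger_signal
        self.safe += safe_signal
        self.csm += danger_signal + safe_signal
        return self.csm >= self.threshold  # time to migrate?

    def context(self):
        """Mature (anomalous) if danger dominates, else semi-mature."""
        return "mature" if self.danger > self.safe else "semi-mature"

# A population of cells asynchronously accumulates per-antigen context
# votes; antigens mostly presented in a mature context are anomalous.
votes = defaultdict(lambda: [0, 0])  # antigen -> [mature, semi-mature]
cell = DendriticCell()
for antigen, d, s in [("proc_42", 3.0, 0.5), ("proc_42", 4.0, 0.2),
                      ("proc_42", 3.5, 0.1)]:
    if cell.sense(antigen, d, s):
        for a in cell.antigens:
            votes[a][0 if cell.context() == "mature" else 1] += 1
        cell = DendriticCell()  # replace the migrated cell
```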


DCA for Bot Detection

arXiv.org Artificial Intelligence

Ensuring the security of computers is a non-trivial task, with many techniques used by malicious users to compromise these systems. In recent years a new threat has emerged in the form of networks of hijacked zombie machines used to perform complex distributed attacks such as denial of service and to obtain sensitive data such as password information. These zombie machines are said to be infected with a 'bot', a malicious piece of software which is installed on a host machine and is controlled by a remote attacker, termed the 'botmaster' of a botnet. In this work, we use the biologically inspired Dendritic Cell Algorithm (DCA) to detect the existence of a single bot on a compromised host machine. The DCA is an immune-inspired algorithm based on an abstract model of the behaviour of the dendritic cells of the human body. The DCA performs anomaly detection by correlating behavioural attributes such as keylogging and packet flooding. The results of applying the DCA to the detection of a single bot show that the algorithm successfully detects such malicious software without responding to normally running programs.
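To make the behavioural correlation concrete, the sketch below shows one way the named attributes could be turned into the danger and safe input signals consumed by cells like those in the previous sketch. Attribute names, scaling constants, and the idle-user heuristic are assumptions, not the paper's measured values.

```python
def signals_from_behaviour(keylog_calls, packets_out, user_active):
    """Derive (danger, safe) signals from per-interval host observations."""
    # Keylogging API calls while the user is idle, and bursts of outgoing
    # packets, both raise the danger signal (weights are illustrative).
    danger = 0.5 * keylog_calls * (0.0 if user_active else 1.0) \
             + 0.01 * packets_out
    # Normal interactive use is evidence of benign behaviour.
    safe = 2.0 if user_active else 0.1
    return danger, safe

# A bot that logs keys and floods packets on an idle host yields a high
# danger-to-safe ratio; the same counts during active use score lower.
print(signals_from_behaviour(keylog_calls=40, packets_out=500, user_active=False))
print(signals_from_behaviour(keylog_calls=40, packets_out=500, user_active=True))
```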


Cooperative Automated Worm Response and Detection Immune Algorithm

arXiv.org Artificial Intelligence

The role of T-cells within the immune system is to confirm and assess anomalous situations and then either respond to or tolerate the source of the effect. To illustrate how these mechanisms can be harnessed to solve real-world problems, we present the blueprint of a T-cell inspired algorithm for computer security worm detection. We show how the three central T-cell processes, namely T-cell maturation, differentiation and proliferation, naturally map into this domain and further illustrate how such an algorithm fits into a complete immune inspired computer security system and framework.
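A minimal sketch of how the three processes might map onto worm detection follows: maturation censors detectors that react to normal traffic, differentiation turns a confirmed detector into an effector, and proliferation clones effectors for distribution to peer hosts in a cooperative response. All states, triggers, and the clone count are illustrative assumptions.

```python
from enum import Enum

class State(Enum):
    NAIVE = "naive"
    MATURE = "mature"
    EFFECTOR = "effector"

class TCellDetector:
    def __init__(self, signature):
        self.signature = signature  # traffic pattern this cell reacts to
        self.state = State.NAIVE

    def mature(self, normal_traffic):
        """Maturation: censor self-reactive cells, keep the rest."""
        if self.signature in normal_traffic:
            return None  # reacts to normal traffic: deleted
        self.state = State.MATURE
        return self

    def differentiate(self, anomaly_confirmed):
        """Differentiation: become an effector once danger is confirmed."""
        if self.state is State.MATURE and anomaly_confirmed:
            self.state = State.EFFECTOR

    def proliferate(self, n_clones=3):
        """Proliferation: clone effectors for distribution to peer hosts."""
        assert self.state is State.EFFECTOR
        return [TCellDetector(self.signature) for _ in range(n_clones)]

# Usage: censor a candidate against observed normal traffic, confirm via
# an external anomaly signal, then hand clones to peer hosts.
cell = TCellDetector("tcp_445_burst").mature(normal_traffic={"http_get"})
if cell:
    cell.differentiate(anomaly_confirmed=True)
    clones = cell.proliferate()
```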


Web-Based Expert System for Civil Service Regulations: RCSES

arXiv.org Artificial Intelligence

The Internet and expert systems have offered new ways of sharing and distributing knowledge, but there is a lack of research in the area of web-based expert systems. This paper introduces the development of a web-based expert system for the civil service regulations of the Kingdom of Saudi Arabia, named RCSES. It is the first system of its kind (an application to civil service regulations) and the first to be developed with a web-based approach. The proposed system covers 17 regulations of the civil service system. The different phases of developing RCSES are presented: knowledge acquisition and selection, and ontology and knowledge representation in XML format. The XML rule-based knowledge sources and the inference mechanisms were implemented using ASP.NET. An interactive tool was built for entering the ontology and knowledge base and for running the inference, giving the ability to use, modify, update, and extend the existing knowledge base in an easy way. The knowledge was validated by experts in the domain of civil service regulations, and RCSES was tested, verified, and validated by different technical users and the development staff. RCSES is compared with other related web-based expert systems; the comparison demonstrated its usability and high performance.
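The sketch below illustrates the kind of XML rule representation and forward-chaining inference the abstract describes. The original system is implemented with ASP.NET; this Python rendering, the rule schema, and the sample rule about promotion eligibility are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

RULES_XML = """
<rules>
  <rule id="r1">
    <if>years_of_service >= 10</if>
    <if>performance = excellent</if>
    <then>eligible_for_promotion = yes</then>
  </rule>
</rules>
"""

def parse_rules(xml_text):
    """Read each rule as (list of conditions, conclusion)."""
    root = ET.fromstring(xml_text)
    return [([c.text for c in r.findall("if")], r.find("then").text)
            for r in root.findall("rule")]

def holds(condition, facts):
    """Evaluate a single 'attr op value' condition against known facts."""
    attr, op, value = condition.split(None, 2)
    if attr not in facts:
        return False
    if op == "=":
        return str(facts[attr]) == value
    if op == ">=":
        return float(facts[attr]) >= float(value)
    return False

def forward_chain(rules, facts):
    """Fire rules until no new facts are derived."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            attr, value = [s.strip() for s in conclusion.split("=", 1)]
            if attr not in facts and all(holds(c, facts) for c in conditions):
                facts[attr] = value
                changed = True
    return facts

facts = forward_chain(parse_rules(RULES_XML),
                      {"years_of_service": 12, "performance": "excellent"})
print(facts["eligible_for_promotion"])  # yes
```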


Client-server multi-task learning from distributed datasets

arXiv.org Artificial Intelligence

A client-server architecture to simultaneously solve multiple learning tasks from distributed datasets is described. In this architecture, each client is associated with an individual learning task and its own dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving the privacy of individual data. The role of the server is to collect data in real-time from the clients and to codify the information in a common database. The information coded in this database can be used by all the clients to solve their individual learning tasks, so that each client can exploit the informative content of all the datasets without actually having access to the private data of others. The proposed algorithmic framework, based on regularization theory and kernel methods, uses a suitable class of mixed effect kernels. The new method is illustrated through a simulated music recommendation system.
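A minimal sketch of the kernel construction follows: a mixed effect kernel combines a component shared across all tasks with a task-specific component that is nonzero only for examples of the same task, and a single regularized system is solved on the pooled data. Using the same Gaussian base kernel for both components, along with the mixing and regularization weights, are illustrative assumptions.

```python
import numpy as np

def base_kernel(X, Y, gamma=0.5):
    """Gaussian base kernel between two sets of inputs."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mixed_effect_gram(X, tasks, lam=0.5):
    """K((x,i),(y,j)) = lam * k(x,y) + (1 - lam) * [i == j] * k(x,y)."""
    K = base_kernel(X, X)
    same_task = (tasks[:, None] == tasks[None, :]).astype(float)
    return lam * K + (1.0 - lam) * same_task * K

# Server side: pool the (suitably encoded) examples from all clients and
# solve one kernel ridge regression; each client then reads off the
# predictions for its own task.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))            # pooled inputs from all clients
tasks = np.repeat(np.arange(3), 10)     # task/client id per example
y = np.sin(X[:, 0]) + 0.1 * tasks + 0.05 * rng.normal(size=30)

G = mixed_effect_gram(X, tasks)
alpha = np.linalg.solve(G + 1e-2 * np.eye(len(y)), y)  # kernel ridge
y_hat = G @ alpha                        # in-sample predictions
```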


The Design and Evaluation of User Interfaces for the RADAR Learning Personal Assistant

AI Magazine

The RADAR project developed a large multi-agent system with a mixed-initiative user interface designed to help office workers cope with email overload. Most RADAR agents observe experts performing tasks and then assist other users who are performing similar tasks. The interaction design for RADAR focused on developing user interfaces that allowed the intelligent functionality to improve the user's workflow without frustrating the user when the system's suggestions were either unhelpful or simply incorrect. A large evaluation demonstrated that novice users confronted with an email overload test performed significantly better when assisted by RADAR, achieving a 37% higher overall score.


AI and HCI: Two Fields Divided by a Common Focus

AI Magazine

Although AI and HCI both explore computing and intelligent behavior, and the fields have seen some crossover, until recently there was not very much. This article outlines a history of the two fields that identifies some of the forces that kept them at arm's length. AI was generally marked by a very ambitious, long-term vision requiring expensive systems, although the "long term" was rarely envisioned as being as long as it proved to be, whereas HCI focused more on innovation and improvement of widely used hardware within a short time-scale. These differences led to different priorities, methods, and assessment approaches. A consequence was competition for resources, with HCI flourishing in AI winters and moving more slowly when AI was in favor. The situation today is much more promising, in part because of platform convergence: AI can be exploited on widely used systems.