Artificial Immune Systems
Aickelin, Uwe, Dasgupta, Dipankar
The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information-processing biological system has caught the attention of computer science in recent years. A novel computational intelligence technique, inspired by immunology, has emerged, called Artificial Immune Systems. Several concepts from the immune system have been extracted and applied to the solution of real-world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We will then show illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of Artificial Immune Systems to other well-known algorithms, areas for future work, tips & tricks, and a list of resources round off this tutorial. It should be noted that as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from each other and from the examples given here.
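As an illustration of the self/non-self metaphor described above, the following sketch implements a toy negative-selection scheme in Python: randomly generated detectors are kept only if they fail to match any self sample, and anything a surviving detector does match is later flagged as non-self. The bit-string encoding, matching radius, and detector count are assumptions made for this example; as noted above, there is no single fixed Artificial Immune Systems template.

import random

# Toy negative selection: keep only detectors that do NOT match the self set,
# then use them to flag non-self samples. The representation and radius are
# illustrative assumptions, not a canonical AIS algorithm.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def generate_detectors(self_set, n_detectors, length, radius):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = tuple(random.randint(0, 1) for _ in range(length))
        if all(hamming(candidate, s) > radius for s in self_set):
            detectors.append(candidate)
    return detectors

def is_nonself(sample, detectors, radius):
    return any(hamming(sample, d) <= radius for d in detectors)

self_set = {(0, 0, 1, 1, 0, 1), (1, 1, 0, 0, 1, 0)}
detectors = generate_detectors(self_set, n_detectors=20, length=6, radius=1)
print(is_nonself((1, 0, 1, 0, 1, 1), detectors, radius=1))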
Sum of Us: Strategyproof Selection from the Selectors
Alon, Noga, Fischer, Felix, Procaccia, Ariel D., Tennenholtz, Moshe
We consider directed graphs over a set of n agents, where an edge (i,j) is taken to mean that agent i supports or trusts agent j. Given such a graph and an integer k ≤ n, we wish to select a subset of k agents that maximizes the sum of indegrees, i.e., a subset of k most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a k-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for k ∈ {1, ..., n-1}, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.
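For concreteness, here is a minimal sketch of the non-strategyproof baseline implied by the problem statement: simply pick the k agents with the highest indegree. The toy graph and helper function are invented for illustration; the paper's randomized mechanism, which attains the stated approximation guarantee, is not reproduced here.

from collections import Counter

# Baseline selection: take the k agents with the most incoming support edges.
# An agent could improve its own chances by withholding outgoing edges, which
# is exactly the manipulation a strategyproof mechanism must rule out.

def top_k_by_indegree(edges, agents, k):
    indegree = Counter(j for _, j in edges)
    return sorted(agents, key=lambda a: indegree[a], reverse=True)[:k]

agents = [0, 1, 2, 3]
edges = [(0, 1), (2, 1), (3, 1), (1, 2), (3, 2)]  # (i, j): i supports j
print(top_k_by_indegree(edges, agents, k=2))       # -> [1, 2]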
Experiment Study of Entropy Convergence of Ant Colony Optimization
Pang, Chao-Yang, Wang, Chong-Bao, Hu, Ben-Qiong
Ant colony optimization (ACO) has been applied widely in the field of combinatorial optimization, but studies of its convergence theory under general conditions are rare. In this paper, the authors seek experimental evidence that entropy is related to the convergence of ACO, in particular to the estimation of the minimum number of iterations needed for convergence. Entropy offers a possible new viewpoint for studying ACO convergence under general conditions. Keywords: Ant Colony Optimization, Convergence of ACO, Entropy
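A rough sketch of the kind of quantity the abstract alludes to: the Shannon entropy of a normalized pheromone distribution, which drops as the colony concentrates its pheromone on fewer choices. The pheromone vectors and this particular entropy definition are assumptions for illustration, not the paper's formulation.

import math

# Entropy of a normalized pheromone distribution; convergence of ACO shows
# up as this value decreasing over iterations.

def pheromone_entropy(pheromones):
    total = sum(pheromones)
    probs = [p / total for p in pheromones if p > 0]
    return -sum(p * math.log(p) for p in probs)

early = [1.0, 1.0, 1.0, 1.0]   # uniform pheromone: maximum entropy
late = [5.0, 0.2, 0.1, 0.1]    # concentrated pheromone: low entropy
print(pheromone_entropy(early), pheromone_entropy(late))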
Sparsification and feature selection by compressive linear regression
The Minimum Description Length (MDL) principle states that the optimal model for a given data set is the one that compresses it best. Due to practical limitations the model can be restricted to a class such as linear regression models, which we address in this study. As in other formulations such as the LASSO and forward stepwise regression, we are interested in sparsifying the feature set while preserving generalization ability. We derive a well-principled set of codes for both parameters and error residuals, along with smooth approximations to the lengths of these codes, so as to allow gradient-descent optimization of description length. We then show that sparsification and feature selection using our approach is faster than the LASSO on several datasets from the UCI and StatLib repositories, with favorable generalization accuracy. The method is fully automatic, requiring neither cross-validation nor tuning of regularization hyper-parameters, and even allows a nonlinear expansion of the feature set followed by sparsification.
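To make the idea concrete, here is a minimal sketch of an MDL-style objective for linear regression: the total description length is approximated as a Gaussian code for the residuals plus a smoothed, sparsity-inducing code for the weights, and is then minimized with a generic optimizer. The specific codes and the SciPy-based optimization are simplifying assumptions, not the codes derived in the paper.

import numpy as np
from scipy.optimize import minimize

# Description length ~ code length of residuals + code length of weights.
# The smooth sqrt(w^2 + eps^2) term stands in for an exact (non-smooth)
# sparsity-inducing weight code so that gradient-based optimization works.

def description_length(w, X, y, eps=1e-3):
    residuals = y - X @ w
    n = len(y)
    sigma2 = residuals @ residuals / n + 1e-12
    resid_bits = 0.5 * n * np.log(2 * np.pi * np.e * sigma2)  # Gaussian residual code
    weight_bits = np.sum(np.sqrt(w ** 2 + eps ** 2))          # smoothed weight code
    return resid_bits + weight_bits

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

result = minimize(description_length, x0=np.zeros(5), args=(X, y))
print(np.round(result.x, 2))   # near-zero weights mark features to drop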
How to Complete an Interactive Configuration Process?
Janota, Mikolas, Botterweck, Goetz, Grigore, Radu, Marques-Silva, Joao
When configuring customizable software, it is useful to provide interactive tool support that ensures the configuration does not breach given constraints. But when is a configuration complete, and how can the tool help the user to complete it? We formalize this problem and relate it to concepts from non-monotonic reasoning that are well researched in Artificial Intelligence. The results are interesting for both practitioners and theoreticians. Practitioners will find a technique facilitating an interactive configuration process and experiments supporting the feasibility of the approach. Theoreticians will find links between well-known formal concepts and a concrete practical application.
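As a toy illustration of the completion question, the sketch below asks whether a user's partial choice over boolean features can still be extended without breaching any constraint. The feature names and constraints are invented, and brute-force enumeration stands in for the SAT-based reasoning a real configurator would use.

from itertools import product

# Enumerate completions of a partial configuration that satisfy all constraints.
# Printing None means the partial configuration cannot be completed.

features = ["gui", "cli", "logging", "remote"]
constraints = [
    lambda c: c["gui"] or c["cli"],             # at least one front end
    lambda c: not c["remote"] or c["logging"],  # remote access requires logging
]

def completions(partial):
    free = [f for f in features if f not in partial]
    for values in product([False, True], repeat=len(free)):
        candidate = {**partial, **dict(zip(free, values))}
        if all(check(candidate) for check in constraints):
            yield candidate

partial = {"gui": False, "remote": True}
print(next(completions(partial), None))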
Learning Class-Level Bayes Nets for Relational Data
Schulte, Oliver, Khosravi, Hassan, Moser, Flavia, Ester, Martin
Many databases store data in relational format, with different types of entities and information about links between the entities. The field of statistical-relational learning (SRL) has developed a number of new statistical models for such data. In this paper we focus on learning class-level or first-order dependencies, which model the general database statistics over attributes of linked objects and links (e.g., the percentage of A grades given in computer science classes). Class-level statistical relationships are important in themselves, and they support applications like policy making, strategic planning, and query optimization. Most current SRL methods find class-level dependencies, but their main task is to support instance-level predictions about the attributes or links of specific entities. We focus only on class-level prediction, and describe algorithms for learning class-level models that are orders of magnitude faster for this task. Our algorithms learn Bayes nets with relational structure, leveraging the efficiency of single-table nonrelational Bayes net learners. An evaluation of our methods on three data sets shows that they are computationally feasible for realistic table sizes, and that the learned structures represent the statistical information in the databases well. After learning compiles the database statistics into a Bayes net, querying these statistics via Bayes net inference is faster than with SQL queries, and does not depend on the size of the database.
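As a small illustration of what a class-level statistic is, the sketch below computes an aggregate over linked entities (the share of A grades in courses of one department) rather than a prediction about a single individual. The tables and attribute names are made up for this example.

# Class-level query: P(grade = A | department of the linked course = CS),
# computed over a toy registration table.

registrations = [
    ("alice", "cs101", "A"), ("bob", "cs101", "B"),
    ("alice", "hist200", "A"), ("carol", "cs320", "A"),
]
course_dept = {"cs101": "CS", "cs320": "CS", "hist200": "History"}

cs_rows = [r for r in registrations if course_dept[r[1]] == "CS"]
p_a_given_cs = sum(r[2] == "A" for r in cs_rows) / len(cs_rows)
print(p_a_given_cs)   # 2/3 in this toy table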
Computer Models of Creativity
Boden, Margaret A. (University of Sussex)
Creativity isn’t magical. It’s an aspect of normal human intelligence, not a special faculty granted to a tiny elite. There are three forms: combinational, exploratory, and transformational. All three can be modeled by AI—in some cases, with impressive results. AI techniques underlie various types of computer art. Whether computers could “really” be creative isn’t a scientific question but a philosophical one, to which there’s no clear answer. But we do have the beginnings of a scientific understanding of creativity.
Converging on the Divergent: The History (and Future) of the International Joint Workshops in Computational Creativity
Cardoso, Amílcar (University of Coimbra) | Veale, Tony (School of Computer Science and Informatics, University College Dublin) | Wiggins, Geraint A. (Centre for Cognition, Computation and Culture, Goldsmiths, University of London)
The difference between comedians and their audience is a matter not of kind, but of degree, a difference that is reflected in the vocational emphasis they place on humor. Researchers in the field of computational creativity find themselves in a similar situation. As a subdiscipline of artificial intelligence, computational creativity explores theories and practices that give rise to a phenomenon, creativity, that all intelligent systems, human or machine, can legitimately lay claim to. Who is to say that a given AI system is not creative, insofar as it solves nontrivial problems or generates useful outputs that are not hard wired into its programming? As with comedians' being funny, the difference between studying computational creativity and studying artificial intelligence is one of emphasis rather than one of kind: the field of computational creativity, as typified by a long-running series of workshops at AI-related conferences, places a vocational emphasis on creativity and attempts to draw together the commonalities of what
AAAI Conferences Calendar
This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that also includes nonaffiliated conferences.
ICAART 2010 will be held January 22-24, 2010, in Valencia, Spain.
IUI 2010, the International Conference on Intelligent User Interfaces, will be held February 7-10, 2010, in Hong Kong.
ICEIS 2010 will be held June 8-12, 2010.
The International RuleML Symposium, Stanford, California.