Evolutionary Mechanics: new engineering principles for the emergence of flexibility in a dynamic and uncertain world
Whitacre, James M, Rohlfshagen, Philipp, Bender, Axel, Yao, Xin
Engineered systems are designed to operate deftly under predetermined conditions yet are notoriously fragile when unexpected perturbations arise. In contrast, biological systems operate in a highly flexible manner: they quickly learn adequate responses to novel conditions and evolve new routines/traits to remain competitive under persistent environmental change. A recent theory on the origins of biological flexibility has proposed that degeneracy - the existence of multi-functional components with partially overlapping functions - is a primary determinant of the robustness and adaptability found in evolved systems. While degeneracy's contribution to biological flexibility is well documented, there has been little investigation of degeneracy design principles for achieving flexibility in systems engineering. Indeed, the conditions that can lead to degeneracy are routinely eliminated in engineering design. Taking the planning of transportation vehicle fleets as a case study, this paper reports evidence that degeneracy improves the robustness and adaptability of a simulated fleet without incurring efficiency costs. We find that degeneracy dramatically increases a fleet's robustness to unpredicted changes in the environment while also facilitating robustness to anticipated variations. When we allow a fleet's architecture to be adapted in response to environmental change, we find that degeneracy can be selectively acquired, leading to faster design adaptation and ultimately to better designs. Given the range of conditions under which favorable short-term and long-term performance outcomes are observed, we propose that degeneracy design principles fundamentally alter the propensity for adaptation and may be useful in several engineering and planning contexts.
Survival of the flexible: explaining the recent dominance of nature-inspired optimization within a rapidly evolving world
Although researchers often comment on the rising popularity of nature-inspired meta-heuristics (NIM), there has been a paucity of data to directly support the claim that NIM are growing in prominence compared to other optimization techniques. This study presents evidence that the use of NIM is not only growing, but indeed appears to have surpassed mathematical optimization techniques (MOT) in several important metrics related to academic research activity (publication frequency) and commercial activity (patenting frequency). Motivated by these findings, this article discusses some of the possible origins of this growing popularity. I review different explanations for NIM popularity and discuss why some of these arguments remain unsatisfying. I argue that a compelling and comprehensive explanation should directly account for the manner in which most NIM success has actually been achieved, e.g. through hybridization and customization to different problem environments. By taking a problem lifecycle perspective, this paper offers a fresh look at the hypothesis that nature-inspired meta-heuristics derive much of their utility from being flexible. I discuss global trends within the business environments where optimization algorithms are applied and I speculate that highly flexible algorithm frameworks could become increasingly popular within our diverse and rapidly changing world.
Context Capture in Software Development
Antunes, Bruno, Correia, Francisco, Gomes, Paulo
The context of a software developer is hard to define and capture, as it represents a complex network of elements across different dimensions that is not limited to the work done in an IDE. We propose a software developer context model that takes into account all the dimensions that characterize the developer's work environment. We focus especially on what the software developer context encompasses at the project level and on how it can be captured. The experimental work done so far shows that useful context information can be extracted from project management tools. The extraction, analysis, and availability of this context information can be used to enrich the developer's work environment with additional knowledge that supports his or her work.
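To make the proposal concrete, here is a minimal sketch of what such a multi-dimensional context model might look like as a data structure. The dimension names and fields below (IDE state, project-level tasks and commits) are illustrative assumptions for exposition, not the authors' actual model.

```python
# Illustrative sketch only: dimensions and fields are assumptions for
# exposition, not the context model proposed by the authors.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IDEContext:
    """Context captured inside the IDE."""
    open_files: List[str] = field(default_factory=list)
    active_file: Optional[str] = None

@dataclass
class ProjectContext:
    """Context extracted from project management tools (issues, tasks, commits)."""
    project_id: str = ""
    open_tasks: List[str] = field(default_factory=list)
    recent_commits: List[str] = field(default_factory=list)

@dataclass
class DeveloperContext:
    """A developer's context as a network of elements across dimensions."""
    developer_id: str = ""
    ide: IDEContext = field(default_factory=IDEContext)
    project: ProjectContext = field(default_factory=ProjectContext)
```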
Minimum mean square distance estimation of a subspace
Besson, Olivier, Dobigeon, Nicolas, Tourneret, Jean-Yves
We consider the problem of subspace estimation in a Bayesian setting. Since we are operating in the Grassmann manifold, the usual approach, which consists of minimizing the mean square error (MSE) between the true subspace $U$ and its estimate $\hat{U}$, may not be adequate, as the MSE is not the natural metric in the Grassmann manifold. As an alternative, we propose to carry out subspace estimation by minimizing the mean square distance (MSD) between $U$ and its estimate, where the considered distance is a natural metric in the Grassmann manifold, viz., the distance between the projection matrices. We show that the resulting estimator is no longer the posterior mean of $U$ but instead entails computing the principal eigenvectors of the posterior mean of $U U^{T}$. Derivation of the MMSD estimator is carried out in a few illustrative examples, including a linear Gaussian model for the data and a Bingham or von Mises-Fisher prior distribution for $U$. In all scenarios, posterior distributions are derived and the MMSD estimator is obtained either analytically or via a Markov chain Monte Carlo simulation method. The method is shown to provide accurate estimates even when the number of samples is lower than the dimension of $U$. An application to hyperspectral imagery is finally investigated.
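To make the criterion concrete, the Grassmann distance in question can be written as the Frobenius distance between projection matrices, and the MMSD estimator then reduces to an eigenvector computation, as the abstract states. In the sketch below, $Y$ denotes the observed data and $p$ the subspace dimension; the normalization is our assumption:

```latex
% The MSD criterion: distance between projection matrices (natural Grassmann metric).
d^{2}\bigl(U,\hat{U}\bigr) = \bigl\lVert U U^{T} - \hat{U}\hat{U}^{T} \bigr\rVert_{F}^{2}
% The MMSD estimator minimizes the posterior expected distance; its columns are
% the p principal eigenvectors of the posterior mean of U U^T:
\hat{U}_{\mathrm{MMSD}}
  = \arg\min_{\hat{U}} \; \mathbb{E}\bigl[\, d^{2}(U,\hat{U}) \mid Y \,\bigr]
  = \mathrm{eig}_{p}\!\bigl( \mathbb{E}\bigl[\, U U^{T} \mid Y \,\bigr] \bigr)
```

Here $\mathrm{eig}_{p}(\cdot)$ is a symbol introduced only for brevity: it returns the $p$ principal eigenvectors of its argument.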
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
Fan, Jianqing, Feng, Yang, Song, Rui
A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. It is shown that, under nonparametric additive models and some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) procedure is also proposed to enhance the finite-sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
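The screening step itself is compact enough to sketch: each predictor is fit marginally against the response, and predictors are ranked by the size of the fitted nonparametric component. In the minimal illustration below, the cubic-polynomial basis and the threshold $d = \lfloor n/\log n \rfloor$ are our illustrative stand-ins for the B-spline basis and tuning used in the paper:

```python
# A minimal sketch of nonparametric independence screening (NIS), assuming a
# cubic-polynomial basis as a stand-in for the paper's B-spline basis.
import numpy as np

def nis_screen(X, y, d=None, degree=3):
    """Rank predictors by the size of their marginal nonparametric fit
    and keep the top d (the sure screening step)."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))            # a common hard-threshold choice
    y_c = y - y.mean()
    scores = np.empty(p)
    for j in range(p):
        # Basis expansion of the single predictor X_j (assumption: polynomials).
        B = np.vander(X[:, j], degree + 1, increasing=True)[:, 1:]
        B -= B.mean(axis=0)
        coef, *_ = np.linalg.lstsq(B, y_c, rcond=None)
        fitted = B @ coef
        scores[j] = np.mean(fitted ** 2)  # size of the marginal fit
    return np.argsort(scores)[::-1][:d], scores

# Toy usage: sparse additive truth with nonlinear components.
rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)
keep, _ = nis_screen(X, y)
print("true variables 0 and 1 retained:", {0, 1} <= set(keep))
```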
The 2008 Classic Paper Award: Summary and Significance
We at the NASA laboratory believed that our best work came when we simultaneously advanced AI theory and provided immediately usable solutions for current NASA problems. “Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method,” by Steve Minton, Mark Johnston, Andy Phillips, and Phil Laird, clearly achieved both. It demonstrated that local search and repair is applicable to a wide class of constraint satisfaction problems and clearly explicated the theory behind that result.
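The repair idea is easiest to see on the n-queens benchmark associated with that paper: start from a complete (and conflicting) assignment, then repeatedly pick a conflicted variable and move it to a value that minimizes its conflicts. Below is a minimal sketch of this min-conflicts strategy, our simplified rendering rather than the authors' code:

```python
# Minimal sketch of min-conflicts heuristic repair on n-queens: start from a
# complete assignment and repeatedly repair one conflicted queen.
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=100000):
    cols = [random.randrange(n) for _ in range(n)]   # one queen per row
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                              # solution found
        row = random.choice(conflicted)              # pick a conflicted variable
        # Repair: move it to the column with the fewest conflicts.
        cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
    return None

print(min_conflicts(50))
```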
Algorithmic Game Theory and Artificial Intelligence
Elkind, Edith (Nanyang Technological University) | Leyton-Brown, Kevin (University of British Columbia)
Indeed, game theory now serves as perhaps the main analytical framework in microeconomic theory, as evidenced by its prominent role in economics textbooks (for example, Mas-Colell, Whinston, and Green 1995) and by the many Nobel prizes in economic sciences awarded to prominent game theorists. Artificial intelligence got its start shortly after game theory (McCarthy et al. 1955), and indeed pioneers such as von Neumann and Simon made early contributions to both fields (see, for example, Findler [1988], Simon [1981]). Both game theory and AI draw (nonexclusively) on decision theory (von Neumann and Morgenstern 1947); for example, one prominent view defines artificial intelligence as "the study and construction of rational agents" (Russell and Norvig 2003), and hence takes a decision-theoretic approach when the world is stochastic. However, artificial intelligence spent most of its first 40 years focused on the design and analysis of agents that act in isolation, and hence had little need for game-theoretic analysis. Starting in the mid to late 1990s, game theory became a major topic of study for computer scientists, for at least two main reasons. First, economists began to be interested in systems whose computational properties posed serious barriers to practical use, and hence reached out to computer scientists; notably, this occurred around the study of combinatorial auctions (see, for example, Cramton, Shoham, and Steinberg 2006). Second, the rise of distributed computing in general and the Internet in particular made it increasingly necessary for computer scientists to study settings in which intelligent agents reason about and interact with other agents.
AAAI News
Hamilton, Carol M. (Association for the Advancement of Artificial Intelligence)
The Doctoral Consortium (DC) provides an opportunity for a group of Ph.D. students to discuss and explore their research interests and career objectives with a panel of established researchers [...] materials; a workshop for mentoring new faculty, instructors, and graduate students on teaching; an Educational Video Track within the AAAI-11 Video program; and a Student/Educator [...] of ideas between basic and applied AI. IAAI-11 will consider papers in two tracks: (1) deployed application case studies and (2) emerging applications or methodologies.
Using Mechanism Design to Prevent False-Name Manipulations
Conitzer, Vincent (Duke University) | Yokoo, Makoto (Kyushu University)
The basic notion of false-name-proofness allows for useful mechanisms under certain circumstances, but in general there are impossibility results that show that false-name-proof mechanisms have severe limitations. One may react to these impossibility results by saying that, since false-name-proof mechanisms are unsatisfactory, we should not run any important mechanisms in highly anonymous settings—unless, perhaps, we can find some methodology that directly prevents false-name manipulation even in such settings, so that we are back in a more typical mechanism design context. However, it seems unlikely that the phenomenon of false-name manipulation will disappear anytime soon. Because the Internet is so attractive as a platform for running certain types of mechanisms, it seems unlikely that the organizations running these mechanisms will take them offline. Moreover, because a goal of these organizations is often to get as many users to participate as possible, they will be reluctant to use high-overhead solutions that discourage users from participating. As a result, perhaps the most promising approaches at this point are those that combine techniques from mechanism design with other techniques discussed in this article. It appears that this is a rich domain for new, creative approaches that can have significant practical impact.
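For readers new to the topic, a toy computation makes the manipulation concrete. The example and numbers below are our own, not from the article: in a two-good combinatorial auction run under the VCG mechanism, a bidder profits by bidding through two false names.

```python
# Toy illustration (our own numbers, not from the article): in a two-good
# VCG combinatorial auction, a bidder profits by bidding under two false
# names. Valuations map frozensets of goods to values.
from itertools import product

GOODS = ("a", "b")

def best_allocation(bids):
    """Brute-force the welfare-maximizing allocation of goods to bidders."""
    best, best_w = None, -1.0
    for assign in product(range(len(bids) + 1), repeat=len(GOODS)):
        # assign[g] == i gives good g to bidder i; i == len(bids) leaves it unallocated.
        bundles = [frozenset(g for g, i in zip(GOODS, assign) if i == b)
                   for b in range(len(bids))]
        w = sum(bid.get(s, 0) for bid, s in zip(bids, bundles))
        if w > best_w:
            best, best_w = bundles, w
    return best, best_w

def vcg_payment(bids, i):
    """Bidder i pays the externality it imposes on the other bidders."""
    _, w_without = best_allocation(bids[:i] + bids[i + 1:])
    bundles, _ = best_allocation(bids)
    w_others = sum(b.get(s, 0)
                   for j, (b, s) in enumerate(zip(bids, bundles)) if j != i)
    return w_without - w_others

# Bidder 1 wants the pair only; bidder 2 values any nonempty bundle at 8.
v1 = {frozenset("ab"): 10}
v2 = {frozenset("a"): 8, frozenset("b"): 8, frozenset("ab"): 8}

# Truthful bids: bidder 1 wins both goods, so bidder 2's utility is 0.
print(best_allocation([v1, v2]))

# False names: bidder 2 splits into identities 2a and 2b.
bids = [v1, {frozenset("a"): 8}, {frozenset("b"): 8}]
bundles, _ = best_allocation(bids)
pay = [vcg_payment(bids, i) for i in range(3)]
print(bundles, pay)  # 2a and 2b win the goods, paying 2 each
```

Bidding truthfully, bidder 2 loses and earns utility 0; with the split identities, its two names win both goods for a total payment of 4 against a true value of 8, so the manipulation nets utility 4.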