Context Capture in Software Development
Antunes, Bruno, Correia, Francisco, Gomes, Paulo
The context of a software developer is hard to define and capture, as it represents a complex network of elements across different dimensions that are not limited to the work done in an IDE. We propose the definition of a software developer context model that takes into account all the dimensions that characterize the work environment of the developer. We are especially focused on what the software developer context encompasses at the project level and how it can be captured. The experimental work done so far shows that useful context information can be extracted from project management tools. The extraction, analysis and availability of this context information can be used to enrich the work environment of the developer with additional knowledge to support her/his work.
Minimum mean square distance estimation of a subspace
Besson, Olivier, Dobigeon, Nicolas, Tourneret, Jean-Yves
We consider the problem of subspace estimation in a Bayesian setting. Since we are operating in the Grassmann manifold, the usual approach, which consists of minimizing the mean square error (MSE) between the true subspace $U$ and its estimate $\hat{U}$, may not be adequate, as the MSE is not the natural metric in the Grassmann manifold. As an alternative, we propose to carry out subspace estimation by minimizing the mean square distance (MSD) between $U$ and its estimate, where the considered distance is a natural metric in the Grassmann manifold, viz. the distance between the projection matrices. We show that the resulting minimum MSD (MMSD) estimator is no longer the posterior mean of $U$ but entails computing the principal eigenvectors of the posterior mean of $U U^{T}$. Derivation of the MMSD estimator is carried out in a few illustrative examples, including a linear Gaussian model for the data and a Bingham or von Mises-Fisher prior distribution for $U$. In all scenarios, posterior distributions are derived and the MMSD estimator is obtained either analytically or implemented via a Markov chain Monte Carlo simulation method. The method is shown to provide accurate estimates even when the number of samples is lower than the dimension of $U$. An application to hyperspectral imagery is finally investigated.
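To illustrate the computational step described above, a minimal numerical sketch follows; the function name, array shapes, and the use of a plain sample average over posterior draws (for example from the MCMC sampler mentioned above) are illustrative assumptions, not specifications from the paper.

    import numpy as np

    def mmsd_estimate(samples):
        # samples: array of shape (N, n, p); each slice is an n x p matrix
        # whose columns span one posterior draw of the subspace U.
        N, n, p = samples.shape
        # Approximate the posterior mean of U U^T by the sample average.
        M = sum(U @ U.T for U in samples) / N
        # The MMSD estimate is spanned by the p principal eigenvectors of M.
        eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
        return eigvecs[:, -p:][:, ::-1]        # top-p eigenvectors, descending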
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
Fan, Jianqing, Feng, Yang, Song, Rui
A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. For nonparametric additive models, it is shown that, under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) is also proposed to enhance the finite-sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
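A rough sketch of the screening idea follows; it uses a simple polynomial fit as a crude stand-in for the marginal nonparametric (B-spline) regressions studied in the paper, and the function name and default screening size are illustrative assumptions.

    import numpy as np

    def nis_screen(X, y, degree=3, top_k=None):
        # Rank each covariate by how much its marginal fit explains the response,
        # then keep only the top-ranked covariates.
        n, p = X.shape
        scores = np.empty(p)
        for j in range(p):
            coeffs = np.polyfit(X[:, j], y, degree)   # marginal fit of y on X_j
            fitted = np.polyval(coeffs, X[:, j])
            scores[j] = np.mean((fitted - fitted.mean()) ** 2)
        if top_k is None:
            top_k = int(n / np.log(n))   # a common screening size, used as a default here
        return np.argsort(scores)[::-1][:top_k]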
Extended Active Learning Method
Kiaei, Ali Akbar, Shouraki, Saeed Bagheri, Khasteh, Seyed Hossein, Khademi, Mahmoud, Samani, Alireza Ghatreh
The Active Learning Method (ALM) is a soft computing method based on fuzzy logic that is used for modeling and control. Although ALM has been shown to perform well in dynamic environments, its operators do not support it well in complex situations because they lose data. ALM could therefore find better membership functions if more appropriate operators were chosen for it. This paper substitutes two new operators for the original ALM operators, which improves the search for membership functions compared with conventional ALM. The new method is called the Extended Active Learning Method (EALM).
Gaussian Process Bandits for Tree Search: Theory and Application to Planning in Discounted MDPs
Dorard, Louis, Shawe-Taylor, John
We motivate and analyse a new Tree Search algorithm, GPTS, based on recent theoretical advances in the use of Gaussian Processes for Bandit problems. We consider tree paths as arms and we assume the target/reward function is drawn from a GP distribution. The posterior mean and variance, after observing data, are used to define confidence intervals for the function values, and we sequentially play the arms with the highest upper confidence bounds. We give an efficient implementation of GPTS and we adapt previous regret bounds by determining the decay rate of the eigenvalues of the kernel matrix on the whole set of tree paths. We consider two kernels in the feature space of binary vectors indexed by the nodes of the tree: linear and Gaussian. The regret grows as the square root of the number of iterations T, up to a logarithmic factor, with a constant that improves with larger Gaussian kernel widths. We focus on practical values of T, smaller than the number of arms. Finally, we apply GPTS to Open Loop Planning in discounted Markov Decision Processes by modelling the reward as a discounted sum of independent Gaussian Processes. We report similar regret bounds to those of the OLOP algorithm.
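The selection step at the heart of such an algorithm can be sketched as follows; the function names, hyperparameter values, and kernel definitions are illustrative assumptions rather than the exact GPTS implementation. A GP posterior over path rewards is maintained, and the path with the highest upper confidence bound is played next.

    import numpy as np

    def gp_ucb_select(X_arms, X_obs, y_obs, kernel, beta=2.0, noise=0.1):
        # X_arms: (A, d) binary feature vectors of candidate tree paths.
        # X_obs, y_obs: previously played paths and their observed rewards.
        # beta and noise are illustrative hyperparameters.
        K = kernel(X_obs, X_obs) + noise**2 * np.eye(len(y_obs))
        K_inv = np.linalg.inv(K)
        k_star = kernel(X_arms, X_obs)                     # (A, t) cross-covariances
        mean = k_star @ K_inv @ y_obs                      # posterior means
        var = kernel(X_arms, X_arms).diagonal() - np.einsum(
            'ij,jk,ik->i', k_star, K_inv, k_star)          # posterior variances
        ucb = mean + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))
        return int(np.argmax(ucb))                         # index of the path to play

    # Example kernels over binary path vectors, as in the abstract:
    def linear_kernel(A, B):
        return A @ B.T

    def gaussian_kernel(A, B, width=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width**2))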
The 2008 Classic Paper Award: Summary and Significance
We at the NASA laboratory believed that our best work came when we simultaneously advanced AI theory and provided immediately usable solutions for current NASA problems. “Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method,” by Steve Minton, Mark Johnston, Andy Phillips, and Phil Laird, clearly achieved both. It proved that local search and repair was applicable to a wide class of constraint satisfaction problems and clearly explicated the theory behind that proof.
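For readers unfamiliar with the awarded paper, the heuristic repair idea it studies (often called min-conflicts) can be sketched roughly as follows; the function signature and step budget are illustrative assumptions.

    import random

    def min_conflicts(variables, domains, conflicts, max_steps=10000):
        # Start from a complete (possibly inconsistent) assignment and repeatedly
        # reassign a conflicted variable to the value that minimizes its conflicts.
        # conflicts(var, value, assignment) -> number of constraints violated.
        assignment = {v: random.choice(domains[v]) for v in variables}
        for _ in range(max_steps):
            conflicted = [v for v in variables
                          if conflicts(v, assignment[v], assignment) > 0]
            if not conflicted:
                return assignment          # solution found
            var = random.choice(conflicted)
            assignment[var] = min(domains[var],
                                  key=lambda val: conflicts(var, val, assignment))
        return None                        # no solution within the step budget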
Algorithmic Game Theory and Artificial Intelligence
Elkind, Edith (Nanyang Technological University) | Leyton-Brown, Kevin (University of British Columbia)
Indeed, game theory now serves as perhaps the main analytical framework in microeconomic theory, as evidenced by its prominent role in economics textbooks (for example, Mas-Colell, Whinston, and Green 1995) and by the many Nobel prizes in economic sciences awarded to prominent game theorists. Artificial intelligence got its start shortly after game theory (McCarthy et al. 1955), and indeed pioneers such as von Neumann and Simon made early contributions to both fields (see, for example, Findler [1988], Simon [1981]). Both game theory and AI draw (nonexclusively) on decision theory (von Neumann and Morgenstern 1947); for example, one prominent view defines artificial intelligence as "the study and construction of rational agents" (Russell and Norvig 2003), and hence takes a decision-theoretic approach when the world is stochastic. However, artificial intelligence spent most of its first 40 years focused on the design and analysis of agents that act in isolation, and hence had little need for game-theoretic analysis. Starting in the mid to late 1990s, game theory became a major topic of study for computer scientists, for at least two main reasons. First, economists began to be interested in systems whose computational properties posed serious barriers to practical use, and hence reached out to computer scientists; notably, this occurred around the study of combinatorial auctions (see, for example, Cramton, Shoham, and Steinberg 2006). Second, the rise of distributed computing in general and the Internet in particular made it increasingly necessary for computer scientists to study settings in which intelligent agents reason about and interact with other agents.
AAAI News
Hamilton, Carol M. (Association for the Advancement of Artificial Intelligence)
The Doctoral Consortium (DC) provides an opportunity for a group of Ph.D. students to discuss and explore their research interests and career objectives with a panel of established researchers. Other program elements include a workshop for mentoring new faculty, instructors, and graduate students on teaching; an Educational Video Track within the AAAI-11 Video program; and a Student/Educator program. IAAI-11 will consider papers in two tracks: (1) deployed application case studies and (2) emerging applications or methodologies.
Using Mechanism Design to Prevent False-Name Manipulations
Conitzer, Vincent (Duke University) | Yokoo, Makoto (Kyushu University)
The basic notion of false-name-proofness allows for useful mechanisms under certain circumstances, but in general there are impossibility results that show that false-name-proof mechanisms have severe limitations. One may react to these impossibility results by saying that, since false-name-proof mechanisms are unsatisfactory, we should not run any important mechanisms in highly anonymous settings—unless, perhaps, we can find some methodology that directly prevents false-name manipulation even in such settings, so that we are back in a more typical mechanism design context. However, it seems unlikely that the phenomenon of false-name manipulation will disappear anytime soon. Because the Internet is so attractive as a platform for running certain types of mechanisms, it seems unlikely that the organizations running these mechanisms will take them offline. Moreover, because a goal of these organizations is often to get as many users to participate as possible, they will be reluctant to use high-overhead solutions that discourage users from participating. As a result, perhaps the most promising approaches at this point are those that combine techniques from mechanism design with other techniques discussed in this article. It appears that this is a rich domain for new, creative approaches that can have significant practical impact.