A Novel Two-Stage Dynamic Decision Support based Optimal Threat Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne threats

arXiv.org Artificial Intelligence

This paper presents a novel two-stage flexible dynamic decision support based optimal threat evaluation and defensive resource scheduling algorithm for multi-target air-borne threats. The algorithm provides flexibility and optimality by switching between two objective functions, the preferential and subtractive defense strategies, as and when required. To further enhance solution quality, it outlines and divides the critical parameters used in Threat Evaluation and Weapon Assignment (TEWA) into three broad categories (triggering, scheduling and ranking parameters). The proposed algorithm uses a variant of the many-to-many Stable Marriage Algorithm (SMA) to solve the Threat Evaluation (TE) and Weapon Assignment (WA) problems. In the TE stage, threat ranking and threat-asset pairing are performed. Stage two is based on a new flexible dynamic weapon scheduling algorithm that allows multiple engagements using a shoot-look-shoot strategy and computes near-optimal solutions for a range of scenarios. The analysis part of this paper presents the strengths and weaknesses of the proposed algorithm relative to an alternative greedy algorithm as applied to different offline scenarios.
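
The abstract only names the pairing mechanism, so the following is a minimal sketch of a many-to-many deferred-acceptance style assignment of threats to weapons, loosely in the spirit of the Stable Marriage variant described. The preference scores (e.g., kill probabilities), capacities, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: threats "propose" to weapons in decreasing order of a
# pairing score; each weapon keeps its best threats up to a capacity limit.
# Scores and capacities below are hypothetical placeholders.

def assign_threats_to_weapons(scores, weapon_capacity):
    """scores[t][w] = preference for pairing threat t with weapon w (higher is better)."""
    threats = list(scores)
    weapons = {w for prefs in scores.values() for w in prefs}
    proposals = {t: sorted(scores[t], key=scores[t].get, reverse=True) for t in threats}
    next_choice = {t: 0 for t in threats}
    accepted = {w: [] for w in weapons}          # weapon -> threats it currently holds
    free = list(threats)

    while free:
        t = free.pop()
        if next_choice[t] >= len(proposals[t]):
            continue                              # threat has exhausted its options
        w = proposals[t][next_choice[t]]
        next_choice[t] += 1
        accepted[w].append(t)
        # Weapon keeps only its best threats up to capacity; rejected threats re-enter the pool.
        accepted[w].sort(key=lambda x: scores[x][w], reverse=True)
        while len(accepted[w]) > weapon_capacity[w]:
            free.append(accepted[w].pop())
    return accepted


if __name__ == "__main__":
    scores = {"T1": {"W1": 0.9, "W2": 0.4},
              "T2": {"W1": 0.7, "W2": 0.8},
              "T3": {"W1": 0.6, "W2": 0.5}}
    print(assign_threats_to_weapons(scores, {"W1": 1, "W2": 2}))
```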


Compiling the Votes of a Subelectorate

AAAI Conferences

In many practical contexts where a number of agents have to find a common decision, the votes do not all come in at the same time. In such situations, we may want to preprocess the information given by the subelectorate (consisting of the voters who have expressed their votes) so as to "compile" the known votes for the time when the latecomers have expressed their votes. We study the amount of space necessary for such a compilation, as a function of the voting rule, the number of candidates, and the number of votes already known. We relate our results to existing work, especially on communication complexity.
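
As a concrete, hedged illustration of what "compiling" means: for a positional scoring rule such as Borda, the known votes can be summarized by a single score vector over candidates, which is all that must be stored until the remaining voters arrive. The paper studies the exact space needed for many rules; this sketch only makes the idea tangible.

```python
# Minimal sketch assuming a Borda-style scoring rule: compile the subelectorate's
# ranked ballots into per-candidate scores, then combine with late ballots.

from collections import defaultdict

def compile_votes(ballots, candidates):
    """Compress ranked ballots (best first) into Borda scores."""
    m = len(candidates)
    scores = defaultdict(int)
    for ballot in ballots:
        for position, cand in enumerate(ballot):
            scores[cand] += m - 1 - position
    return dict(scores)

def winner(compiled, late_ballots, candidates):
    """Combine the compiled summary with late ballots and return a Borda winner."""
    extra = compile_votes(late_ballots, candidates)
    total = {c: compiled.get(c, 0) + extra.get(c, 0) for c in candidates}
    return max(candidates, key=lambda c: total[c])

if __name__ == "__main__":
    candidates = ["a", "b", "c"]
    known = [["a", "b", "c"], ["b", "a", "c"]]
    late = [["c", "b", "a"]]
    summary = compile_votes(known, candidates)   # only this summary needs to be stored
    print(summary, winner(summary, late, candidates))
```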


Complexity of Unweighted Coalitional Manipulation Under Some Common Voting Rules

AAAI Conferences

Understanding the computational complexity of manipulation in elections is arguably the most central agenda in Computational Social Choice. One of the influential variations of the problem involves a coalition of manipulators trying to make a favorite candidate win the election. Although the complexity of the problem is well-studied under the assumption that the voters are weighted, there were very few successful attempts to abandon this strong assumption. In this paper, we study the complexity of the unweighted coalitional manipulation problem (UCM) under several prominent voting rules. Our main result is that UCM is NP-complete under the maximin rule; this resolves an enigmatic open question. We then show that UCM is NP-complete under the ranked pairs rule, even with respect to a single manipulator. Furthermore, we provide an extreme hardness-of-approximation result for an optimization version of UCM under ranked pairs. Finally, we show that UCM under the Bucklin rule is in P.
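
The paper's contribution is complexity-theoretic; the following tiny brute-force checker for UCM under the maximin rule is only meant to make the problem statement concrete. It is feasible only for toy instances, which is consistent with the NP-completeness result; the tie-breaking and example ballots are assumptions for illustration.

```python
# Brute-force sketch of UCM under maximin: can k unweighted manipulators cast
# ballots so that a preferred candidate wins? Ties are broken by candidate order.

from itertools import permutations, combinations_with_replacement

def maximin_winner(ballots, candidates):
    """Each ballot is a strict ranking (best first); the winner maximizes its worst pairwise score."""
    def pairwise(a, b):
        return sum(1 for r in ballots if r.index(a) < r.index(b))
    score = {c: min(pairwise(c, d) for d in candidates if d != c) for c in candidates}
    return max(candidates, key=lambda c: score[c])

def ucm_maximin(nonmanipulator_ballots, candidates, preferred, k):
    """Return k manipulator ballots making `preferred` win, or None if no such ballots exist."""
    rankings = list(permutations(candidates))
    for manip in combinations_with_replacement(rankings, k):
        ballots = list(nonmanipulator_ballots) + [list(r) for r in manip]
        if maximin_winner(ballots, candidates) == preferred:
            return [list(r) for r in manip]
    return None

if __name__ == "__main__":
    candidates = ["p", "a", "b"]
    honest = [["a", "b", "p"], ["b", "a", "p"], ["a", "p", "b"]]
    print(ucm_maximin(honest, candidates, "p", k=2))
```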


Approximate inference on planar graphs using Loop Calculus and Belief Propagation

arXiv.org Artificial Intelligence

We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006) allows one to express the exact partition function of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in (Chertkov et al., 2008), which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze the performance of the algorithm for the partition function approximation for models with binary variables and pairwise interactions on grids and other planar graphs. We study in detail both the loop series and the equivalent Pfaffian series and show that the first term of the Pfaffian series for the general, intractable planar model, can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
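
This is not the paper's Pfaffian algorithm; it is a minimal sketch of the baseline the loop series corrects. It sets up a tiny binary pairwise model on a 4-cycle (the simplest planar graph with a loop), computes the exact partition function and marginals by enumeration, and runs ordinary sum-product BP for comparison. The fields and couplings are arbitrary illustrative values.

```python
# Exact enumeration vs. sum-product BP on a small binary pairwise model.
# The loop/Pfaffian series in the paper corrects the gap between the two.

import itertools
import math

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # a single 4-cycle
theta = {0: 0.2, 1: -0.1, 2: 0.3, 3: 0.0}         # local fields (arbitrary)
J = {e: 0.5 for e in edges}                       # pairwise couplings (arbitrary)
spins = [-1, +1]

neighbors = {i: [] for i in nodes}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def coupling(i, j):
    return J[(i, j)] if (i, j) in J else J[(j, i)]

def weight(state):
    """Unnormalized probability exp(sum_i theta_i s_i + sum_(i,j) J_ij s_i s_j)."""
    s = dict(zip(nodes, state))
    return math.exp(sum(theta[i] * s[i] for i in nodes) +
                    sum(J[e] * s[e[0]] * s[e[1]] for e in edges))

# Exact partition function and single-node marginals by brute force.
states = list(itertools.product(spins, repeat=len(nodes)))
Z = sum(weight(st) for st in states)
exact = {i: sum(weight(st) for st in states if st[i] == +1) / Z for i in nodes}

# Ordinary sum-product BP (the approximation the loop series corrects).
msg = {(i, j): {s: 0.5 for s in spins} for i in nodes for j in neighbors[i]}
for _ in range(100):
    new = {}
    for (i, j) in msg:
        out = {}
        for xj in spins:
            out[xj] = sum(math.exp(theta[i] * xi + coupling(i, j) * xi * xj) *
                          math.prod(msg[(k, i)][xi] for k in neighbors[i] if k != j)
                          for xi in spins)
        norm = sum(out.values())
        new[(i, j)] = {s: v / norm for s, v in out.items()}
    msg = new

bp = {}
for i in nodes:
    belief = {xi: math.exp(theta[i] * xi) * math.prod(msg[(k, i)][xi] for k in neighbors[i])
              for xi in spins}
    bp[i] = belief[+1] / sum(belief.values())

print("exact ln Z :", math.log(Z))
print("exact P(+1):", exact)
print("BP    P(+1):", bp)
```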


Reasoning about soft constraints and conditional preferences: complexity results and approximation techniques

arXiv.org Artificial Intelligence

Many real-life optimization problems contain both hard and soft constraints, as well as qualitative conditional preferences. However, there is no single formalism to specify all three kinds of information. We therefore propose a framework, based on both CP-nets and soft constraints, that handles both hard and soft constraints as well as conditional preferences efficiently and uniformly. We study the complexity of testing the consistency of preference statements, and show how soft constraints can faithfully approximate the semantics of conditional preference statements whilst improving the computational complexity.
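
A hedged toy illustration of the general idea, not the paper's framework: a conditional preference statement such as "if the main course is fish, prefer white wine to red" can be encoded as a cost (soft constraint) over the two variables, combined with a hard constraint, and optimized. The variables, costs, and constraints below are made up for illustration.

```python
# Toy encoding of conditional preferences as soft (cost) constraints,
# solved by brute-force enumeration over a tiny domain.

from itertools import product

domains = {"main": ["fish", "meat"], "wine": ["white", "red"]}

def hard_ok(assign):
    # Hypothetical hard constraint: no red wine with fish.
    return not (assign["main"] == "fish" and assign["wine"] == "red")

def soft_cost(assign):
    # Two conditional preference statements approximated as cost functions:
    # "if fish, prefer white to red" and "if meat, prefer red to white".
    cost = 0
    if assign["main"] == "fish" and assign["wine"] == "white":
        cost += 0
    if assign["main"] == "fish" and assign["wine"] == "red":
        cost += 1
    if assign["main"] == "meat" and assign["wine"] == "white":
        cost += 1
    return cost

best = None
for values in product(*domains.values()):
    assign = dict(zip(domains, values))
    if not hard_ok(assign):
        continue                                  # hard constraints filter assignments outright
    c = soft_cost(assign)
    if best is None or c < best[0]:
        best = (c, assign)
print(best)                                       # a minimum-cost feasible assignment
```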


Just-in-Time Backfilling in Multi-Agent Scheduling

AAAI Conferences

This paper addresses the problem of how a group of agents cooperating on a complex plan with interdependent actions can coordinate their scheduling and execution of those actions, particularly in domains where actions may fail or have uncertain durations.  If actions fail (or fail to meet their deadlines), the repercussions for the rest of the team's plan can be dramatic.  This paper presents a pro-active strategy, called Just-in-Time Backfilling (JIT-BF), that agents can use to increase the fault tolerance of their interdependent schedules by identifying actions in danger of failing and inserting redundant (or back-up) actions into their schedules.  The insertion of redundant actions can be done locally (i.e., by the agent whose action is in danger of failing) or through negotiations with the rest of the team. The computations performed by agents following the JIT-BF strategy depend on probabilistic models of action durations and the "quality" achieved by successfully executing actions.  The paper presents an experimental evaluation of the JIT-BF strategy within a simulated real-time dynamic environment that demonstrates that teams using the pro-active JIT-BF strategy significantly outperform teams that rely solely on reactive strategies.
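
The sketch below is not the authors' implementation; it only illustrates the monitoring-and-backfill flavor described above: estimate, from a probabilistic duration model, the chance that a running action misses its deadline, and insert a redundant backup locally when the risk crosses a threshold. The normal duration model, threshold, and helper names are assumptions.

```python
# Illustrative JIT-style backfilling: risk estimate from a normal duration model,
# and local insertion of a backup action when the risk is too high.

from statistics import NormalDist

def miss_probability(elapsed, deadline, mean_duration, std_duration):
    """P(total duration > deadline | the action has already run for `elapsed`)."""
    dist = NormalDist(mean_duration, std_duration)
    p_not_done_yet = 1.0 - dist.cdf(elapsed)
    p_miss = 1.0 - dist.cdf(deadline)
    if p_not_done_yet <= 0.0:
        return 0.0
    return min(1.0, p_miss / p_not_done_yet)      # conditional on not having finished yet

def maybe_backfill(schedule, action, now, risk_threshold=0.3):
    """Insert a redundant backup action if the primary looks likely to miss its deadline."""
    risk = miss_probability(now - action["start"], action["deadline"] - action["start"],
                            action["mean"], action["std"])
    if risk > risk_threshold:
        backup = {"name": action["name"] + "-backup", "start": now,
                  "deadline": action["deadline"], "mean": action["mean"], "std": action["std"]}
        schedule.append(backup)                   # local insertion; team negotiation omitted
    return risk

if __name__ == "__main__":
    schedule = []
    action = {"name": "survey-area", "start": 0.0, "deadline": 10.0, "mean": 8.0, "std": 3.0}
    print(maybe_backfill(schedule, action, now=6.0), schedule)
```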


Discovering Patterns of Collaboration for Recommendation

AAAI Conferences

Collaboration between research scientists, particularly those with diverse backgrounds, is a driver of scientific innovation. However, finding the right collaborator is often an unscientific process that is subject to chance. This paper explores recommending collaborators based on repeating patterns of previous successful collaboration experiences, which we term prototypical collaborations. We investigate a method for discovering such prototypes and using them as a basis to guide the recommendation of new collaborations. To this end, we also examine two methods for matching collaboration seekers to these prototypical collaborations. Our initial studies reveal that, though promising, improving collaborations through recommendation is a complex goal.
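
The abstract does not specify its discovery or matching methods, so the following is only a schematic sketch with invented data: past collaborations become feature vectors, "prototypes" are taken to be simple group centroids, and a seeker is matched to prototypes by cosine similarity.

```python
# Schematic prototype matching for collaboration recommendation (all data invented).

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Hypothetical collaboration vectors over features (ML, biology, HCI),
# already grouped into two recurring patterns of successful collaboration.
prototype_groups = {
    "ml+bio": [[0.9, 0.8, 0.0], [0.8, 0.9, 0.1]],
    "ml+hci": [[0.9, 0.0, 0.8], [0.7, 0.1, 0.9]],
}
prototypes = {name: centroid(vs) for name, vs in prototype_groups.items()}

def recommend(seeker_vector):
    """Rank prototypes by similarity to the seeker's interest profile."""
    return sorted(prototypes.items(),
                  key=lambda kv: cosine(seeker_vector, kv[1]), reverse=True)

if __name__ == "__main__":
    print(recommend([0.8, 0.1, 0.7]))             # a seeker interested in ML and HCI
```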


Responding to Sneaky Agents in Multi-agent Domains

AAAI Conferences

This paper extends the concept of trust modeling within a multi-agent environment.  Trust modeling often focuses on identifying the appropriate trust level for the other agents in the environment and then using these levels to determine how to interact with each agent.  However, this type of modeling does not account for sneaky agents who are willing to cooperate when the stakes are low and take selfish, greedy actions when the rewards rise.  Adding trust to an interactive partially observable Markov decision process (I-POMDP) allows trust levels to be continuously monitored and corrected, enabling agents to make better decisions.  The addition of trust modeling increases the decision-process calculations, but solves more complex trust problems that are representative of the human world.  The modified I-POMDP reward function and belief models can be used to accurately track the trust levels of agents with hidden agendas.  Testing demonstrates that agents quickly identify the hidden trust levels to mitigate the impact of a deceitful agent.
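
Full I-POMDP planning is well beyond a short example, so this sketch shows only the belief-tracking flavor of the idea: maintain a belief over hidden trust types and update it with Bayes' rule from observed actions. The trust types, likelihoods, and "stakes" signal are made-up assumptions in which a sneaky agent cooperates when stakes are low but defects when they are high.

```python
# Simplified Bayesian trust tracking over hidden agent types (illustrative numbers only).

# P(observed action | trust type, stakes)
likelihood = {
    ("cooperative", "low"):  {"cooperate": 0.9, "defect": 0.1},
    ("cooperative", "high"): {"cooperate": 0.9, "defect": 0.1},
    ("sneaky", "low"):       {"cooperate": 0.9, "defect": 0.1},
    ("sneaky", "high"):      {"cooperate": 0.2, "defect": 0.8},
    ("selfish", "low"):      {"cooperate": 0.3, "defect": 0.7},
    ("selfish", "high"):     {"cooperate": 0.2, "defect": 0.8},
}

def update_belief(belief, action, stakes):
    """One Bayesian filtering step over the hidden trust type."""
    posterior = {t: belief[t] * likelihood[(t, stakes)][action] for t in belief}
    norm = sum(posterior.values())
    return {t: p / norm for t, p in posterior.items()}

if __name__ == "__main__":
    belief = {"cooperative": 1 / 3, "sneaky": 1 / 3, "selfish": 1 / 3}
    # The agent cooperates twice while stakes are low, then defects once stakes rise:
    for action, stakes in [("cooperate", "low"), ("cooperate", "low"), ("defect", "high")]:
        belief = update_belief(belief, action, stakes)
        print(action, stakes, {t: round(p, 2) for t, p in belief.items()})
```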


HAMR: A Hybrid Multi-Robot Control Architecture

AAAI Conferences

Highly capable multiple robot architectures often resort to micromanagement to provide enhanced cooperative abilities, sacrificing individual autonomy. Conversely, multi-robot architectures that maintain individual autonomy are often limited in their cooperative abilities.  This article presents a modified three-layer architecture that solves both of these issues.  The addition of a Coordinator layer to a three-layered approach provides a platform-independent interface for coordination on tasks and takes advantage of individual autonomy to improve coordination capabilities.  This reduces communication overhead versus many multi-robot architecture designs and allows for more straightforward resizing of the robot collective and increased individual autonomy.
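
As a structural sketch only, the stubs below show how a Coordinator layer could sit on top of a classic three-layer (planner / executive / behavioral) stack and expose a platform-independent task interface to teammates. The class names, methods, and message format are assumptions for illustration, not the HAMR interfaces.

```python
# Class stubs for a Coordinator layer over a three-layer robot control stack.

from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    params: dict

class BehavioralLayer:
    def execute(self, command: str) -> None:
        print(f"[behavior] executing low-level command: {command}")

class ExecutiveLayer:
    def __init__(self, behaviors: BehavioralLayer):
        self.behaviors = behaviors
    def run_task(self, task: Task) -> None:
        # Decompose a task into primitive commands (trivially, here).
        self.behaviors.execute(f"{task.name}({task.params})")

class PlannerLayer:
    def plan(self, goal: str) -> List[Task]:
        return [Task(name=goal, params={})]

class CoordinatorLayer:
    """Platform-independent interface other robots talk to; it never touches hardware directly."""
    def __init__(self, planner: PlannerLayer, executive: ExecutiveLayer):
        self.planner = planner
        self.executive = executive
    def offer_task(self, goal: str) -> bool:
        # Each robot decides autonomously whether to accept, preserving individual autonomy.
        for task in self.planner.plan(goal):
            self.executive.run_task(task)
        return True

if __name__ == "__main__":
    robot = CoordinatorLayer(PlannerLayer(), ExecutiveLayer(BehavioralLayer()))
    robot.offer_task("explore-room-3")
```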


Mining Meaning from Wikipedia

arXiv.org Artificial Intelligence

Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and trace how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.