Shape-Based Object Localization for Descriptive Classification
Heitz, Geremy, Elidan, Gal, Packer, Benjamin, Koller, Daphne
Discriminative tasks, including object categorization and detection, are central components of high-level computer vision. Sometimes, however, we are interested in more refined aspects of the object in an image, such as pose or particular regions. In this paper we develop a method (LOOPS) for learning a shape and image feature model that can be trained on a particular object class, and used to outline instances of the class in novel images. Furthermore, while the training data consists of uncorresponded outlines, the resulting LOOPS model contains a set of landmark points that appear consistently across instances, and can be accurately localized in an image. Our model achieves state-of-the-art results in precisely outlining objects that exhibit large deformations and articulations in cluttered natural images. These localizations can then be used to address a range of tasks, including descriptive classification, search, and clustering.
Why so? or Why no? Functional Causality for Explaining Query Answers
Meliou, Alexandra, Gatterbauer, Wolfgang, Moore, Katherine F., Suciu, Dan
In this paper, we propose causality as a unified framework to explain query answers and non-answers, thus generalizing and extending several previously proposed approaches of provenance and missing query result explanations. We develop our framework starting from the well-studied definition of actual causes by Halpern and Pearl. After identifying some undesirable characteristics of the original definition, we propose functional causes as a refined definition of causality with several desirable properties. These properties allow us to apply our notion of causality in a database context and apply it uniformly to define the causes of query results and their individual contributions in several ways: (i) we can model both provenance as well as non-answers, (ii) we can define explanations as either data in the input relations or relational operations in a query plan, and (iii) we can give graded degrees of responsibility to individual causes, thus allowing us to rank causes. In particular, our approach allows us to explain contributions to relational aggregate functions and to rank causes according to their respective responsibilities. We give complexity results and describe polynomial algorithms for evaluating causality in tractable cases. Throughout the paper, we illustrate the applicability of our framework with several examples. Overall, we develop in this paper the theoretical foundations of causality theory in a database context.
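The graded "degree of responsibility" the abstract mentions can be illustrated with the standard Halpern–Pearl measure that the paper's functional causes refine: a cause gets responsibility 1/(1+k), where k is the size of the smallest contingency set whose removal makes the answer counterfactually depend on that cause. The sketch below is a brute-force illustration of that idea, not the authors' algorithm; the toy query and tuples are invented for the example.

```python
from itertools import combinations

def responsibility(tuples, query, cause):
    """Halpern-Pearl-style responsibility of `cause` for query(tuples):
    1/(1+k), where k is the size of the smallest contingency set Gamma
    (not containing `cause`) whose removal leaves the query answer
    unchanged but makes it flip when `cause` is also removed."""
    others = [t for t in tuples if t != cause]
    base = query(tuples)
    for k in range(len(others) + 1):
        for gamma in combinations(others, k):
            remaining = [t for t in tuples if t not in gamma]
            with_cause = query(remaining)
            without_cause = query([t for t in remaining if t != cause])
            if with_cause == base and with_cause != without_cause:
                return 1.0 / (1 + k)
    return 0.0  # not a cause at all

# Toy boolean query: "does any tuple x satisfy x > 2?"
rows = [1, 3, 4]
q = lambda ts: any(x > 2 for x in ts)
print(responsibility(rows, q, 3))  # 0.5: flipping needs the contingency {4}
```

Here the tuple 3 is not counterfactual on its own (4 also satisfies the query), but removing the one-element contingency {4} makes it so, giving responsibility 1/2; such graded values are what allow causes to be ranked.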
A survey of statistical network models
Goldenberg, Anna, Zheng, Alice X, Fienberg, Stephen E, Airoldi, Edoardo M
Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities has intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then we introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics.
Using Fuzzy Decision Trees and Information Visualization to Study the Effects of Cultural Diversity on Team Planning and Communication
Liu, Yan (Wright State University) | Warren, Rik (Wright-Patterson Air Force Base)
Virtual teams that span multiple geographic and cultural boundaries have become commonplace in numerous organizations due to the competitive advantages they provide in human resources, products, financial means, knowledge sharing and many others. However, the promises of multinational and multicultural (MNMC) distributed teams are accompanied by a number of challenges. Many research studies have suggested that one of the most challenging barriers to the effective implementation of MNMC distributed teams is culture. In this study, data collected from the experiment conducted by the NATO RTO Human Factors and Medicine Panel Research Task Group (HFM-138/RTG) on "Adaptability in Multinational Coalitions" has been analyzed to study the effects of cultural diversity on team planning and communication. Fuzzy decision trees have been derived to model the effects, and information visualization techniques are used to facilitate understanding of the derived classification patterns. Results of the research suggest that there are no single and straightforward conclusions on how cultural diversity affects team planning and communication. Different dimensions of culture values interact in influencing team behaviors. However, diversities in power distance and masculinity seem to play more influential roles than others.
AutoMed - An Automated Mediator for Multi-Issue Bilateral Negotiations
Chalamish, Michal (Ashkelon Academic College) | Kraus, Sarit (Bar Ilan University)
In this paper, we present AutoMed, an automated mediator for multi-issue bilateral negotiation under time constraints. AutoMed uses a qualitative model to represent the negotiators' preferences. It analyzes the negotiators' preferences, monitors the negotiations and proposes possible solutions for resolving the conflict. We conducted experiments in a simulated environment. The results show that negotiations mediated by AutoMed are concluded significantly faster than non-mediated ones and without any of the negotiators opting out. Furthermore, the subjects in the mediated negotiations are more satisfied with the resolutions than the subjects in the non-mediated negotiations.
A Trend Pattern Approach to Forecasting Socio-Political Violence
Rohloff, Kurt (BBN Technologies) | Battle, Rob (BBN Technologies) | Chatigny, Jim (BBN Technologies) | Schantz, Rick (BBN Technologies) | Asal, Victor (SUNY Albany)
We present an approach to identifying concurrent patterns of behavior in in-sample temporal factor training data that precede Events of Interest (EoIs). We also present how to use discovered patterns to forecast EoIs in out-of-sample test data. The forecasting methodology is based on matching entities' observed behaviors to patterns discovered in retrospective data. This pattern concept generalizes previous pattern definitions: it is built on a finite-state model in which observed, sustained trends in a factor map to pattern states. Discovered patterns can be used as a diagnostic tool to better understand the dynamic conditions leading up to specific Event of Interest occurrences and hint at underlying causal structures leading to onsets and terminations of socio-political violence. We present a computationally efficient data-mining method to discover trend patterns. We give an example of using our pattern forecasting methodology to correctly forecast the advent and cessation of ethnic-religious violence in nation states with a low false-alarm rate.
Manipulability of Single Transferable Vote
For many voting rules, it is NP-hard to compute a successful manipulation. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. We study empirically the cost of manipulating the single transferable vote (STV) rule. This was one of the first rules shown to be NP-hard to manipulate. It also appears to be one of the harder rules to manipulate since it involves multiple rounds and since, unlike many other rules, it is NP-hard for a single agent to manipulate without weights on the votes or uncertainty about how the other agents have voted. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible. It remains an interesting open question if manipulation by a coalition of agents is hard to compute in practice.
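The multi-round elimination that makes STV hard to manipulate can be sketched directly. The code below is an illustrative single-winner STV tally with an exhaustive single-agent manipulation check; the candidate names, example profile, and alphabetical tie-breaking rule are assumptions for the sketch, not the paper's experimental setup (which uses specialized search rather than brute force).

```python
from itertools import permutations

def stv_winner(profile, candidates):
    """Single-winner STV: repeatedly eliminate the candidate with the
    fewest first preferences among those remaining (ties broken
    alphabetically for determinism)."""
    remaining = set(candidates)
    while len(remaining) > 1:
        firsts = {c: 0 for c in remaining}
        for ballot in profile:
            top = next(c for c in ballot if c in remaining)
            firsts[top] += 1
        loser = min(remaining, key=lambda c: (firsts[c], c))
        remaining.remove(loser)
    return remaining.pop()

def can_manipulate(profile, candidates, favorite):
    """Test whether one extra unweighted ballot can make `favorite` win,
    by trying every ranking (feasible only for tiny candidate sets)."""
    for ballot in permutations(candidates):
        if stv_winner(profile + [list(ballot)], candidates) == favorite:
            return ballot
    return None

votes = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"], ["a", "c", "b"]]
print(stv_winner(votes, ["a", "b", "c"]))  # c wins after b, then a, are eliminated
print(can_manipulate(votes, ["a", "b", "c"], "a"))  # a single added ballot flips the winner
```

Because each added ballot can change which candidate is eliminated in every subsequent round, the search space for a manipulator grows with the number of rounds; this cascading effect is what distinguishes STV from single-round rules.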
Dynamics of Price Sensitivity and Market Structure in an Evolutionary Matching Model
Drutchas, Griffin Vernor (Kalamazoo College) | Érdi, Péter (Kalamazoo College)
The relationship between equilibrium convergence to a uniform quality distribution and price is investigated in the Q-model, a self-organizing, evolutionary computational matching model of a fixed-price post-secondary higher education created by Ortmann and Slobodyan (2006). The Q-model is first replicated with price equal to 100% of its Ortmann and Slobodyan (2006) value. Varying the fixed price between 0% and 200% then reveals thresholds at which the Q-model reaches different market clustering configurations. Results indicate structural market robustness to prices less than 100% and high sensitivity to prices greater than 100%.
Modeling and Simulating Community Sentiments and Interactions at the Pacific Missile Range Facility
Zanbaka, Catherine (BAE Systems)
PMRFSim is a proof of concept geospatial social agent-based simulation capable of examining the interactions of 60,000+ agents over a simulated year within a few minutes. PMRFSim utilizes real world data from sources ranging from the U.S. Census Bureau, a regional sociologist, and base security. PMRFSim models two types of agents, normal and adverse agents. Adverse agents have harmful intent and goals to spread negative sentiment and acquire intelligence. All agents are endowed with demographic and geospatial attributes. Agents interact with each other and respond to events. PMRFSim allows an analyst to construct various what-if scenarios and generates numerous graphs that characterize the social landscape. This analysis is intended to aid public affairs officers in understanding the social landscape.
Data Theory, Discourse Mining and Thresholds
Sallach, David L. (Argonne National Laboratory) | Ozik, Jonathan (Argonne National Laboratory)
The availability of online documents coupled with emergent text mining methods has opened new research horizons. To achieve their potential, mining technologies need to be theoretically focused. We present data theory as a crucial component of text mining, and provide a substantive proto-theory from the synthesis of complex multigames, prototype concepts, and emotio-cognitive orientation fields. We discuss how the data theory presented informs the application of text mining to mining discourse(s) and how, in turn, this allows for modeling across contextual thresholds. Finally, the relationship between discourse mining, data theory, and thresholds is illustrated with an historical example, the events surrounding the 1992 civil war in Tajikistan.