University of Maryland


Investigating the Relationship between Multi-Party Linguistic Entrainment, Team Characteristics and the Perception of Team Social Outcomes

AAAI Conferences

Multi-party linguistic entrainment refers to the phenomenon whereby speakers come to speak more similarly to one another over the course of a conversation. We first developed new measures of multi-party entrainment on features describing linguistic style, and then examined the relationship between entrainment and team characteristics in terms of gender composition, team size, and diversity. Next, we predicted the perception of team social outcomes using multi-party linguistic entrainment and team characteristics with a hierarchical regression model. We found that teams with greater gender diversity had higher minimum convergence than teams with less gender diversity. Entrainment contributed significantly to predicting perceived team social outcomes, both on its own and after controlling for team characteristics.
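
Below is a minimal sketch of the kind of hierarchical (blockwise) regression described above, in which team characteristics are entered first and entrainment measures are added in a second block; the file name and column names are hypothetical placeholders, not the variables used in the paper.

```python
# Minimal sketch of a hierarchical (blockwise) regression: team characteristics
# are entered first, then entrainment features are added in a second block.
# Column names are hypothetical placeholders, not those used in the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teams.csv")  # hypothetical file with one row per team

# Block 1: team characteristics only.
m1 = smf.ols("perceived_social_outcome ~ team_size + gender_diversity", data=df).fit()

# Block 2: add multi-party entrainment measures (e.g., minimum convergence).
m2 = smf.ols(
    "perceived_social_outcome ~ team_size + gender_diversity"
    " + min_convergence + mean_convergence",
    data=df,
).fit()

# The incremental R^2 indicates what entrainment adds beyond team characteristics.
print("R^2 block 1:", m1.rsquared)
print("R^2 block 2:", m2.rsquared, " delta R^2:", m2.rsquared - m1.rsquared)
```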


Task-Aware Compressed Sensing With Generative Adversarial Networks

AAAI Conferences

In recent years, neural network approaches have been widely adopted for machine learning tasks, with applications in computer vision. More recently, unsupervised generative models based on neural networks have been successfully applied to model data distributions via low-dimensional latent spaces. In this paper, we use Generative Adversarial Networks (GANs) to impose structure in compressed sensing problems, replacing the usual sparsity constraint. We propose to train the GANs in a task-aware fashion, specifically for reconstruction tasks. We also show that it is possible to train our model without using any (or much) non-compressed data. Finally, we show that the latent space of the GAN carries discriminative information and can further be regularized to generate input features for general inference tasks. We demonstrate the effectiveness of our method on a variety of reconstruction and classification problems.
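
The reconstruction step can be illustrated with a short sketch: given compressed measurements y = A x, search the latent space of a pretrained generator G for a code whose decoded signal matches the measurements. The generator, latent dimension, and optimizer settings below are assumptions, not the paper's exact setup.

```python
# Minimal sketch of GAN-based compressed sensing: instead of a sparsity prior,
# constrain the signal to lie in the range of a pretrained generator G and
# optimize its latent code to match the compressed measurements.
import torch

def reconstruct(G, y, A, latent_dim=100, steps=500, lr=1e-2):
    """Recover x ~ G(z*) with z* = argmin_z ||A G(z) - y||^2."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z).view(-1)                 # candidate signal in G's range
        loss = ((A @ x_hat - y) ** 2).sum()   # measurement-consistency loss
        loss.backward()
        opt.step()
    return G(z).detach()
```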


Towards a Unified Framework for Syntactic Inconsistency Measures

AAAI Conferences

A number of proposals have been made to define inconsistency measures. Each has its rationale. But to date, it is not clear how to delineate the space of options for measures, nor is it clear how we can classify measures systematically. In this paper, we introduce a general framework for comparing syntactic inconsistency measures. It uses the construction of an inconsistency graph for each knowledgebase. We then introduce abstractions of the inconsistency graph and use the hierarchy of the abstractions to classify a range of inconsistency measures.
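
As a rough illustration, one plausible way to build such an inconsistency graph (the paper's exact construction may differ) is to take the formulae as vertices and join two formulae whenever they co-occur in a minimal inconsistent subset of the knowledgebase:

```python
# Sketch of one plausible inconsistency-graph construction (not necessarily the
# paper's exact definition): formulae are vertices, and two formulae are joined
# by an edge whenever they occur together in some minimal inconsistent subset.
# Brute-force enumeration is fine for small propositional knowledgebases.
from itertools import combinations
from sympy import symbols, And, Not
from sympy.logic.inference import satisfiable

a, b = symbols("a b")
kb = [a, Not(a), a & b, Not(b)]   # a small propositional knowledgebase

def minimal_inconsistent_subsets(kb):
    mises = []
    for r in range(1, len(kb) + 1):
        for subset in combinations(kb, r):
            if not satisfiable(And(*subset)):                      # inconsistent
                if not any(set(m) <= set(subset) for m in mises):  # minimal
                    mises.append(subset)
    return mises

edges = set()
for mis in minimal_inconsistent_subsets(kb):
    for f, g in combinations(mis, 2):
        edges.add((f, g))

print(edges)  # edge set of the inconsistency graph over the formulae in kb
```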


Adapting a Kidney Exchange Algorithm to Align With Human Values

AAAI Conferences

The efficient allocation of limited resources is a classical problem in economics and computer science. In kidney exchanges, a central market maker allocates living kidney donors to patients in need of an organ. Patients and donors in kidney exchanges are prioritized using ad hoc weights decided on by committee and then fed into an allocation algorithm that determines who gets what, and who does not. In this paper, we provide an end-to-end methodology for estimating weights of individual participant profiles in a kidney exchange. We first elicit from human subjects a list of patient attributes they consider acceptable for the purpose of prioritizing patients (e.g., medical characteristics, lifestyle choices, and so on). Then, we ask subjects comparison queries between patient profiles and estimate weights in a principled way from their responses. We show how to use these weights in kidney exchange market clearing algorithms. We then evaluate the impact of the weights in simulations and find that the precise numerical values of the weights we computed matter little, other than the ordering of profiles that they imply. However, compared to not prioritizing patients at all, there is a significant effect, with certain classes of patients being (de)prioritized based on the human-elicited value judgments.
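
As a sketch of how elicited weights can enter a clearing algorithm, the following toy integer program selects a vertex-disjoint set of exchange cycles maximizing total profile weight; the cycles and weights are illustrative placeholders, and the paper's clearing engine is more elaborate.

```python
# Toy weighted market clearing: choose a vertex-disjoint set of feasible
# exchange cycles maximizing total elicited profile weight. Assumes the
# feasible cycles (e.g., all cycles up to length 3) are already enumerated;
# the weights are illustrative placeholders, not the paper's estimates.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

weights = {0: 1.0, 1: 1.3, 2: 0.8, 3: 1.1}   # per-patient profile weights
cycles = [(0, 1), (1, 2, 3), (2, 3)]          # feasible exchange cycles

prob = LpProblem("kidney_clearing", LpMaximize)
x = {c: LpVariable(f"use_{i}", cat=LpBinary) for i, c in enumerate(cycles)}

# Objective: total weight of patients matched by the selected cycles.
prob += lpSum(x[c] * sum(weights[v] for v in c) for c in cycles)

# Each patient-donor pair can appear in at most one selected cycle.
for v in weights:
    prob += lpSum(x[c] for c in cycles if v in c) <= 1

prob.solve()
print([c for c in cycles if x[c].value() == 1])
```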


Market Pricing for Data Streams

AAAI Conferences

Internet-enabled marketplaces such as Amazon deal with huge datasets recording transactions of merchandise between large numbers of buyers and sellers. It is important that algorithms become more time- and space-efficient as the size of these datasets grows. An algorithm that runs in polynomial time may not have a reasonable running time on such large datasets. Here, we study the development of pricing algorithms that are appropriate for use with massive datasets. We especially focus on the streaming setting, the common model for big data analysis. We present an envy-free mechanism for the social welfare maximization problem in the streaming setting using O(k^2 l) space, where k is the number of different goods and l is the number of available items of each good. We also provide an α-approximation mechanism for revenue maximization in this setting, provided that an α-approximation mechanism exists for the corresponding offline problem. Moreover, we provide mechanisms that approximate the optimum social welfare (or revenue) within a (1 − ε) factor in space independent of l, which is favorable when l is large compared to k. Finally, we present hardness results showing that approximating the optimal prices that maximize social welfare (or revenue) in the streaming setting needs Ω(l) space. We achieve our results by developing a powerful sampling technique for bipartite networks. The simplicity of our sampling technique allows us to maintain the sample over the input sequence. Indeed, one can construct this sample in the distributed setting (a.k.a. MapReduce) and obtain the same results in two rounds of computation, or one may simply apply this sampling technique to obtain faster offline algorithms.
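
The abstract does not spell out the sampling technique, but the flavor of maintaining a small sample over a stream can be illustrated with standard reservoir sampling over buyer-good edges (a generic primitive, not the paper's specific sampler):

```python
# Generic reservoir-sampling sketch: keeps a fixed-size uniform sample of a
# bipartite buyer-good edge stream in one pass. This shows only the standard
# streaming primitive, not the paper's specific sampling technique.
import random

def reservoir_sample(stream, size):
    """Keep a uniform sample of `size` edges from a single pass over `stream`."""
    sample = []
    for t, edge in enumerate(stream):
        if t < size:
            sample.append(edge)
        else:
            j = random.randint(0, t)
            if j < size:
                sample[j] = edge          # replace with decreasing probability
    return sample

# Example: a stream of (buyer, good) interest edges.
edges = [("b1", "g1"), ("b2", "g1"), ("b1", "g2"), ("b3", "g2"), ("b2", "g3")]
print(reservoir_sample(edges, size=3))
```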


Small Representations of Big Kidney Exchange Graphs

AAAI Conferences

Kidney exchanges are organized markets where patients swap willing but incompatible donors. In the last decade, kidney exchanges grew from small and regional to large and national — and soon, international. This growth results in more lives saved, but exacerbates the empirical hardness of the NP-complete problem of optimally matching patients to donors. State-of-the-art matching engines use integer programming techniques to clear fielded kidney exchanges, but these methods must be tailored to specific models and objective functions, and may fail to scale to larger exchanges. In this paper, we observe that if the kidney exchange compatibility graph can be encoded by a constant number of patient and donor attributes, the clearing problem is solvable in polynomial time. We give necessary and sufficient conditions for losslessly shrinking the representation of an arbitrary compatibility graph. Then, using real compatibility graphs from the UNOS US-wide kidney exchange, we show how many attributes are needed to encode real graphs. The experiments show that, indeed, small numbers of attributes suffice.
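
The core observation can be sketched as follows: if compatibility depends only on a constant number of attributes, a compatibility graph can be stored as one count per attribute "type" plus a type-level adjacency relation. The attributes and compatibility rule below are toy stand-ins for the real medical ones.

```python
# Sketch of the lossless-shrinking idea: vertices (patient-donor pairs) reduce
# to attribute vectors ("types"), so the graph is a count per type plus a
# type-level compatibility relation. The attributes and rule are illustrative.
from collections import Counter
from itertools import product

vertices = [
    ("O", "A"), ("O", "A"), ("A", "B"), ("B", "O"), ("B", "O"), ("B", "O"),
]
type_counts = Counter(vertices)          # small representation of a big graph

def compatible(donor_type, patient_type):
    """Toy blood-type rule standing in for the real attribute-based test."""
    donor_blood, _ = donor_type
    _, patient_needs = patient_type
    return donor_blood == "O" or donor_blood == patient_needs

# Type-level adjacency: one entry per pair of types instead of per pair of vertices.
type_edges = {
    (s, t) for s, t in product(type_counts, repeat=2) if compatible(s, t)
}
print(type_counts, type_edges, sep="\n")
```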


A Study of Compact Reserve Pricing Languages

AAAI Conferences

Online advertising allows advertisers to implement fine-tuned targeting of users. While such precise targeting leads to more effective advertising, it introduces challenging multidimensional pricing and bidding problems for publishers and advertisers. In this context, advertisers and publishers need to deal with an exponential number of possibilities. As a result, designing efficient and compact multidimensional bidding and pricing systems and algorithms is practically important for online advertising. Compact bidding languages have already been studied in the context of multiplicative bidding. In this paper, we study the compact pricing problem.


Weighted Bandits or: How Bandits Learn Distorted Values That Are Not Expected

AAAI Conferences

Motivated by models of human decision making proposed to explain commonly observed deviations from conventional expected value preferences, we formulate two stochastic multi-armed bandit problems with distorted probabilities on the cost distributions: the classic K-armed bandit and the linearly parameterized bandit. In both settings, we propose algorithms that are inspired by Upper Confidence Bound (UCB) algorithms, incorporate cost distortions, and exhibit sublinear regret assuming Hölder continuous weight distortion functions. For the K-armed setting, we show that the algorithm, called W-UCB, achieves a problem-dependent regret of O(L^2 M^2 log n / Δ^(2/α − 1)), where n is the number of plays, Δ is the gap in distorted expected value between the best and next-best arm, L and α are the Hölder constants of the distortion function, and M is an upper bound on the costs, and a problem-independent regret bound of O((K L^2 M^2)^(α/2) n^((2 − α)/2)). We also present a matching lower bound on the regret, showing that the regret of W-UCB is essentially unimprovable over the class of Hölder-continuous weight distortions. For the linearly parameterized setting, we develop a new algorithm, a variant of the Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm called WOFUL (Weight-distorted OFUL), and show that it has regret O(d √n polylog(n)) with high probability, for sub-Gaussian cost distributions.
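
A rough sketch of the K-armed idea: estimate each arm's distorted value from the empirical distribution of its samples and add a UCB-style exploration bonus. The estimator below is the standard one for a distorted expectation of nonnegative values; the bonus term and constants are generic, not the exact W-UCB index.

```python
# Sketch of a UCB-style index with distorted empirical values. The estimator
# computes E_w[X] = integral of w(P(X > x)) dx from order statistics; the
# exploration bonus is a generic UCB term, not the paper's exact constants.
import math

def distorted_value(samples, w):
    """Empirical distorted value from sorted samples and weight function w."""
    xs = sorted(samples)
    n = len(xs)
    return sum(
        x * (w((n - i) / n) - w((n - i - 1) / n)) for i, x in enumerate(xs)
    )

def wucb_index(samples, t, w, bonus_scale=1.0):
    """Optimistic index: distorted empirical value plus an exploration bonus."""
    n = len(samples)
    return distorted_value(samples, w) + bonus_scale * math.sqrt(math.log(t) / n)

# Example probability-distortion function (Prelec-style), exponent in (0, 1].
w = lambda p: math.exp(-((-math.log(p)) ** 0.7)) if p > 0 else 0.0
print(wucb_index([0.2, 0.5, 0.9, 0.4], t=100, w=w))
```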


Faster and Simpler Algorithm for Optimal Strategies of Blotto Game

AAAI Conferences

In the Colonel Blotto game, initially introduced by Borel in 1921, two colonels simultaneously distribute their troops across different battlefields. The winner of each battlefield is determined independently by a winner-take-all rule. The ultimate payoff of each colonel is the number of battlefields he wins. This game is commonly used for analyzing a wide range of applications such as the U.S. presidential election, innovative technology competitions, advertisements, etc. There have been persistent efforts to find the optimal strategies for the Colonel Blotto game. After almost a century, Ahmadinejad, Dehghani, Hajiaghayi, Lucier, Mahini, and Seddighin provided a polynomial-time algorithm for finding the optimal strategies. They first model the problem by a linear program (LP) with an exponential number of constraints and use the ellipsoid method to solve it. However, despite the theoretical importance of their algorithm, it is highly impractical: in general, even the simplex method (despite its exponential running time) performs better than the ellipsoid method in practice. In this paper, we provide the first polynomial-size LP formulation of the optimal strategies for the Colonel Blotto game, using linear extension techniques. Roughly speaking, we project the strategy-space polytope to a higher-dimensional space, which results in a polytope with fewer facets. We use this polynomial-size LP to provide a novel, simpler, and significantly faster algorithm for finding the optimal strategies for the Colonel Blotto game. We further show that this representation is asymptotically tight in terms of the number of constraints. We also extend our approach to multi-dimensional Colonel Blotto games, and we implement our algorithm to observe interesting properties of Colonel Blotto; for example, we observe that the behavior of players in the discrete model is very similar to that in the previously studied continuous model.
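
For concreteness, here is a toy sketch of the discrete game itself (not the paper's compact LP): enumerating pure strategies makes clear why the naive strategy space, and hence the naive LP, blows up.

```python
# Toy sketch of the discrete Colonel Blotto game (not the paper's compact LP):
# each colonel splits an integer number of troops over k battlefields, each
# battlefield is won by the larger allocation, and a colonel's payoff is the
# number of battlefields he wins.
from itertools import product

def pure_strategies(troops, battlefields):
    """All ways to split `troops` across `battlefields` (order matters)."""
    if battlefields == 1:
        return [(troops,)]
    return [
        (x,) + rest
        for x in range(troops + 1)
        for rest in pure_strategies(troops - x, battlefields - 1)
    ]

def payoff(a, b):
    """Number of battlefields won by colonel A against colonel B."""
    return sum(ai > bi for ai, bi in zip(a, b))

A = pure_strategies(troops=5, battlefields=3)
print(len(A), "pure strategies per colonel")       # grows quickly with troops
print(payoff((3, 1, 1), (2, 2, 1)))                # A wins only field 1 -> 1
```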


Business-Aware Visual Concept Discovery from Social Media for Multimodal Business Venue Recognition

AAAI Conferences

Image localization is important for the marketing and recommendation of local businesses; however, the level of granularity is still a critical issue. Given a consumer photo and its rough GPS information, we are interested in extracting the fine-grained location information, i.e., the business venue, of the image. To this end, we propose a novel framework for business venue recognition. The framework mainly contains three parts. First, business-aware visual concept discovery: we mine a set of concepts that are useful for business venue recognition based on three criteria: business awareness, visual detectability, and discriminative power. We define concepts that satisfy all three criteria as business-aware visual concepts. Second, business-aware concept detection by convolutional neural networks (BA-CNN): we propose a new network configuration that can incorporate semantic signals mined from business reviews for extracting semantic concept features from a query image. Third, multimodal business venue recognition: we extend visually detected concepts to multimodal feature representations that allow a test image to be associated with business reviews and images from social media for business venue recognition. The experimental results show that the visual concepts detected by BA-CNN can achieve up to a 22.5% relative improvement for business venue recognition compared to state-of-the-art convolutional neural network features. Experiments also show that by leveraging multimodal information from social media we can further boost performance, especially when the database images belonging to each business venue are scarce.
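
A minimal sketch of the multimodal idea: a CNN head predicts business-aware concept scores from the image, which are fused with a review-text feature for venue classification. The backbone, layer sizes, and fusion rule below are illustrative placeholders rather than the exact BA-CNN configuration.

```python
# Sketch of the multimodal pipeline: image -> concept scores -> fuse with a
# review-text feature -> venue classifier. All sizes and the backbone are
# illustrative placeholders, not the exact BA-CNN configuration.
import torch
import torch.nn as nn
import torchvision.models as models

class VenueClassifier(nn.Module):
    def __init__(self, n_concepts=300, text_dim=128, n_venues=1000):
        super().__init__()
        backbone = models.resnet18(weights=None)            # any CNN backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, n_concepts)
        self.concept_net = backbone                          # image -> concept scores
        self.classifier = nn.Linear(n_concepts + text_dim, n_venues)

    def forward(self, image, text_feat):
        concepts = torch.sigmoid(self.concept_net(image))    # business-aware concepts
        fused = torch.cat([concepts, text_feat], dim=1)      # multimodal fusion
        return self.classifier(fused)

model = VenueClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 128))
print(logits.shape)   # (2, n_venues)
```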