
Do We Criticise (and Laugh) in the Same Way? Automatic Detection of Multi-Lingual Satirical News in Twitter

AAAI Conferences

During the last few years, the investigation of methodologies to automatically detect and characterise the figurative traits of textual content has attracted growing interest. Indeed, the capability to correctly deal with figurative language, and more specifically with satire, is fundamental to building robust approaches in several sub-fields of Artificial Intelligence, including Sentiment Analysis and Affective Computing. In this paper we investigate the automatic detection of Tweets that advertise satirical news in English, Spanish and Italian. To this end we present a system that models Tweets from different languages by a set of language-independent features describing lexical, semantic and usage-related properties of the words of each Tweet. We approach satire identification as binary classification of Tweets into satirical or non-satirical messages. We test the performance of our system in both monolingual and cross-language classification experiments, evaluating the satire detection effectiveness of our features. Our system outperforms a word-based baseline and is able to recognise whether a news item on Twitter is satirical with good accuracy. Moreover, we analyse the behaviour of the system across the different languages, obtaining interesting results.
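As a rough illustration of the binary-classification setup the abstract describes, the sketch below extracts a few language-independent surface features from a Tweet. The feature names and choices are our own invention, not the paper's actual feature set; the point is only that such features inspect word-level properties rather than language-specific vocabulary.

```python
# Hypothetical language-independent features (names are illustrative,
# not the paper's): they look at surface properties of the words rather
# than at any language-specific lexicon.

def extract_features(tweet):
    words = tweet.split()
    n = max(len(words), 1)
    return {
        "avg_word_len": sum(len(w) for w in words) / n,           # lexical
        "punct_ratio": sum(tweet.count(c) for c in "!?") / max(len(tweet), 1),
        "upper_ratio": sum(w.isupper() for w in words) / n,       # usage-related
        "hashtag_ratio": sum(w.startswith("#") for w in words) / n,
    }

feats = extract_features("BREAKING: moon declared #satire !!")
```

Feature vectors like this one would then be fed to any off-the-shelf binary classifier, trained per language or across languages for the cross-language experiments.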


Quantifying Robustness of Trust Systems against Collusive Unfair Rating Attacks Using Information Theory

AAAI Conferences

Unfair rating attacks happen in existing trust and reputation systems, lowering the quality of the systems. There exists a formal model, based on information theory, that measures the maximum impact of independent attackers [Wang et al., 2015]. We improve on these results in two ways: (1) we alter the methodology so that it can also reason about colluding attackers, and (2) we extend the method to measure the strength of any attack (rather than just the strongest attack). Using (1), we identify the strongest collusion attacks, helping to construct robust trust systems. Using (2), we measure the strength of (classes of) attacks found in the literature. Based on this, we help overcome a shortcoming of current research into collusion resistance: specific (types of) attacks are used in simulations, disallowing direct comparisons between analyses of systems.
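A minimal sketch of the information-theoretic viewpoint (a toy example of ours, not the paper's model): treat ratings as a noisy channel from the true quality to the reported rating, and measure the damage an unfair-rating attack does as the drop in mutual information between the two.

```python
import math

def mutual_information(joint):
    # joint[q][r]: joint probability of true quality q and reported rating r.
    pq = [sum(row) for row in joint]
    pr = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for q, row in enumerate(joint):
        for r, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pq[q] * pr[r]))
    return mi

def rating_channel(flip_prob):
    # Binary quality, uniform prior; attackers flip the honest rating
    # with probability flip_prob (hypothetical attack model).
    return [[0.5 * (1 - flip_prob), 0.5 * flip_prob],
            [0.5 * flip_prob, 0.5 * (1 - flip_prob)]]

honest = mutual_information(rating_channel(0.0))  # 1 bit: fully informative
jammed = mutual_information(rating_channel(0.5))  # 0 bits: ratings are useless
```

Under this view, the strongest attack is the one that minimises the remaining mutual information, which is the flavour of question the paper's framework makes precise for colluding attackers.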


Implementing the Wisdom of Waze

AAAI Conferences

We study a setting of non-atomic routing in a network of m parallel links with asymmetry of information. While a central entity (such as a GPS navigation system) — a mediator hereafter — knows the cost functions associated with the links, they are unknown to the individual agents controlling the flow. The mediator gives incentive compatible recommendations to agents, trying to minimize the total travel time. Can the mediator do better than when agents minimize their travel time selfishly, without coercing agents to follow its recommendations? We study the mediation ratio: the ratio between the mediated equilibrium obtained from an incentive compatible mediation protocol and the social optimum. We find that mediation protocols can reduce the efficiency loss compared to the full revelation alternative, and compared to the non-mediated Nash equilibrium. In particular, in the case of two links with affine cost functions, the mediation ratio is at most 8/7, and remains strictly smaller than the price of anarchy of 4/3 for any fixed m. Yet, it approaches the price of anarchy as m grows. For general (monotone) cost functions, the mediation ratio is at most m, a significant improvement over the unbounded price of anarchy.
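The 4/3 price-of-anarchy benchmark mentioned above can be checked on the classic two-link Pigou network (our worked example, not one from the paper): one link has cost c1(x) = x, the other a constant cost of 1.

```python
def total_cost(x):
    # x is the fraction of a unit of flow on link 1 (cost c1(x) = x);
    # the remaining flow uses link 2 at constant cost 1.
    return x * x + (1 - x) * 1

nash_cost = total_cost(1.0)   # selfish agents all take link 1: cost 1
opt_cost = total_cost(0.5)    # the social optimum splits the flow: cost 3/4
poa = nash_cost / opt_cost    # 4/3, the price of anarchy for affine costs
```

The paper's result says an incentive compatible mediator can push this gap down to at most 8/7 on two affine links, strictly better than the 4/3 that selfish routing alone guarantees.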


A Privacy Preserving Algorithm for Multi-Agent Planning and Search

AAAI Conferences

To engage diverse agents in cooperative behavior, it is important, even necessary, to provide algorithms that do not reveal information that is private or proprietary. A number of recent planning algorithms enable agents to plan together for shared goals without disclosing information about their private state and actions. But these algorithms lack clear and formal privacy guarantees: the fact that they do not require agents to explicitly reveal private information does not imply that such information cannot be deduced. The main contribution of this paper is an enhanced version of the distributed forward-search planning framework of Nissim and Brafman that reveals less information than the original algorithm, and the first, to our knowledge, discussion and formal proof of privacy guarantees for distributed planning and search algorithms.


Analysis of Sampling Algorithms for Twitter

AAAI Conferences

The daily volume of Tweets in Twitter is around 500 million, and the impact of this data on applications ranging from public safety and opinion mining to news broadcasting is increasing day by day. Analyzing large volumes of Tweets for various applications requires techniques that scale well with the number of Tweets. In this work we develop a theoretical formulation for sampling Twitter data. We introduce novel statistical metrics to quantify the statistical representativeness of the Tweet sample, and derive sufficient conditions on the number of samples needed for obtaining highly representative Tweet samples. These new statistical metrics quantify the representativeness or goodness of the sample in terms of frequent keyword identification and in terms of restoring public sentiments associated with these keywords. We use uniform random sampling with replacement as our algorithm, and sampling could serve as a first step before applying other, more sophisticated summarization methods to generate summaries for human use. We show that experiments conducted on real Twitter data agree with our bounds. In these experiments, we also compare different kinds of random sampling algorithms. Our bounds are attractive since they do not depend on the total number of Tweets in the universe. Although our ideas and techniques are specific to Twitter, they could find applications in other areas as well.
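The flavour of a population-size-independent sample bound can be sketched with a standard Hoeffding-style argument (our illustration, not the paper's exact bound): to estimate a keyword's frequency to within eps with probability at least 1 - delta, the number of samples needed depends only on eps and delta, not on how many Tweets exist.

```python
import math
import random

def required_samples(eps, delta):
    # Standard Hoeffding-style sufficient condition (our illustration):
    # n >= ln(2/delta) / (2 * eps^2) samples put the empirical keyword
    # frequency within eps of the truth with probability >= 1 - delta,
    # regardless of the total number of Tweets.
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

random.seed(0)
tweets = ["crash"] * 300 + ["other"] * 700         # true frequency 0.3
n = required_samples(eps=0.05, delta=0.05)
sample = [random.choice(tweets) for _ in range(n)]  # sampling with replacement
estimate = sum(t == "crash" for t in sample) / n
```

The attractive property the abstract highlights shows up directly: `n` stays the same whether the corpus holds a thousand Tweets or 500 million.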


Tradeoffs between Incentive Mechanisms in Boolean Games

AAAI Conferences

Two incentive mechanisms for Boolean games were proposed recently: taxation schemes and side payments. Both mechanisms have been shown to be able to secure a pure Nash equilibrium (PNE) for Boolean games. A complete characterization of outcomes that can be transformed to PNEs is given for each of the two incentive mechanisms. Side payments are proved to be the weaker mechanism, in the sense that the outcomes they can transform to PNEs are a subset of those transformable by taxation. A family of social-network-based Boolean games, which demonstrates the differences between the two mechanisms for securing a PNE, is presented. A distributed search algorithm for finding the side payments needed for securing a PNE is proposed. An empirical evaluation demonstrates the properties of the two mechanisms on the family of social-network-based Boolean games.
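A tiny worked example of how a taxation scheme can secure a PNE (the game and the tax amounts are our own toy construction, not from the paper): a matching-pennies-style Boolean game has no pure Nash equilibrium, but taxing one player's variable assignment makes one outcome stable.

```python
from itertools import product

def is_pne(utility, profile, choices):
    # A profile is a PNE if no player gains by unilaterally deviating.
    for i in range(len(profile)):
        for alt in choices:
            dev = list(profile)
            dev[i] = alt
            if utility(i, tuple(dev)) > utility(i, profile):
                return False
    return True

# Matching-pennies-style Boolean game: player 0 controls p and wants p == q,
# player 1 controls q and wants p != q; a satisfied goal is worth 1.
def base_utility(i, prof):
    p, q = prof
    return 1.0 if ((p == q) if i == 0 else (p != q)) else 0.0

choices = (False, True)
profiles = list(product(choices, repeat=2))
base_has_pne = any(is_pne(base_utility, prof, choices) for prof in profiles)

# Taxation scheme (hypothetical amount): charge player 1 a tax of 2 for
# setting q to True, which makes (False, False) stable.
def taxed_utility(i, prof):
    tax = 2.0 if (i == 1 and prof[1]) else 0.0
    return base_utility(i, prof) - tax

taxed_has_pne = any(is_pne(taxed_utility, prof, choices) for prof in profiles)
```

Here `base_has_pne` is False and `taxed_has_pne` is True: under the tax, player 1's deviation to q = True would cost more than the satisfied goal is worth.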


Influence in Classification via Cooperative Game Theory

AAAI Conferences

A dataset has been classified by some unknown classifier into two types of points. What were the most important factors in determining the classification outcome? In this work, we employ an axiomatic approach in order to uniquely characterize an influence measure: a function that, given a set of classified points, outputs a value for each feature corresponding to its influence in determining the classification outcome. We show that our influence measure takes on an intuitive form when the unknown classifier is linear. Finally, we employ our influence measure in order to analyze the effects of user profiling on Google’s online display advertising.
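The paper derives its influence measure axiomatically, with a closed form for linear classifiers; as a generic illustration only (a perturbation-based proxy of our own, not the paper's measure), one can estimate a feature's influence by how often nudging that feature flips the classification of points in the dataset.

```python
def sign(x):
    return 1 if x >= 0 else -1

def linear_classify(w, x):
    return sign(sum(wi * xi for wi, xi in zip(w, x)))

def influence(points, w, j, delta=1.0):
    # Hypothetical perturbation-based proxy: the fraction of points whose
    # label flips when feature j is nudged by delta.
    flips = 0
    for x in points:
        x2 = list(x)
        x2[j] += delta
        flips += linear_classify(w, x) != linear_classify(w, x2)
    return flips / len(points)

points = [(-0.4, 2.0), (0.2, -1.0), (-0.1, 0.5), (0.6, -2.0)]
w = (1.0, 0.0)                 # only the first feature matters here
inf0 = influence(points, w, 0)
inf1 = influence(points, w, 1)  # 0.0: feature 1 never changes the outcome
```

Any sensible measure should, like this proxy, assign zero influence to a feature the classifier ignores; the axiomatic approach pins down a unique measure with such properties.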


Optimal Network Security Hardening Using Attack Graph Games

AAAI Conferences

Preventing attacks in a computer network is the core problem in network security. We introduce a new game-theoretic model of the interaction between a network administrator who uses limited resources to harden a network and an attacker who follows a multi-stage plan to attack the network. The possible plans of the attacker are compactly represented using attack graphs, while the defender adds fake targets (honeypots) to the network to deceive the attacker. The compact representation of the attacker's strategies presents a computational challenge, and finding the best response of the attacker is NP-hard. We present a solution method that first translates an attack graph into an MDP and solves it using policy search with a set of pruning techniques. We present an empirical evaluation of the model and solution algorithms, evaluating scalability, the types of solutions that are generated for realistic cases, and sensitivity analysis.
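A toy stand-in for the attack-graph-as-MDP idea (our two-step example with made-up probabilities, not the paper's model or its pruning-based policy search): the attacker pays a cost per exploit attempt and only continues while the expected value is positive, and a honeypot that absorbs some attempts cuts that value sharply.

```python
def attacker_value(p1, p2, reward, cost=1.0):
    # Backward induction on a two-step attack chain: exploit 1 compromises
    # an intermediate host with probability p1, exploit 2 then reaches the
    # target with probability p2 for the given reward; each attempt costs
    # `cost`, and a rational attacker stops when continuing has negative value.
    v_host = max(0.0, -cost + p2 * reward)   # value once the host has fallen
    v_start = max(0.0, -cost + p1 * v_host)  # value from the entry point
    return v_start

no_honeypot = attacker_value(0.8, 0.5, 10.0)
# A honeypot halves the chance the second exploit hits the real target
# (hypothetical numbers), sharply cutting the attacker's expected value.
with_honeypot = attacker_value(0.8, 0.25, 10.0)
```

The defender's hardening problem is then to place honeypots so as to minimise this best-response value, which is what makes the attacker's NP-hard best response the computational bottleneck.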


User Modeling with Neural Network for Review Rating Prediction

AAAI Conferences

We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically only capture the semantics of texts, but ignore the user who expresses the sentiment. This is not desirable for review rating prediction, as each user has an influence on how the textual content of a review should be interpreted. For example, the same word (e.g. good) might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a certain word. Specifically, we extend lexical semantic composition models and introduce a user-word composition vector model (UWCVM), which effectively captures how a user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction, and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method, which shows superior performance over several strong baseline methods.
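A minimal sketch of the user-as-a-function intuition (dimensions and numbers are illustrative, not learned parameters from the paper): model each user as a matrix that transforms the shared word embedding, so the same word can carry different sentiment strength for different users.

```python
# Toy user-word composition: user_matrix @ word_vec, so "good" written by
# an understated user yields a weaker sentiment representation than the
# same word written by a user who means it at face value.

def compose(user_matrix, word_vec):
    return [sum(m * w for m, w in zip(row, word_vec)) for row in user_matrix]

word_good = [1.0, 0.5]               # shared embedding for "good"
user_a = [[1.0, 0.0], [0.0, 1.0]]    # identity: takes "good" at face value
user_b = [[0.3, 0.0], [0.0, 0.3]]    # tones the word down

vec_a = compose(user_a, word_good)
vec_b = compose(user_b, word_good)
```

In the full model these user transformations would be learned jointly with the rating predictor, so the composition reflects how each user actually deploys words across their reviews.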


Differentially Private Matrix Factorization

AAAI Conferences

Matrix factorization (MF) is a prevailing collaborative filtering method for building recommender systems. It requires users to upload their personal preferences to the recommender for performing MF, which raises serious privacy concerns. This paper proposes a differentially private MF mechanism that can prevent an untrusted recommender from learning any user's ratings or profile. Our design moves computations upon users' private data from the recommender to the users themselves, and makes the recommender aggregate local results in a privacy-preserving way. It uses objective perturbation to ensure that the final item profiles satisfy differential privacy, and solves the challenge of decomposing the noise component for objective perturbation into small pieces that users can determine locally and independently. We also propose a third-party-based mechanism to reduce the noise added in each iteration, and adapt our online algorithm to the dynamic setting in which users may leave and join. Experiments show that our proposal is efficient and introduces acceptable side effects on the precision of results.
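A simplified sketch of the local-noise idea (one-dimensional item profile, hypothetical parameters; a gradient-perturbation stand-in rather than the paper's objective perturbation): each user perturbs their own contribution before sending it, so the recommender only ever sees noisy aggregates and never the raw ratings.

```python
import math
import random

def laplace(scale, rng):
    # Sample from Laplace(0, scale) by inverse transform.
    u = rng.random() - 0.5
    s = 1.0 if u >= 0 else -1.0
    return -scale * s * math.log(1 - 2 * abs(u))

def user_contribution(ratings, item_profile, noise_scale, rng):
    # Each user computes the gradient of their squared rating error with
    # respect to a (scalar, for illustration) item profile and perturbs it
    # locally before sending it to the recommender.
    grad = sum(r - item_profile for r in ratings)
    return grad + laplace(noise_scale, rng)

# With no noise the recommender would see the exact gradient ...
exact = user_contribution([5.0, 3.0], 3.0, 0.0, random.Random(0))
# ... with noise it only sees a locally perturbed value.
noisy = user_contribution([5.0, 3.0], 3.0, 0.5, random.Random(1))
```

The paper's key technical step is showing that the total noise the objective-perturbation analysis requires can be split into exactly such per-user pieces, chosen locally and independently, while the aggregate still satisfies differential privacy.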