
What if New York City Mayor Andrew Yang Is … a Good Idea?


Andrew Yang will not forestall the robot apocalypse from the Oval Office, but he may get to do it from New York City Hall. In the 2020 Democratic presidential primary, the former entrepreneur's quirky campaign found a surprisingly robust audience, attracted by Yang's warnings about automation and his promise to mail every American a "freedom dividend" (or, at least, by his math jokes and laid-back, open collar). In the end, the Yang Gang only got their guy as far as the New Hampshire primary. But thanks in part to the name recognition and national network of donors he accrued during that race, Yang is actually leading the polls in this year's contest to be the Democratic candidate for New York City mayor. On Friday, Henry Grabar and Jordan Weissmann, two of Slate's native New Yorkers, convened to debate whether this is a good thing. Their debate has been edited and condensed for clarity.

Persuading Voters in District-based Elections Artificial Intelligence

We focus on the scenario in which an agent can exploit his information advantage to manipulate the outcome of an election. In particular, we study district-based elections with two candidates, in which the winner of the election is the candidate that wins in the majority of the districts. District-based elections are adopted worldwide (e.g., UK and USA) and are a natural extension of widely studied voting mechanisms (e.g., k-voting and plurality voting). We resort to the Bayesian persuasion framework, where the manipulator (sender) strategically discloses information to the voters (receivers), who update their beliefs rationally. We study both private signaling, in which the sender can use a private communication channel per receiver, and public signaling, in which the sender can use a single communication channel for all the receivers. Furthermore, for the first time, we introduce semi-public signaling in which the sender can use a single communication channel per district. We show that there is a sharp distinction between private and (semi-)public signaling. In particular, optimal private signaling schemes can provide an arbitrarily better probability of victory than (semi-)public ones and can be computed efficiently, while optimal (semi-)public signaling schemes cannot be approximated to within any factor in polynomial time unless P=NP. However, we show that reasonable relaxations allow the design of multi-criteria PTASs for optimal (semi-)public signaling schemes. In doing so, we introduce a novel property, namely comparative stability, and we design a bi-criteria PTAS for public signaling in general Bayesian persuasion problems beyond elections when the sender's utility function is state-dependent.
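The winner-determination rule underlying this setting is easy to make concrete. Below is a minimal, generic sketch (not code from the paper) of a two-candidate, district-based election in which the overall winner is the candidate carrying a majority of the districts:

```python
# Sketch of district-based winner determination with two candidates.
# The overall winner is whoever wins a plurality in a majority of districts.

def district_winner(votes_a, votes_b):
    """Return 'A' or 'B' for a single district by plurality."""
    return "A" if votes_a > votes_b else "B"

def election_winner(districts):
    """districts: list of (votes_for_A, votes_for_B) tuples."""
    wins_a = sum(1 for a, b in districts if district_winner(a, b) == "A")
    return "A" if wins_a > len(districts) / 2 else "B"

# A can win overall while losing the popular vote 12 to 18:
districts = [(6, 4), (6, 4), (0, 10)]
print(election_winner(districts))  # → A (carries 2 of 3 districts)
```

The last example shows why a manipulator only needs to sway pivotal districts rather than the overall popular vote, which is the information advantage the paper studies.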

GAEA: Graph Augmentation for Equitable Access via Reinforcement Learning Artificial Intelligence

Disparate access to resources by different subpopulations is a prevalent issue in societal and sociotechnical networks. For example, urban infrastructure networks may enable certain racial groups to more easily access resources such as high-quality schools, grocery stores, and polling places. Similarly, social networks within universities and organizations may enable certain groups to more easily access people with valuable information or influence. Here we introduce a new class of problems, Graph Augmentation for Equitable Access (GAEA), to enhance equity in networked systems by editing graph edges under budget constraints. We prove such problems are NP-hard, and cannot be approximated within a factor of $(1-\tfrac{1}{3e})$. We develop a principled, sample- and time- efficient Markov Reward Process (MRP)-based mechanism design framework for GAEA. Our algorithm outperforms baselines on a diverse set of synthetic graphs. We further demonstrate the method on real-world networks, by merging public census, school, and transportation datasets for the city of Chicago and applying our algorithm to find human-interpretable edits to the bus network that enhance equitable access to high-quality schools across racial groups. Further experiments on Facebook networks of universities yield sets of new social connections that would increase equitable access to certain attributed nodes across gender groups.
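To make the problem concrete: the sketch below is a naive greedy baseline, not the paper's MRP-based method. It adds edges under a budget so as to reduce the worst-off group's average shortest-path distance to a resource node (all function names and the toy graph are assumptions for illustration):

```python
# Greedy edge augmentation under a budget: repeatedly add the single
# edge that most improves the worst group's average BFS distance to
# the resource node. Illustrative baseline only.
from itertools import combinations

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def worst_group_cost(adj, groups, resource):
    """Average distance to the resource for the worst-off group."""
    dist = bfs_dist(adj, resource)
    return max(
        sum(dist.get(n, len(adj)) for n in members) / len(members)
        for members in groups.values()
    )

def greedy_augment(adj, groups, resource, budget):
    adj = {u: set(vs) for u, vs in adj.items()}
    added = []
    for _ in range(budget):
        best, best_cost = None, worst_group_cost(adj, groups, resource)
        for u, v in combinations(adj, 2):
            if v in adj[u]:
                continue
            adj[u].add(v); adj[v].add(u)       # try the candidate edge
            cost = worst_group_cost(adj, groups, resource)
            if cost < best_cost:
                best, best_cost = (u, v), cost
            adj[u].remove(v); adj[v].remove(u)  # undo the trial
        if best is None:
            break
        u, v = best
        adj[u].add(v); adj[v].add(u)
        added.append(best)
    return added

# Path graph 0-1-2-3-4; group B sits far from the resource at node 0.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_augment(path, {"A": [1], "B": [4]}, resource=0, budget=1))
# → [(0, 4)]
```

A greedy baseline like this is exactly the kind of approach the inapproximability result bounds; the paper's RL framework is designed to do better at scale.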

Over a Decade of Social Opinion Mining Artificial Intelligence

Social media popularity and importance are on the increase, as people use it for various types of social interaction across multiple channels. This social interaction by online users includes submission of feedback, opinions and recommendations about various individuals, entities, topics, and events. This systematic review focuses on the evolving research area of Social Opinion Mining, tasked with the identification of multiple opinion dimensions, such as subjectivity, sentiment polarity, emotion, affect, sarcasm and irony, from user-generated content represented across multiple social media platforms and in various media formats, like text, image, video and audio. Therefore, through Social Opinion Mining, natural language can be understood in terms of the different opinion dimensions, as expressed by humans. This contributes towards the evolution of Artificial Intelligence, which in turn helps the advancement of several real-world use cases, such as customer service and decision making. A thorough systematic review was carried out on Social Opinion Mining research which totals 485 studies and spans a period of twelve years between 2007 and 2018. The in-depth analysis focuses on the social media platforms, techniques, social datasets, language, modality, tools and technologies, natural language processing tasks and other aspects derived from the published studies. Such multi-source information fusion plays a fundamental role in mining people's social opinions from social media platforms. These can be utilised in many application areas, ranging from marketing, advertising and sales for product/service management, to multiple domains and industries, such as politics, technology, finance, healthcare, sports and government. Future research directions are presented; further research and development in this area has the potential to leave a wider academic and societal impact.

Modeling Voters in Multi-Winner Approval Voting Artificial Intelligence

In many real world situations, collective decisions are made using voting and, in scenarios such as committee or board elections, employing voting rules that return multiple winners. In multi-winner approval voting (AV), an agent submits a ballot consisting of approvals for as many candidates as they wish, and winners are chosen by tallying up the votes and choosing the top-$k$ candidates receiving the most approvals. In many scenarios, an agent may manipulate the ballot they submit in order to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in single-winner and multi-winner approval voting scenarios with varying degrees of uncertainty using behavioral data obtained from Mechanical Turk. We find that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. There are a number of predictive models of agent behavior in the COMSOC and psychology literature that are based on cognitively plausible heuristic strategies. We show that the existing approaches do not adequately model real-world data. We propose a novel model that takes into account the size of the winning set and human cognitive constraints, and demonstrate that this model is more effective at capturing real-world behaviors in multi-winner approval voting scenarios.
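The top-$k$ tallying rule described above is simple to state in code. The sketch below is a generic illustration (ballot representation and alphabetical tie-breaking are assumptions, not from the paper):

```python
# Multi-winner approval voting: each ballot approves any number of
# candidates; the k candidates with the most approvals win.
from collections import Counter

def approval_winners(ballots, k):
    """ballots: iterable of sets of approved candidates."""
    tally = Counter(c for ballot in ballots for c in ballot)
    # Sort by approvals (descending), then name for deterministic ties.
    ranked = sorted(tally, key=lambda c: (-tally[c], c))
    return ranked[:k]

ballots = [{"A", "B"}, {"B", "C"}, {"B"}, {"A", "C", "D"}]
print(approval_winners(ballots, 2))  # → ['B', 'A']
```

A manipulating voter in this setting changes which candidates appear in their own ballot to move a preferred candidate into the top-$k$ set; the paper studies how well humans actually find such manipulations.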

Brad Parscale accuses 'D-level' 'talking heads' around Trump of forcing him out of 2020 campaign

FOX News

Former Trump campaign manager reacts to 2020 election results in exclusive interview on 'The Story' Former Trump 2020 campaign manager Brad Parscale has accused "D-level" "talking heads" in the president's orbit of starting a whisper campaign that forced him out earlier this year. Speaking with Fox News' Martha MacCallum in an exclusive interview on "The Story" Tuesday night, Parscale alleged that "when the polling numbers were going down, they were in his ear and I was out working." Discussing a reported incident in which Trump berated Parscale for passing along a bad polling report, the former campaign manager said: "I didn't like lying to him -- I like telling the truth. Sometimes that comes with a lot of painful days, knowing that I might let him down or make him upset, but a lot of the D-level people that hung around him told him what he wanted to hear: They were 'yes' men. I wasn't going to be a 'yes' man, but a 'get it done' man." Parscale did not name anyone as being specifically responsible for his ouster in mid-July, when he was replaced by Bill Stepien.

New tool uses machine learning to identify fake news domains


Misinformation and fake news arguably weren't as problematic for our recent Presidential election as they were back in 2016 (Trump himself lied enough on his own this time around, anyway), but the spread of false statements remains an extremely serious hurdle for modern society. Bad faith actors certainly aren't going anywhere, and worse, they will likely only get better at their work. Luckily, new technologies are emerging to help combat the spread of disinfo in the coming years, including a program that can identify malicious sites before they begin to propagate their lies... yes, a robot to help detect other robots. The new program developed by Anil Doshi at the UCL School of Management, Sharat Raghavan from the University of California, Berkeley, and Cornell University's William Schmidt is detailed in their recent working paper, "Real-Time Prediction of Online False Information Purveyors and their Characteristics." In it, the team explains the tool is impressively effective at spotting domain names created purely for the purpose of spreading misinformation online: the machine learning program correctly identified 92 percent of all fake news domains as well as 96.2 percent of the real sites supplied to it relating to the 2016 election.
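The paper's actual feature set and trained model are not reproduced here, but the general idea, scoring a domain at registration time from properties of its name, can be sketched. Everything below (the features, the hand-picked weights, the example domains) is a toy assumption standing in for a learned classifier:

```python
# Toy domain-name suspicion score: a hand-weighted linear model over
# simple registration-time features, squashed through a logistic
# function. Weights are illustrative, not learned from data.
import math

def domain_features(domain):
    name = domain.split(".")[0]
    counts = {ch: name.count(ch) for ch in set(name)}
    entropy = -sum(c / len(name) * math.log2(c / len(name))
                   for c in counts.values())
    return {
        "length": len(name),
        "digits": sum(ch.isdigit() for ch in name),
        "hyphens": name.count("-"),
        "entropy": entropy,
    }

WEIGHTS = {"length": 0.05, "digits": 0.4, "hyphens": 0.3, "entropy": 0.2}
BIAS = -2.0

def suspicion_score(domain):
    f = domain_features(domain)
    z = BIAS + sum(WEIGHTS[k] * f[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

# Longer, digit- and hyphen-heavy names score as more suspicious:
print(suspicion_score("real-news-24-7.example") > suspicion_score("example"))
# → True
```

In a real system the weights would come from a model trained on labeled fake-news and legitimate domains, as in the working paper's evaluation on 2016-election sites.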

Biden transition 'moving forward,' awaiting GSA confirmation of election results

FOX News

Here's what you need to know as you start your day ... Biden transition hangs in limbo, awaiting GSA certification for results to become official The Biden transition is hanging in limbo, awaiting the General Services Administration's certification, which will give President-elect Joe Biden and his team the power to make decisions about the future of the federal government -- but the incoming administration is "moving forward" anyway, urging the GSA to "move quickly" and "respect" the "will of the American people." A Biden-Harris transition spokesperson told Fox News that the transition "is moving forward with preparations so that President-elect Joe Biden and Vice President-elect Kamala Harris are ready to lead our country on Day One and meet the pressing challenges facing our nation." The GSA has not yet made an "ascertainment" decision -- the formal declaration set up by the 1963 Presidential Transition Act. Until that ascertainment is made, the Biden team cannot formally begin the transition process. The GSA has defended its position, citing precedent it said was "established by the Clinton Administration in 2000."

Detecting Social Media Manipulation in Low-Resource Languages Artificial Intelligence

Social media have been deliberately used for malicious purposes, including political manipulation and disinformation. Most research focuses on high-resource languages. However, malicious actors share content across countries and languages, including low-resource ones. Here, we investigate whether and to what extent malicious actors can be detected in low-resource language settings. We discovered that a high number of accounts posting in Tagalog were suspended as part of Twitter's crackdown on interference operations after the 2016 US Presidential election. By combining text embedding and transfer learning, our framework can detect, with promising accuracy, malicious users posting in Tagalog without any prior knowledge or training on malicious content in that language. We first learn an embedding model for each language, namely a high-resource language (English) and a low-resource one (Tagalog), independently. Then, we learn a mapping between the two latent spaces to transfer the detection model. We demonstrate that the proposed approach significantly outperforms state-of-the-art models, including BERT, and yields marked advantages in settings with very limited training data, the norm when detecting malicious activity on online platforms.
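The mapping between the two latent spaces can be illustrated with a standard orthogonal Procrustes alignment on synthetic data. This is a hedged sketch of the general technique, not the paper's exact alignment procedure, and the "English"/"Tagalog" vectors here are random stand-ins:

```python
# Align two independently trained embedding spaces with the orthogonal
# Procrustes solution: find orthogonal W minimizing ||X_src @ W - X_tgt||_F,
# so a detector trained in the target space can score mapped source vectors.
import numpy as np

def procrustes_map(X_src, X_tgt):
    """Closed-form orthogonal map from source to target space via SVD."""
    u, _, vt = np.linalg.svd(X_src.T @ X_tgt)
    return u @ vt

rng = np.random.default_rng(0)
tgt = rng.normal(size=(50, 8))                   # "English" anchor vectors
true_rot = np.linalg.qr(rng.normal(size=(8, 8)))[0]
src = tgt @ true_rot.T                           # "Tagalog" vectors, rotated

W = procrustes_map(src, tgt)
err = float(np.linalg.norm(src @ W - tgt))
print(err)  # near zero: the hidden rotation is recovered
```

With real data the anchors would be shared or translated accounts/words rather than an exact rotation, so the residual would be nonzero, but the same closed-form map transfers the detection model across languages.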

Have Deepfakes influenced the 2020 Election?


Media manipulation through images and videos has been around for decades. For example, in WWII Mussolini released a propaganda image of himself on a horse with his horse handler edited out. The goal was to make himself seem more impressive and powerful [1]. These types of tricks can have significant impacts given the scale of people that see images like these, especially in the internet era. DARPA has an entire media forensics program (MediFor) constructed just to develop methods for detecting manipulated media [2].