Content Moderation Case Study: Chatroulette Leverages New AI To Combat Unwanted Nudity (2020)

#artificialintelligence

Summary: Chatroulette rose to fame shortly after its creation in late 2009. The platform offered a new take on video chat, pairing users with other random users with each spin of the virtual wheel. The novelty of the experience soon wore off when it became apparent Chatroulette was host to a large assortment of pranksters and exhibitionists. Users hoping to luck into some scintillating video chat were instead greeted with exposed penises and other body parts. This especially unsavory aspect of the service was assumed to be its legacy -- one that would see it consigned to the junk heap of failed social platforms.


High-level Approaches to Detect Malicious Political Activity on Twitter

arXiv.org Artificial Intelligence

Our work represents another step toward the detection and prevention of these increasingly prevalent political manipulation efforts. We therefore start by focusing on what the state-of-the-art approaches lack -- since the problem remains, this is a fair assumption. We find concerning issues in the current literature and follow a diverging path, placing emphasis on data features that are less susceptible to malicious manipulation and on high-level approaches that avoid a level of granularity biased towards easy-to-spot, low-impact cases. We designed and implemented a framework -- Twitter Watch -- that performs structured Twitter data collection, applying it to the Portuguese Twittersphere. We investigate a data snapshot taken in May 2020, with around 5 million accounts and over 120 million tweets (this value has since increased to over 175 million). The analyzed time period stretches from August 2019 to May 2020, with a focus on the Portuguese elections of October 6th, 2019. However, the Covid-19 pandemic showed itself in our data, and we also delve into how it affected typical Twitter behavior. We pursued three main approaches: content-oriented, metadata-oriented, and network interaction-oriented. We learn that Twitter's suspension patterns are not well suited to the type of political trolling found in the Portuguese Twittersphere -- identified by this work and by an independent peer -- nor to accounts that post fake news. Through two distinct analyses, we also found that the different types of malicious accounts we independently gathered are very similar in terms of both content and interaction, while being very distinct from regular accounts.
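To make the metadata-oriented approach concrete, here is a minimal sketch of account-level features that are relatively hard for an operator to fake wholesale (account age, sustained posting rate, follower/friend ratio). The field names and the example values are illustrative assumptions, not the feature set actually used by Twitter Watch.

```python
from datetime import datetime, timezone

def account_metadata_features(account: dict) -> dict:
    """Compute simple account-level metadata features.

    `account` is assumed to carry the usual public profile fields
    (created_at, statuses_count, followers_count, friends_count);
    the exact features used by Twitter Watch are not specified here.
    """
    created = datetime.fromisoformat(account["created_at"])
    age_days = max((datetime.now(timezone.utc) - created).days, 1)
    followers = account.get("followers_count", 0)
    friends = account.get("friends_count", 0)
    return {
        "account_age_days": age_days,
        # A sustained posting rate is harder to disguise than any single tweet.
        "tweets_per_day": account.get("statuses_count", 0) / age_days,
        # Very low follower/friend ratios are a weak signal of inorganic accounts.
        "follower_friend_ratio": followers / max(friends, 1),
    }

if __name__ == "__main__":
    example = {
        "created_at": "2019-08-01T00:00:00+00:00",
        "statuses_count": 42000,
        "followers_count": 15,
        "friends_count": 4900,
    }
    print(account_metadata_features(example))
```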


Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits

arXiv.org Artificial Intelligence

While algorithm audits are growing rapidly in commonality and public importance, relatively little scholarly work has gone toward synthesizing prior work and strategizing future research in the area. This systematic literature review aims to do just that, following PRISMA guidelines in a review of over 500 English articles that yielded 62 algorithm audit studies. The studies are synthesized and organized primarily by behavior (discrimination, distortion, exploitation, and misjudgement), with codes also provided for domain (e.g. search, vision, advertising, etc.), organization (e.g. Google, Facebook, Amazon, etc.), and audit method (e.g. sock puppet, direct scrape, crowdsourcing, etc.). The review shows how previous audit studies have exposed public-facing algorithms exhibiting problematic behavior, such as search algorithms culpable of distortion and advertising algorithms culpable of discrimination. Based on the studies reviewed, it also suggests some behaviors (e.g. discrimination on the basis of intersectional identities), domains (e.g. advertising algorithms), methods (e.g. code auditing), and organizations (e.g. Twitter, TikTok, LinkedIn) that call for future audit attention. The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research working toward algorithmic justice.
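As a rough illustration of the "direct scrape" audit method the review codes for, the sketch below compares the top results that some search endpoint returns for two query framings. `fetch_results` is a placeholder for the collection step and the top-k overlap metric is one simple choice; neither reflects the protocol of any particular study in the review.

```python
from typing import Callable, List

def topk_overlap(a: List[str], b: List[str], k: int = 10) -> float:
    """Fraction of shared items among the top-k of two ranked result lists.

    A crude distortion indicator: identical top-k lists give 1.0,
    disjoint lists give 0.0.
    """
    return len(set(a[:k]) & set(b[:k])) / k

def audit_query_pair(fetch_results: Callable[[str], List[str]],
                     query_a: str, query_b: str, k: int = 10) -> float:
    """Collect results for two query framings and measure their overlap."""
    return topk_overlap(fetch_results(query_a), fetch_results(query_b), k)

if __name__ == "__main__":
    # Canned results standing in for a real scraping step.
    canned = {
        "climate change": [f"result_{i}" for i in range(10)],
        "climate hoax": [f"result_{i}" for i in range(5, 15)],
    }
    overlap = audit_query_pair(lambda q: canned[q], "climate change", "climate hoax")
    print(f"top-10 overlap: {overlap:.2f}")
```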


Hostility Detection and Covid-19 Fake News Detection in Social Media

arXiv.org Artificial Intelligence

With the advent of social media, there has been an extremely rapid increase in the content shared online. Consequently, the propagation of fake news and hostile messages on social media platforms has also skyrocketed. In this paper, we address the problem of detecting hostile and fake content in the Devanagari (Hindi) script as a multi-class, multi-label problem. Using NLP techniques, we build a model that combines an abusive language detector with features extracted via Hindi BERT and Hindi FastText models and metadata. Our model achieves a 0.97 F1 score on coarse-grained evaluation of the hostility detection task. Additionally, we build models to identify fake news related to Covid-19 in English tweets. We leverage entity information extracted from the tweets along with textual representations learned from word embeddings and achieve a 0.93 F1 score on the English fake news detection task.
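A minimal sketch of the feature-combination idea described above: embeddings from a Hindi BERT model and a FastText model are stood in for by placeholder vectors, concatenated with simple metadata, and fed to a one-vs-rest classifier for multi-label prediction. The placeholder encoders, metadata, and label set are assumptions; the paper's actual models and features differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
LABELS = ["fake", "hate", "offensive", "defamation"]  # illustrative label set

def bert_embedding(text: str) -> np.ndarray:
    """Placeholder for a sentence embedding from a Hindi BERT model."""
    return rng.normal(size=768)

def fasttext_embedding(text: str) -> np.ndarray:
    """Placeholder for an averaged Hindi FastText word-vector embedding."""
    return rng.normal(size=300)

def metadata_features(text: str) -> np.ndarray:
    """Toy metadata: length and punctuation counts."""
    return np.array([len(text), text.count("!"), text.count("?")], dtype=float)

def featurize(text: str) -> np.ndarray:
    # Concatenate the three feature groups into a single vector.
    return np.concatenate([bert_embedding(text), fasttext_embedding(text),
                           metadata_features(text)])

# Dummy training data: placeholder texts with random multi-hot labels.
texts = [f"post {i}" for i in range(200)]
X = np.stack([featurize(t) for t in texts])
Y = rng.integers(0, 2, size=(len(texts), len(LABELS)))

# One binary classifier per label gives multi-label predictions.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:3]))
```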


Socially Responsible AI Algorithms: Issues, Purposes, and Challenges

arXiv.org Artificial Intelligence

In the current era, people and society have grown increasingly reliant on Artificial Intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks for oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, health care, services, and so on. Technologists and AI researchers have a responsibility to develop trustworthy AI systems. They have responded with great effort in designing more responsible AI algorithms. However, existing technical solutions are narrow in scope and have been primarily directed towards algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect major aspects of AI that potentially cause AI's indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation.


Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling

arXiv.org Machine Learning

Obtaining large annotated datasets is critical for training successful machine learning models, and it is often a bottleneck in practice. Weak supervision offers a promising alternative for producing labeled datasets without ground truth annotations by generating probabilistic labels using multiple noisy heuristics. This process can scale to large datasets and has demonstrated state-of-the-art performance in diverse domains such as healthcare and e-commerce. One practical issue with learning from user-generated heuristics is that their creation requires creativity, foresight, and domain expertise from those who handcraft them, a process which can be tedious and subjective. We develop the first framework for interactive weak supervision in which a method proposes heuristics and learns from user feedback given on each proposed heuristic. Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground truth training labels. We conduct user studies, which show that users are able to effectively provide feedback on heuristics and that test set results track the performance of simulated oracles.

The performance of supervised machine learning (ML) hinges on the availability of labeled data in sufficient quantity and quality. However, labeled data for applications of ML can be scarce, and the common process of obtaining labels by having annotators inspect individual samples is often expensive and time consuming. Additionally, this cost is frequently exacerbated by factors such as privacy concerns, required expert knowledge, and shifting problem definitions. Weak supervision provides a promising alternative, reducing the need for humans to hand label large datasets to train ML models (Riedel et al., 2010; Hoffmann et al., 2011; Ratner et al., 2016; Dehghani et al., 2018). A recent approach called data programming (Ratner et al., 2016) combines multiple weak supervision sources by using an unsupervised label model to estimate the latent true class label, an idea that has close connections to modeling workers in crowd-sourcing (Dawid & Skene, 1979; Karger et al., 2011; Dalvi et al., 2013; Zhang et al., 2014).
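To make the idea concrete, below is a minimal sketch of weak supervision with hand-written labeling heuristics whose votes are combined into probabilistic labels by a simple vote ratio. Real data-programming systems such as the one in Ratner et al. (2016) instead learn each heuristic's accuracy with an unsupervised label model, and the heuristics here are purely illustrative.

```python
import numpy as np

ABSTAIN, HAM, SPAM = -1, 0, 1

# Hand-written heuristics ("labeling functions"): each votes SPAM/HAM or abstains.
def lf_has_link(text):  return SPAM if "http" in text else ABSTAIN
def lf_shouting(text):  return SPAM if text.isupper() else ABSTAIN
def lf_greeting(text):  return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LFS = [lf_has_link, lf_shouting, lf_greeting]

def probabilistic_labels(texts, lfs=LFS):
    """Combine noisy heuristic votes into a probabilistic label per example.

    Uses a plain vote ratio; data programming would estimate each
    heuristic's accuracy with an unsupervised label model instead.
    """
    votes = np.array([[lf(t) for lf in lfs] for t in texts])  # shape (n, num_lfs)
    probs = []
    for row in votes:
        active = row[row != ABSTAIN]
        if active.size == 0:
            probs.append(0.5)                  # no evidence: uninformative label
        else:
            probs.append(float((active == SPAM).mean()))
    return np.array(probs)                     # P(spam) per example

if __name__ == "__main__":
    docs = ["hello, see you tomorrow",
            "CLICK NOW http://win.example",
            "meeting notes attached"]
    print(probabilistic_labels(docs))
```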


Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

arXiv.org Artificial Intelligence

In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two different paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


The 'deep fake' scare is more dangerous than AI-tech behind it

#artificialintelligence

Recognizing deepfakes is increasingly hard, if not impossible, for the untrained human eye. Overall, as most journalistic coverage of the topic tells us, deepfakes -- alongside other AI technologies, machine learning, and neural networks in general -- are here and will serve to cast a shadow of technological terror over society. In media coverage of this topic, our future is deemed dystopian -- humankind has lost the battle to machines, and episodes of the TV series "Black Mirror" will pale in comparison with the havoc sowed by technology. In fact, research I conducted with a colleague from the University of Haifa (Yael Oppenheim) has found that most images and narratives that journalists worldwide use to cover these technologies tend to stress destruction, loss, crisis, and fear regarding the future of humanity. It is, however, important to contextualize this alarmist media frenzy.


Preserving Integrity in Online Social Networks

arXiv.org Artificial Intelligence

Online social networks provide a platform for sharing information and free expression. However, these networks are also used for malicious purposes, such as distributing misinformation and hate speech, selling illegal drugs, and coordinating sex trafficking or child exploitation. This paper surveys the state of the art in keeping online platforms and their users safe from such harm, also known as the problem of preserving integrity. This survey comes from the perspective of having to combat a broad spectrum of integrity violations at Facebook. We highlight the techniques that have been proven useful in practice and that deserve additional attention from the academic community. Instead of discussing the many individual violation types, we identify key aspects of the social-media ecosystem, each of which is common to a wide variety of violation types. Furthermore, each of these components represents an area for research and development, and the innovations that are found can be applied widely.
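One family of techniques that has proven useful in practice for this kind of integrity work is matching newly uploaded media against perceptual hashes of known violating content. The difference-hash sketch below, operating on a NumPy grayscale array, is a generic illustration of that idea, not the specific matching system described in the Facebook survey; the threshold and blocklist are hypothetical.

```python
import numpy as np

def dhash(gray: np.ndarray, size: int = 8) -> int:
    """Difference hash of a grayscale image (2-D array of intensities).

    Downsamples by block averaging to (size, size + 1), then encodes
    whether each pixel is brighter than its right neighbor as one bit.
    """
    h, w = gray.shape
    rows = np.array_split(np.arange(h), size)
    cols = np.array_split(np.arange(w), size + 1)
    small = np.array([[gray[np.ix_(r, c)].mean() for c in cols] for r in rows])
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    noisy = img + rng.normal(0, 2, size=img.shape)   # near-duplicate re-upload
    known_violations = {dhash(img)}                   # hypothetical blocklist
    dist = min(hamming(dhash(noisy), h) for h in known_violations)
    print("match" if dist <= 10 else "no match", f"(distance {dist})")
```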


AI Invasion in Journalism is Revolutionising the Trend of News Reporting

#artificialintelligence

Journalism is a vast industry. This never-tiring sector needs substantial human power for jobs ranging from field reporting to approving and publishing copy. Thousands of journalists are on the ground covering stories and doing live telecasts across the globe. However, it is very rare for an ordinary news agency to consider bringing Artificial Intelligence (AI) technologies or a robot into its workflow. Even as well-established media houses are in the process of doing so, small news agencies struggle to digest the fact that AI can aid them in many ways.