Results


Modeling Ideological Agenda Setting and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity

arXiv.org Artificial Intelligence

The increasing polarization of online political discourse calls for computational tools that can automatically detect and monitor ideological divides on social media. Here, we introduce a minimally supervised method that directly leverages the network structure of online discussion forums, specifically Reddit, to detect polarized concepts. We model polarization along the dimensions of agenda setting and framing, drawing upon insights from moral psychology. The architecture we propose combines graph neural networks with structured sparsity and yields representations for concepts and subreddits that capture phenomena such as ideological radicalization and subreddit hijacking. We also create a new dataset of political discourse covering 12 years and more than 600 online groups with different ideologies.
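As a hedged illustration of the architectural idea described above (this is not the authors' code), the sketch below combines a one-layer graph convolution over a joint concept-subreddit graph with a group-lasso penalty, a standard structured-sparsity regularizer that can drive entire embedding rows to zero. The class name, dimensions, placeholder adjacency, and stand-in task loss are all assumptions for illustration.

```python
# Minimal sketch: graph convolution + structured sparsity (group lasso).
# Not the paper's implementation; shapes and names are illustrative.
import torch
import torch.nn as nn

class SparseConceptGCN(nn.Module):
    def __init__(self, num_nodes: int, dim: int = 64):
        super().__init__()
        # One shared embedding table for concept and subreddit nodes
        self.embed = nn.Embedding(num_nodes, dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: (num_nodes, num_nodes) normalized adjacency matrix
        h = self.lin(self.embed.weight)
        return torch.relu(adj_norm @ h)  # one message-passing step

def group_lasso(weight: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Sum of per-row L2 norms: prunes whole rows (concepts), not single weights
    return torch.sqrt((weight ** 2).sum(dim=1) + eps).sum()

model = SparseConceptGCN(num_nodes=1000)
adj_norm = torch.eye(1000)            # placeholder for the real graph
reps = model(adj_norm)
task_loss = reps.pow(2).mean()        # stand-in for the real objective
loss = task_loss + 1e-3 * group_lasso(model.embed.weight)
loss.backward()
```

The group-lasso term is what makes the sparsity "structured": rather than zeroing individual weights, it prunes whole embedding rows, so only a compact set of concepts ends up carrying the polarization signal.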


Mitigating Media Bias through Neutral Article Generation

arXiv.org Artificial Intelligence

Media bias can intensify political polarization, so the need for automatic mitigation methods is growing. Existing mitigation work displays articles from multiple news outlets to provide diverse coverage, but does not neutralize the bias inherent in each displayed article. We therefore propose a new task, generating a single neutralized article from multiple biased articles, to facilitate more efficient access to balanced and unbiased information. In this paper, we compile a new dataset, NeuWS, define an automatic evaluation metric, and provide baselines and multiple analyses to serve as a solid starting point for the proposed task. Lastly, we conduct a human evaluation to demonstrate the alignment between our metric and human judgment.
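For orientation, here is a minimal sketch of a generic multi-document generation baseline for this kind of task. It is not the paper's NeuWS system; the checkpoint, separator, and decoding parameters are assumptions. The idea is simply to concatenate several biased articles and let a pretrained seq2seq summarizer generate a single fused article.

```python
# Minimal sketch of a naive baseline, not the paper's method:
# fuse multiple biased articles with a generic pretrained summarizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "facebook/bart-large-cnn"      # assumed generic summarizer
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

articles = [
    "Text of a left-leaning article ...",
    "Text of a right-leaning article ...",
]
# Join the inputs with the model's separator token (an assumed format)
inputs = tok(" </s> ".join(articles), return_tensors="pt",
             truncation=True, max_length=1024)
ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tok.decode(ids[0], skip_special_tokens=True))
```

A baseline like this fuses content but does not explicitly neutralize framing, which is precisely the gap the proposed task and evaluation metric are meant to measure.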


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


3 Kansas police officers injured by modified shotgun inside vacant home: cops

FOX News

Authorities in Wichita, Kan., said Sunday that they are investigating a shooting that injured three police officers over the weekend, and are working to determine whether a shotgun had been rigged to the door. A "modified, loaded shotgun" discharged as the officers entered a home in the city on Saturday, according to a release by Wichita Police Department spokesman Officer Trevor Macy. "Apparently there were several modifications made to this one," Macy told The Wichita Eagle.


FBI has trail of 140,000 images of selfie-loving Capitol siege suspects

FOX News

Selfie-snapping Capitol rioters left investigators a treasure trove of evidence -- at least 140,000 pictures and videos taken during the deadly Jan. 6 siege, according to federal prosecutors. The mass of digital evidence from media reports, live-streams and social media posts has been crucial to the FBI, which by Friday had identified more than 275 suspects, with close to 100 charged, officials said. Investigators have been working with social media and phone companies to help identify suspects -- as well as using advanced facial recognition technology, according to Bloomberg News. [Photo caption] FILE: Rioters try to break through a police barrier at the Capitol in Washington.


Socially Responsible AI Algorithms: Issues, Purposes, and Challenges

arXiv.org Artificial Intelligence

In the current era, people and society have grown increasingly reliant on Artificial Intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes, but it also comes with substantial risks of oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years across many quarters, including industry, academia, health care, and public services. Technologists and AI researchers have a responsibility to develop trustworthy AI systems, and they have responded with great effort, designing more responsible AI algorithms. However, existing technical solutions are narrow in scope, directed primarily at algorithms for scoring or classification tasks with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and to address the major aspects of AI that can cause its indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that examines the subjects of AI indifference and the need for socially responsible AI algorithms, defines the objectives, and introduces the means by which these objectives may be achieved. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation.


Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

arXiv.org Artificial Intelligence

In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice drawing on concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms, which for simplicity we term artificial stupidity (AS) and eternal creativity (EC). While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap on many short-term considerations, they differ fundamentally in the nature of the long-term solution patterns they envisage. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


The State of AI Ethics Report (October 2020)

arXiv.org Artificial Intelligence

The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and the future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight into the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation about the impacts of AI on the world.


Graph-based Topic Extraction from Vector Embeddings of Text Documents: Application to a Corpus of News Articles

arXiv.org Artificial Intelligence

Production of news content is growing at an astonishing rate. To help manage and monitor this sheer volume of text, there is an increasing need for efficient methods that can provide insights into emerging content areas and stratify unstructured corpora of text into 'topics' that stem intrinsically from content similarity. Here we present an unsupervised framework that brings together powerful vector embeddings from natural language processing with tools from multiscale graph partitioning that can reveal natural partitions at different resolutions without making a priori assumptions about the number of clusters in the corpus. We show the advantages of graph-based clustering through end-to-end comparisons with other popular clustering and topic modelling methods, and also evaluate different text vector embeddings, from classic Bag-of-Words to Doc2Vec to the recent transformer-based model BERT. This comparative work is showcased through an analysis of a corpus of US news coverage during the presidential election year of 2016.
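A minimal sketch of the embed-then-partition pipeline follows, with plainly named substitutions: TF-IDF vectors stand in for the Doc2Vec/BERT embeddings, and off-the-shelf modularity-based community detection stands in for the paper's multiscale graph partitioning. All documents and parameter values are illustrative.

```python
# Minimal sketch: document vectors -> kNN graph -> graph communities.
# Standard substitutes for the paper's components; parameters illustrative.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import kneighbors_graph

docs = [
    "candidates spar over economic policy in final debate",
    "poll shows tight race in key swing states",
    "new study links diet to heart disease risk",
    "doctors urge screening after heart disease findings",
]

# 1) Embed documents (TF-IDF here; the paper also evaluates Doc2Vec and BERT)
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# 2) Build a sparse kNN similarity graph over the document vectors
knn = kneighbors_graph(X, n_neighbors=2, mode="connectivity",
                       metric="cosine")
G = nx.from_scipy_sparse_array(knn)

# 3) Partition the graph; each community is read off as a 'topic'
for i, members in enumerate(greedy_modularity_communities(G)):
    print(f"topic {i}:", [docs[j][:40] for j in sorted(members)])
```

Swapping step 3 for a multiscale partitioning method, as the paper does, would recover the ability to reveal topics at several resolutions rather than at a single implicit one.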


How the Police Use AI to Track and Identify You

#artificialintelligence

Surveillance has become an increasingly controversial application of AI, given the rapid pace at which AI systems are being developed and deployed worldwide. While protestors marched through the city demanding justice for George Floyd and an end to police brutality, Minneapolis police trained surveillance tools on them. With just hours to sift through thousands of CCTV camera feeds and other dragnet data streams, the police turned to a range of automated systems for help, reaching for information collected by automated license plate readers, CCTV-video analysis software, open-source geolocation tools, and Clearview AI's controversial facial recognition system. High above the city, an unarmed Predator drone flew in circles, outfitted with a specialized camera, first pioneered by police in Baltimore, that is capable of identifying individuals from 10,000 feet in the air, providing real-time surveillance of protestors across the city. But Minneapolis is not an isolated case of excessive policing and technology run amok. Instead, it is part of a larger strategy by state, local, and federal governments to build surveillance dragnets that pull in people's emails, texts, bank records, and smartphone locations, as well as their faces, movements, and physical whereabouts, to equip law enforcement with unprecedented tools to search for and identify Americans without a warrant.