
Biased Algorithms Are a Racial Justice Issue

#artificialintelligence

Decisions on where to send police patrol cars, which foster parents to investigate, and who gets released on bail before trial are some of the most important, life-or-death decisions made by our government. And, increasingly, those decisions are being automated. The last eight years have seen an explosion in the capability of artificial intelligence, which is now used for everything from arranging your news feed on Facebook to identifying enemy combatants for the U.S. military. The automated decisions that affect us the most are somewhere in the middle. A.I.'s big feature is essentially pattern matching.


Weakly Supervised Learning of Nuanced Frames for Analyzing Polarization in News Media

arXiv.org Artificial Intelligence

In this paper we propose a minimally supervised approach for identifying nuanced frames in news coverage of politically divisive topics. We break the broad policy frames of Boydstun et al. (2014) into fine-grained subframes that better capture differences in political ideology. We evaluate the proposed subframes and their embeddings, learned using minimal supervision, on three topics: immigration, gun control, and abortion. We demonstrate that the subframes capture ideological differences and support the analysis of political discourse in news media.
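The core idea, assigning text to fine-grained subframes using only a handful of seed words as supervision, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the subframe names, seed words, and 3-dimensional "embeddings" below are all invented, standing in for pretrained word embeddings.

```python
# Toy sketch of minimally supervised subframe assignment: each subframe is
# seeded with a few keywords, and a sentence is assigned to the subframe whose
# seed centroid is closest in cosine similarity. All vectors are hypothetical.
from math import sqrt

# Hypothetical 3-d "embeddings" for a handful of words; a real system would
# use pretrained embeddings over a large vocabulary.
WORD_VECS = {
    "border":   (0.9, 0.1, 0.0),
    "security": (0.8, 0.2, 0.1),
    "family":   (0.1, 0.9, 0.2),
    "children": (0.0, 0.8, 0.3),
    "economy":  (0.2, 0.1, 0.9),
    "jobs":     (0.1, 0.2, 0.8),
}

# A few hand-picked seed words per subframe -- the "minimal supervision".
SUBFRAME_SEEDS = {
    "security-threat": ["border", "security"],
    "family-unity":    ["family", "children"],
    "economic-impact": ["economy", "jobs"],
}

def mean_vec(words):
    """Average the vectors of the in-vocabulary words."""
    vecs = [WORD_VECS[w] for w in words if w in WORD_VECS]
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(3))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def assign_subframe(tokens):
    """Return the subframe whose seed centroid best matches the tokens."""
    doc = mean_vec(tokens)
    return max(SUBFRAME_SEEDS, key=lambda f: cosine(doc, mean_vec(SUBFRAME_SEEDS[f])))

print(assign_subframe(["border", "security", "jobs"]))  # -> security-threat
```

Because the only supervision is the seed lists, adding a new subframe is just a matter of naming a few keywords, which is what makes this style of weak supervision attractive for nuanced frames.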


Are any of us safe from deepfakes? - TechHQ

#artificialintelligence

Deepfakes may have innocent and fun applications -- companies like RefaceAI and Morphin enable users to swap their faces with those of popular celebrities in a GIF or digital content format. But like a double-edged sword, the more realistic the content looks, the greater the potential for deception. Deepfakes have been ranked by experts as one of the most serious artificial intelligence (AI) crime threats, based on the wide array of criminal and terrorist activities they can enable. A study by University College London (UCL) identified 20 ways AI can be deployed for criminal ends, ranking these emerging threats by the severity of the crime, the profit to be gained, and the difficulty of combating them. When the term was first coined, the idea of deepfakes triggered widespread concern, mostly centered on the misuse of the technology to spread misinformation, especially in politics.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
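The prompt-seeding tactic described above can be made concrete with a small helper. This is an illustrative sketch, not a specific API call: the wrapper text and the `seed` default are assumptions, showing only the principle that ending the prompt with the first words of the desired output constrains the model's mode of completion.

```python
# Sketch of prompt seeding: instead of asking an open question, end the prompt
# with the opening words of the target output, so the model must continue in
# that mode rather than pivot to another style of completion.
def build_summarization_prompt(passage, seed="It means that"):
    """Build a plain-language summarization prompt ending in a seed phrase."""
    return (
        "My second grader asked me what this passage means:\n"
        f'"""{passage}"""\n'
        "I rephrased it for him, in plain language a second grader can understand:\n"
        f'"""{seed}'
    )

prompt = build_summarization_prompt("Quantum entanglement correlates particle states.")
# The prompt deliberately ends mid-quote, so any completion continues the seed.
print(prompt.endswith('"""It means that'))  # True
```

The key design choice is leaving the final quotation block open: the model's most natural continuation is to finish the seeded sentence, not to start a new mode.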


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Simulating Offender Mobility: Modeling Activity Nodes from Large-Scale Human Activity Data

Journal of Artificial Intelligence Research

In recent years, simulation techniques have been applied to investigate the spatiotemporal dynamics of crime. Researchers have instantiated mobile offenders in agent-based simulations for theory testing, experimenting with crime prevention strategies, and exploring crime prediction techniques, despite facing challenges due to the complex dynamics of crime and the lack of detailed information about offender mobility. This paper presents a simulation model to explore offender mobility, focusing on the interplay between the agent's awareness space and activity nodes. The simulation generates patterns of individual mobility that aim to cumulatively match crime patterns. To instantiate a realistic urban environment, we use open data to simulate the urban structure, location-based social network data to represent activity nodes as a proxy for human activity, and taxi trip data as a proxy for human movement between regions of the city. We analyze and systematically compare 35 different mobility strategies and demonstrate the benefits of using large-scale human activity data to simulate offender mobility. Strategies combining taxi trip data or historic crime data with popular activity nodes outperform the others, especially for robbery. Our approach provides a basis for building agent-based crime simulations that infer offender mobility in urban areas from real-world data.
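One building block of such a simulation, an agent choosing its next activity node with probability proportional to an activity weight, can be sketched as below. This is not the paper's model: the node names and weights are invented stand-ins for check-in or taxi-trip counts.

```python
# Illustrative mobility step for an agent-based crime simulation: the agent
# moves between activity nodes, sampling the next node proportionally to a
# popularity weight (a stand-in for location-based social network check-ins
# or taxi-trip counts). All names and weights are hypothetical.
import random

ACTIVITY_NODES = {"station": 50, "mall": 30, "park": 15, "dock": 5}

def next_node(current, nodes, rng):
    """Sample the next activity node, excluding the current one."""
    candidates = {n: w for n, w in nodes.items() if n != current}
    names = list(candidates)
    weights = [candidates[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the walk is reproducible
path = ["park"]
for _ in range(5):
    path.append(next_node(path[-1], ACTIVITY_NODES, rng))
print(path)
```

Swapping the static weights for data-driven transition counts (e.g. taxi trips between city regions) turns this uniform-popularity walk into the kind of data-informed mobility strategy the paper compares.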


Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions

#artificialintelligence

Artificial intelligence (AI) may play an increasingly essential role in criminal acts in the future. Criminal acts are defined here as any act (or omission) constituting an offence punishable under English criminal law, without loss of generality to jurisdictions that similarly define crime. Evidence of "AI-Crime" (AIC) is provided by two (theoretical) research experiments. In the first one, two computational social scientists (Seymour and Tully 2016) used AI as an instrument to convince social media users to click on phishing links within mass-produced messages. Because each message was constructed using machine learning techniques applied to users' past behaviours and public profiles, the content was tailored to each individual, thus camouflaging the intention behind each message. If the potential victim had clicked on the phishing link and filled in the subsequent web-form, then (in real-world circumstances) a criminal would have obtained personal and private information that could be used for theft and fraud. AI-fuelled crime may also impact commerce. In the second experiment, three computer scientists (Martínez-Miranda et al. 2016) simulated a market and found that trading agents could learn and execute a "profitable" market manipulation campaign comprising a set of deceitful false-orders. These two experiments show that AI provides a feasible and fundamentally novel threat, in the form of AIC. The importance of AIC as a distinct phenomenon has not yet been acknowledged. The literature on AI's ethical and social implications focuses on regulating and controlling AI's civil uses, rather than considering its possible role in crime (Kerr 2004).


Go read this NYT expose on a creepy new facial recognition database used by US police

#artificialintelligence

Hundreds of law enforcement agencies across the US have started using a new facial recognition system from Clearview AI, an investigation by The New York Times has revealed. The database is made up of billions of images scraped from millions of sites including Facebook, YouTube, and Venmo. The Times says that Clearview AI's work could "end privacy as we know it," and the piece is well worth a read in its entirety. The use of facial recognition systems by police is already a growing concern, but the scale of Clearview AI's database, not to mention the methods used to assemble it, is particularly troubling. The Clearview system is built upon a database of over three billion images scraped from the internet, a process which may have violated websites' terms of service.


How AI is preventing email phishing attacks

#artificialintelligence

Since its invention in 1970, email has changed very little. Its ease of use has made it the most common method of business communication, used by 3.7 billion users worldwide. At the same time, it has become the most targeted intrusion point for cybercriminals, with devastating outcomes. When initially envisioned, email was built for connectivity. Network communication was in its early days, and merely creating a digital alternative to the mailbox was revolutionary and difficult enough.


A Rule-Based Model for Victim Prediction

arXiv.org Artificial Intelligence

In this paper, we propose a novel automated model, the Vulnerability Index for Population at Risk (VIPAR) score, to identify rare populations at risk of future shooting victimization. Similarly, the focused deterrence approach identifies vulnerable individuals and offers certain types of treatment (e.g., outreach services) to prevent violence in communities. The proposed rule-based engine is the first AI-based model for victim prediction. This paper compares the focused deterrence strategy's list with the VIPAR score list in terms of predictive power for future shooting victimization. Drawing on criminological studies, the model uses age, past criminal history, and peer influence as the main predictors of future violence. Social network analysis is employed to measure the influence of peers on the outcome variable, and logistic regression analysis is used to verify the variable selections. Our empirical results show that VIPAR scores predict 25.8% of future shooting victims and 32.2% of future shooting suspects, whereas the focused deterrence list predicts 13% and 9.4%, respectively. The model outperforms the intelligence list of focused deterrence policies in predicting future fatal and non-fatal shootings. Finally, we discuss concerns regarding the presumption of innocence.
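The shape of a rule-based score over the three named predictors (age, past criminal history, peer influence) can be sketched as below. The actual VIPAR rules are not given in this summary, so every threshold and weight here is invented purely for illustration.

```python
# Hypothetical rule-based vulnerability score in the spirit of VIPAR.
# The age bands, caps, and weights below are invented for illustration only;
# they are NOT the model's actual rules.
def vipar_score(age, prior_arrests, peers_shot):
    """Combine age band, criminal history, and peer influence into one index."""
    score = 0
    if 18 <= age <= 30:       # young adults carry the highest base risk
        score += 3
    elif age <= 40:           # minors and 31-40 carry a smaller base risk
        score += 1
    score += min(prior_arrests, 5)    # past criminal history, capped at 5
    score += 2 * min(peers_shot, 3)   # peer influence from the social network
    return score

# Rank a toy population and flag the top of the list for outreach services.
people = {"A": (22, 4, 2), "B": (55, 0, 0), "C": (17, 1, 3)}
ranked = sorted(people, key=lambda p: vipar_score(*people[p]), reverse=True)
print(ranked)  # -> ['A', 'C', 'B']
```

A transparent rule list like this is easy to audit, which matters given the presumption-of-innocence concerns the paper raises; the logistic regression step mentioned above would serve to check that each rule's predictor actually carries signal.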