Researchers use AI to predict crime, biased policing in major U.S. cities like L.A.

Los Angeles Times

For once, algorithms that predict crime might be used to uncover bias in policing instead of reinforcing it. A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities, including Los Angeles. Instead of justifying more aggressive policing in those areas, however, the hope is that the technology will lead to "changes in policy that result in more equitable, need-based resource allocation," including sending officials other than law enforcement to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour. The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes.


AI Algorithm Predicts Future Crimes One Week in Advance With 90% Accuracy

#artificialintelligence

Our model enables discovery of these connections." The new model isolates crime by looking at the time and spatial coordinates of discrete events and detecting patterns to predict future events. It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas instead of relying on traditional neighborhood or political boundaries, which are also subject to bias. The model performed just as well with data from seven other U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco. "We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates a fresh view on neighborhoods in the city, allows us to ask novel questions, and lets us evaluate police action in new ways," Evans said. Chattopadhyay is careful to note that the tool's accuracy does not mean that it should be used to direct law enforcement, with police departments using it to swarm neighborhoods proactively to prevent crime. Instead, it should be added to a toolbox of urban policies and policing strategies to address crime. "We created a digital twin of urban environments.
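The tile-based discretization described above can be illustrated with a short sketch: incidents are bucketed into roughly 1,000-foot square cells and daily time steps before any forecasting happens. This is a minimal illustration under stated assumptions, not the authors' model; the coordinate fields, the grid origin, and the toy events below are hypothetical.

from collections import defaultdict

TILE_FEET = 1000  # approximate tile width reported for the model

def tile_of(x_feet, y_feet):
    # Map a planar coordinate (in feet) to its (col, row) tile index.
    return (int(x_feet // TILE_FEET), int(y_feet // TILE_FEET))

def event_series(events):
    # Count events per (tile, day); each event is an (x_feet, y_feet, day) triple.
    counts = defaultdict(int)
    for x, y, day in events:
        counts[(tile_of(x, y), day)] += 1
    return counts

# Hypothetical toy events: two incidents in one tile on day 0, one elsewhere on day 1.
events = [(120.0, 950.0, 0), (400.0, 500.0, 0), (5200.0, 1800.0, 1)]
print(dict(event_series(events)))
# {((0, 0), 0): 2, ((5, 1), 1): 1}

The resulting per-tile, per-day event counts are the kind of spatio-temporal series a predictive model could learn from without relying on neighborhood or political boundaries.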


Orange County man arrested, accused of stalking 'World of Warcraft' video game player

Los Angeles Times

A former Marine from Orange County has been arrested and faces federal charges for allegedly creating hundreds of Twitter accounts used to stalk a professional video game player who lives in Calgary, Canada, authorities said. Evan Baltierra, 29, was arrested Monday by FBI agents at his home in Trabuco Canyon on suspicion of stalking, according to federal prosecutors. He admitted to investigators that he harassed the woman, who made her living as a professional online gamer playing the popular "World of Warcraft," authorities said. The suspect "orchestrated a campaign of harassment targeting the victim, her boyfriend, her friends and her boyfriend's family," according to court records. Baltierra and his attorney could not be reached for comment.


Church where shooting took place was home away from home for Taiwanese immigrants

Los Angeles Times

The Irvine Taiwanese Presbyterian Church has never had a home. It started in 1994 in borrowed space in another church in its namesake city. It eventually moved to another borrowed space in a Tustin church before settling at Geneva Presbyterian Church in Laguna Woods in 2012. On Sundays, the Taiwanese group worships at 10 a.m., while the Geneva group gathers separately at 10:30. The 100 or so church members, most of whom are senior citizens, worship in their native language -- not Mandarin but Taiwanese, a dialect that was once suppressed by the Kuomintang regime.


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.


People trust AI fake faces more than real ones, study finds

#artificialintelligence

Fake faces created by artificial intelligence (AI) are considered more trustworthy than images of real people, a study has found. The results highlight the need for safeguards to prevent deep fakes, which have already been used for revenge porn, fraud and propaganda, the researchers behind the report say. The study – by Dr Sophie Nightingale from Lancaster University in the UK and Professor Hany Farid from the University of California, Berkeley, in the US – asked participants to identify a selection of 800 faces as real or fake, and to rate their trustworthiness. After three separate experiments, the researchers found the AI-created synthetic faces were on average rated 7.7% more trustworthy than the average rating for real faces. This is "statistically significant", they add.
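The reported 7.7% gap can be made concrete with a short sketch comparing mean trustworthiness ratings for the two groups. The rating arrays below are simulated stand-ins, not the study's data, and the choice of Welch's t-test and the sample sizes are assumptions for illustration only.

import numpy as np
from scipy import stats

# Simulated stand-in ratings on a 1-7 trustworthiness scale; the means are
# chosen so synthetic faces score roughly 7.7% higher, as the article reports.
rng = np.random.default_rng(0)
real = rng.normal(4.48, 0.8, 400)
synthetic = rng.normal(4.82, 0.8, 400)

lift = (synthetic.mean() - real.mean()) / real.mean() * 100
t, p = stats.ttest_ind(synthetic, real, equal_var=False)  # Welch's t-test
print(f"mean lift: {lift:.1f}%  t={t:.2f}  p={p:.2g}")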


California man robbed more than 20 gay men he met on Grindr dating app, DOJ says

FOX News

A Southern California man robbed more than 20 dates he met on a gay dating app and stabbed one victim in the chest during one robbery, federal prosecutors said Tuesday. Derrick Patterson, 22, a resident of the Los Angeles suburb of Compton, was arrested Monday by the FBI. His most recent robbery occurred on March 26 at a Beverly Hills hotel, authorities said.


The New Intelligence Game

#artificialintelligence

The relevance of the video is that the browser identified the application being used by the IAI as Google Earth. According to the OSC 2006 report, the Arabic-language caption reads "Islamic Army in Iraq/The Military Engineering Unit – Preparations for Rocket Attack," and the video was recorded on 5/1/2006; we provide, in Appendix A, a reproduction of the screenshot made available in the OSC report. Prior to the release of this video demonstrating the use of Google Earth to plan attacks, discussions had already taken place in the OSC-monitored online forums, per the OSC 2006 report, on the use of Google Earth as a GEOINT tool for terrorist planning. On August 5, 2005, the user "Al-Illiktrony" posted a message to the Islamic Renewal Organization forum titled "A Gift for the Mujahidin, a Program To Enable You to Watch Cities of the World Via Satellite," dedicating Google Earth to the mujahidin brothers and to Shaykh Muhammad al-Mas'ari; "Al-Mushtaq al-Jannah" replied in the forum, warning that Google programs retain complete information about their users. This is a relevant issue, but there are two caveats. First, given the number of Google Earth users, it may be difficult for Google to flag a jihadist using the functionality in time to prevent an attack plan. One possible solution would be for Google to flag computers based on searched websites and locations, for instance computers that visit certain critical sites, but this approach breaks down when landmarks are used. Second, an attacker need not use his own computer to perform the search, and may even mask the IP address. On October 3, 2005, as described in the OSC 2006 report, in reply to a posting by Saddam Al-Arab on the Baghdad al-Rashid forum requesting the identification of a roughly sketched map, "Almuhannad" posted a link to a site that provided a free download of Google Earth, suggesting that the satellite imagery from Google's service could help identify the sketch.


Technology Ethics in Action: Critical and Interdisciplinary Perspectives

arXiv.org Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, examining the relationships among ethics, technology, and society in action: the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.