ProPublica
Your Data Might Determine How Much You Pay for Eggs
A newly enacted New York law requires retailers to say whether your data influences the price of basic goods like a dozen eggs or toilet paper, but not how. If you're near Rochester, New York, a carton of Target's Good & Gather eggs is listed at $1.99 on the retailer's website; shoppers elsewhere may see a different figure. It's unclear why the prices differ, but a new notice on Target's website offers a potential hint: "This price was set by an algorithm using your personal data." A recently enacted New York State law requires businesses that algorithmically set prices using customers' personal data to disclose that they do so. According to the law, personal data includes any data that can be "linked or reasonably linked, directly or indirectly, with a specific consumer or device." The law doesn't require businesses to explicitly state what information about a person or device is being used or how each piece of information affects the final price a customer sees.
- North America > United States > New York > Monroe County > Rochester (0.25)
- North America > United States > California (0.05)
- North America > United States > Pennsylvania (0.05)
- (2 more...)
- Retail (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (0.77)
- Government > Regional Government > North America Government > United States Government (0.50)
Why You Need an AI Ethics Committee
Artificial intelligence poses a lot of ethical risks to businesses: It may promote bias, lead to invasions of privacy, and in the case of self-driving cars, even cause deadly accidents. Because AI is built to operate at scale, when a problem occurs, the impact is huge. Consider the AI that many health systems were using to spot high-risk patients in need of follow-up care. Researchers found that only 18% of the patients identified by the AI were Black—even though Black people accounted for 46% of the sickest patients. And the discriminatory AI was applied to at least 100 million patients. The sources of problems in AI are many. For starters, the data used to train it may reflect historical bias. The health systems' AI was trained with data showing that Black people received fewer health care resources, leading the algorithm to infer that they needed less help. The data may undersample certain subpopulations. Or the wrong goal may be set for the AI. Such issues aren't easy to address, and they can't be remedied with a technical fix. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys, identify the ethical risks it presents, and determine how to mitigate them. This article describes how to set up such a committee effectively.
- North America > United States > Florida > Broward County (0.04)
- Asia > India (0.04)
Justitia ex Machina: The Case for Automating Morals
This piece was a finalist for the inaugural Gradient Prize. Machine learning is a powerful technique for automatically learning models from data; it has recently been the driving force behind several impressive technological leaps such as self-driving cars, robust speech recognition, and, arguably, better-than-human image recognition. We rely on these machine learning models daily; they influence our lives in ways we did not expect, and they are only going to become even more ubiquitous. Consider a few example machine learning models: 1) detecting cats in images, 2) deciding which ads to show you online, 3) predicting which areas will suffer crime, and 4) predicting how likely a criminal is to re-offend. The first two seem harmless enough.
- Health & Medicine (0.69)
- Transportation (0.56)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.47)
- Law (0.47)
Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI
Alikhademi, Kiana, Richardson, Brianna, Drobina, Emma, Gilbert, Juan E.
Many ML models are opaque to humans, producing decisions too complex for humans to easily understand. In response, explainable artificial intelligence (XAI) tools that analyze the inner workings of a model have been created. Despite these tools' strength in translating model behavior, critics have raised concerns that XAI tools can enable "fairwashing" by misleading users into trusting biased or incorrect models. In this paper, we created a framework for evaluating explainable AI tools with respect to their capabilities for detecting and addressing issues of bias and fairness, as well as their capacity to communicate these results to their users clearly. We found that despite their capabilities in simplifying and explaining model behavior, many prominent XAI tools lack features that could be critical in detecting bias. Developers can use our framework to identify modifications needed in their toolkits to reduce issues like fairwashing.
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.70)
Can AI Be a Racist Too?
This predisposition can cause AI to exhibit racism, sexism, or other kinds of discrimination. The issue is typically viewed as political and disregarded by researchers, with the result that mostly non-technical people write on the topic. These commentators frequently propose policy recommendations to increase diversity among AI researchers. The irony is striking: a Black AI researcher can't build an AI any differently than a white AI researcher can.
Are Your Algorithms Upholding Your Standards of Fairness?
In the wake of recent high-profile AI bias scandals, companies have begun to realize that they need to rethink their AI strategy to include not just AI Fairness, but also Algorithmic Fairness more broadly as a fundamental tenet. At the Pragmatic Institute, we educate Fortune 500 companies about data science and AI. Through our work, we've discovered that many companies struggle to form a clear definition of algorithmic fairness for their organization. Without a clear definition, well-meaning fairness initiatives languish in the realm of good intentions and never arrive at meaningful impact. But defining fairness is not as easy as it may seem. Two examples highlight just how challenging this can be.
- North America > United States > Illinois > Cook County > Chicago (0.06)
- North America > United States > New York > Bronx County > New York City (0.05)
Dating app Plenty of Fish reveals it leaked private names and zip codes of users
Researchers discovered the dating app Plenty of Fish was leaking information that users had set to private on their profiles. Users' names and zip codes were exposed in the app's API, allowing malicious actors to pinpoint a user's location. Although the data was scrambled, experts were able to recover the information using freely available tools designed to analyze network traffic, as first reported by TechCrunch. The discovery was made by The App Analyst, an expert in digital apps, who found that sensitive data was visible via Plenty of Fish's API on October 20th. A fix was developed and tested on November 5th, and on December 18th the company confirmed the sensitive data was no longer present in its API.
Would background checks make dating apps safer?
Match Group, the largest dating app conglomerate in the US, doesn't perform background checks on any of its apps' free users. A ProPublica report today highlights a few incidents in which registered sex offenders went on dates with women who had no idea they were talking to a convicted criminal. These men then raped the women on their dates, leaving the women to report them to the police and to the apps' moderators. These women expected their dating apps to protect them, or at least vet users, only to discover that Match has little to no insight into who's using its apps. The piece walks through individual attacks and argues that the apps have no real case for not vetting their users.
Actually, it's about Ethics, AI, and Journalism: Reporting on and with Computation and Data
We live in a data society. Journalists are becoming data analysts and data curators, and computation is an essential tool for reporting. Data and computation reshape the way a reporter sees the world and composes a story. They also control the operation of the information ecosystem she sends her journalism into, influencing where it finds audiences and generates discussion. So every reporting beat is now a data beat, and computation is an essential tool for investigation. But digitization is affected by inequities, leaving gaps that often reflect the very disparities reporters seek to illustrate. Computation is creating new systems of power and inequality in the world. We rely on journalists, the "explainers of last resort"[1], to hold these new constellations of power to account. We report on computation, not just with computation. While a term with considerable history and mystery, artificial intelligence (AI) represents the most recent bundling of data and computation to optimize business decisions, automate tasks, and, from the point of view of a reporter, learn about the world. The relationship between a journalist and AI is not unlike the process of developing sources or cultivating fixers. As with human sources, artificial intelligences may be knowledgeable, but they are not free of subjectivity in their design -- they also need to be contextualized and qualified. Ethical questions of introducing AI in journalism abound. But since AI has once again captured the public imagination, it is hard to have a clear-eyed discussion about the issues involved with journalism's call to both report on and with these new computational tools. And so our article will alternate a discussion of issues facing the profession today with a "slant narrative" -- indicated by setting these sections in italics.
The slant narrative starts with the 1964 World's Fair and a partnership between IBM and The New York Times, winds through commentary by Joseph Weizenbaum, a famed figure in AI research in the 1960s, and ends in 1983 with the shuttering of one of the most ambitious information delivery systems of the time. The simplicity of the role of computation in the slant narrative will help us better understand our contemporary situation with AI. But we begin our article with context for the use of data and computation in journalism -- a short, and certainly incomplete, history before we settle into the rhythm of alternating narratives. Reporters depend on data, and through computation they make sense of that data. This reliance is not new. Joseph Pulitzer listed a series of topics that should be taught to aspiring journalists in his 1904 article "The College of Journalism."
- North America > United States > New York (0.05)
- North America > United States > Indiana > Monroe County > Bloomington (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (6 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Education > Educational Setting > Higher Education (0.67)
The pitfalls of a 'retrofit human' in AI systems
Stanislav Petrov is not a famous name in the computer science space, like Ada Lovelace or Grace Hopper, but his story serves as a critical lesson for developers of AI systems. Petrov, who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union's Air Defense Forces. On September 26, 1983, an alarm indicated that the U.S. had launched five nuclear-armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this technical detection system, was to escalate to leadership to launch Soviet missiles in retaliation, ensuring mutually assured destruction. As the sirens blared, he took a moment to pause and think critically. Why would the U.S. send only five missiles?