Opt out: how to protect your data and privacy if you own a Tesla

The Guardian

Welcome to Opt Out, a semi-regular column in which we help you navigate your online privacy and show you how to say no to surveillance. The last column covered how to protect your phone and data privacy at the US border. If you'd like to skip to a section about a particular tip, click the "Jump to" menu at the top of this article. At the press of a button, your Tesla pulls itself out of a parking spot with no one behind the wheel, using a feature called Summon. It drives itself on highways using Autopilot. When you arrive at your destination, it can record nearby activity while parked with a feature called Sentry Mode.


Foundation Model Transparency Reports

Bommasani, Rishi, Klyman, Kevin, Longpre, Shayne, Xiong, Betty, Kapoor, Sayash, Maslej, Nestor, Narayanan, Arvind, Liang, Percy

arXiv.org Artificial Intelligence

Foundation models are critical digital technologies with sweeping societal impact that necessitates transparency. To codify how foundation model developers should provide transparency about the development and deployment of their models, we propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media. While external documentation of societal harms prompted social media transparency reports, our objective is to institutionalize transparency reporting for foundation models while the industry is still nascent. To design our reports, we identify 6 design principles given the successes and shortcomings of social media transparency reporting. To further schematize our reports, we draw upon the 100 transparency indicators from the Foundation Model Transparency Index. Given these indicators, we measure the extent to which they overlap with the transparency requirements included in six prominent government policies (e.g., the EU AI Act, the US Executive Order on Safe, Secure, and Trustworthy AI). Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions. We encourage foundation model developers to regularly publish transparency reports, building upon recommendations from the G7 and the White House.
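
To make the idea concrete, here is a minimal Python sketch of what a machine-readable transparency report entry might look like. The developer, model, and indicator names are hypothetical illustrations, not the paper's actual 100-indicator schema.

```python
# Minimal sketch of a machine-readable transparency report entry.
# Indicator names and values are hypothetical, not the paper's schema.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    developer: str
    model: str
    reporting_period: str
    # Each indicator maps to a disclosure string, or None if undisclosed.
    indicators: dict = field(default_factory=dict)

    def coverage(self) -> float:
        """Fraction of indicators with a non-empty disclosure."""
        if not self.indicators:
            return 0.0
        disclosed = sum(1 for v in self.indicators.values() if v)
        return disclosed / len(self.indicators)

report = TransparencyReport(
    developer="ExampleAI",           # hypothetical developer
    model="example-model-1",         # hypothetical model
    reporting_period="2024-H1",
    indicators={
        "data_sources": "Publicly documented web crawl",
        "compute_used": "1.2e23 FLOPs (estimated)",
        "model_evaluations": None,   # undisclosed
    },
)
print(f"indicator coverage: {report.coverage():.0%}")
```

A schema like this also makes the paper's policy-overlap argument tangible: the same indicator entries could be mapped to the disclosure requirements of multiple jurisdictions.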


Dating Apps Are Even Less Transparent Than Facebook and Google

Slate

As Valentine's Day approaches, couples across the country are preparing for this long-standing tradition--and there's a very good chance they met through online dating. But while dating apps can help people find a partner (or just a fun date), they can also subject users to incredible hate and harassment. Despite the fact that dating apps have accrued significant reach and influence, these companies provide very little transparency around how they keep users safe and how they moderate content. Much of the conversation around online platform accountability focuses on companies like Facebook and Google. But dating apps face many of the same issues.


How We Can Build Trustworthy AI

#artificialintelligence

Science fiction movies like 'The Terminator' and 'I, Robot' have depicted what might happen if artificial intelligence goes rogue. Such dystopian scenarios are also widely discussed by experts and researchers in the field of AI. Many of these experts believe that super-intelligent AI systems will pose a significant threat to humanity in the near future, and, considering the untold potential of AI, this may soon become a reality. Developers need to understand public concerns over the development of AI systems. There have been many reported instances where developers neglected these warnings and created AI systems that went rogue.


Government subpoenas for customer data in Amazon's cloud service rose 77 PERCENT over six months

Daily Mail - Science & tech

Amazon says that U.S. government requests for customer data have seen a substantial spike so far this year. As reported by TechCrunch, the most recent figures released by the company -- covering January through June 2019 -- show a 14 percent increase in subpoenas and a nearly 35 percent increase in the number of search warrants. Information handed over by the company comes from several sources, according to TechCrunch, including Amazon's Echo voice assistant, Alexa; its e-reader, the Kindle; and even the home security devices sold by Ring. Demand also grew for data held by its cloud service, Amazon Web Services, which separately reported a 77 percent rise in the number of subpoena requests over the same six-month period. According to data released in the company's latest report, Amazon's response varied depending on the type of request.


Privacy Risks of Explaining Machine Learning Models

Shokri, Reza, Strobel, Martin, Zick, Yair

arXiv.org Machine Learning

Can we trust black-box machine learning with its decisions? Can we trust algorithms to train machine learning models on sensitive data? Transparency and privacy are two fundamental elements of trust for adopting machine learning. In this paper, we investigate the relation between interpretability and privacy. In particular, we analyze whether an adversary can exploit transparent machine learning to infer sensitive information about its training set. To this end, we perform membership inference and reconstruction attacks on two popular classes of algorithms for explaining machine learning models: feature-based and record-based influence measures. We empirically show that an attacker who observes only feature-based explanations is as powerful as state-of-the-art membership inference attacks on model predictions. We also demonstrate that record-based explanations can be effectively exploited to reconstruct significant parts of the training set. Finally, our results indicate that minorities and special cases are more vulnerable to these types of attacks than majority groups.
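
As a rough illustration of why explanations can leak membership, here is a minimal Python sketch of a threshold attack on gradient (feature-based) explanations. The model, the synthetic data, and the threshold rule are assumptions for the example, not the paper's exact experimental setup.

```python
# Minimal sketch of membership inference from gradient explanations
# alone, in the spirit of Shokri et al. Illustrative setup, not the
# paper's exact attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 20
X_in = rng.normal(size=(40, d))        # training set (members)
y_in = rng.integers(0, 2, size=40)     # random labels force overfitting
X_out = rng.normal(size=(40, d))       # non-members, same distribution

model = LogisticRegression(C=1e6, max_iter=5000).fit(X_in, y_in)

def grad_explanation(model, X):
    """Input gradient of the positive-class probability.

    For logistic regression, dp/dx = p * (1 - p) * w."""
    p = model.predict_proba(X)[:, 1:2]
    return p * (1 - p) * model.coef_   # shape (n, d)

# Heuristic: an overfit model is confident on members, so their
# explanation norms are small; non-members sit nearer the boundary.
norm_in = np.linalg.norm(grad_explanation(model, X_in), axis=1)
norm_out = np.linalg.norm(grad_explanation(model, X_out), axis=1)
tau = np.median(np.concatenate([norm_in, norm_out]))
acc = 0.5 * ((norm_in < tau).mean() + (norm_out >= tau).mean())
print(f"attack accuracy: {acc:.2f}")   # above 0.5 signals leakage
```

On well-regularized models the gap between member and non-member explanations can shrink toward chance; the leakage is strongest when the model overfits, which is consistent with the broader membership inference literature.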


FlipTest: Fairness Auditing via Optimal Transport

Black, Emily, Yeom, Samuel, Fredrikson, Matt

arXiv.org Machine Learning

Combining the concepts of individual and group fairness, we search for discrimination by matching individuals in different protected groups to each other, and comparing their classifier outcomes. Specifically, we formulate a GAN-based approximation of the optimal transport mapping, and use it to translate the distribution of one protected group to that of another, returning pairs of in-distribution samples that statistically correspond to one another. We then define the flipset: the set of individuals whose classifier output changes post-translation, which intuitively corresponds to the set of people who were harmed because of their protected group membership. To shed light on why the model treats a given subgroup differently, we introduce the transparency report: a ranking of features that are most associated with the model's behavior on the flipset. We show that this provides a computationally inexpensive way to identify subgroups that are harmed by model discrimination, including in cases where the model satisfies population-level group fairness criteria.
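
A minimal Python sketch of the flipset and the accompanying feature ranking follows, assuming a fitted classifier `clf` and some callable `G` standing in for the GAN-learned transport map; both names are placeholders for this example.

```python
# Minimal sketch of the flipset and the feature-ranking "transparency
# report". `clf` is any fitted classifier; `G` is any callable that
# approximates the optimal transport map from group A to group B (the
# paper learns G with a GAN; here it is a stand-in).
import numpy as np

def flipset(clf, G, X_a):
    """Members of group A whose prediction flips under translation."""
    X_b = G(X_a)
    flipped = clf.predict(X_a) != clf.predict(X_b)
    return X_a[flipped], X_b[flipped]

def transparency_report(X_flip, X_flip_mapped, feature_names):
    """Rank features by mean absolute change across the flipset."""
    delta = np.abs(X_flip_mapped - X_flip).mean(axis=0)
    order = np.argsort(delta)[::-1]
    return [(feature_names[i], float(delta[i])) for i in order]
```

A large flipset under a faithful transport map flags individuals plausibly harmed by their group membership, even when aggregate group-fairness metrics look clean.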


Facebook makes special tool for hiding stories from countries' citizens to get back into China, report claims

The Independent - Tech

Facebook has developed a special tool to keep countries from seeing stories critical of their government. The site has been secretly working on a feature that allows it to geographically censor specific posts so they are hidden from people in a given country. It appears to have been built as a way of getting back into China, an important market for the company but one with an intense censorship regime. The apparent tool was revealed at a time when Facebook was facing increased scrutiny over how it picks what appears in its news feed. It has received particular criticism for the way it allows fake news to flourish on the service, and many have claimed that it helped Donald Trump win the presidential election.


Bleeding Edge Roundup

#artificialintelligence

Researchers from Delft University of Technology in the Netherlands have created a rewritable data-storage device capable of storing information at the level of single atoms representing single bits of information. The technology, which is described in the current issue of Nature Nanotechnology, is capable of packing data as densely as 500 terabytes per square inch. Theoretically, the device could store the entire contents of the US Library of Congress within a 0.1-mm-wide cube--though the proof-of-concept demonstrated by the group topped out at 1 kilobyte. On Tuesday, DigitalGlobe, a satellite-imagery company, announced that it will provide high-resolution pictures of the planet's surface to Uber. DigitalGlobe is the primary provider of satellite imagery to Google, Apple, and the U.S. government.


How to make opaque AI decisionmaking accountable

#artificialintelligence

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque--it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g., access to credit).
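
Here is a minimal Python sketch of one way such an intervention-based influence measure can be computed: resample a set of inputs from their marginal distributions and measure how often the model's decisions change. This is a simplified reading of QII, and the function and parameter names are illustrative.

```python
# Minimal sketch of an intervention-based influence measure: the
# fraction of decisions that change when a set of inputs is resampled
# from their marginals. A simplified reading of QII; the paper's family
# also covers marginal influence and game-theoretic aggregation.
import numpy as np

def qii(model, X, features, n_draws=20, rng=None):
    """Joint influence of `features` (column indices) on the model's
    decisions over the dataset X."""
    rng = rng or np.random.default_rng(0)
    base = model.predict(X)
    changed = 0.0
    for _ in range(n_draws):
        X_int = X.copy()
        for j in features:
            # Intervene: draw column j from its marginal distribution,
            # breaking its correlation with the other inputs.
            X_int[:, j] = rng.permutation(X[:, j])
        changed += (model.predict(X_int) != base).mean()
    return changed / n_draws

# Usage: qii(model, X, [0]) for a single input, or qii(model, X, [0, 3])
# for the joint influence of a set such as age and income.
```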