Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. In this paper the authors theoretically analyze two common graph clustering algorithms based on low-rank-plus-sparse decomposition, showing bounds on the parameters under which these methods work, and they present experimental validation of the results. The paper is very well written in general, although there are some minor typos. For instance, I think that the "about" in line 314 should be an "above". Also, it seems more reasonable to me to put subsection 3.1.1


Private Queries with Sigma-Counting

Gao, Jun, Ding, Jie

arXiv.org Artificial Intelligence

Many data applications involve counting queries, where a client specifies a feasible range of variables and a database returns the corresponding item counts. A program that produces the counts of different queries often risks leaking sensitive individual-level information. A popular approach to enhancing data privacy is to return a noisy version of the actual count, typically achieved by adding independent noise to each query and then controlling the total privacy budget within a period. In practice, this approach may limit both the number of queries that can be answered and the accuracy of the output. Moreover, the returned counts do not maintain the total order for nested queries, an important feature in many applications. This work presents the design and analysis of a new method, sigma-counting, that addresses these challenges. Sigma-counting uses the notion of sigma-algebra to construct privacy-preserving counting queries. We show that the proposed concepts and methods can significantly improve output accuracy while maintaining a desired privacy level in the presence of massive queries to the same data. We also discuss how the technique can be applied to large and time-varying datasets.
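The baseline that the abstract contrasts with, adding independent Laplace noise to each count, can be sketched as follows. This is the standard Laplace mechanism, not the paper's sigma-counting construction (whose details are not given in the abstract); the toy data and function names are illustrative. The sketch also demonstrates the nested-query problem: with independent noise, a noisy count over a sub-range can exceed the noisy count over the enclosing range.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) via inverse-CDF, stdlib only."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, epsilon, rng):
    """Baseline mechanism: each counting query independently gets
    Laplace(1/epsilon) noise (sensitivity 1 for a single count)."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Nested ranges: [0, 10) is contained in [0, 20), so true counts are ordered.
data = [1, 3, 5, 7, 9, 12, 15, 18]
inner = sum(1 for x in data if 0 <= x < 10)   # true count 5
outer = sum(1 for x in data if 0 <= x < 20)   # true count 8

rng = random.Random(0)
# Because each query is noised independently, the noisy inner count can
# exceed the noisy outer count, breaking the order nested queries should keep.
violations = sum(
    noisy_count(inner, 0.1, rng) > noisy_count(outer, 0.1, rng)
    for _ in range(1000)
)
print(inner, outer, violations)
```

Maintaining order across nested queries, as sigma-counting aims to do, requires correlating the noise across queries rather than drawing it independently per query.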


83cdcec08fbf90370fcf53bdd56604ff-Reviews.html

Neural Information Processing Systems

We thank the reviewers for their time and their helpful comments. We address each reviewer by the ID that appears on our review page (3, 6, 7). Reviewer 3 We thank Reviewer 3 for bringing references [A] and [B] to our attention. We will mention these in the revised paper. The nonreciprocal recoding of [A] is indeed our asymmetric b-anonymity.


The Discussion About A.I. Feels New and Scary. But We've Had This Conversation Many Times Before.

Slate

At the latest congressional hearing on A.I., the hype was high. "Since the release of ChatGPT just over a year ago, it's become clear A.I. could soon disrupt nearly every facet of our economy," said Rep. Nancy Mace, chair of the U.S. congressional Subcommittee on Cybersecurity, Information Technology, and Government Innovation. "The A.I. genie is out of the bottle and it can't be put back in." A.I. does seem like a genie: The technology is new and mysterious, we aren't sure exactly how it works, and we know it is very powerful. We are also afraid of it: In a poll conducted in the summer of 2023, over half of Americans said they were more concerned than excited about A.I.; there is widespread speculation about what effects the technology will have on our economy, our jobs (lolsob), our education system, our art; and tech leaders have warned that the technology puts the fate of humanity at risk.


Visa on using advanced AI such as unsupervised learning to fight fraud

#artificialintelligence

Join executive leaders at the Data, Analytics, & Intelligent Automation Summit, presented by Accenture. The thing about fraud is that it's constantly changing -- looking at a past attack doesn't guarantee the next attack will look the same or target the same kind of victim -- and defenders have to continuously adapt. Visa utilizes artificial intelligence to analyze all of the transactions that go across the network and track large-scale transactional changes as part of its fraud detection efforts, Melissa McSherry, Visa's senior VP and global head of data, security, and identity products, said at VentureBeat's Transform 2021 virtual conference on Monday. Visa scores all of the transactions that go across the Visa network, which allows the company to define a set of behaviors that would be considered "normal." The team is "constantly" updating the model's view of history and updates the model itself to reflect the data on a fairly regular basis, McSherry said.
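The idea described above, scoring each transaction against a continuously updated view of "normal" behavior, can be illustrated with a minimal sketch. This is purely illustrative (a one-dimensional z-score using Welford's online mean/variance), not Visa's actual model, which operates over far richer transaction features; all names here are hypothetical.

```python
import math

class OnlineAnomalyScorer:
    """Toy sketch: maintain a running model of 'normal' transaction amounts
    (Welford's online mean/variance) and score new transactions by how far
    they deviate from that history."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, amount):
        # Continuously fold each observed transaction into the model of "normal".
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)

    def score(self, amount):
        # Higher score = further from historical behavior.
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(amount - self.mean) / std if std > 0 else 0.0

scorer = OnlineAnomalyScorer()
for amt in [20.0, 35.0, 18.0, 42.0, 27.0, 31.0]:
    scorer.update(amt)

# A typical amount scores low; an extreme outlier scores high.
print(round(scorer.score(30.0), 2), round(scorer.score(5000.0), 2))
```

Updating the model with every transaction, rather than retraining on a fixed snapshot, is what lets the baseline of "normal" track the large-scale behavioral shifts the article describes.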


Visa uses AI to prevent $25bn in fraud

#artificialintelligence

Visa has announced new analysis showing Visa Advanced Authorization (VAA) using artificial intelligence (AI) helped financial institutions around the world prevent an estimated $25 billion in annual fraud. VAA is a comprehensive risk management tool that monitors and evaluates transaction authorizations on the Visa global payment network, VisaNet, in real time to help financial institutions promptly identify and respond to emerging fraud patterns and trends. Visa processed more than 127 billion transactions between merchants and financial institutions on VisaNet last year and employed AI to analyze 100 percent of the transactions--each in about one millisecond--so financial institutions can approve legitimate purchases while quickly identifying and preventing fraudulent transactions. "One of the toughest challenges in payments is separating good transactions made by account holders from bad ones attempted by fraudsters without adding friction to the process," said Melissa McSherry, senior vice president and global head of Data, Risk and Identity Products and Solutions, Visa. "Visa was the first payment network to apply neural network-based AI in 1993 to analyze the riskiness of transactions in real time, and the impact on fraud was immediate. By striking the right balance between human expertise and technology innovation, we continue to evolve our capabilities as new AI breakthroughs expand the realm of what's possible."


PYMNTS.com

#artificialintelligence

To steal a line from the Marvel Universe, "with great power comes great responsibility." To steal another line from the Hippocratic oath, penned centuries ago, "first, do no harm." Those two maxims extend into the world of artificial intelligence (AI) -- to the models built on machine learning and AI, and to the humans who come up with the models in the first place. After all, technology without guiding principles, without scrutiny of its initial goals and end outputs, is just data in and data out, where unintended consequences may accrue. As Melissa McSherry, senior vice president and global head of credit and data products at Visa, told Karen Webster, the responsible use of AI that examines huge swathes of data (such as those housed in the payment giant's own databases) "is a vital consideration for AI practitioners."


Hey, Advertisement: Can We Talk?

AITopics Original Links

In online advertising lingo, the acronym CPC refers to "cost per click"--the amount an advertiser pays whenever someone clicks on an ad. If voice-recognition technology company Nuance gets its way, though, it could soon have an additional meaning: "cost per conversation." Nuance is today announcing Voice Ads, a platform that will let companies create ads that people can talk to on smartphones and tablets. Mike McSherry, vice president of advertising and content at Nuance, says these could range from car ads that let you ask questions about the vehicle shown to ads for a sports network that allow you to get information about who won last night's game or what time tonight's game starts. The company has lined up partnerships with several ad agencies including Digitas, OMD, and Leo Burnett, as well as with mobile ad distribution networks JumpTap, Millennial Media, and Ad Marvel.


Near-optimal Differentially Private Principal Components

Chaudhuri, Kamalika, Sarwate, Anand, Sinha, Kaushik

Neural Information Processing Systems

Principal components analysis (PCA) is a standard tool for identifying good low-dimensional approximations to data sets in high dimension. Many current data sets of interest contain private or sensitive information about individuals. Algorithms which operate on such data should be sensitive to the privacy risks in publishing their outputs. Differential privacy is a framework for developing tradeoffs between privacy and the utility of these outputs. In this paper we investigate the theory and empirical performance of differentially private approximations to PCA and propose a new method which explicitly optimizes the utility of the output. We demonstrate that on real data there is a large performance gap between the existing methods and our method. We show that the sample complexity of the two procedures differs in its scaling with the data dimension, and that our method is nearly optimal in terms of this scaling.
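A common baseline in this line of work, perturbing the empirical second-moment matrix with symmetric Gaussian noise before eigendecomposition, can be sketched as follows. This is a hedged illustration of the general recipe that existing methods follow, not the specific utility-optimizing method the paper proposes; the noise-scale constant is approximate and the row-norm bound is an assumption made for bounded sensitivity.

```python
import numpy as np

def dp_pca_top_k(X, k, epsilon, delta, rng):
    """Sketch of input-perturbation DP-PCA: add symmetric Gaussian noise to
    the empirical second-moment matrix, then take its top-k eigenvectors.
    Assumes each row of X has L2 norm <= 1, so A = X^T X / n has bounded
    sensitivity in each entry. Illustrative only."""
    n, d = X.shape
    A = X.T @ X / n
    # Gaussian-mechanism noise scale (up to constants) for (epsilon, delta)-DP.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / (n * epsilon)
    E = rng.normal(0.0, sigma, size=(d, d))
    E = (E + E.T) / 2.0  # symmetrize so eigenvalues stay real
    vals, vecs = np.linalg.eigh(A + E)
    return vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # row norms <= 1
V = dp_pca_top_k(X, 2, epsilon=1.0, delta=1e-5, rng=rng)
print(V.shape)
```

The performance gap the abstract refers to comes from how much this added noise distorts the leading eigenvectors; a utility-optimizing method aims to preserve the top subspace more directly at the same privacy level.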