Box-Free Model Watermarks Are Prone to Black-Box Removal Attacks

An, Haonan, Hua, Guang, Lin, Zhiping, Fang, Yuguang

arXiv.org Artificial Intelligence

Box-free model watermarking is an emerging technique to safeguard the intellectual property of deep learning models, particularly those for low-level image processing tasks. Existing works have verified and improved its effectiveness in several aspects. However, in this paper, we reveal that box-free model watermarking is prone to removal attacks, even under the real-world threat model in which both the protected model and the watermark extractor are black boxes. Under this setting, we carry out three studies. 1) We develop an extractor-gradient-guided (EGG) remover and show its effectiveness when the extractor uses ReLU activation only. 2) More generally, for an unknown extractor, we leverage adversarial attacks and design the EGG remover based on estimated gradients. 3) Under the most stringent condition that the extractor is inaccessible, we design a transferable remover based on a set of private proxy models. In all cases, the proposed removers can successfully remove embedded watermarks while preserving the quality of the processed images, and we also demonstrate that the EGG remover can even replace the watermarks. Extensive experimental results verify the effectiveness and generalizability of the proposed attacks, revealing the vulnerabilities of the existing box-free methods and calling for further research.
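The gradient-estimation idea in study 2 can be illustrated with a minimal sketch: when the extractor is a black box, the gradient of a watermark-response loss with respect to the input can be approximated by symmetric finite differences and used to drive a removal perturbation. All names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of black-box gradient estimation for watermark removal.
# `f` stands in for a scalar loss measuring the extractor's watermark response;
# it is an assumed placeholder, not the paper's actual extractor.

def estimate_gradient(f, x, eps=1e-4):
    """Symmetric finite-difference estimate of df/dx for a black-box scalar f."""
    grad = []
    for i in range(len(x)):
        x_plus, x_minus = list(x), list(x)
        x_plus[i] += eps
        x_minus[i] -= eps
        grad.append((f(x_plus) - f(x_minus)) / (2 * eps))
    return grad

def removal_step(f, x, lr=0.1):
    """One descent step on the estimated gradient to weaken the watermark response."""
    g = estimate_gradient(f, x)
    return [xi - lr * gi for xi, gi in zip(x, g)]
```

In practice such query-based estimates are computed over image tensors with far fewer queries (e.g. random-direction sampling), but the per-coordinate version above shows the core mechanism.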


Repair Old Photos with AI Photo Restoration

#artificialintelligence

Most of our old photos were captured with old cameras that are no longer in use. These gadgets were the best of their time, but today those cameras serve no purpose. In addition, most of the photos were black and white. We all want to reimagine the past and revitalize these images by whatever means we can. Several tools can restore these past images.


How to avoid being replaced by a robot at work

#artificialintelligence

Recently, I was at a party in San Francisco when a man approached me and introduced himself as the founder of a small artificial intelligence (AI) start-up. As soon as the founder figured out that I was a technology writer for The New York Times, he launched into a pitch for his company, which he said was trying to revolutionise the manufacturing sector using a new AI technique called "deep reinforcement learning". The founder explained that his company's AI could run millions of virtual simulations for any given factory, eventually arriving at the exact sequence of processes that would allow it to produce goods most efficiently. This AI, he said, would allow factories to replace entire teams of human production planners, along with most of the outdated software those people relied on. "We call it the Boomer Remover," he said.


Fairness in Credit Scoring: Assessment, Implementation and Profit Implications

Kozodoi, Nikita, Jacob, Johannes, Lessmann, Stefan

arXiv.org Machine Learning

The rise of algorithmic decision-making has spawned much research on fair machine learning (ML). Financial institutions use ML for building risk scorecards that support a range of credit-related decisions. Yet, the literature on fair ML in credit scoring is scarce. The paper makes two contributions. First, we provide a systematic overview of algorithmic options for incorporating fairness goals in the ML model development pipeline. In this scope, we also consolidate the space of statistical fairness criteria and examine their adequacy for credit scoring. Second, we perform an empirical study of different fairness processors in a profit-oriented credit scoring setup using seven real-world data sets. The empirical results substantiate the evaluation of fairness measures, identify more and less suitable options to implement fair credit scoring, and clarify the profit-fairness trade-off in lending decisions. Specifically, we find that multiple fairness criteria can be approximately satisfied at once and identify separation as a proper criterion for measuring the fairness of a scorecard. We also find fair in-processors to deliver a good balance between profit and fairness. More generally, we show that algorithmic discrimination can be reduced to a reasonable level at a relatively low cost.
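The separation criterion the abstract identifies as suitable for scorecards requires the score (or decision) to be independent of the protected attribute given the true outcome; for binary decisions this reduces to equal true-positive and false-positive rates across groups. A minimal sketch of checking this, with all function names my own rather than the paper's code:

```python
def rates(y_true, y_pred, group, g):
    """TPR and FPR of binary predictions restricted to protected group g."""
    tp = fn = fp = tn = 0
    for yt, yp, gr in zip(y_true, y_pred, group):
        if gr != g:
            continue
        if yt == 1 and yp == 1:
            tp += 1
        elif yt == 1 and yp == 0:
            fn += 1
        elif yt == 0 and yp == 1:
            fp += 1
        else:
            tn += 1
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def separation_gaps(y_true, y_pred, group):
    """Absolute TPR and FPR differences between two groups; (0, 0) means
    separation (equalized odds) holds exactly."""
    tpr_a, fpr_a = rates(y_true, y_pred, group, 0)
    tpr_b, fpr_b = rates(y_true, y_pred, group, 1)
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)
```

In a profit-oriented setup these gaps would be traded off against expected lending profit, as the paper's empirical study does across its seven data sets.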


Reverb: A Framework For Experience Replay

Cassirer, Albin, Barth-Maron, Gabriel, Brevdo, Eugene, Ramos, Sabela, Boyd, Toby, Sottiaux, Thibault, Kroiss, Manuel

arXiv.org Artificial Intelligence

A central component of training in Reinforcement Learning (RL) is Experience: the data used for training. The mechanisms used to generate and consume this data have an important effect on the performance of RL algorithms. In this paper, we introduce Reverb: an efficient, extensible, and easy-to-use system designed specifically for experience replay in RL. Reverb is designed to work efficiently in distributed configurations with up to thousands of concurrent clients. The flexible API provides users with the tools to easily and accurately configure the replay buffer. It includes strategies for selecting and removing elements from the buffer, as well as options for controlling the ratio between sampled and inserted elements. This paper presents the core design of Reverb, gives examples of how it can be applied, and provides empirical results of Reverb's performance characteristics.
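The three ideas the abstract names — a removal strategy, a selection (sampling) strategy, and bookkeeping for the sampled-to-inserted ratio — can be sketched in a toy buffer. This is an illustrative sketch of the concepts, not Reverb's actual API, which runs as a server with pluggable selector and rate-limiter objects.

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy replay buffer: FIFO removal, uniform sampling, and tracking of
    the sampled-to-inserted ratio. Illustrative only; not Reverb's API."""

    def __init__(self, max_size):
        # deque with maxlen acts as the FIFO remover: oldest items drop first.
        self.items = deque(maxlen=max_size)
        self.inserted = 0
        self.sampled = 0

    def insert(self, item):
        self.items.append(item)
        self.inserted += 1

    def sample(self, k):
        # Uniform sampler with replacement over the current contents.
        batch = random.choices(list(self.items), k=k)
        self.sampled += k
        return batch

    def sample_to_insert_ratio(self):
        # Reverb can rate-limit producers/consumers to keep this ratio in range.
        return self.sampled / max(self.inserted, 1)
```

Swapping the deque for a priority structure would give prioritized replay; Reverb exposes such choices as configurable selector strategies.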