baruch


A Little Is Enough: Circumventing Defenses For Distributed Learning

Neural Information Processing Systems

Distributed learning is central to large-scale training of deep-learning models. However, it is exposed to a security threat in which Byzantine participants can interrupt or control the learning process. Previous attack models assume that the rogue participants (a) are omniscient (know the data of all other participants), and (b) introduce large changes to the parameters. Accordingly, most defense mechanisms make a similar assumption and attempt to use statistically robust methods to identify and discard values whose reported gradients are far from the population mean. We observe that if the empirical variance between the gradients of workers is high enough, an attacker can take advantage of this and launch a non-omniscient attack that operates within the population variance. We show that the variance is indeed high enough even for simple datasets such as MNIST, allowing an attack that is not only undetected by existing defenses but also uses their power against them, causing those defense mechanisms to consistently select the Byzantine workers while discarding legitimate ones. We demonstrate that our attack method works not only for preventing convergence but also for repurposing the model's behavior ("backdooring"). We show that fewer than 25% of colluding workers are sufficient to degrade the accuracy of models trained on MNIST, CIFAR10, and CIFAR100 by 50%, as well as to introduce backdoors without hurting accuracy for MNIST and CIFAR10, though with some degradation for CIFAR100.
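The within-variance perturbation described above can be sketched numerically. The following is a minimal illustration only, not the paper's exact procedure: the offset `z` is treated here as a fixed tunable constant, whereas the paper derives its bound from the number of workers and attackers, and a real non-omniscient attacker would estimate the mean and standard deviation from its own data rather than from the honest gradients.

```python
import numpy as np

def little_is_enough(benign_grads, z=1.0):
    """Sketch of a within-variance Byzantine report: every attacker
    submits the same vector, mean - z * std of the honest gradients.
    For small z this lies inside the honest population's spread, so
    distance-based defenses have no clear outlier to reject."""
    grads = np.asarray(benign_grads, dtype=float)  # shape: (n_honest, dim)
    mu = grads.mean(axis=0)
    sigma = grads.std(axis=0)
    return mu - z * sigma

# Toy example: three honest workers, two gradient coordinates.
honest = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
malicious = little_is_enough(honest, z=1.0)
# Each coordinate of the malicious report stays between the honest
# minimum and maximum, so coordinate-wise trimming will not drop it.
```

Repeated across many training steps, even this small consistent bias can prevent convergence or steer the model, which is the paper's central observation.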


DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences

Feng, Chao, Li, Yunlong, Gao, Yuanzhe, Celdrán, Alberto Huertas, von der Assen, Jan, Bovet, Gérôme, Stiller, Burkhard

arXiv.org Artificial Intelligence

Federated learning (FL) has garnered significant attention as a prominent privacy-preserving Machine Learning (ML) paradigm. Decentralized FL (DFL) eschews traditional FL's centralized server architecture, enhancing the system's robustness and scalability. However, these advantages of DFL also create new vulnerabilities for malicious participants to execute adversarial attacks, especially model poisoning attacks. In model poisoning attacks, malicious participants aim to diminish the performance of benign models by creating and disseminating compromised models. Existing research on model poisoning attacks has predominantly concentrated on undermining global models within the Centralized FL (CFL) paradigm, while research on attacks against DFL remains limited. To fill this gap, this paper proposes an innovative model poisoning attack called DMPA. This attack calculates the differential characteristics of multiple malicious client models and derives the most effective poisoning strategy, thereby orchestrating a collusive attack by multiple participants. The effectiveness of the attack is validated across multiple datasets, with results indicating that DMPA consistently surpasses existing state-of-the-art FL model poisoning attack strategies.
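The abstract does not spell out DMPA's optimization, so the following is a purely illustrative sketch of one plausible reading of "differential characteristics"; the function name, the averaging rule, and the `scale` parameter are all assumptions for illustration, not the paper's method. Here colluding clients pool the differences between their local models and an estimate of the benign average, then all submit a model shifted along that shared direction:

```python
import numpy as np

def collusive_poison(malicious_models, benign_estimate, scale=2.0):
    """Hypothetical sketch of a difference-based collusive update.
    Each attacker's deviation from the benign estimate is averaged
    into one shared direction; 'scale' controls how far the common
    poisoned model is pushed along it before all attackers submit it."""
    models = np.asarray(malicious_models, dtype=float)
    benign = np.asarray(benign_estimate, dtype=float)
    direction = (models - benign).mean(axis=0)  # shared poisoning direction
    return benign + scale * direction           # identical model for all attackers
```

Submitting one coordinated model rather than independent perturbations is what makes the attack collusive: in a serverless DFL topology every malicious neighbor reinforces the same shift.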


Technology pioneer believes artificial intelligence technology will revolutionize online dating

FOX News

Log Off Movement CEO Emma Lembke and teacher Matt Miles discuss the impact of artificial intelligence on kids on 'The Story.' More people are turning to dating apps to find a match, but one company is taking it a step further by using artificial intelligence (AI) to fuel a more efficient and personalized version of online dating, according to Lior Baruch, co-founder and CEO of AlgoAI Tech. "Maybe it was a website in the past, now it's an app, but it's kind of the same," Baruch told Fox News Digital of the traditional form of online dating. "You go into a website to type a few details about yourself, they ask you a few questions, you answer them. You get either one, two, three options, or you see tons of options in front of you that you just choose from, like it's kind of a meat market. If you're not, people can stay there for years and I'm not exaggerating."


The Religion of Problem Solving

#artificialintelligence

Welcome to Decade of 2020, a newsletter with a relentless focus on how the next 10 years will affect the middle class. Forewarned is forearmed, they say. If you'd like to sign up, you can do so here. The history of the electric rivalry between Thomas Alva Edison and Nikola Tesla is both fascinating and inspiring. The two geniuses butted heads while trying to solve a problem: the generation and, more importantly, the distribution of electrical energy to American households; Edison with his vision of a direct current future and Tesla with his revolutionary ideas of alternating current.


Zicklin Grad Students Take Top Spot in Pitney Bowes Data Challenge - Zicklin School of Business

#artificialintelligence

Nearly five dozen students from Baruch College and the Zicklin School of Business got to show off their data-crunching skills recently when they participated in the Baruch College – Pitney Bowes Data Challenge, held on May 1. The winning team of Zicklin graduate students -- Drace (Yilei) Zhan (MS Statistics, '20), Nishtha Ram (MS Quantitative Methods & Modeling, '21), Huimin Chen (MS Information Systems, '21), Kang Li (MS QMM, '20), and Rosario Campoverde (MBA, '20) -- outperformed 50 other undergraduate and graduate students across Baruch and Zicklin to take first place. The competition was the culmination of a year-long collaboration among Pitney Bowes and the Paul H. Chook Department of Information Systems and Statistics, the Graduate Career Management Center, and the Starr Career Development Center. The partnership included seminars held throughout the year on machine learning, design thinking, marketing analytics, and other topics, presented by Pitney Bowes data scientists; and a free bootcamp on Python and AWS that was led by Zicklin professors. It was funded by a $10,000 grant from the NYC/CUNY Workforce Development Initiative.


Gaps Emerge In Automotive Test

#artificialintelligence

Demands by automakers for zero defects over 18 years are colliding with real-world limitations of testing complex circuitry and interactions, and they are exposing a fundamental disconnect between mechanical and electronic expectations that could be very expensive to fix. This is especially apparent at leading-edge nodes, where much of the logic is being developed for AI systems and image sensing. While existing equipment for wafer, die, and package inspection works well enough for most applications all the way down to 7nm, verifying automakers' demands that chips remain functional for 18 years under harsh road conditions is a time-consuming process. So while 99% sampling may be good enough for a smartphone, it is not good enough for safety-critical functions. To make matters worse, automotive testing often requires synchronization between different components, both within and outside of a vehicle, and much more insight into where potential problems can arise. This is no longer just about using an automated test equipment (ATE) machine in a flow to sample a certain percentage of dies and wafers.