In his years-long career developing software for power grids, Stan McHann had never before heard the ominous noise that rang out last Wednesday. Standing in the middle of a utility command center, he flinched as a cyberattack tripped the breakers in all seven of the grid's low-voltage substations, plunging the system into darkness. "I heard all the substations trip off and it was just like bam bam bam bam bam bam bam bam," McHann says. "All you can do is say, OK, we have to start from scratch bringing the power back up. You just take a deep breath and dig in." Thankfully, what McHann experienced wasn't the first-ever blackout caused by a cyberattack in the United States. Instead, it was part of a live, week-long federal research exercise in which more than 100 grid and cybersecurity experts worked to restore power to an isolated, custom-built test grid. In doing so they faced not just blackout conditions and rough weather, but also a group of fellow researchers throwing a steady ...
With the increasing adoption of AI, inherent security and privacy vulnerabilities for machine learning systems are being discovered. One such vulnerability makes it possible for an adversary to obtain private information about the types of instances used to train the targeted machine learning model. This so-called model inversion attack is based on sequential leveraging of classification scores towards obtaining high confidence representations for various classes. However, for deep networks, such procedures usually lead to unrecognizable representations that are useless for the adversary. In this paper, we introduce a more realistic definition of model inversion, where the adversary is aware of the general purpose of the attacked model (for instance, whether it is an OCR system or a facial recognition system), and the goal is to find realistic class representations within the corresponding lower-dimensional manifold (of, respectively, general symbols or general faces). To that end, we leverage properties of generative adversarial networks for constructing a connected lower-dimensional manifold, and demonstrate the efficiency of our model inversion attack that is carried out within that manifold.
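The latent-space search behind such an attack can be sketched with toy stand-ins. Everything here is an illustrative assumption, not the paper's implementation: `G` plays the role of a trained GAN generator, `class_score` the role of a black-box target model that only exposes class probabilities, and `invert` performs a simple random search for a high-confidence sample of a chosen class within the generator's manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a fixed random "generator" mapping a
# 4-dimensional latent code to 16-dimensional data, and a linear
# "classifier" over 3 classes that only exposes softmax scores.
W_g = rng.normal(size=(16, 4))
W_c = rng.normal(size=(3, 16))

def G(z):
    """Map a latent code to a point on the (toy) data manifold."""
    return np.tanh(W_g @ z)

def class_score(x, c):
    """Black-box classifier: softmax probability of class c for input x."""
    logits = W_c @ x
    p = np.exp(logits - logits.max())
    return (p / p.sum())[c]

def invert(c, iters=500, pop=32, sigma=0.3):
    """Random search in latent space for a high-confidence class-c sample.

    Because every candidate is G(z) for some latent z, the recovered
    representation stays on the generator's manifold rather than
    drifting into unrecognizable pixel space.
    """
    z = rng.normal(size=4)
    best = class_score(G(z), c)
    for _ in range(iters):
        cand = z + sigma * rng.normal(size=(pop, 4))
        scores = [class_score(G(v), c) for v in cand]
        i = int(np.argmax(scores))
        if scores[i] > best:
            z, best = cand[i], scores[i]
    return z, best
```

A real attack would replace the random search with gradient-based optimization through a pretrained generator, but the structure, optimizing the target model's class score over latent codes rather than raw inputs, is the same.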
We consider membership inference attacks, one of the main privacy issues in machine learning. These recently developed attacks have been proven successful in determining, with confidence better than a random guess, whether a given sample belongs to the dataset on which the attacked machine learning model was trained. Several approaches have been developed to mitigate this privacy leakage, but the performance implications of these defensive mechanisms (i.e., the accuracy and utility of the defended machine learning model) are not yet well studied. We propose a novel approach of privacy leakage avoidance with switching ensembles (PASE), which both protects against current membership inference attacks and does so with a very small accuracy penalty, while requiring an acceptable increase in training and inference time. We test our PASE method, along with the current state-of-the-art PATE approach, on three calibration image datasets and analyze their tradeoffs.
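To make the threat concrete, here is a minimal sketch of one of the simplest membership inference attacks the abstract alludes to: confidence thresholding. The function name and the threshold `tau` are illustrative assumptions; this is not the PASE or PATE mechanism itself, just the kind of attack such defenses aim to blunt.

```python
import numpy as np

def membership_guess(softmax_scores, tau=0.9):
    """Confidence-thresholding membership inference (a minimal sketch).

    Models tend to be more confident on samples they were trained on,
    so a maximum softmax score above the threshold `tau` is taken as
    evidence that the sample was a training-set member.
    """
    scores = np.asarray(softmax_scores, dtype=float)
    return bool(scores.max(axis=-1) >= tau)
```

An attack succeeding "with confidence better than a random guess" means exactly that guesses like this one are correct on more than half of a balanced set of member and non-member samples.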
This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE. Light detection and ranging, or lidar, is a sensing technology based on laser light. It's similar to radar, but can have a higher resolution, since the wavelength of light is about 100,000 times smaller than radio wavelengths. For robots, this is very important: Since radar cannot accurately image small features, a robot equipped with only a radar module would have a hard time grasping a complex object.
Adversarial attacks against machine learning models are a rather hefty obstacle to our increasing reliance on these models. Due to this, provably robust (certified) machine learning models are a major topic of interest. Lipschitz continuous models present a promising approach to solving this problem. By leveraging the expressive power of a variant of neural networks which maintain low Lipschitz constants, we prove that three-layer neural networks using the FullSort activation function are Universal Lipschitz function Approximators (ULAs). This both explains experimental results and paves the way for the creation of better certified models going forward. We conclude by presenting experimental results that suggest that ULAs are not just a novelty, but a competitive approach to providing certified classifiers, using these results to motivate several potential topics of further research.
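The FullSort activation at the heart of this construction is simple to state: it sorts the layer's pre-activation vector. A minimal sketch, with the function name chosen here for illustration:

```python
import numpy as np

def full_sort(z):
    """FullSort activation: return the pre-activation vector sorted.

    Sorting only permutes coordinates, so the map is 1-Lipschitz in
    every l_p norm; combined with norm-constrained weight matrices,
    this keeps the whole network's Lipschitz constant bounded, which
    is what makes certified robustness guarantees possible.
    """
    return np.sort(np.asarray(z, dtype=float))
```

The 1-Lipschitz property can be checked directly: for any two inputs `x` and `y`, the distance between `full_sort(x)` and `full_sort(y)` never exceeds the distance between `x` and `y`.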