Compression Aware Certified Training

Xu, Changming, Singh, Gagandeep

arXiv.org Artificial Intelligence

Deep neural networks deployed in safety-critical, resource-constrained environments must balance efficiency and robustness. Existing methods treat compression and certified robustness as separate goals, compromising either efficiency or safety. We propose CACTUS (Compression Aware Certified Training Using network Sets), a general framework for unifying these objectives during training. CACTUS models maintain high certified accuracy even when compressed. We apply CACTUS for both pruning and quantization and show that it effectively trains models which can be efficiently compressed while maintaining high accuracy and certifiable robustness. CACTUS achieves state-of-the-art accuracy and certified performance for both pruning and quantization on a variety of datasets and input specifications.
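The core idea of training so that the model stays accurate after compression can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: a single linear model whose training loss sums the error of the dense weights and of a magnitude-pruned copy, so the weights that survive pruning carry the predictive signal.

```python
def prune(w, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping a keep_ratio fraction."""
    k = max(1, int(len(w) * keep_ratio))
    kept = set(sorted(range(len(w)), key=lambda i: -abs(w[i]))[:k])
    return [w[i] if i in kept else 0.0 for i in range(len(w))]

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def combined_loss(w, x, y, keep_ratio=0.5):
    """Squared error of the dense weights plus that of their pruned copy."""
    dense = (predict(w, x) - y) ** 2
    pruned = (predict(prune(w, keep_ratio), x) - y) ** 2
    return dense + pruned

def train_step(w, x, y, lr=0.05, keep_ratio=0.5):
    """Gradient step on the combined loss (pruning mask held fixed per step)."""
    mask = [1.0 if wi != 0.0 else 0.0 for wi in prune(w, keep_ratio)]
    err_d = predict(w, x) - y
    err_p = predict([wi * mi for wi, mi in zip(w, mask)], x) - y
    return [wi - lr * (2 * err_d * xi + 2 * err_p * xi * mi)
            for wi, xi, mi in zip(w, x, mask)]

w, x, y = [0.5, -0.2, 0.1, 0.8], [1.0, 2.0, -1.0, 0.5], 3.0
start = combined_loss(w, x, y)
for _ in range(500):
    w = train_step(w, x, y)
```

After training on the combined objective, both the dense model and its pruned copy fit the target, which is the "maintains accuracy when compressed" property in its simplest form.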


Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs

Nurlanov, Zhakshylyk, Schmidt, Frank R., Bernard, Florian

arXiv.org Artificial Intelligence

As deep learning models continue to advance and are increasingly utilized in real-world systems, the issue of robustness remains a major challenge. Existing certified training methods produce models that achieve high provable robustness guarantees at certain perturbation levels. However, the main problem of such models is dramatically low standard accuracy, i.e., accuracy on clean, unperturbed data, which makes them impractical. In this work, we consider a more realistic perspective of maximizing the robustness of a model at certain levels of (high) standard accuracy. To this end, we propose a novel certified training method based on a key insight that training with adaptive certified radii helps to improve both the accuracy and robustness of the model, advancing state-of-the-art accuracy-robustness tradeoffs. We demonstrate the effectiveness of the proposed method on MNIST, CIFAR-10, and TinyImageNet datasets. Particularly, on CIFAR-10 and TinyImageNet, our method yields models with up to two times higher robustness, measured as the average certified radius over the test set, at the same levels of standard accuracy compared to baseline approaches.
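The adaptive-radius idea can be sketched as a simple per-example schedule. This is a hypothetical illustration of the concept only (the paper's actual radius assignment may differ): easy examples, those with a large classification margin, are trained with a radius near the full perturbation budget, while hard examples get a small one, rather than forcing a single fixed radius on every example.

```python
def adaptive_radius(margin, eps_max, tau=1.0):
    """Map a per-example margin to a training radius in [0, eps_max].

    margin <= 0 (misclassified): radius 0, train for accuracy only;
    margin >= tau (confidently correct): full budget eps_max.
    """
    return eps_max * min(max(margin / tau, 0.0), 1.0)

radii = [adaptive_radius(m, eps_max=0.3) for m in (-0.5, 0.2, 0.8, 2.0)]
```

Spending the certification budget where the model can afford it is what lets the tradeoff curve move: clean accuracy is not sacrificed on examples the model barely classifies correctly.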


WordSig: QR streams enabling platform-independent self-identification that's impossible to deepfake

Critch, Andrew

arXiv.org Artificial Intelligence

Deepfakes can degrade the fabric of society by limiting our ability to trust video content from leaders, authorities, and even friends. Cryptographically secure digital signatures may be used by video streaming platforms to endorse content, but these signatures are applied by the content distributor rather than the participants in the video. We introduce WordSig, a simple protocol allowing video participants to digitally sign the words they speak using a stream of QR codes, and allowing viewers to verify the consistency of signatures across videos. This allows establishing a trusted connection between the viewer and the participant that is not mediated by the content distributor. Given the widespread adoption of QR codes for distributing hyperlinks and vaccination records, and the increasing prevalence of celebrity deepfakes, 2022 or later may be a good time for public figures to begin using and promoting QR-based self-authentication tools.
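The protocol's key mechanic, chaining a signature over each chunk of spoken words so a viewer can verify the whole stream, can be sketched as follows. Assumptions in this sketch: the chunking-by-word-count, the "genesis" seed, and the use of HMAC are all stand-ins chosen to keep the example dependency-free; the real protocol would need a public-key signature scheme (e.g., Ed25519) so viewers can verify without holding the signer's secret key. Each `(text, signature)` pair would be rendered as one QR code in the video.

```python
import hashlib
import hmac

def sign_stream(words, key, chunk_size=4):
    """Sign consecutive word chunks, chaining each signature to the last."""
    payloads = []
    prev = b"genesis"
    for i in range(0, len(words), chunk_size):
        chunk = " ".join(words[i:i + chunk_size]).encode()
        sig = hmac.new(key, prev + chunk, hashlib.sha256).hexdigest()
        payloads.append((chunk.decode(), sig))  # one QR code per pair
        prev = bytes.fromhex(sig)
    return payloads

def verify_stream(payloads, key):
    """Check every chunk's signature and the chain linking them in order."""
    prev = b"genesis"
    for text, sig in payloads:
        expect = hmac.new(key, prev + text.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expect):
            return False
        prev = bytes.fromhex(sig)
    return True
```

The chaining is what makes splicing detectable: editing, reordering, or replacing any chunk breaks every signature after it.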


Computing Rule-Based Explanations of Machine Learning Classifiers using Knowledge Graphs

Dervakos, Edmund, Menis-Mastromichalakis, Orfeas, Chortaras, Alexandros, Stamou, Giorgos

arXiv.org Artificial Intelligence

The use of symbolic knowledge representation and reasoning as a way to resolve the lack of transparency of machine learning classifiers is a research area that has lately attracted many researchers. In this work, we use knowledge graphs as the underlying framework providing the terminology for representing explanations for the operation of a machine learning classifier. In particular, given a description of the application domain of the classifier in the form of a knowledge graph, we introduce a novel method for extracting and representing black-box explanations of its operation, in the form of first-order logic rules expressed in the terminology of the knowledge graph.
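A toy version of the idea, on entirely hypothetical data and much simpler than the paper's method, is to describe a black box's positive predictions by the knowledge-graph classes its positive instances share, and read that intersection as a first-order rule body.

```python
# Hypothetical toy data: instance -> classes from the knowledge graph's terminology.
kg_types = {
    "img1": {"Animal", "Mammal", "Cat"},
    "img2": {"Animal", "Mammal", "Dog"},
    "img3": {"Animal", "Bird"},
}
predictions = {"img1": 1, "img2": 1, "img3": 0}  # black-box classifier outputs

def extract_rule(kg_types, predictions):
    """Classes shared by every positively classified instance form the rule body."""
    positives = [kg_types[x] for x, y in predictions.items() if y == 1]
    body = set.intersection(*positives) if positives else set()
    return body  # read as: Animal(x) AND Mammal(x) -> PositiveClass(x)

rule = extract_rule(kg_types, predictions)
```

Here the extracted body `{Animal, Mammal}` explains the classifier in the knowledge graph's own vocabulary, without inspecting the model's internals, which is the black-box setting the abstract describes.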


CERT: Contrastive Self-supervised Learning for Language Understanding

Fang, Hongchao, Wang, Sicheng, Zhou, Meng, Ding, Jiayuan, Xie, Pengtao

arXiv.org Machine Learning

Pretrained language models such as BERT and GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive self-supervised Encoder Representations from Transformers, which pretrains language representation models using contrastive self-supervised learning at the sentence level. CERT creates augmentations of original sentences using back-translation. Then it finetunes a pretrained language encoder (e.g., BERT) by predicting whether two augmented sentences originate from the same sentence. CERT is simple to use and can be flexibly plugged into any pretraining-finetuning NLP pipeline. We evaluate CERT on 11 natural language understanding tasks in the GLUE benchmark, where CERT outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks, and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks, CERT outperforms BERT. The data and code are available at https://github.com/UCSD-AI4H/CERT
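The sentence-level pretext task can be sketched as pair construction plus binary labels. Two stand-in assumptions keep this runnable: `augment` swaps adjacent words in place of real back-translation, and the encoder itself is omitted; the point is how positive pairs (two augmentations of the same sentence) and negative pairs (augmentations of different sentences) are labeled for the same-origin prediction task.

```python
import random

def augment(sentence, rng):
    """Stand-in for back-translation: swap one adjacent word pair."""
    words = sentence.split()
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def make_pairs(sentences, rng):
    """Label 1 if both augmentations come from the same original sentence."""
    pairs = []
    for i, s in enumerate(sentences):
        pairs.append((augment(s, rng), augment(s, rng), 1))        # positive
        other = sentences[(i + 1) % len(sentences)]
        pairs.append((augment(s, rng), augment(other, rng), 0))    # negative
    return pairs

rng = random.Random(0)
pairs = make_pairs(["the cat sat down", "dogs bark loudly today"], rng)
```

Finetuning the encoder on these labels pushes representations of the same underlying sentence together and different sentences apart, which is the sentence-level signal the abstract says token-level objectives miss.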


Second-Order Provable Defenses against Adversarial Attacks

Singla, Sahil, Feizi, Soheil

arXiv.org Machine Learning

A robustness certificate is the minimum distance of a given input to the decision boundary of the classifier (or its lower bound). For {\it any} input perturbations with a magnitude smaller than the certificate value, the classification output will provably remain unchanged. Exactly computing the robustness certificates for neural networks is difficult since it requires solving a non-convex optimization. In this paper, we provide computationally-efficient robustness certificates for neural networks with differentiable activation functions in two steps. First, we show that if the eigenvalues of the Hessian of the network are bounded, we can compute a robustness certificate in the $l_2$ norm efficiently using convex optimization. Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network. We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness. Putting these results together leads to our proposed {\bf C}urvature-based {\bf R}obustness {\bf C}ertificate (CRC) and {\bf C}urvature-based {\bf R}obust {\bf T}raining (CRT). Our numerical results show that CRT leads to significantly higher certified robust accuracy compared to interval-bound propagation (IBP) based training. We achieve certified robust accuracy of 69.79\%, 57.78\% and 53.19\%, while IBP-based methods achieve 44.96\%, 44.74\% and 44.66\%, on 2-, 3- and 4-layer networks respectively on the MNIST dataset.
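A worked sketch of the kind of certificate the abstract describes, derived here from its stated ingredients rather than taken from the paper: if the margin function $f$ (logit gap to the runner-up class) has value $m > 0$ at $x$, gradient norm at most $g$, and Hessian eigenvalues bounded in magnitude by $K$ nearby, then $f(x + \delta) \ge m - g\|\delta\| - \tfrac{K}{2}\|\delta\|^2$, so the prediction cannot flip while the right-hand side stays positive.

```python
import math

def curvature_certificate(margin, grad_norm, curvature_bound):
    """Largest l2 radius r with margin - g*r - (K/2)*r^2 >= 0."""
    m, g, K = margin, grad_norm, curvature_bound
    if m <= 0:
        return 0.0  # already misclassified or on the boundary
    if K == 0:
        return m / g  # linear case: plain margin over gradient norm
    # Positive root of the quadratic m - g*r - (K/2)*r^2 = 0.
    return (-g + math.sqrt(g * g + 2.0 * K * m)) / K
```

The closed form also shows why the paper regularizes curvature during training: for a fixed margin and gradient norm, a smaller Hessian bound $K$ directly yields a larger certified radius.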


Using Machine Learning to Detect IP Hijacking - Schneier on Security

#artificialintelligence

That is, it is not enough to detect a crime; you need to find the guilty persons, or at least their trail. A more technical solution to BGP would at least resolve to a given point, such as a PubKey certificate, chain, etc. But currently there can be various reasons to mount BGP attacks. Both of these can be done for opposite reasons. Take the case where a SigInt agency wants to look at traffic.


Relaxing and Restraining Queries for OBDA

Andreşel, Medina, Ibáñez-García, Yazmin, Ortiz, Magdalena, Šimkus, Mantas

arXiv.org Artificial Intelligence

In ontology-based data access (OBDA), ontologies have been successfully employed for querying possibly unstructured and incomplete data. In this paper, we advocate using ontologies not only to formulate queries and compute their answers, but also for modifying queries by relaxing or restraining them, so that they can retrieve either more or fewer answers over a given dataset. Towards this goal, we first illustrate that some domain knowledge that could be naturally leveraged in OBDA can be expressed using complex role inclusions (CRI). Queries over ontologies with CRI are not first-order (FO) rewritable in general. We propose an extension of DL-Lite with CRI, and show that conjunctive queries over ontologies in this extension are FO rewritable. Our main contribution is a set of rules to relax and restrain conjunctive queries (CQs). Firstly, we define rules that use the ontology to produce CQs that are relaxations/restrictions over any dataset. Secondly, we introduce a set of data-driven rules, that leverage patterns in the current dataset, to obtain more fine-grained relaxations and restrictions.
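The relax/restrain intuition can be shown on a toy example with an entirely hypothetical ontology: replacing a class atom in a query with its superclass retrieves more answers (relaxation), while replacing it with a subclass retrieves fewer (restriction). This illustrates only the single-atom, class-hierarchy case, not the paper's full CQ rewriting rules or CRI.

```python
# Hypothetical ontology: subclass -> superclass, plus instance assertions.
superclass = {"Professor": "Teacher", "Teacher": "Person"}
instances = {"anna": "Professor", "ben": "Teacher", "carl": "Person"}

def classes_of(individual):
    """All classes an individual belongs to, following superclass edges."""
    c = instances[individual]
    result = {c}
    while c in superclass:
        c = superclass[c]
        result.add(c)
    return result

def answers(query_class):
    """Certain answers to the one-atom query query_class(x)."""
    return {x for x in instances if query_class in classes_of(x)}

def relax(query_class):
    """One relaxation step: move the query atom up the class hierarchy."""
    return superclass.get(query_class, query_class)
```

Each relaxation step is monotone: the answer set can only grow, which is the guarantee one wants when a query over an incomplete dataset returns too few results.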


Games reviews roundup: Mario & Luigi: Superstar Saga; Knack 2; Ruiner

The Guardian

The game that launched the Mario & Luigi role-playing series in 2003 returns, and it's just as much fun to play now as it was then. No need to worry if you didn't play the original on the Game Boy Advance (or, indeed, hadn't been born) back in 2003: the 3DS version is all that's needed. It has exactly the same simple yet surprisingly subtle game mechanics, silly story and occasionally hilarious dialogue. All wrapped up in better sound, with lovely graphics rebuilt from scratch (which does make the lack of any 3D elements slightly surprising). Mario and Luigi scurry around, getting into (avoidable) fights, getting out of them in rather better shape if they time their jumps properly, and levelling up along the way. And it's all done with three buttons: one for Mario, one for Luigi, and one for both of them at the same time.


Games reviews roundup: Mass Effect: Andromeda; Voez; Ghost Blade HD

The Guardian

PS4, Xbox One, PC, EA, cert: 16 Despite the lofty reputation that the original Mass Effect trilogy (2007-12) has garnered, it's crucial to remember that those games had no shortage of bugs, errors and glitches on release. Bearing this in mind will make the failings of Andromeda far more palatable. Chiefly, those irritants are in the domain of animation, with characters' facial features and physical movements feeling wooden and unnatural. These are real problems in a game where relationships are central to an investment in the universe. Set 600 years after the events of Mass Effect, you play either Sara or Scott Ryder, helping guide an ark vessel to a new home world in the Andromeda Galaxy, where new threats await.