Korbar, Bruno
Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization
Korbar, Bruno, Tran, Du, Torresani, Lorenzo
There is a natural correlation between the visual and auditory elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further fine-tuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.
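The abstract credits a contrastive loss over audio-video pairs as a key ingredient. A minimal sketch of such a loss, assuming Euclidean distances between paired embeddings and an illustrative margin of 2.0 (the function and variable names here are hypothetical, not taken from the paper):

```python
import numpy as np

def contrastive_sync_loss(video_emb, audio_emb, is_synced, margin=2.0):
    # Euclidean distance between each video/audio embedding pair
    d = np.linalg.norm(video_emb - audio_emb, axis=1)
    # Synchronized (positive) pairs are pulled together via squared
    # distance; unsynchronized (negative) pairs are pushed apart
    # until they are at least `margin` away.
    pos = is_synced * d ** 2
    neg = (1 - is_synced) * np.maximum(margin - d, 0.0) ** 2
    return float(np.mean(pos + neg))

# Toy batch: two synchronized pairs (identical embeddings) and one
# mismatched, unsynchronized pair.
v = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
a = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, 0.0])
loss = contrastive_sync_loss(v, a, y)
```

The margin only penalizes negatives that are closer than `margin`, which is why the choice of negative examples (how "hard" they are) matters, as the abstract emphasizes.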
Validating an Agent-Based Model of Human Password Behavior
Korbar, Bruno (Dartmouth College) | Blythe, Jim (University of Southern California) | Koppel, Ross (University of Pennsylvania) | Kothari, Vijay (Dartmouth College) | Smith, Sean W. (Dartmouth College)
The valuation of a given security policy is often predicated upon assumptions that fail in practice (e.g., (Blythe, Koppel, and Smith 2013)). For example, a plethora of password discussions begin with the password paradox: users must pick strong passwords, so strong that the average user cannot remember them, yet they must never be written down. [...] varying extent to which a compromised account at one service can escalate to compromise accounts on other services further complicates matters. And we're just scratching the surface. In such complex environments, a mathematical analysis of security can quickly become unwieldy, while a [...]