If Facebook has an unofficial slogan, an equivalent to Google's "Don't Be Evil" or Apple's "Think Different," it is "Move Fast and Break Things." It means, at least in theory, that one should iterate to try new things and not be afraid of the possibility of failure. In 2021, however, with social media being blamed for a plethora of societal ills, the phrase should, perhaps, be modified to "Move Fast and Fix Things." One of the many things social media, not just Facebook, has been pilloried for is the spread of certain images online. It's a challenging problem by any stretch of the imagination: some 4,000 photos are uploaded to Facebook every single second.
Facebook has created an artificial intelligence system that may make it much more efficient for companies to train such software for a range of computer vision tasks, from facial recognition to functions needed for self-driving cars. The company unveiled the new system in a series of blog posts Thursday. Today, training machine-learning systems for such tasks often requires hundreds of thousands or even millions of labeled examples.
Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a controlled environment, that is, the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 with access to only 10% of ImageNet. Code: https://github.com/facebookresearch/vissl
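The methods the abstract names (MoCo, SimCLR, and others) share one core idea: pull two augmented views of the same image together in embedding space while pushing apart views of different images, with no labels involved. The sketch below illustrates that principle with an NT-Xent-style contrastive loss in plain NumPy. It is a minimal illustration, not SEER's actual objective (SEER is pretrained with SwAV's online clustering loss), and all names here are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i of z1 and row i of z2 form a positive pair; every other row in
    the combined batch acts as a negative. No labels are used anywhere.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                   # (2N, 2N) similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    # Positive index for sample i is i+n (and vice versa).
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # Cross-entropy per row: -log softmax at the positive's index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

If the two views of each image embed close together, the loss is small; for unrelated embeddings it stays high, which is what drives the encoder to learn label-free structure.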
At a time when many versions of AI rely on pre-established data sets for image recognition, Facebook has developed SEER (SElf-supERvised) – a deep learning system able to learn from images on the Internet independent of curated and labeled data sets. With major advances already underway in natural language processing (NLP), including machine translation, natural language inference and question answering, SEER uses an innovative billion-parameter, self-supervised computer vision model able to learn from any online image. Thus far, the Facebook AI team has trained SEER on one billion uncurated and unlabeled public Instagram images. The new program performed better than the most advanced self-supervised systems on downstream tasks such as low-shot learning, object detection, image classification and segmentation. In fact, with exposure to only 10 percent of the ImageNet data set, SEER still reached 77.9 percent top-1 accuracy.
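The 10-percent figure refers to a low-shot evaluation protocol: the pretrained backbone is frozen, and a simple classifier is fit on its features using only a fraction of the available labels. The sketch below shows that protocol in miniature with a softmax linear probe in NumPy; the function, hyperparameters, and synthetic features are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def linear_probe(features, labels, frac=0.1, epochs=200, lr=0.5, seed=0):
    """Low-shot linear-probe sketch: train a softmax classifier on a
    `frac` subsample of labeled (frozen) features, report accuracy on
    the held-out remainder."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    k = max(1, int(frac * len(labels)))           # e.g. 10% of the labels
    train, test = idx[:k], idx[k:]
    X, y = features[train], labels[train]
    n_cls = int(labels.max()) + 1
    W = np.zeros((features.shape[1], n_cls))
    Y = np.eye(n_cls)[y]                          # one-hot targets
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)          # gradient step
    pred = (features[test] @ W).argmax(axis=1)
    return (pred == labels[test]).mean()
```

The point of the protocol is that if the pretrained features already separate the classes well, even a tiny labeled subset suffices for a strong classifier.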
As impressively capable as AI systems are these days, teaching machines to perform various tasks, whether it's translating speech in real time or accurately differentiating between chihuahuas and blueberry muffins, still involves a good deal of hand-holding and data curation by the humans training them. However, the emergence of self-supervised learning (SSL) methods, which have already revolutionized natural language processing, could hold the key to imbuing AI with some much-needed common sense. Facebook's AI research division (FAIR) has now applied SSL to computer vision training at this scale. "We've developed SEER (SElf-supERvised), a new billion-parameter self-supervised computer vision model that can learn from any random group of images on the internet, without the need for careful curation and labeling that goes into most computer vision training today," Facebook AI researchers wrote in a blog post Thursday.