
Why Computers Don't Need to Match Human Intelligence


Speech and language are central to human intelligence, communication, and cognitive processes. Understanding natural language is often viewed as the greatest AI challenge--one that, if solved, could take machines much closer to human intelligence. In 2019, Microsoft and Alibaba announced that they had built enhancements to a Google technology that beat humans in a natural language processing (NLP) task called reading comprehension. The news drew little attention, but I considered it a major breakthrough because I remembered what had happened four years earlier. In 2015, researchers from Microsoft and Google developed systems based on Geoffrey Hinton's and Yann LeCun's inventions that beat humans in image recognition.

Facebook says its new Instagram-trained A.I. represents a big leap forward for computer vision – Fortune


Facebook has created an artificial intelligence system that may make it much more efficient for companies to train such software for a range of computer vision tasks, from facial recognition to functions needed for self-driving cars. The company unveiled the new system in a series of blog posts Thursday. Today, training machine-learning systems for such tasks often requires hundreds of thousands or even millions of labeled examples.

AI: Facebook's new algorithm was trained on one billion Instagram pics


Facebook's researchers have unveiled a new AI model that can learn from any random group of unlabeled images on the internet, in a breakthrough that, although still in its early stages, the team expects to generate a "revolution" in computer vision. Dubbed SEER (SElf-SupERvised), the model was fed one billion publicly available Instagram images, which had not previously been manually curated. But even without the labels and annotations that typically go into algorithm training, SEER was able to autonomously work its way through the dataset, learning as it went, and eventually achieving top levels of accuracy on tasks such as object detection. The method, aptly named self-supervised learning, is already well-established in the field of AI: it consists of creating systems that can learn directly from the information they are given, without having to rely on carefully labeled datasets to teach them how to perform a task such as recognizing an object in a photo or translating a block of text.
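The key idea above is that the training signal comes from the data itself rather than from human annotators. A minimal sketch of one classic self-supervised pretext task, rotation prediction, illustrates this (the function name and the random arrays standing in for uncurated photos are illustrative assumptions, not SEER's actual method, which uses contrastive learning at far larger scale):

```python
import numpy as np

# Minimal sketch of a self-supervised pretext task (rotation prediction).
# Labels are derived from the data itself: each unlabeled "image" is
# rotated by 0/90/180/270 degrees, and the rotation index becomes the
# training label -- no human annotation is needed.

rng = np.random.default_rng(0)

def make_pretext_dataset(unlabeled_images):
    """Turn unlabeled images into (input, label) pairs for free."""
    inputs, labels = [], []
    for img in unlabeled_images:
        for k in range(4):                  # four possible rotations
            inputs.append(np.rot90(img, k))
            labels.append(k)                # label = rotation applied
    return np.stack(inputs), np.array(labels)

# 100 random 8x8 arrays standing in for uncurated, unlabeled photos.
unlabeled = rng.normal(size=(100, 8, 8))
X, y = make_pretext_dataset(unlabeled)

print(X.shape, y.shape)   # (400, 8, 8) (400,)
```

A network trained to predict the rotation must learn visual structure (orientation cues, object shapes) to succeed, and those learned features then transfer to downstream tasks such as object detection, which is the payoff the article describes.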

La veille de la cybersécurité


Imagine having an artificial intelligence (AI) system that is capable of mimicking human language and intelligence. Given AI's capabilities, it seems simple, right? Yet despite recent advancements in AI, especially in natural language processing (NLP) and computer vision applications, mastering the unique complexities of human language continues to be one of AI's biggest challenges. According to IDC, worldwide revenues for the AI market are forecast to grow 16.4 percent year over year in 2021, and the market is expected to break the $500 billion mark by 2024. As companies continue to develop and deploy AI solutions to automate processes, solve complex problems and enhance customer experiences, many are realizing its shortcomings -- including the amount of data required to train machine learning (ML) algorithms and the limited flexibility of those algorithms in understanding human language.

Big Tech & Their Favourite Deep Learning Techniques


Even though the big tech companies and research institutions seem to have their hands in every possible area within deep learning, a clear pattern is emerging with time: each has picked a favourite school of thought. For instance, Facebook AI Research (FAIR) has been championing self-supervised learning (SSL) for quite some time, releasing papers and tools for computer vision, image, text, video, and audio understanding. In this article, we will explore some of the recent work in each company's niche or popularised area.