Inductive Learning


The SwAV method

#artificialintelligence

In this post we discuss the SwAV (Swapping Assignments between multiple Views of the same image) method from the paper "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" by M. Caron et al. For those interested in coding, several repositories implementing the SwAV algorithm are available on GitHub; if in doubt, start with the repo referenced in the paper. Supervised learning works with labeled training data; for example, a supervised image classification algorithm needs a cat photo to be annotated with the label "cat". Self-supervised learning aims at obtaining features without using manual annotations. We will consider, in particular, visual features.
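To make the "swapped assignments" idea concrete, here is a minimal PyTorch sketch of the swapped prediction loss. This is our own illustration, not the authors' code: the names z1, z2 and prototypes are assumptions, and the Sinkhorn-Knopp step SwAV uses to compute cluster assignments is replaced by a plain softmax to keep the sketch short.

```python
# Minimal sketch of SwAV's swapped prediction loss (illustrative only).
# z1, z2: L2-normalized projections of two augmented views, shape (B, D).
# prototypes: learnable matrix of K cluster centers, shape (K, D).
import torch
import torch.nn.functional as F

def swapped_prediction_loss(z1, z2, prototypes, temperature=0.1):
    # Similarity of each embedding to each prototype.
    p1 = z1 @ prototypes.t() / temperature   # (B, K)
    p2 = z2 @ prototypes.t() / temperature

    # Codes: soft cluster assignments per view. SwAV obtains these with
    # the Sinkhorn-Knopp algorithm under an equipartition constraint;
    # a plain softmax stands in for it here.
    q1 = F.softmax(p1, dim=1).detach()
    q2 = F.softmax(p2, dim=1).detach()

    # "Swapped" prediction: each view predicts the other view's code.
    loss_1 = -(q2 * F.log_softmax(p1, dim=1)).sum(dim=1).mean()
    loss_2 = -(q1 * F.log_softmax(p2, dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_1 + loss_2)
```

Each view is asked to predict the cluster assignment (code) computed from the other view, which is what distinguishes SwAV from pairwise contrastive methods that compare embeddings directly.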


Self-Supervised Learning for Anomaly Detection in Python: Part 2

#artificialintelligence

Self-supervised learning is one of the most popular fields in modern deep-learning research. As Yann LeCun likes to say, self-supervised learning is "the dark matter of intelligence" and the way to create common sense in AI systems. The ideas and techniques of this paradigm attract many researchers who try to extend self-supervised learning into new research fields. Of course, anomaly detection is no exception. In Part 1 of this article, we discussed the definition of anomaly detection and a technique called Kernel Density Estimation.
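As a refresher on the Part 1 technique, here is a minimal KDE-based anomaly detection sketch using scikit-learn; the synthetic data, bandwidth, and 2% threshold are illustrative choices, not values from the article.

```python
# A minimal sketch of KDE-based anomaly detection: fit a density model on
# normal data and flag points that fall in low-density regions.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # inlier samples
outliers = rng.uniform(low=-6, high=6, size=(10, 2))     # scattered anomalies

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(normal)

X = np.vstack([normal, outliers])
log_density = kde.score_samples(X)                        # log p(x) per point

# Flag the lowest-density 2% of points as anomalies.
threshold = np.quantile(log_density, 0.02)
is_anomaly = log_density < threshold
print(f"Flagged {is_anomaly.sum()} of {len(X)} points as anomalies")
```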


Self-supervised Learning from 100 Million Medical Images

#artificialintelligence

Building accurate and robust artificial intelligence systems for medical image assessment requires not only the research and design of advanced deep learning models but also the creation of large and curated sets of annotated training examples. Constructing such datasets, however, is often very costly due to the complex nature of annotation tasks and the high level of expertise required for the interpretation of medical images (e.g., expert radiologists). To counter this limitation, we propose a method for self-supervised learning of rich image features based on contrastive learning and online feature clustering. We propose to use these features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT and MR: 1) Significant increase in accuracy compared to the state of the art (e.g., AUC boost of 3-7% for the detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); 2) Acceleration of model convergence during training by up to 85% (e.g., when training a model for the detection of brain metastases in MR scans); 3) Increase in robustness to various image augmentations, such as intensity variations, rotations or scaling, reflective of data variation seen in the field.
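The downstream recipe the abstract describes, initializing from a self-supervised encoder and then fine-tuning with labels, might look roughly like the following PyTorch sketch. The ResNet-50 backbone, feature dimension, and learning rates are our assumptions for illustration, not details from the paper.

```python
# Sketch of hybrid self-supervised/supervised fine-tuning (illustrative).
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Stand-in encoder; in the paper's setting the weights would come from
# contrastive + clustering pretraining on unlabeled medical images.
encoder = resnet50()
encoder.fc = nn.Identity()   # expose the 2048-d pooled features

head = nn.Linear(2048, 2)    # e.g., normal vs. abnormal study (hypothetical)
model = nn.Sequential(encoder, head)

# Discriminative learning rates (a common fine-tuning choice, assumed here):
# refine the pretrained features slowly while training the new head faster.
optimizer = torch.optim.AdamW([
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
```

Keeping the encoder's learning rate small is one way to refine, rather than overwrite, the pretrained features on the downstream task.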


When to Use Deep Learning

#artificialintelligence

Most tasks that consist of mapping an input vector to an output vector, and that are easy for a person to do rapidly, can be accomplished via deep learning, given sufficiently large models and sufficiently large datasets of labeled training examples.


3 Main Approaches to Machine Learning Models - KDnuggets

#artificialintelligence

In September 2018, I published a blog post about my forthcoming book on The Mathematical Foundations of Data Science. The central question we address is: how can we bridge the gap between the mathematics needed for Artificial Intelligence (Deep Learning and Machine Learning) and that taught in high schools (up to ages 17/18)? In this post, we present a chapter from the book called "A Taxonomy of Machine Learning Models." The book is being released chapter by chapter and is available now at an early-bird discount. If you are interested in getting early discounted copies, please contact ajit.jaokar at feynlabs.ai.


#008 Shallow Neural Network - Master Data Science

#artificialintelligence

In this post we will see how to vectorize computation across multiple training examples. The outcome will be similar to what we saw in Logistic Regression. These equations tell us how, given an input feature vector \( x \), we can generate predictions. If we have \( m \) training examples, we need to repeat this process \( m \) times. The notation \( a^{[2](i)} \) means the activation in the second layer that comes from the \( i^{\text{th}} \) training example. Vectorization lets us compute all \( m \) activations at once with matrix operations, as sketched below.
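Here is a minimal NumPy sketch of that vectorized forward pass (the layer sizes and data are illustrative): stacking the \( m \) examples as columns of a matrix \( X \) replaces the per-example loop with single matrix products.

```python
# Vectorized forward pass of a shallow (one-hidden-layer) network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, n_h, m = 3, 4, 5            # input features, hidden units, examples
rng = np.random.default_rng(0)

X  = rng.normal(size=(n_x, m))   # column i is the example x^{(i)}
W1 = rng.normal(size=(n_h, n_x)); b1 = np.zeros((n_h, 1))
W2 = rng.normal(size=(1, n_h));   b2 = np.zeros((1, 1))

Z1 = W1 @ X + b1                 # (n_h, m): column i is z^{[1](i)}
A1 = np.tanh(Z1)                 # hidden activations A^{[1]}
Z2 = W2 @ A1 + b2                # (1, m)
A2 = sigmoid(Z2)                 # column i is a^{[2](i)}, the prediction
```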



Driverless cars step closer to our roads with new self-learning AI technology

#artificialintelligence

Computer scientists from Lancaster University have developed new AI technology that takes autonomous cars a step closer to our roads. Funded by global car manufacturer Ford, the three-year research project provides a step-change in AI car technology by enabling autonomous cars to recognise new and unexpected situations. Around the world, many different automotive brands, computing companies and research teams are developing autonomous car technologies, and many of these use a machine learning technique called 'Deep Learning'. Deep Learning works by recognising patterns after the computer system has been shown a large number of different training examples. However, a fundamental drawback of Deep Learning algorithms is that they are unable to recognise scenarios that differ significantly from their training examples and, unlike humans, they are incapable of exploring, improving and improvising.



Meta's prototype moderation AI only needs a few examples of bad behavior to take action

Engadget

Moderating content on today's internet is akin to a round of Whack-A-Mole, with human moderators continually forced to react in real time to changing trends, such as vaccine mis- and disinformation or intentional bad actors probing for ways around established personal conduct policies. Machine learning systems can help alleviate some of this burden by automating the policy enforcement process; however, modern AI systems often require months of lead time to properly train and deploy (time mostly spent collecting and annotating the thousands, if not millions, of necessary examples). To shorten that response time to a matter of weeks rather than months, Meta's AI research group (formerly FAIR) has developed a more generalized technology, called Few-Shot Learner (FSL), that requires just a handful of specific examples in order to respond to new and emerging forms of malicious content. Few-shot learning is a relatively recent development in AI, essentially teaching the system to make accurate predictions based on a limited number of training examples -- quite the opposite of conventional supervised learning methods. For example, if you wanted to train a standard SL model to recognize pictures of rabbits, you'd feed it a couple hundred thousand rabbit pictures, and then you could present it with two images and ask if they both show the same animal. The thing is, the model doesn't know if the two pictures are of rabbits because it doesn't actually know what a rabbit is.
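To illustrate the few-shot idea in the simplest terms, here is a toy Python sketch that classifies a new example from only a handful of labeled ones by comparing embeddings to per-class mean "prototypes". This is a generic few-shot pattern, not Meta's FSL system, and the embed function is a random stand-in for a real pretrained encoder.

```python
# Toy few-shot classification via nearest prototype (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    # Placeholder: a real system would use a pretrained encoder here.
    return rng.normal(size=(len(texts), 64))

support = {                       # a handful of labeled examples per class
    "violating": embed(["example 1", "example 2", "example 3"]),
    "benign":    embed(["example 4", "example 5", "example 6"]),
}
# Prototype = mean embedding of each class's support examples.
prototypes = {label: embs.mean(axis=0) for label, embs in support.items()}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = embed(["new post to moderate"])[0]
# Predict the class whose prototype is most similar to the query.
pred = max(prototypes, key=lambda lbl: cosine(query, prototypes[lbl]))
print("predicted label:", pred)
```

Because the heavy lifting happens in the pretrained embedding space, adapting to a new policy only requires supplying a few labeled support examples rather than retraining the whole model.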