Transfer Learning


AI learning: will machines acquire knowledge as naturally as children do?

#artificialintelligence

Watching a child learn is an extraordinary experience. As a proud dad, it delights and inspires me, and as an artificial intelligence (AI) professional, it reminds me that our journey into machine learning (ML) has only just begun. What is particularly remarkable about babies and young children, of course, is that they learn incredibly quickly – drawing on building blocks of information and astounding us by picking things up naturally and incrementally. Is that too much to ask of machines? For now, the answer is yes.


All about Transfer Learning!

#artificialintelligence

Let's hop on to why we use Transfer Learning! In deep learning, people often prefer self-prepared, self-trained models: they tend to build their models from scratch, which is a perfectly reasonable approach. By building a model from scratch, you get complete control over it. You can dictate the play.


John Snow Labs Announces Free, Enterprise-Grade, No-Code Natural Language Processing Tools: Annotation Lab and NLP Server

#artificialintelligence

LEWES, Del., Oct. 05, 2021 (GLOBE NEWSWIRE) -- John Snow Labs, the Healthcare AI and NLP company and developer of the Spark NLP library, today announced that it will enable free access to its enterprise-grade Annotation Lab and NLP Server software for all users. This announcement comes on the first day of the company's annual NLP Summit, a free online event that brings together the AI community to discuss the most important trends, use cases, and solutions advancing natural language processing (NLP). The Annotation Lab, a robust data labeling and AI/ML solution for teams, enables users to annotate documents, images, and videos. The software automatically trains models using active learning and transfer learning. The simple and efficient project-based workflow helps users leverage real-time analytics on productivity, dataset bias, inter-annotator agreement, and more.


How does Transfer Learning work?

#artificialintelligence

The simple idea of transfer learning is this: after a neural network has learned one task, apply that knowledge to another, related task. It is a powerful idea in deep learning. Computer vision and natural language processing tasks require high computational cost and time, so we can simplify those tasks using transfer learning. For example, after we train a model on images to classify cars, we can reuse that model to recognize other vehicles such as trucks.
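
As an illustration of this idea, here is a minimal, hedged sketch in PyTorch (not taken from the article): a network pretrained on a large source task is frozen and given a new classification head for the related target task. The backbone choice, class count, and the torchvision weights API (which assumes a recent torchvision release) are illustrative assumptions.

```python
# A minimal sketch: reuse a pretrained backbone for a related vehicle task.
# The class count is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large source task (here ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the new task (e.g. car / truck / bus).
num_target_classes = 3  # hypothetical
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```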


Chapter 3: Transfer Learning with ResNet50 -- from Dataloaders to Training

#artificialintelligence

I was given X-ray baggage scan images by an airport to develop a model that performs automatic detection of dangerous objects (guns and knives). Given only a small number of X-ray images, I am using domain adaptation: first collecting a large number of normal (non-X-ray) images of dangerous objects from the internet, training a model using only those normal images, and then adapting the model to perform well on X-ray images. In my previous post, I talked about the iterative data collection process for web images of guns and knives to be used for domain adaptation. In this post, I will discuss transfer learning with ResNet50 using the scraped web images. For now, we won't worry about the X-ray images and will focus only on training the model with the web images. To read this post, it is recommended to have some knowledge of how to apply transfer learning in PyTorch using a model pre-trained on ImageNet. I won't explain every step in detail, but I will share some useful tips that answer common questions.
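
Since the post assumes familiarity with transfer learning from an ImageNet-pretrained model in PyTorch, here is a hedged sketch of that basic setup, not the author's exact code: the directory names, class count, and hyperparameters are illustrative assumptions.

```python
# Sketch of the dataloader-to-training pipeline for the scraped web images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# ImageNet-style preprocessing for the web images of dangerous objects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: web_images/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("web_images/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet50 pretrained on ImageNet, with a new head for the target classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Minimal fine-tuning loop over the web images.
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```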


GitHub - pykale/pykale: Knowledge-Aware machine LEarning (KALE) from multiple sources in Python

#artificialintelligence

"Very cool library with lots of great ideas on moving toward 'green', efficient multimodal machine learning and AI." – Kevin Carlberg, AI Research Science Manager at Facebook Reality Labs (quoted from a tweet). PyKale is a PyTorch library for multimodal learning and transfer learning with deep learning and dimensionality reduction on graphs, images, texts, and videos. By adopting a unified pipeline-based API design, PyKale enforces standardization and minimalism, via green machine learning concepts of reducing repetitions and redundancy, reusing existing resources, and recycling learning models across areas. PyKale aims to facilitate interdisciplinary, knowledge-aware machine learning research for graphs, images, texts, and videos in applications including bioinformatics, graph analysis, image/video recognition, and medical imaging.


Targeting Underrepresented Populations in Precision Medicine: A Federated Transfer Learning Approach

arXiv.org Machine Learning

The limited representation of minorities and disadvantaged populations in large-scale clinical and genomics research has become a barrier to translating precision medicine research into practice. Due to heterogeneity across populations, risk prediction models often underperform in these underrepresented populations, and may therefore further exacerbate known health disparities. In this paper, we propose a two-way data integration strategy that integrates heterogeneous data from diverse populations and from multiple healthcare institutions via a federated transfer learning approach. The proposed method can handle the challenging setting where sample sizes from different populations are highly unbalanced. With only a small number of communications across participating sites, the proposed method can achieve performance comparable to pooled analysis, where individual-level data are directly pooled together. We show that the proposed method improves estimation and prediction accuracy in underrepresented populations, and reduces the gap in model performance across populations. Our theoretical analysis reveals how estimation accuracy is influenced by communication budgets, privacy restrictions, and heterogeneity across populations. We demonstrate the feasibility and validity of our methods through numerical experiments and a real application to a multi-center study, in which we construct polygenic risk prediction models for Type II diabetes in the AA population.


Geometry Based Machining Feature Retrieval with Inductive Transfer Learning

arXiv.org Artificial Intelligence

Manufacturing industries have widely adopted the reuse of machine parts as a method to reduce costs and as a sustainable manufacturing practice. Identifying reusable features from the design of the parts and finding similar features in a database is an important part of this process. In this project, with the help of fully convolutional geometric features, we are able to extract and learn high-level semantic features from CAD models with inductive transfer learning. The extracted features are then compared with those of other CAD models in the database using the Frobenius norm, and identical features are retrieved. Later, we passed the extracted features to a deep convolutional neural network with a spatial pyramid pooling layer, and the performance of the feature retrieval increased significantly. It was evident from the results that the model could effectively capture the geometric elements of machining features.
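
As a rough illustration of the retrieval step described in this abstract, here is a minimal NumPy sketch (not the paper's code) that ranks database feature matrices by Frobenius-norm distance to a query; the feature shapes, database contents, and helper name are hypothetical.

```python
# Hypothetical sketch: rank stored CAD feature matrices by Frobenius-norm
# distance to a query feature matrix and return the closest matches.
import numpy as np

def frobenius_retrieval(query_features, database_features, top_k=5):
    """Return the top_k database entries closest to the query (assumes equal shapes)."""
    distances = [
        (name, np.linalg.norm(query_features - feats, ord="fro"))
        for name, feats in database_features.items()
    ]
    distances.sort(key=lambda item: item[1])
    return distances[:top_k]

# Example with random stand-in features (e.g. 128 points x 32-dim descriptors).
rng = np.random.default_rng(0)
database = {f"part_{i}": rng.normal(size=(128, 32)) for i in range(100)}
query = rng.normal(size=(128, 32))
print(frobenius_retrieval(query, database))
```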


What is Transfer Learning? -- Idiot Developer

#artificialintelligence

Transfer learning is a technique in machine learning where we reuse a pre-trained model to solve a different but related problem. It is one of the popular methods for training deep neural networks, and it is generally used for image classification tasks where the dataset is small. In this article, we will go through what transfer learning is, how it works, and the advantages it offers. We will also cover the most common problems related to it.
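
For readers who want a concrete picture before reading the article, here is a minimal sketch of transfer learning on a small image-classification dataset, written with Keras purely as an illustrative framework (not the article's code); the backbone choice, dataset path, and class count are assumptions.

```python
# Hypothetical sketch: a frozen pretrained backbone plus a small new head
# for a two-class problem with limited data.
import tensorflow as tf

# Pretrained backbone acts as a fixed feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# In real use, images should be preprocessed to match the backbone's expected
# input range; train_ds could be built with, e.g.,
# tf.keras.utils.image_dataset_from_directory("data/", image_size=(224, 224)).
# model.fit(train_ds, epochs=5)
```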


On the Opportunities and Risks of Foundation Models

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.