inductive transfer
Inductive Transfer Learning for Graph-Based Recommenders
Grötschla, Florian, Trachsel, Elia, Lanzendörfer, Luca A., Wattenhofer, Roger
Graph-based recommender systems are commonly trained in transductive settings, which limits their applicability to new users, items, or datasets. We propose NBF-Rec, a graph-based recommendation model that supports inductive transfer learning across datasets with disjoint user and item sets. Unlike conventional embedding-based methods that require retraining for each domain, NBF-Rec computes node embeddings dynamically at inference time. We evaluate the method on seven real-world datasets spanning movies, music, e-commerce, and location check-ins. NBF-Rec achieves competitive performance in zero-shot settings, where no target domain data is used for training, and demonstrates further improvements through lightweight fine-tuning. These results show that inductive transfer is feasible in graph-based recommendation and that interaction-level message passing supports generalization across datasets without requiring aligned users or items.
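The key idea above, computing node embeddings at inference time from the interaction graph rather than looking them up in a learned per-node table, can be sketched as follows. This is a minimal illustration of inference-time message passing, not NBF-Rec's actual architecture; the function name, normalization, and layer count are assumptions.

```python
import numpy as np

def message_passing_embed(adj, item_feats, num_layers=2):
    """Compute user embeddings at inference time by propagating shared
    item features over the interaction graph (no per-user embedding table,
    so unseen users and items need no retraining).

    adj        : (n_users, n_items) binary interaction matrix
    item_feats : (n_items, d) shared item feature matrix
    """
    # Normalize by user degree so embeddings are comparable across graphs.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    user_emb = adj @ item_feats / deg
    for _ in range(num_layers - 1):
        # One more hop: items aggregate users, users re-aggregate items.
        item_deg = adj.sum(axis=0, keepdims=True).clip(min=1)
        item_emb = (adj.T @ user_emb) / item_deg.T
        user_emb = adj @ item_emb / deg
    return user_emb

# A new dataset only changes the interaction matrix, not any learned table.
adj = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)
feats = np.eye(3)
emb = message_passing_embed(adj, feats)
```

Because nothing in the computation is tied to specific user or item identities, the same function applies unchanged to a target dataset with disjoint users and items, which is what makes the zero-shot setting possible.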
Review for NeurIPS paper: Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks
This paper addresses the problem of inductive transfer with one-hidden-layer neural networks or linear models and proposes minimax lower bounds for these models. Three reviewers and the AC agree that it is a well-written paper which studies an important problem. The proposed fine-grained minimax rate for transfer learning is a nice contribution to this field. Although the setting is somewhat simple, this work is inspiring for studying inductive transfer with neural networks. There are still some minor concerns about the organization of the paper and the evaluation of the proposed lower bound, which should be fully addressed in the camera-ready version.
PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
Myung, Sanghoon, Huh, In, Jang, Wonik, Choe, Jae Myung, Ryu, Jisu, Kim, Dae Sin, Kim, Kee-Eung, Jeong, Changwook
Inductive transfer learning aims to learn from a small amount of training data for the target task by utilizing a pre-trained model from the source task. Most strategies that involve large-scale deep learning models adopt initialization with the pre-trained model and fine-tuning for the target task. However, when using over-parameterized models, we can often prune the model without sacrificing the accuracy of the source task. This motivates us to adopt model pruning for transfer learning with deep learning models. In this paper, we propose PAC-Net, a simple yet effective approach for transfer learning based on pruning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate (PAC). The main idea behind these steps is to identify essential weights for the source task, fine-tune on the source task by updating the essential weights, and then calibrate on the target task by updating the remaining redundant weights. Across a varied and extensive set of inductive transfer learning experiments, we show that our method achieves state-of-the-art performance by a large margin.
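The Prune/Allocate/Calibrate split described above can be sketched with boolean masks over a weight matrix. This is a toy illustration, not the paper's implementation: the sparsity ratio, learning rate, and the stand-in gradient are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Prune: keep the largest-magnitude weights as source-essential. ---
w = rng.normal(size=(8, 8))            # pretrained source weights
k = int(0.5 * w.size)                  # hypothetical 50% kept
thresh = np.sort(np.abs(w), axis=None)[-k]
essential = np.abs(w) >= thresh        # mask of source-essential weights

# --- Allocate: essential weights serve the source task; the redundant ---
# --- remainder is reserved for the target task. ---
redundant = ~essential

# --- Calibrate (sketched): only redundant weights receive target-task ---
# --- updates, so the source knowledge in essential weights is preserved. ---
grad = rng.normal(size=w.shape)        # stand-in for a target-task gradient
w_new = w - 0.01 * grad * redundant    # masked update

assert np.allclose(w_new[essential], w[essential])  # source weights untouched
```

The masked update is the core mechanism: a single set of parameters serves both tasks because the two tasks write to disjoint subsets of weights.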
Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions
Pugantsov, Alexander, McCreadie, Richard
Transfer learning approaches have been shown to significantly improve performance on downstream tasks. However, it is common for prior works to only report where transfer learning was beneficial, ignoring the significant trial-and-error required to find effective settings for transfer. Indeed, not all task combinations lead to performance benefits, and brute-force searching rapidly becomes computationally infeasible. Hence the question arises: can we predict whether transfer between two tasks will be beneficial without actually performing the experiment? In this paper, we leverage explainability techniques to effectively predict whether task pairs will be complementary, through comparison of neural network activations between single-task models. In this way, we can avoid grid searches over all task and hyperparameter combinations, dramatically reducing the time needed to find effective task pairs. Our results show that, through this approach, it is possible to reduce training time by up to 83.5% at a cost of only a 0.034 reduction in positive-class F1 on the TREC-IS 2020-A dataset.
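The idea of comparing single-task models to predict task compatibility can be sketched by scoring the similarity of their per-feature attribution vectors. This is a hypothetical proxy for illustration; the function name and cosine-similarity choice are assumptions, not the paper's exact procedure.

```python
import numpy as np

def transfer_affinity(attr_a, attr_b):
    """Cosine similarity between per-feature attribution vectors of two
    single-task models: task pairs that rely on the same features score
    high, suggesting (under this sketch's assumption) that transfer
    between them is more likely to help."""
    a = attr_a / np.linalg.norm(attr_a)
    b = attr_b / np.linalg.norm(attr_b)
    return float(a @ b)

# Tasks attending to the same features score high; disjoint ones score low.
task1 = np.array([0.9, 0.8, 0.1, 0.0])
task2 = np.array([0.8, 0.9, 0.0, 0.1])
task3 = np.array([0.0, 0.1, 0.9, 0.8])
assert transfer_affinity(task1, task2) > transfer_affinity(task1, task3)
```

The payoff is that this score is computed from already-trained single-task models, so candidate pairs can be ranked without running any transfer experiment.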
Geometry Based Machining Feature Retrieval with Inductive Transfer Learning
Kamal, N S, Barathi Ganesh, HB, Sajith Variyar, VV, Sowmya, V, Soman, KP
Manufacturing industries have widely adopted the reuse of machine parts as a method to reduce costs and as a sustainable manufacturing practice. Identifying reusable features from the design of the parts and finding similar features in the database is an important part of this process. In this project, we extract and learn high-level semantic features from CAD models using fully convolutional geometric features with inductive transfer learning. The extracted features are then compared with those of other CAD models in the database using the Frobenius norm, and identical features are retrieved. We then passed the extracted features to a deep convolutional neural network with a spatial pyramid pooling layer, and retrieval performance increased significantly. The results show that the model can effectively capture the geometrical elements of machining features.
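The retrieval step described above, ranking database models by Frobenius-norm distance between feature matrices, can be sketched as follows. The feature extraction itself is omitted, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def retrieve_similar(query_feats, database, top_k=1):
    """Rank CAD models by the Frobenius norm of the difference between
    their learned feature matrices; smaller distance means more similar
    machining features (a sketch of the retrieval step only)."""
    dists = [np.linalg.norm(query_feats - f, ord="fro") for f in database]
    order = np.argsort(dists)
    return order[:top_k], [dists[i] for i in order[:top_k]]

# A near-duplicate of the query should be retrieved ahead of a random model.
rng = np.random.default_rng(1)
query = rng.normal(size=(4, 16))
db = [rng.normal(size=(4, 16)), query + 0.01 * rng.normal(size=(4, 16))]
idx, _ = retrieve_similar(query, db)
```

Using a matrix norm over the whole feature map, rather than a single pooled vector, keeps the comparison sensitive to where features occur as well as which features occur.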
Inductive Transfer for Neural Architecture Optimization
Wistuba, Martin, Pedapati, Tejaswini
The recent advent of automated neural network architecture search has led to several methods that outperform state-of-the-art human-designed architectures. However, these approaches are computationally expensive, in extreme cases consuming GPU years. We propose two novel methods which aim to expedite this optimization problem by transferring knowledge acquired from previous tasks to new ones. First, we propose a novel neural architecture selection method which employs this knowledge to identify strong and weak characteristics of neural architectures across datasets. Thus, these characteristics do not need to be rediscovered in every search, a major weakness of current state-of-the-art searches. Second, we propose a method for learning curve extrapolation to determine if a training process can be terminated early. In contrast to existing work, we propose to learn from learning curves of architectures trained on other datasets to improve the prediction accuracy for novel datasets. On five different image classification benchmarks, we empirically demonstrate that both of our orthogonal contributions independently lead to an acceleration, without any significant loss in accuracy.
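The second contribution, cross-dataset learning curve extrapolation, can be sketched with a nearest-neighbour predictor: match the prefix of a new, partially observed curve against fully observed curves from other datasets and report the best match's final value. This is a deliberately simple stand-in, not the paper's actual extrapolation model.

```python
import numpy as np

def extrapolate_final(partial, reference_curves, prefix_len=None):
    """Predict the final value of a partially observed learning curve by
    matching its prefix against curves of architectures trained on other
    datasets and returning the closest curve's final value (a
    nearest-neighbour sketch of cross-dataset extrapolation)."""
    n = len(partial) if prefix_len is None else prefix_len
    dists = [np.linalg.norm(np.asarray(c[:n]) - np.asarray(partial[:n]))
             for c in reference_curves]
    best = int(np.argmin(dists))
    return reference_curves[best][-1]

# Fully observed curves from other datasets vs. a new, partially trained run.
refs = [[0.2, 0.4, 0.5, 0.55, 0.56], [0.3, 0.6, 0.75, 0.8, 0.82]]
pred = extrapolate_final([0.29, 0.61, 0.74], refs)  # matches the second curve
```

If the predicted final accuracy is well below the incumbent's, training of the candidate architecture can be terminated early, which is where the search-time savings come from.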