transfer
Supervising the Transfer of Reasoning Patterns in VQA
Methods for Visual Question Answering (VQA) are notorious for leveraging dataset biases rather than performing reasoning, hindering generalization. It has recently been shown that better reasoning patterns emerge in the attention layers of a state-of-the-art VQA model when it is trained on perfect (oracle) visual inputs. This provides evidence that deep neural networks can learn to reason when training conditions are favorable enough. However, transferring this learned knowledge to deployable models is a challenge, as much of it is lost during the transfer. We propose a method for knowledge transfer based on a regularization term in our loss function, supervising the sequence of required reasoning operations. We provide a theoretical analysis based on PAC-learning, showing that such program prediction can lead to decreased sample complexity under mild hypotheses. We also demonstrate the effectiveness of this approach experimentally on the GQA dataset and show its complementarity to BERT-like self-supervised pre-training.
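The regularization described above can be sketched as an auxiliary loss combined with the standard answer-classification loss. The function names, tensor shapes, and the weighting scheme below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, targets):
    # Mean negative log-likelihood of the target classes.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(targets)), targets]))

def combined_loss(answer_logits, answer_targets,
                  program_logits, program_targets, lam=0.5):
    """Total loss = answer loss + lam * program-supervision loss.

    answer_logits:  (batch, n_answers) predicted answer scores.
    program_logits: (batch * n_steps, n_ops) predicted reasoning
                    operations, flattened over the program sequence.
    lam is a hypothetical regularization weight.
    """
    return (cross_entropy(answer_logits, answer_targets)
            + lam * cross_entropy(program_logits, program_targets))
```

Here the program-supervision term penalizes deviation from the ground-truth sequence of reasoning operations, which is the role the abstract assigns to the regularizer.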
Transfer learning under latent space model
Kuangnan Fang, Ruixuan Qin, Xinyan Fan
Latent space model plays a crucial role in network analysis, and accurate estimation of latent variables is essential for downstream tasks such as link prediction. However, the large number of parameters to be estimated presents a challenge, especially when the latent space dimension is not exceptionally small. In this paper, we propose a transfer learning method that leverages information from networks with latent variables similar to those in the target network, thereby improving the estimation accuracy for the target. Given transferable source networks, we introduce a two-stage transfer learning algorithm that accommodates differences in node numbers between source and target networks. In each stage, we derive sufficient identification conditions and design tailored projected gradient descent algorithms for estimation. Theoretical properties of the resulting estimators are established. When the transferable networks are unknown, a detection algorithm is introduced to identify suitable source networks. Simulation studies and analyses of two real datasets demonstrate the effectiveness of the proposed methods.
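The projected gradient descent step mentioned above can be sketched for a basic inner-product latent space model, where P(A_ij = 1) = sigmoid(z_i . z_j). This is a minimal illustration under assumed choices (logistic loss, column-centering as the projection), not the paper's tailored algorithm or its identification conditions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pgd_latent_space(A, k, steps=200, lr=0.01, seed=0):
    """Projected gradient descent for latent positions Z (n x k).

    A: symmetric binary adjacency matrix of the network.
    Each iteration takes a gradient step on the logistic loss of
    sigmoid(Z Z^T) against A, then projects by centering the columns
    of Z (a common identifiability constraint).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Z = 0.1 * rng.standard_normal((n, k))
    for _ in range(steps):
        grad = 2.0 * (sigmoid(Z @ Z.T) - A) @ Z / n**2  # logistic-loss gradient
        Z = Z - lr * grad
        Z = Z - Z.mean(axis=0)  # projection: center columns
    return Z
```

In the transfer setting, one would run such updates on the source networks first and use the resulting estimates to initialize or constrain the target network's latent positions.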
Reviews: Transfer of Value Functions via Variational Methods
Update: I had a look at the author response. It seems reasonable and contains a lot of additional information and experiments that address my main concerns with the paper. Had these comparisons been part of the paper in the first place, I would have voted for acceptance. I am now a bit on the fence: the paper could be accepted but would require a major revision. I will engage in discussion with the other reviewers, and ultimately the AC has to decide whether such big changes to the experimental section are acceptable within the review process. Original review: The paper presents a method for transfer learning via a variational inference formulation in a reinforcement learning (RL) setting. The proposed approach is sound, novel, and interesting, and could be widely applicable (it makes no overly restrictive assumptions on the form of the learned (Q-)value function).
Reviews: Successor Features for Transfer in Reinforcement Learning
This paper presents an RL optimization scheme and a theoretical analysis of its transfer performance. While the components of this work aren't novel, it combines them in an interesting, well-presented way that sheds new light. The definition of transfer given in Lines 89–91 is nonstandard. It seems to be missing the assumption that t is not in T. The role of T' is a bit strange, making this a requirement for "additional transfer" rather than just transfer. It should be better clarified that this is a stronger requirement than transfer, and explained what it's good for: the paper shows this stronger property holds, but never uses it.
Successor Features for Transfer in Reinforcement Learning
Andre Barreto, Will Dabney, Remi Munos, Jonathan J. Hunt, Tom Schaul, Hado P. van Hasselt, David Silver
Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: successor features, a value function representation that decouples the dynamics of the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach in firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm.
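The two ideas above compose cleanly: with successor features, Q_i(s, a) = psi_i(s, a) . w, and generalized policy improvement (GPI) acts greedily with respect to the maximum over all stored policies. A minimal sketch of the GPI action selection, with assumed array shapes (this is a rendering of the paper's equations, not the authors' code):

```python
import numpy as np

def gpi_action(psis, w):
    """Generalized policy improvement over successor features.

    psis: array (n_policies, n_actions, d) holding the successor
          features psi_i(s, a) of each stored policy at the current state.
    w:    (d,) reward-weight vector of the new task.

    Q_i(s, a) = psi_i(s, a) . w, and GPI selects
    argmax_a max_i Q_i(s, a).
    """
    q = psis @ w                      # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))
```

Because only the dot product with w depends on the task, the same successor features can be reused across any task whose reward is (approximately) linear in the features, which is what enables the free exchange of information across tasks.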
Learning Transfer Learning. Transfer learning is the process of…
This concept is commonly studied in the field of machine learning, where it refers to the practice of storing knowledge gained from solving one problem and applying it to a different, related problem. Transfer learning is often viewed as a design methodology, as it involves applying previously learned information to new situations in order to improve the efficiency and effectiveness of the learning process. In other words, transfer learning allows individuals or machine learning algorithms to build upon their existing knowledge and skills in order to solve new problems. Transfer learning involves taking knowledge and skills acquired in one context and applying them to a different, but related, situation. For example, if you have learned how to recognize cars, that knowledge could be useful in learning how to recognize trucks. Similarly, if you have learned how to ride a motorbike, that knowledge may be transferable to learning how to ride an e-scooter.
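The cars-to-trucks example above is typically realized by freezing a feature extractor learned on the source task and fitting only a new output layer on the target task. The sketch below illustrates this with a placeholder `features_fn` standing in for the frozen source model; all names and data here are hypothetical:

```python
import numpy as np

def transfer_head(features_fn, X_new, y_new, n_classes, lr=0.1, epochs=100):
    """Fit only a new linear softmax head on top of frozen source features.

    features_fn: the frozen feature extractor learned on the source task
                 (e.g. car recognition); here a plain function on arrays.
    X_new, y_new: data for the related target task (e.g. trucks).
    """
    F = features_fn(X_new)                     # frozen source features
    W = np.zeros((F.shape[1], n_classes))      # only the head is trained
    for _ in range(epochs):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y_new)), y_new] -= 1   # softmax cross-entropy gradient
        W -= lr * F.T @ p / len(y_new)         # update the head weights only
    return W
```

The efficiency gain comes from the frozen features doing most of the work: only the small head is estimated from the (often scarce) target data.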
Technology Transfer
We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain, but who can use the framework to expand the domain knowledge; and those who can actually design and build expert system tools and components. We also note that traditional programming skills continue to be required in the development of commercial expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development. We have observed during this effort that the development of a commercial expert system imposes a substantially different set of constraints and requirements, in terms of characteristics and methods of development, than those seen in the research environment.
Transfer Learning Progress and Potential
As evidenced by the articles in this special issue, transfer learning has come a long way in the past five or so years, partially because of DARPA's Transfer Learning program, which sponsored much of the work reported in this issue. There is a Transfer Learning Toolkit for Matlab available on the web. Transfer learning has developed techniques for classification, regression, and clustering (as summarized in Pan and Yang's 2009 survey) and for complex interactive tasks that are often best addressed by reinforcement learning techniques. However, there is a more practical and more feasible goal for transfer learning against which progress is being made. An engineering-oriented goal of artificial intelligence that could be enabled by transfer learning is the ability to construct a large number of diverse applications not from scratch, but by taking advantage of knowledge already acquired and formally represented for other purposes.
Transfer Learning through Analogy in Games
The objective of transfer learning is for transferred knowledge to guide the learning process in a broad range of new situations. In near transfer, the source and target domains are very similar and solutions can be transferred almost verbatim. We find that a major benefit of analogy is that it reduces the extent to which the source domain must be generalized before transfer. We describe two techniques in particular, minimal ascension and metamapping, that enable analogies to be drawn even when comparing descriptions using different relational vocabularies. Evidence for the effectiveness of these techniques is provided by a large-scale external evaluation involving a substantial number of novel distant analogs.
Special Issue on Structured Knowledge Transfer
Its goal is to capture, in a general form, the internal structure of the objects, relations, strategies, and processes used to solve tasks drawn from a source domain, and exploit that knowledge to improve performance in a target domain. A Note from the AI Magazine Editor in Chief: Part Two of the Structured Knowledge Transfer special issue will be published in the summer 2011 issue (volume 32 number 2) of AI Magazine. Articles in this issue will include: "Knowledge Transfer between Automated Planners," by Susana Fernández, Ricardo Aler, and Daniel Borrajo "Transfer Learning by Reusing Structured Knowledge," by Qiang Yang, Vincent W. Zheng, Bin Li, and Hankz Hankui Zhuo "An Application of Transfer to American Football: From Observation of Raw Video to Control in a Simulated Environment," by David J. Stracuzzi, Alan Fern, Kamal Ali, Robin Hess, Jervis Pinto, Nan Li, Tolga Könik, and Dan Shapiro "Toward a Computational Model of Transfer," by Daniel Oblinger While the field of psychology has studied transfer learning in people for many years, AI has only recently taken up the challenge. The topic received initial attention with work on inductive transfer in the 1990s, while the number of workshops and conferences has noticeably increased in the last five years. This special issue represents the state of the art in the subarea of transfer learning that focuses on the acquisition and reuse of structured knowledge.