Generalizable Deep Reinforcement Learning

Transfer learning is all the rage in the machine learning community these days. It serves as the basis for many of the managed AutoML services that Google, Salesforce, IBM, and Microsoft Azure provide, and it now figures prominently in the latest NLP research -- appearing in Google's Bidirectional Encoder Representations from Transformers (BERT) model and in Jeremy Howard and Sebastian Ruder's Universal Language Model Fine-tuning for Text Classification (ULMFiT). As Sebastian Ruder puts it in the title of his blog post, "NLP's ImageNet moment has arrived."

We're also starting to see examples of neural networks that can handle multiple tasks using transfer learning across domains. Paras Chopra has an excellent tutorial for one PyTorch network that can conduct an image search based on a textual description, search for similar images and words, and write captions for images (link to his post below).
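To make the core idea concrete, here is a toy sketch of transfer learning in plain NumPy (not BERT or ULMFiT): a "pretrained" feature extractor is frozen, and only a small task-specific head is trained on the new task. All names and data below are hypothetical, invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone: a fixed projection stands in for
# weights learned on a large source task. It is never updated.
W_pretrained = rng.normal(size=(10, 4))

def extract_features(x):
    """Frozen feature extractor shared across tasks."""
    return np.tanh(x @ W_pretrained)

# Tiny labeled target task: two Gaussian blobs in 10 dimensions.
X = np.vstack([rng.normal(loc=-1.0, size=(50, 10)),
               rng.normal(loc=1.0, size=(50, 10))])
y = np.array([0] * 50 + [1] * 50)

# New task-specific head (logistic regression), trained from scratch
# on top of the frozen features -- this is the "fine-tuned" part.
w, b = np.zeros(4), 0.0
feats = extract_features(X)
for _ in range(200):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad_w = feats.T @ (p - y) / len(y)     # log-loss gradient
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Final training log loss; starts at log(2) ~ 0.693 with a zero head.
p = 1 / (1 + np.exp(-(feats @ w + b)))
final_loss = float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
accuracy = float(np.mean((feats @ w + b > 0) == y))
```

Real systems like ULMFiT follow the same pattern at scale: a language model pretrained on a large corpus is frozen (or gently unfrozen layer by layer), and a lightweight classifier head is fitted to the downstream task.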