If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture, task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks.
"When we try to pick out anything by itself, we find it hitched to everything else in the universe" – John Muir

Often in real-world tasks, there isn't enough data to take full advantage of deep learning. However, it is possible to leverage other datasets to reach a critical mass. Sharing knowledge across diverse datasets leads to more general knowledge, deeper insights, and better-informed decisions. This is especially true in domains like healthcare, where data for any particular task can be expensive or dangerous to collect. Modeling datasets separately wastes useful structure that could be shared between them.
If you're already familiar with deep learning, you will have recognized by now that this is a multi-output problem, because we're trying to solve multiple tasks at the same time. Since we're going to use Keras for the implementation, the multi-output model must be built with the Functional API rather than the Sequential API. Given the data, we have 5 tasks at hand, of which face alignment is the main one. So we're going to train a single multi-output model for all 5 tasks together. We will train the main task (face alignment) alongside different auxiliary tasks to evaluate the effectiveness of deep multi-task learning.
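The setup above can be sketched with the Keras Functional API as follows. This is a minimal illustration, not the exact architecture: the layer sizes, the auxiliary task names (smile, glasses, gender, pose), and the loss weights are all assumptions chosen for the example.

```python
# Minimal sketch of a 5-output Keras model: one shared trunk, one head
# per task. Task names and layer sizes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(64, 64, 3))            # face image
x = layers.Conv2D(16, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
shared = layers.Dense(128, activation="relu")(x)   # shared representation

# Main task: face alignment (e.g. 5 landmarks -> 10 (x, y) coordinates)
alignment = layers.Dense(10, name="alignment")(shared)
# Auxiliary tasks (hypothetical attribute heads)
smile = layers.Dense(1, activation="sigmoid", name="smile")(shared)
glasses = layers.Dense(1, activation="sigmoid", name="glasses")(shared)
gender = layers.Dense(1, activation="sigmoid", name="gender")(shared)
pose = layers.Dense(5, activation="softmax", name="pose")(shared)

model = keras.Model(inputs, [alignment, smile, glasses, gender, pose])
model.compile(
    optimizer="adam",
    loss={"alignment": "mse",
          "smile": "binary_crossentropy",
          "glasses": "binary_crossentropy",
          "gender": "binary_crossentropy",
          "pose": "categorical_crossentropy"},
    # Down-weight the auxiliary losses so the main task dominates.
    loss_weights={"alignment": 1.0, "smile": 0.2, "glasses": 0.2,
                  "gender": 0.2, "pose": 0.2},
)
```

Because every head branches off the same `shared` layer, gradients from all five losses update the shared trunk, which is what lets the auxiliary tasks regularize the main alignment task.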
Skin lesion identification is a key step toward dermatological diagnosis. When describing a skin lesion, it is very important to note its body site distribution as many skin diseases commonly affect particular parts of the body. To exploit the correlation between skin lesions and their body site distributions, in this study, we investigate the possibility of improving skin lesion classification using the additional context information provided by body location. Specifically, we build a deep multi-task learning (MTL) framework to jointly optimize skin lesion classification and body location classification (the latter is used as an inductive bias). Our MTL framework uses the state-of-the-art ImageNet pretrained model with specialized loss functions for the two related tasks. Our experiments show that the proposed MTL based method performs more robustly than its standalone (single-task) counterpart.
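A two-head framework of this kind can be sketched as below. This is only an illustration of the general pattern, not the paper's implementation: the backbone choice (ResNet50), the class counts, and the loss weights are assumptions, and in practice one would pass `weights="imagenet"` to load the pretrained weights mentioned above.

```python
# Sketch of joint skin-lesion / body-location classification: a shared
# ImageNet-style backbone with one softmax head per task. Class counts
# and the 0.5 auxiliary weight are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

NUM_LESION_CLASSES = 7   # assumption
NUM_BODY_SITES = 8       # assumption

# Use weights="imagenet" in practice; None here keeps the sketch offline.
backbone = keras.applications.ResNet50(
    include_top=False, weights=None, pooling="avg",
    input_shape=(224, 224, 3))

features = backbone.output
lesion = layers.Dense(NUM_LESION_CLASSES, activation="softmax",
                      name="lesion")(features)
body_site = layers.Dense(NUM_BODY_SITES, activation="softmax",
                         name="body_site")(features)

mtl_model = keras.Model(backbone.input, [lesion, body_site])
mtl_model.compile(
    optimizer="adam",
    loss={"lesion": "sparse_categorical_crossentropy",
          "body_site": "sparse_categorical_crossentropy"},
    # The body-site loss acts as an inductive bias on the shared features.
    loss_weights={"lesion": 1.0, "body_site": 0.5})
```

Optimizing both heads over the same backbone forces the shared features to encode body-location context, which is the correlation the study exploits.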