If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Google's TensorFlow and Facebook's PyTorch are the most popular machine learning frameworks. The former, released in 2015, has roughly a one-year head start over PyTorch (released in 2016). TensorFlow's popularity reportedly declined after PyTorch burst onto the scene. However, Google released a more user-friendly TensorFlow 2.0 in 2019 to recover lost ground. PyTorch is emerging as a leader in terms of papers published at leading research conferences.
Class imbalance is a frequent problem when developing models for real-world applications. It occurs when there are substantially more instances of one class than of the other. For example, in a Credit Risk Modeling project, when looking at the status of loans in historical data, most granted loans have probably been repaid in full. If models susceptible to class imbalance are used, defaulted loans carry little weight in the training process, since the overall loss keeps decreasing even when the model focuses on the majority class. To make the model pay more attention to examples where the loan was defaulted, class weights can be used so that the prediction error is larger when an instance of the underrepresented class is incorrectly classified.
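As a minimal sketch of this idea (the loan dataset here is synthetic, standing in for the credit-risk example; the 95/5 split is an assumption for illustration), scikit-learn's `class_weight="balanced"` reweights each class's loss contribution by `n_samples / (n_classes * n_class_samples)`, so mistakes on the rare "defaulted" class cost proportionally more:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for loan outcomes:
# class 1 ("defaulted") is only ~5% of instances.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Without class weights, the minority class is largely ignored.
plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# class_weight="balanced" inflates the loss on minority-class errors,
# pushing the model to pay attention to defaulted loans.
weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

print("default recall (plain):   ", recall_score(y_te, plain.predict(X_te)))
print("default recall (weighted):", recall_score(y_te, weighted.predict(X_te)))
```

The usual trade-off applies: recall on the minority class improves at the cost of some extra false positives on the majority class.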
Editor's Note: Multi-objective optimization (MOO) is used for many products at LinkedIn (such as the homepage feed) to help balance different behaviors in our ecosystem. There are two parts to how we work with multiple objectives: the first is about training high-fidelity models to predict member behavior (e.g., probability a member will click an article). The second is around trading off different objectives for a unified member experience based on utility to the LinkedIn ecosystem (e.g., a comment is much more valuable than a click). This post will focus on the first part of multi-objective optimization, where we utilize a multi-task, deep learning model to create higher fidelity consumption models; for more information on the second part, objective tradeoffs, see this article from KDnuggets about automatically tuning this tradeoff for faster model iteration. LinkedIn's members rely on the homepage feed for a variety of content including updates from their network, industry articles, and new job opportunities.
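To make the multi-task idea concrete, here is a minimal Keras sketch of a model with a shared trunk and one sigmoid head per behavior. The feature dimensions, layer sizes, objective names, and loss weights are all illustrative assumptions, not LinkedIn's actual architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy member/item features and labels; shapes are made up for illustration.
x = np.random.rand(256, 32).astype("float32")
y_click = np.random.randint(0, 2, size=(256, 1))
y_comment = np.random.randint(0, 2, size=(256, 1))

inputs = keras.Input(shape=(32,))
shared = layers.Dense(64, activation="relu")(inputs)   # shared trunk
shared = layers.Dense(32, activation="relu")(shared)

# One sigmoid head per objective: probability of each member behavior.
click = layers.Dense(1, activation="sigmoid", name="click")(shared)
comment = layers.Dense(1, activation="sigmoid", name="comment")(shared)

model = keras.Model(inputs, [click, comment])
model.compile(
    optimizer="adam",
    loss={"click": "binary_crossentropy", "comment": "binary_crossentropy"},
    # Loss weights encode that some behaviors (e.g., comments) matter more;
    # the 5x value here is an arbitrary placeholder.
    loss_weights={"click": 1.0, "comment": 5.0},
)
model.fit(x, {"click": y_click, "comment": y_comment}, epochs=2, verbose=0)
p_click, p_comment = model.predict(x, verbose=0)
```

The shared trunk lets the two prediction tasks pool signal from common features, which is the "higher fidelity" benefit multi-task training aims for.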
Deep Learning is used for speech recognition, machine translation, computer vision, and natural language processing. It has applications in medical diagnosis, server optimisation, data centre security, autonomous driving, and more. Below, we have listed seven resources to learn Deep Learning. The Association of Data Scientists offers online courses to provide in-depth knowledge of various areas within machine learning and data science. Most of these courses are available as videos for self-paced learning along with relevant Colab notebooks.
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications. TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural network research. The system is general enough to be applicable in a wide variety of other domains as well. TensorFlow provides stable Python and C APIs, as well as non-guaranteed, backward-compatible APIs for other languages.
In this tutorial, you will learn how to tune the hyperparameters of a deep neural network using scikit-learn, Keras, and TensorFlow. Optimizing your hyperparameters is critical when training a deep neural network. There are many knobs, dials, and parameters to a network -- and worse, the networks themselves are not only challenging but also slow to train (even with GPU acceleration). Failure to properly optimize the hyperparameters of your deep neural network may lead to subpar performance. Luckily, there is a way for us to search the hyperparameter space and find optimal values automatically -- we will cover such methods today.
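The tutorial's exact pipeline isn't reproduced here, but the core idea can be sketched with scikit-learn alone: define a hyperparameter search space and let `RandomizedSearchCV` sample configurations automatically. An `MLPClassifier` on the digits dataset stands in for the Keras network, and the search space below is an illustrative assumption:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Illustrative search space: architecture, learning rate, and L2 regularization.
param_dist = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "learning_rate_init": [1e-3, 1e-2],
    "alpha": [1e-4, 1e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_distributions=param_dist,
    n_iter=5,   # sample 5 configurations instead of exhausting the grid
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV accuracy: %.3f" % search.best_score_)
```

Randomized search trades exhaustiveness for speed; with a Keras model, the same pattern applies by wrapping the model so it exposes the scikit-learn estimator interface.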
The best way to compare two frameworks is to code something up in both of them. I've written a companion Jupyter notebook for this post, and you can get it here. All code will be provided in the post too. First, let's code a simple approximator for the following function in both frameworks: We will try to find the unknown parameter phi given data x and function values f(x). Yes, using stochastic gradient descent for this is overkill and an analytical solution can be found easily, but this problem will serve our purpose well as a simple example.
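The post's target function isn't reproduced above, so as a stand-in, assume f(x) = phi * x with true phi = 3.0; the setup below shows the stochastic gradient descent approach in plain NumPy (the frameworks add autodiff, but the update rule is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed target: f(x) = phi * x with true phi = 3.0 (the post's actual
# function is not shown here, so this is an illustrative substitute).
true_phi = 3.0
x = rng.uniform(-1, 1, size=200)
f_x = true_phi * x

phi = 0.0   # initial guess for the unknown parameter
lr = 0.1    # learning rate

for epoch in range(50):
    for xi, yi in zip(x, f_x):
        # Squared-error loss (phi*xi - yi)^2; its gradient w.r.t. phi
        # is 2*(phi*xi - yi)*xi, applied one sample at a time (SGD).
        grad = 2.0 * (phi * xi - yi) * xi
        phi -= lr * grad

print(f"estimated phi = {phi:.4f}")  # converges toward 3.0
```

In TensorFlow or PyTorch, the hand-derived gradient line is replaced by automatic differentiation, which is exactly the difference the side-by-side comparison is meant to highlight.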
Building machine learning models can be compared to building a house. Obviously, a hammer is a fantastic tool if you come across a nail, but it's useless for digging a pit. The same holds for machine learning model development -- there is no "one tool to rule them all," but rather a comprehensive set of tools to apply to a specific problem. Machine learning is a multidisciplinary field spanning mathematics, engineering, and software development. But that is not all -- the data scientist needs not only to understand the problem but also to have the domain knowledge to deliver a usable solution. The same is true for a builder who wants to construct a home: knowing how to lay bricks is not enough; a vision of the house and a basic understanding of its purpose are also essential.