If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In this article, I share a 3-month plan for learning the mathematics behind machine learning. As we know, almost all machine learning algorithms make use of concepts from Linear Algebra, Calculus, Probability & Statistics, etc. Some advanced algorithms and techniques also draw on subjects such as Measure Theory (a generalization of probability theory), convex and non-convex optimization, and much more. To understand machine learning algorithms and conduct research in machine learning and its related fields, knowledge of mathematics is a prerequisite. The plan shared in this article can be used to prepare for data science interviews, to strengthen mathematical concepts, or to start researching in machine learning. It will not only help in understanding the intuition behind machine learning but can also serve many other advanced fields such as statistical signal processing, computational electrodynamics, etc.
A lot of attention is now being given to the idea of Machine Learning Pipelines, which automate and orchestrate the various steps involved in training a machine learning model; however, it is not always made clear what the benefits of modeling machine learning workflows as automated pipelines are. When tasked with training a new ML model, most Data Scientists and ML Engineers will probably start by developing some new Python scripts or interactive notebooks that perform the data extraction and preprocessing necessary to construct a clean dataset on which to train the model. Then, they might create several additional scripts or notebooks to try out different types of models or different machine learning frameworks. Finally, they will gather and explore metrics to evaluate how each model performed on a test dataset, and then determine which model to deploy to production. This is obviously an over-simplification of a real machine learning workflow, but the key point is that this general approach requires a lot of manual involvement and is not reusable or easily repeatable by anyone but the engineer(s) who initially developed it.
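The ad-hoc workflow described above can be sketched as a handful of hand-run steps. The example below is purely illustrative: the function names, the toy threshold "models," and the synthetic data are all assumptions made for this sketch, not a real pipeline framework or anyone's actual project.

```python
import random

def extract_data(n=200, seed=42):
    """Simulate pulling raw records from a source system."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n)]

def preprocess(raw):
    """Derive a clean, labeled dataset: label = 1 if x + y > 10."""
    return [((x, y), 1 if x + y > 10 else 0) for x, y in raw]

def train_threshold_model(train_set, threshold):
    """A toy 'model' is just a decision rule with one parameter."""
    return lambda features: 1 if sum(features) > threshold else 0

def evaluate(model, test_set):
    """Accuracy on a held-out split."""
    correct = sum(1 for feats, label in test_set if model(feats) == label)
    return correct / len(test_set)

# The manual workflow: each step run by hand, results compared by eye.
raw = extract_data()
data = preprocess(raw)
train, test = data[:150], data[150:]

# Try several candidate "models" and keep the best-scoring one.
candidates = {t: train_threshold_model(train, t) for t in (8.0, 10.0, 12.0)}
scores = {t: evaluate(m, test) for t, m in candidates.items()}
best = max(scores, key=scores.get)
```

A pipeline framework essentially takes over the glue at the bottom: it runs these steps in order, tracks their inputs and outputs, and makes the whole sequence repeatable by someone other than its original author.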
Falls are the second leading cause of accidental injury and death around the world, and each year 30 per cent of people over the age of 65 experience one. Movendo Technology, in partnership with Galliera Hospital in Genoa, Italy, has finished a 2-year clinical trial with 150 elderly participants that resulted in the creation of the Silver Index. This index is an objective measure that predicts the risk of falls in the elderly and suggests specific exercises and protocols to minimize the identified risks. The foundation of this 20-minute evaluation is a proprietary AI-based algorithm that combines the robotic measurements of hunova, a programmable robotic medical device for both objective functional evaluation and therapy. By evaluating 130 parameters across a routine of seven exercises, the index can predict fall risk with 95 per cent accuracy, a fifteen per cent improvement over traditional evaluation measures.
And it is in these higher dimensions that the Monte Carlo method particularly shines compared to Riemann-sum-based approaches. We introduced the concept of Monte Carlo integration and illustrated how it differs from conventional numerical integration methods. We also showed a simple set of Python code snippets to evaluate a one-dimensional function and assess the accuracy and speed of the techniques. The broader class of Monte Carlo simulation techniques is even more exciting and is used ubiquitously in fields related to artificial intelligence, data science, and statistical modeling. For example, DeepMind's famous AlphaGo program used a Monte Carlo tree search technique to remain computationally tractable in the high-dimensional space of the game Go. Numerous such examples can be found in practice.
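For readers who skipped to this summary, a minimal one-dimensional comparison of the two ideas looks like the following. This is a fresh sketch rather than the article's original code: the test function f(x) = x² (whose exact integral on [0, 1] is 1/3) and the sample sizes are choices made here for illustration.

```python
import random

def f(x):
    return x * x  # exact integral over [0, 1] is 1/3

def riemann(f, a, b, n):
    """Deterministic midpoint Riemann sum with n rectangles."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def monte_carlo(f, a, b, n, seed=0):
    """Average f at n uniform random points, scaled by the interval length."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

print(riemann(f, 0, 1, 1000))       # deterministic, close to 1/3
print(monte_carlo(f, 0, 1, 100000)) # stochastic, also close to 1/3
```

In one dimension the Riemann sum wins easily, but its cost grows exponentially with dimension (a grid of n points per axis needs n^d evaluations), while the Monte Carlo error shrinks as 1/√n regardless of dimension, which is why the method shines in high-dimensional problems.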
Note: The Colab GPU runtime lasts approximately 12 hours; after that, the runtime is disconnected and any data stored in it is lost. In our case, each epoch takes about 2 hours, and we need to train for more than 20 epochs to observe preliminary results. Here are some of the results obtained after training the model for 30 epochs.
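A common way to train past the 12-hour limit is to checkpoint progress to persistent storage (e.g. a mounted Google Drive folder) every epoch and resume after a disconnect. The sketch below is a workaround pattern, not part of the original training setup; the file name, the checkpoint layout, and the stand-in "model state" are all illustrative assumptions.

```python
import os
import pickle

CKPT = "checkpoint.pkl"  # in practice, a path on mounted Google Drive

def save_checkpoint(path, epoch, model_state):
    """Persist the last completed epoch and the model state."""
    with open(path, "wb") as fh:
        pickle.dump({"epoch": epoch, "model_state": model_state}, fh)

def load_checkpoint(path):
    """Return (next_epoch_to_run, model_state); fresh start if no file."""
    if not os.path.exists(path):
        return 0, None
    with open(path, "rb") as fh:
        ckpt = pickle.load(fh)
    return ckpt["epoch"] + 1, ckpt["model_state"]

start_epoch, state = load_checkpoint(CKPT)
for epoch in range(start_epoch, 30):
    state = {"weights": epoch}  # stand-in for one real 2-hour training epoch
    save_checkpoint(CKPT, epoch, state)
```

If the runtime disconnects mid-run, re-executing the same cell picks up from the first unfinished epoch instead of restarting from zero, so a 60-hour training job can be completed across several 12-hour sessions.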