Statistical Learning


Exploring different optimization algorithms

#artificialintelligence

Machine learning is a field of study within the broad spectrum of artificial intelligence (AI) that makes predictions from data without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as recommendation engines, computer vision, and spam filtering. They perform extraordinarily well where it is difficult or infeasible to develop conventional algorithms for the task at hand. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data, over and over, faster and faster, is a recent development. One of the most widely used machine learning techniques is the neural network.


Introduction to Time Series Analysis and Forecasting in R

#artificialintelligence

Time series analysis and forecasting is one of the key fields in statistical programming. Thanks to modern technology, the amount of available data grows substantially from day to day. Organisations also know that decisions based on data gathered in the past, and modelled for the future, can make a huge difference. A proper understanding of, and training in, time series analysis and forecasting will give you the power to understand and create those models. This can make you an invaluable asset for your company or institution and will boost your career!


Time Series Analysis of Air Passenger Machine Learning Project

#artificialintelligence

Here we import all the libraries: numpy for numerical analysis, pandas for data-frame handling, datetime for date and time columns, adfuller, acf and pacf for time series statistical tools, and rcParams for figure dimensions. The dataset is univariate. To inspect the Month column, we use head for a view of the first 5 rows. Note that all the data points are collected on the 15th of every month. Next, we compute the total number of passengers.
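
The setup described above can be sketched with a small synthetic series standing in for the actual CSV (the original file name and column layout are not shown here, so the dates and passenger counts below are invented for illustration):

```python
import numpy as np
import pandas as pd

# Build a univariate monthly series: one observation per month,
# timestamped on the 15th, as described above.
dates = pd.date_range("1949-01-01", periods=24, freq="MS") + pd.Timedelta(days=14)
passengers = pd.Series(np.arange(100, 124), index=dates, name="passengers")

print(passengers.head())                       # view of the first 5 rows
print("total passengers:", passengers.sum())   # total number of passengers
```

In a real notebook the series would instead come from pd.read_csv with parse_dates on the month column; the statistical tools (adfuller, acf, pacf from statsmodels) are then applied to this series.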


Using Orange to Build a Machine Learning Model

#artificialintelligence

Orange is an open-source, GUI-based platform that is popular for rule mining and easy data analysis. The reason behind its popularity is that it is completely code-free. Researchers, students, non-developers and business analysts use platforms like Orange to get a good understanding of the data at hand and to quickly build machine learning models that help them understand the relationships between data points. Orange is built on Python and lets you do everything required to build machine learning models without code. It includes a wide range of data visualisation, exploration, preprocessing and modelling techniques. Not only does it come in handy for machine learning, it is also very useful for association rule mining on numbers and text, and even for network analysis.


Introduction to Machine Learning in R

#artificialintelligence

This course material is aimed at people who are already familiar with ... What you'll learn: this course is about the fundamental concepts of machine learning, focusing on neural networks. This topic is very hot nowadays because these learning algorithms can be used in several fields, from software engineering to investment banking. Learning algorithms can recognise patterns, which can help detect cancer, for example. We may also construct algorithms that can make a very good guess about stock price movements in the market.


Clustering using k-Means with implementation

#artificialintelligence

Clustering is a technique for finding natural groups in data. If we showed a kid a picture containing four types of animals, he could identify that there are four different types. He may not know the names of all of them, but he can still tell them apart, and he can do this independently, without the help of an adult. Because we don't need an adult to supervise, clustering is an unsupervised technique. The three motivations can be listed as follows.
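
A minimal k-means sketch in plain NumPy illustrates the unsupervised grouping described above; the two well-separated blobs below are synthetic data invented for the example:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids, until assignments stabilise."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance of every point to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid; keep the old one if a cluster is empty.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated blobs: k-means should find them with no labels given.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

Nothing here tells the algorithm which point belongs where; the groups emerge from the distances alone, which is exactly the "no adult needed" point made above.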


How to use Deep Learning for Time Series Forecasting

#artificialintelligence

For a long time, I heard that the problem of time series could only be approached with statistical methods (AR[1], MA[2], ARMA[3], ARIMA[4]). These techniques are generally used by mathematicians, who continuously try to improve them to handle stationary and non-stationary time series. Several months ago, a friend of mine (a mathematician, professor of statistics, and specialist in non-stationary time series) invited me to work on validating and improving techniques to reconstruct the lightcurves of stars. Indeed, the Kepler satellite[11], like many other satellites, could not continuously measure the intensity of the luminous flux of nearby stars. Between 2009 and 2016, the Kepler satellite was dedicated to searching for planets outside our Solar System, called extrasolar planets or exoplanets. As you will have gathered, we are going to travel a little further than our planet Earth and dive into a galactic journey, with machine learning as our vessel.
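
To give a flavour of the statistical methods mentioned above, here is a minimal sketch of fitting an AR(1) model by least squares on simulated data; the coefficient, noise level, and series length are invented for illustration and are not from the article:

```python
import numpy as np

# Simulate an AR(1) process: y_t = phi * y_{t-1} + noise.
rng = np.random.default_rng(0)
phi_true = 0.8
y = np.zeros(500)
for t in range(1, 500):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=0.1)

# Least-squares estimate of phi from the lagged pairs (y_{t-1}, y_t).
phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# One-step-ahead forecast from the fitted model.
y_next = phi_hat * y[-1]
```

Libraries such as statsmodels provide full ARMA/ARIMA estimation; this sketch only shows the core idea of regressing a series on its own past.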


8 Clustering Algorithms in Machine Learning that All Data Scientists Should Know

#artificialintelligence

There are three different approaches to machine learning, depending on the data you have: supervised learning, semi-supervised learning, or unsupervised learning. In supervised learning, you have labeled data, so you have outputs that you know are the correct values for your inputs. That's like knowing car prices based on features such as make, model, style, drivetrain, and other attributes. With semi-supervised learning, you have a large data set where some of the data is labeled but most of it isn't. This covers a large amount of real-world data, because it can be expensive to get an expert to label every data point.


Mean Average Precision for Clients

#artificialintelligence

Disclaimer: this project was created for my clients because it's rather challenging to explain such a complex metric simply, so don't expect to see much math or many equations here, and please remember that I try to keep it simple. Accuracy is the most vanilla metric out there. Imagine we are classifying whether there is a dog in a picture. To test our classifier, we prepare a test set with pictures both with and without dogs. We then apply our classifier to every picture and get the predicted classes.
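
The accuracy computation described above can be sketched as follows; the labels and predictions are invented for the example:

```python
# Toy dog/no-dog test set: 1 = dog in the picture, 0 = no dog.
y_true = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]   # ground-truth labels
y_pred = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]   # classifier's predictions

# Accuracy is simply the fraction of pictures classified correctly.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 8 of 10 predictions match, so 0.8
```

This simplicity is exactly why accuracy is "vanilla": it weighs every picture equally, which is also why ranking metrics like mean average precision are needed later.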


Predictive maintenance and decision support systems in heavy industry

#artificialintelligence

Digital transformation is one of the top priorities for industrial companies. The largest players are already moving in this direction, having worked continuously for many years to improve production efficiency and launch large-scale optimisation programmes. These initiatives go by the names advanced analytics or digital innovation, and at their core the technology can be summarised as artificial intelligence. In all cases, the efforts to utilise AI models or data analytics systems are part of a bigger digital transformation effort within the companies making progress. In an industrial context, such strategies for cost-saving and process optimisation often start from pilot projects, or they are guided by top-management directives for digital change. In general, changes in processes or investments in capital-intensive, competitive industries require large sums of money. Traditional capital expenditures usually stretch over a long period, so a company's current financial standing may not allow for a complete physical overhaul of its plants or facilities. These high costs lead to the search for cheaper alternatives.