Native scoring in SQL Server 2017 using R

#artificialintelligence

Native scoring is a much-overlooked feature in SQL Server 2017 (available only on Windows and only on-premises) that provides scoring and prediction against pre-built, stored machine learning models in near real time. What counts as real time depends on your line of business, so I will not attempt a definition here, but scoring 10,000 rows in a second on a mediocre client computer (similar to mine) gives a sense of the speed. Native scoring in SQL Server 2017 comes with a couple of limitations, but also with a lot of benefits. Overall, if you are looking for faster predictions in your enterprise and want faster code and solution deployment, especially for integration with other applications or for building an API in your ecosystem, native scoring with the PREDICT function will surely be an advantage to you. Although not all predictions/scores are supported, the majority of predictions can be made using regression or decision-tree models (it is estimated that these two families, together with derivatives of regression models and ensemble methods, are used in 85% of predictive analytics).
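To make the mechanics concrete, here is a minimal sketch of issuing a native-scoring call from a Python client. The PREDICT(MODEL = ..., DATA = ... AS d) WITH (...) shape is the documented T-SQL syntax; everything else (the connection string, the dbo.Models store, the dbo.NewCustomers input table, and the Score output column) is a hypothetical placeholder, not a name from the article.

```python
# A minimal sketch of calling SQL Server 2017's native PREDICT function
# from Python via pyodbc. Table, model, and column names are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MLDemo;Trusted_Connection=yes;"
)

# PREDICT evaluates the serialized model natively inside SQL Server; no R
# runtime is launched at scoring time, which is where the speed comes from.
query = """
DECLARE @model varbinary(max) =
    (SELECT model FROM dbo.Models WHERE model_name = 'churn_regression');

SELECT d.customer_id, p.Score
FROM PREDICT(MODEL = @model, DATA = dbo.NewCustomers AS d)
WITH (Score float) AS p;
"""

for row in conn.cursor().execute(query):
    print(row.customer_id, row.Score)
```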


Deep learning-based prediction of piled-up status and payload distribution of bulk material

#artificialintelligence

The piled-up status of bulk material in a haul truck body determines the load balance and hence affects the efficiency of mining operations. Prediction of the Piled-up Status and Payload Distribution (PSPD) of bulk material helps provide optimal dumping positions, improving the vehicle's stress state and service life. This work introduces a novel deep learning-based method for predicting PSPD from images. A two-stage prediction-regression CNN model is designed to automatically extract image features and obtain the PSPD of the current state, and the PSPD prediction itself is accomplished via a backward-propagation neural network (BPNN).
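The abstract describes the architecture only at a high level; the sketch below is one plausible reading of the two-stage idea in PyTorch, not the paper's actual model. The descriptor size (an 8-value PSPD vector), the layer sizes, and the input resolution are all illustrative assumptions.

```python
# A minimal PyTorch sketch of the two-stage design: stage 1 is a CNN that
# regresses a PSPD descriptor from an image of the truck body; stage 2 is
# a small fully connected network (the "BPNN") that predicts the next PSPD
# from the current one. All shapes and sizes are assumptions.
import torch
import torch.nn as nn

PSPD_DIM = 8  # assumed size of the piled-up status / payload descriptor

class StateCNN(nn.Module):
    """Stage 1: extract image features and regress the current PSPD."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regress = nn.Linear(32, PSPD_DIM)

    def forward(self, img):
        return self.regress(self.features(img))

class PredictorBPNN(nn.Module):
    """Stage 2: predict the PSPD of the next state from the current one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PSPD_DIM, 64), nn.ReLU(), nn.Linear(64, PSPD_DIM)
        )

    def forward(self, pspd):
        return self.net(pspd)

img = torch.randn(4, 3, 128, 128)     # a batch of truck-body images
current = StateCNN()(img)             # stage 1: current PSPD estimate
predicted = PredictorBPNN()(current)  # stage 2: predicted PSPD
```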


Prediction Driven Behavior: Learning Predictions that Drive Fixed Responses

AAAI Conferences

We introduce a new method for robot control that combines prediction learning with a fixed, crafted response: the robot learns to make a temporally extended prediction during its normal operation, and the prediction is used to select actions as part of a fixed behavioral response. Our method is inspired by Pavlovian conditioning experiments in which an animal's behavior adapts as it learns to predict an event. Surprisingly, the animal's behavior changes even in the absence of any benefit to the animal (i.e., the animal is not modifying its behavior to maximize reward). Our method for robot control combines a fixed response with online prediction learning, thereby producing an adaptive behavior. This method is different from standard non-adaptive control methods and also from adaptive reward-maximizing control methods. We show that this method improves upon the performance of two reactive controllers, with visible benefits within 2.5 minutes of real-time learning on the robot. In the first experiment, the robot turns off its motors when it predicts a future over-current condition, which reduces the time spent in unsafe over-current conditions and improves efficiency. In the second experiment, the robot starts to move when it predicts a human-issued request, which reduces the apparent latency of the human-robot interface.
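The following sketch illustrates the Pavlovian control pattern from the first experiment: an online learner predicts a discounted sum of a future over-current signal, and a fixed response (cutting the motors) fires when the prediction crosses a threshold. The linear TD(0) learner, the feature function, and all constants are illustrative assumptions; the abstract does not specify the paper's exact learning algorithm.

```python
# A minimal sketch of prediction-driven behavior: learn a temporally
# extended prediction online, then let a fixed, crafted response act on it.
import numpy as np

GAMMA, ALPHA, THRESHOLD = 0.9, 0.1, 0.5   # assumed constants
N_FEATURES = 16
w = np.zeros(N_FEATURES)                  # weights of the linear prediction

def features(sensor_reading):
    """Hypothetical one-hot feature vector from a scalar sensor reading."""
    x = np.zeros(N_FEATURES)
    x[int(np.clip(sensor_reading, 0, 1) * (N_FEATURES - 1))] = 1.0
    return x

def step(x, x_next, overcurrent_signal):
    """One TD(0) update of the prediction, then the fixed response."""
    global w
    target = overcurrent_signal + GAMMA * w @ x_next
    w += ALPHA * (target - w @ x) * x
    # Fixed, crafted response: the learned prediction selects the action.
    return "motors_off" if w @ x_next > THRESHOLD else "motors_on"

x = features(0.2)
for current in np.random.rand(100):       # simulated current readings
    x_next = features(current)
    action = step(x, x_next, float(current > 0.8))
    x = x_next
```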


[D] Market prediction using NN • r/MachineLearning

#artificialintelligence

Does anyone here have experience using ML models to predict markets? I've found it very challenging so far, and I need help. This is how far I've gotten: plots at the top, on a light green background, are predictions on training data; plots at the bottom, on a light blue background, are predictions on testing data. Blue lines are historical prices of a stock/cryptocurrency. Red lines are the predicted prices for the next 5 minutes, made at the time at which the blue line ends.
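For readers wondering what such a setup looks like in code, here is a minimal sliding-window baseline consistent with the post's description: train a small feed-forward network on past price windows to predict the price a few steps ahead, with a chronological train/test split (shuffling would leak future information). The window length, horizon, and synthetic price series are assumptions, not the poster's actual pipeline.

```python
# A minimal sliding-window price forecaster, assumed setup for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW, HORIZON = 30, 5   # 30 past prices in, price 5 steps ahead out
prices = np.cumsum(np.random.randn(2000)) + 100.0  # stand-in price series

X = np.array([prices[i:i + WINDOW]
              for i in range(len(prices) - WINDOW - HORIZON)])
y = prices[WINDOW + HORIZON:]

split = int(0.8 * len(X))  # chronological split: train on the past only
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
model.fit(X[:split], y[:split])

test_preds = model.predict(X[split:])
print("test MAE:", np.abs(test_preds - y[split:]).mean())
```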


[D] Any work on penalizing classifications for being too accurate? • r/MachineLearning

@machinelearnbot

So I stumbled across a weird effect by mistake when training some CNNs on the MNIST dataset. I had implemented the gradient of the softmax layer incorrectly (I was multiplying it by an additional output * (1 - output) factor), but the odd thing was that I was getting better test predictions.
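The poster's code is not shown, so the following numpy sketch is one reconstruction of the described bug. For softmax with cross-entropy, the correct gradient with respect to the logits is simply (output - target); the buggy version tacks on a sigmoid-style output * (1 - output) factor, which vanishes as outputs approach 0 or 1, so confident predictions receive almost no gradient. That damping is what makes it behave like an implicit penalty on over-confident classifications.

```python
# Correct vs. buggy softmax/cross-entropy gradient, as described in the post.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grad_correct(z, target):
    return softmax(z) - target

def grad_buggy(z, target):
    out = softmax(z)
    # Extra out * (1 - out) factor: near one-hot outputs get almost no
    # gradient, acting like a penalty on over-confident classifications.
    return (out - target) * out * (1 - out)

z = np.array([[4.0, 0.0, 0.0]])   # a fairly confident logit vector
t = np.array([[1.0, 0.0, 0.0]])   # one-hot target
print(grad_correct(z, t))          # normal-sized gradient
print(grad_buggy(z, t))            # heavily damped gradient
```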