Adapt DevOps to cognitive and artificial intelligence systems

#artificialintelligence

A cognitive or artificial intelligence (AI) system fundamentally exhibits capabilities such as understanding, reasoning, and learning from data. These new applications require a new way of thinking about the development process. Traditional application development has been enhanced by the idea of DevOps, which forces operational considerations into development time, execution, and process. In this tutorial, we outline a "cognitive DevOps" process that refines and adapts the best parts of DevOps for new cognitive applications. Specifically, we cover applying DevOps to the training process of cognitive systems, including training data, modeling, and performance evaluation.


8 Proven Ways for boosting the "Accuracy" of a Machine Learning Model

@machinelearnbot

Enhancing a model's performance can be challenging at times. I'm sure a lot of you would agree with me if you've found yourself stuck in a similar situation. You try all the strategies and algorithms that you've learnt. Yet you fail at improving the accuracy of your model. You feel helpless and stuck.


Why hasn't AI taken off yet in monitoring? – Breathe Publication – Medium

#artificialintelligence

There's a lot of talk about the applicability of artificial intelligence (AI) and deep learning to taming the vast quantities of data that modern Operations teams and their tools deal with. Analyst reports frequently tout AI capabilities, no matter how minor, as a strength of a product, and the lack of them as a weakness. Yet no effective use of AI seems to have emerged and claimed wide adoption in Network Operations or Server Monitoring. Why not? (Disclaimer: LogicMonitor does not currently have deep learning or other AI capabilities.) Part of the issue is that "AI" itself is only loosely defined.


Summary of Unintuitive Properties of Neural Networks

@machinelearnbot

Neural networks are powerful learning models, especially deep networks applied to visual and speech recognition problems. In spite of a lot of effort (e.g., one researcher created a popular toolkit called the Deep Visualization Toolbox) to capture, step by step, how a neural network gets trained, what we can see inside these layers is still very intricate. For a deep acoustic model used by Android voice search, a Google research team showed that nearly all of the improvement from training an ensemble of deep neural nets can be distilled into a single neural net of the same size, which is much easier to deploy. In an experiment to answer how pre-training works, its influence was shown empirically in terms of model capacity, number of training examples, and architecture depth.
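The "ensemble distilled into a single net" result rests on the distillation loss: the student is trained against the teacher's temperature-softened output distribution as well as the hard labels. Here is a minimal sketch of that loss, assuming PyTorch; the tensor shapes, temperature, and weighting are illustrative defaults, not values from the article.

```python
# A minimal sketch of knowledge distillation, assuming PyTorch; T and alpha
# are illustrative hyperparameters, not taken from the Google study.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft-target loss (teacher) with hard-label cross-entropy."""
    # Soften both distributions with temperature T; scale by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 8 examples over 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)  # e.g. averaged ensemble logits
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```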


Machine Learning for Humans, Part 5: Reinforcement Learning

#artificialintelligence

In reinforcement learning (RL) there's no answer key, but your reinforcement learning agent still has to decide how to act to perform its task. Say we're playing a game where our mouse is seeking the ultimate reward of cheese at the end of the maze (+1000 points), or the lesser reward of water along the way (+10 points). One common approach is the epsilon-greedy strategy, where epsilon is the percent of the time that the agent takes a randomly selected action rather than the action that is most likely to maximize reward given what it knows so far (in this case, 20%). Andrej Karpathy's Pong from Pixels provides an excellent walkthrough on using deep reinforcement learning to learn a policy for the Atari game Pong that takes raw pixels from the game as the input (state) and outputs a probability of moving the paddle up or down (action).
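For concreteness, here is a minimal sketch of epsilon-greedy action selection over a tabular Q-function; the dictionary-based Q-table and the state/action names are illustrative, not from the article.

```python
# A minimal sketch of the epsilon-greedy strategy described above; the
# +10 water reward follows the maze example, the rest is illustrative.
import random

def epsilon_greedy(q_values, state, n_actions, epsilon=0.2):
    """With probability epsilon explore; otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.randrange(n_actions)  # explore: random action
    # exploit: the action with the highest estimated value in this state
    return max(range(n_actions), key=lambda a: q_values.get((state, a), 0.0))

# Toy usage: 4 actions (up/down/left/right), Q-values stored in a dict.
q = {(("maze_cell", 0), 1): 10.0}  # the agent has seen the +10 water reward once
action = epsilon_greedy(q, ("maze_cell", 0), n_actions=4, epsilon=0.2)
```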


Machine Learning 1.0 Over Coffee - DZone AI

@machinelearnbot

Machine learning techniques are broken into supervised and unsupervised approaches: supervised learning takes a labeled input data set to train your model on, while in unsupervised learning no labels are provided. Evaluating a classifier involves building a table of four results (a confusion matrix) -- true positives, true negatives, false positives, and false negatives. Bagging splits the training data into multiple input sets, while boosting works by building a series of increasingly complex models. There are complementary techniques used in any successful machine learning problem -- these include data management and visualization, and software languages such as Python and Java have a variety of libraries that can be used for your projects.
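A minimal sketch of that four-cell table, assuming scikit-learn is available; the labels and predictions are made up for illustration.

```python
# Building the confusion matrix of four results described above.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # actual classes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions

# For binary labels, ravel() unpacks the 2x2 matrix as TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
accuracy = (tp + tn) / (tp + tn + fp + fn)
```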


Support Vector Machine (SVM) Tutorial: Learning SVMs From Examples

@machinelearnbot

In this case, finding a line that passes between the red and green clusters, and then determining which side of this line a score tuple lies on, is a good algorithm. While the above plot shows a line and data in two dimensions, it must be noted that SVMs work in any number of dimensions; and in these dimensions, they find the analogue of the two-dimensional line. For example, in three dimensions they find a plane (we will see an example of this shortly), and in higher dimensions they find a hyperplane -- a generalization of the two-dimensional line and three-dimensional plane to an arbitrary number of dimensions. We looked at the easy case of perfectly linearly separable data in the last section.
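A minimal sketch of the two-dimensional case, assuming scikit-learn; the cluster positions are illustrative, not the article's red/green score data.

```python
# Fitting a linear SVM to linearly separable 2-D data.
import numpy as np
from sklearn.svm import SVC

# Two well-separated clusters in two dimensions.
X = np.array([[1, 1], [1, 2], [2, 1],   # "red" cluster
              [5, 5], [5, 6], [6, 5]])  # "green" cluster
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")
clf.fit(X, y)

# The separating line is w . x + b = 0; in higher dimensions the same
# coefficients describe a plane or hyperplane.
w, b = clf.coef_[0], clf.intercept_[0]
print("w:", w, "b:", b)
print("side of the line for (2, 2):", clf.predict([[2, 2]])[0])
```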


Data Science Simplified Part 7: Log-Log Regression Models

@machinelearnbot

In the last few blog posts of this series, we discussed the simple linear regression model. We discussed the multivariate regression model and methods for selecting the right model. Fernando tests the model's performance on the test data set. Simple linear regression models made regression simple.
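A minimal sketch of the log-log regression this post covers: take logs of both the response and the predictor, then fit ordinary least squares. The variable names and synthetic data are illustrative, not Fernando's actual data set.

```python
# Log-log regression: a power law y = a * x^b becomes the straight line
# log y = log a + b * log x, which ordinary least squares can fit.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 100)
y = 3.0 * x ** 1.5 * rng.lognormal(0.0, 0.1, 100)  # noisy power-law data

b, log_a = np.polyfit(np.log(x), np.log(y), 1)  # slope, intercept in log space
print(f"estimated exponent b ~ {b:.2f}, estimated a ~ {np.exp(log_a):.2f}")
```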


Data Science Simplified Part 6: Model Selection Methods

@machinelearnbot

The adjusted r-squared is the chosen evaluation metric for multivariate linear regression models. Imagine that there are 100 variables; the number of models created based on the forward stepwise method is 1 + (100 * 101)/2, i.e. 5,051. The model will estimate price using engine size, horse power, and width of the car. Fernando tests the model's performance on the test data set.
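A minimal sketch of the two ideas above: the standard adjusted r-squared formula, and the count of models the forward stepwise method fits for p candidate variables (one null model, plus p fits at the first step, p-1 at the second, and so on). The example numbers are illustrative.

```python
# Adjusted r-squared and the forward stepwise model count.
def adjusted_r_squared(r2, n, p):
    """Penalize r-squared for the number of predictors p, given n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def forward_stepwise_model_count(p):
    """1 null model plus p + (p-1) + ... + 1 candidate fits."""
    return 1 + p * (p + 1) // 2

print(forward_stepwise_model_count(100))     # 5051 models for 100 variables
print(adjusted_r_squared(0.85, n=200, p=3))  # e.g. 3 predictors: size, hp, width
```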


BuzzFeed News Trained A Computer To Search For Hidden Spy Planes. This Is What We Found.

#artificialintelligence

We then examined the model's performance, based on its estimated errors in classifying the training data. This output shows that, overall, the estimated classification error rate was 3.7%. However, for the target "surveil" class, representing likely surveillance aircraft, the estimated error rate was 20.6%. The output shows that the model classified 69 planes as likely surveillance aircraft.