Performance Analysis


What is the Performance Measure of a Learning Algorithm?

#artificialintelligence

In order to gauge the abilities of a machine learning algorithm, we must design a quantitative measure of its performance. Usually, this performance measure P is specific to the task T being carried out by the system. Accuracy is simply the proportion of examples for which the model produces the correct output. We can obtain equivalent information by measuring the error rate, the proportion of examples for which the model produces incorrect output. The 0–1 loss on a specific example is 0 if it is correctly classified and 1 if it is not.
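These quantities are simple to compute from model outputs. A minimal sketch in Python using NumPy (the label arrays are made-up values, purely for illustration):

```python
import numpy as np

# Hypothetical true labels and model predictions for a classifier.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])

# 0-1 loss per example: 0 if correctly classified, 1 if not.
zero_one_loss = (y_true != y_pred).astype(int)

accuracy = 1 - zero_one_loss.mean()   # proportion of correct outputs
error_rate = zero_one_loss.mean()     # proportion of incorrect outputs

print(f"accuracy={accuracy:.2f}, error rate={error_rate:.2f}")
```

Note that accuracy and error rate carry the same information: one is simply 1 minus the other.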


RStudio AI Blog: Starting to think about AI Fairness

#artificialintelligence

The topic of AI fairness metrics is as important to society as it is confusing. It is confusing for a number of reasons: terminological proliferation, an abundance of formulae, and, last but not least, the impression that everyone else seems to know what they're talking about. This text hopes to counteract some of that confusion by starting from a common-sense approach that contrasts two basic positions: on the one hand, the assumption that dataset features may be taken as reflecting the underlying concepts ML practitioners are interested in; on the other, that there is inevitably a gap between concept and measurement, a gap that may be bigger or smaller depending on what is being measured. In contrasting these fundamental views, we bring together concepts from ML, legal science, and political philosophy.


Confusion Matrix

#artificialintelligence

In machine learning, a confusion matrix is an n×n matrix in which each row represents the true classification of a given piece of data and each column represents the predicted classification (or vice versa). By looking at a confusion matrix, one can determine the accuracy of the model from the values on the diagonal, which count the correct classifications: a good model will have high values along the diagonal and low values off the diagonal. Further, one can tell where the model is struggling by examining the highest values off the diagonal. Together, these analyses are useful for identifying cases where the overall accuracy is high but the model is consistently misclassifying the same data. Here is an example of a confusion matrix created by a neural network analyzing the MNIST dataset.
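Beyond inspecting such a matrix visually, the diagonal and off-diagonal reads described above are easy to compute. A minimal sketch using scikit-learn, with made-up labels standing in for MNIST outputs:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted digit labels (stand-ins for MNIST outputs).
y_true = [3, 5, 3, 8, 5, 3, 8, 8]
y_pred = [3, 5, 8, 8, 5, 3, 3, 8]

labels = sorted(set(y_true) | set(y_pred))
cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows: true, columns: predicted

# Accuracy from the diagonal: correct classifications over all classifications.
accuracy = np.trace(cm) / cm.sum()

# The largest off-diagonal entry shows where the model struggles most.
off_diag = cm - np.diag(np.diag(cm))
i, j = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(f"accuracy={accuracy:.2f}; most common confusion: "
      f"true {labels[i]} predicted as {labels[j]}")
```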


ShotSpotter: AI at its Worst

#artificialintelligence

Sixty-five-year-old Michael Williams was released last month after spending almost a year in jail on a murder charge. The "gunshot" sound that pointed the finger at Williams was initially classified as a firework by the AI. After the charges were dropped due to "insufficient evidence," it was revealed that one of ShotSpotter's human "reviewers" had changed the data to fit the crime, reclassifying the sound as a gunshot instead of a firework [1]. The case highlighted the dangers that the system poses to civil liberties and calls into question how much power we should give to AI "witnesses," especially those that can easily be tampered with. ShotSpotter is a patented acoustic gunshot detection system of microphones, algorithms, and human reviewers that alerts police to potential gunfire [2].


Why Darktrace Installs a Hooli Box

#artificialintelligence

A thought leader in cyber technology, Adam Mansour has over 15 years' experience spanning endpoint, network, and cloud systems security; audits and architecture; building and managing SOCs; and software development. He is the creator of the IntelliGO Managed Detection and Response platform, acquired by ActZero. When you hear cybersecurity firm Darktrace's customers talk about their experience with the company, they will tell you about 'the box' from Darktrace they installed. The idea behind the box is that it allows you to see malicious network traffic and coordinate with the cloud directly so you can react quickly. The main customer feedback is that the box was pretty and showed them lots of nice graphics -- beautiful network maps, gorgeous matrices, pipe diagrams.


Address AI Bias with Fairness Criteria & Tools

#artificialintelligence

AI biases are common, persistent, and hard to address. We want people to see what AI can do, not its flaws. But this is like driving a Lamborghini with the check engine light on: it may run fine for the next few weeks, but an accident is waiting to happen. To address the problem, we need to know what fairness is. Can it be judged or evaluated? In the previous article, we looked at the complexity of AI bias. All AI designs need to follow applicable laws. In this section, we will discuss these issues. Sensitive characteristics are bias factors that are practically or morally irrelevant to a decision.
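One concrete example of a fairness criterion, not necessarily the one this article settles on, is demographic parity: the rate of positive decisions should be roughly equal across groups defined by a sensitive characteristic. A minimal sketch, with made-up decisions and group labels:

```python
import numpy as np

# Hypothetical decisions (1 = approved) and a sensitive attribute (group A or B).
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group     = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

# Demographic parity asks: P(decision=1 | group=A) ~= P(decision=1 | group=B).
rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={abs(rate_a - rate_b):.2f}")
```

A large gap flags a potential disparity, though whether demographic parity is the right criterion depends on the decision being made.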


ML.NET (Hands-On Machine Learning with ML.NET)

#artificialintelligence

After adding the dataset to our project, we now have to create the input class for our model. Therefore, we are going to add a new class called SentimentData in the subfolder "Objects". Column 0 represents our input text, while column 1 stands for the output label. So far, we have created the input class for the model, but we also need an output class, which contains the output properties after running the model. Let's create a class called SentimentPrediction in the same Objects folder as our SentimentData class.
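ML.NET itself is used from C#; as a rough, purely illustrative Python analogue of this input/output schema (the probability and score fields are assumptions based on common ML.NET sentiment tutorials, not taken from this article):

```python
from dataclasses import dataclass

# Illustrative analogue of the SentimentData input class:
# column 0 is the input text, column 1 the output label.
@dataclass
class SentimentData:
    sentiment_text: str   # column 0: input text
    sentiment: bool       # column 1: output label

# Illustrative analogue of the SentimentPrediction output class,
# holding the properties available after running the model.
@dataclass
class SentimentPrediction:
    prediction: bool      # predicted label
    probability: float    # probability of the positive class (assumed field)
    score: float          # raw model score (assumed field)
```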


Titanic Predictions with LDA

#artificialintelligence

The Titanic is one of the most iconic and, at the same time, saddest stories in human history. Few people are unfamiliar with the story, or with how lucky some passengers on that liner were because of certain characteristics they carried with them. Whether they were children or had greater purchasing power, there was a pattern to follow when predicting the probability of getting to a lifeboat and leaving the ship unharmed. Cleaning the data is by far the most challenging part of most machine learning projects, since you can dramatically improve (or harm) your model depending on the individual features, and the types of features, you train your model with. For feature selection, we will go through three main aspects.
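As a minimal sketch of the modeling step, here is scikit-learn's LinearDiscriminantAnalysis fit on a tiny, made-up table; the column names and values are illustrative, not the actual Titanic data or the article's feature set:

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Hypothetical, already-cleaned Titanic-style data (illustrative values only).
df = pd.DataFrame({
    "Age":      [22, 38, 26, 35, 54, 2, 27, 14],
    "Fare":     [7.25, 71.28, 7.92, 53.1, 51.86, 21.08, 11.13, 30.07],
    "Sex":      [0, 1, 1, 1, 0, 0, 1, 1],   # encoded: 0 = male, 1 = female
    "Survived": [0, 1, 1, 1, 0, 1, 0, 1],
})

X, y = df[["Age", "Fare", "Sex"]], df["Survived"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Fit LDA and score on the held-out split.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print("held-out accuracy:", lda.score(X_test, y_test))
```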


Machine Learning in Medicine -- Journal Club

#artificialintelligence

The use of machine learning techniques in biomedical research has exploded over the past few years, as exemplified by the dramatic increase in the number of journal articles indexed on PubMed under the term "machine learning," from 3,200 in 2015 to over 18,000 in 2020. While substantial scientific advancements have been made possible thanks to machine learning, the inner workings of most machine learning algorithms remain foreign to many clinicians, most of whom are quite familiar with traditional statistical methods but have little formal training in advanced computer algorithms. Unfortunately, journal reviewers and editors are sometimes content to accept machine learning as a black-box operation and fail to analyze the results produced by machine learning models with the same level of scrutiny that is applied to other clinical and basic science research. It is hard for a reviewer to know what questions to ask if he or she does not understand how these algorithms work. The goal of this journal club is to help readers develop the knowledge and skills necessary to digest and critique biomedical journal articles involving the use of machine learning techniques.


Predictive capacity of AI models

#artificialintelligence

Predictive machine-learning models based on neural networks are extremely powerful when making predictions on large data sets, but understanding them is notoriously difficult. Neural networks are trained using labeled data sets, and how well they perform is validated using a labeled test set. This is where model accuracy, confusion matrices, ROC curves, and similar tools come in handy.
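As a minimal sketch of that validation step, using scikit-learn metrics on made-up test labels and model scores:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Hypothetical held-out test labels and model scores (probabilities for class 1).
y_test = np.array([0, 1, 1, 0, 1, 0, 1, 1])
scores = np.array([0.2, 0.8, 0.6, 0.4, 0.9, 0.1, 0.3, 0.7])
y_pred = (scores >= 0.5).astype(int)  # threshold the scores at 0.5

print("accuracy:", accuracy_score(y_test, y_pred))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
# ROC AUC measures ranking quality independently of the chosen threshold.
print("ROC AUC:", roc_auc_score(y_test, scores))
```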