Accuracy


Coming to grips with actual false positive and false negative rates - Ai

#artificialintelligence

While $12.7 billion of this figure goes to another merchant when a customer is turned away, it must be noted that false declines "are also making for a less efficient digital economy". This is because "$7.6 billion of potential spending never came about as the shopper lost interest". In the same report, a senior industry executive pointed out that re-visiting risk appetite is vital. Also, "a lot of sins can be hidden in the name of #fraud prevention, because fraud teams aren't always incentivised to have a very rigorous statistical measure of false positives and false negatives". "Many companies just don't want to get on the MasterCard and Visa chargeback programmes, and that's the guiding principle."
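
As a rough illustration of what such a "rigorous statistical measure" could look like, the sketch below computes a false decline (false positive) rate and a missed fraud (false negative) rate from labeled transaction outcomes. The transactions, labels, and decline threshold are invented purely for this example and are not taken from the report.

```python
# Hypothetical sketch: measuring false declines (false positives) and
# missed fraud (false negatives) for a fraud-screening rule.
# The transactions, labels, and threshold below are invented for illustration.

transactions = [
    # (risk_score, is_actually_fraud)
    (0.91, True), (0.35, False), (0.72, False), (0.88, True),
    (0.15, False), (0.64, True), (0.40, False), (0.81, False),
]

DECLINE_THRESHOLD = 0.7  # assumed cut-off above which a transaction is declined

false_declines = sum(1 for score, fraud in transactions
                     if score >= DECLINE_THRESHOLD and not fraud)
missed_fraud = sum(1 for score, fraud in transactions
                   if score < DECLINE_THRESHOLD and fraud)
legitimate = sum(1 for _, fraud in transactions if not fraud)
fraudulent = sum(1 for _, fraud in transactions if fraud)

print(f"False positive (false decline) rate: {false_declines / legitimate:.2%}")
print(f"False negative (missed fraud) rate:  {missed_fraud / fraudulent:.2%}")
```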


Lions say Matt Stafford's test was a false positive

FOX News

The Detroit Lions removed Matthew Stafford from the COVID-IR list, saying he received a false-positive test result -- and drawing the ire of the quarterback's wife toward the NFL. The list was created for players who either test positive for COVID-19 or have been in close contact with an infected person. Stafford was listed on it Saturday, but the team said Tuesday his testing sequence for the pre-entry period was: negative, negative, false positive -- then the next three tests were all negative. "To be clear, Matthew does NOT have COVID-19 and never has had COVID-19 and the test in question was a False-Positive," the team said in a statement. "Also, all of Matthew's family have been tested and everyone is negative."


How Traditional Machine Learning Is Holding Cybersecurity Back

#artificialintelligence

While global cybersecurity spending now surpasses $100 billion annually, 64 percent of enterprises were compromised in 2018, according to a study by the Ponemon Institute. The standard answer is that wily cyber-criminals are employing ever-evolving, increasingly sophisticated attack methods, part of a never-ending game of cat-and-mouse in which they all too often outsmart the good guys. This is undoubtedly true – but the root of the problem is that traditional machine learning-based cybersecurity solutions fail to keep up with the growing sophistication of today's cyber threats, whether they are created by hackers or by AI. Why does machine learning so often come up short – and how should cybersecurity evolve to meet the scale and complexity of the challenge? There's no question that machine learning has driven significant improvements in cybersecurity.


Analyzing the Performance of the Classification Models in Machine Learning

#artificialintelligence

A confusion matrix (also called an error matrix) is used to analyze how well classification models (such as logistic regression or a decision tree classifier) perform. Why do we analyze the performance of the models? Analyzing the performance of the models helps us to find and eliminate bias and variance problems if they exist, and it also helps us to fine-tune the model so that it produces more accurate results. The confusion matrix is usually applied to binary classification problems but can be extended to multi-class classification problems as well. Concepts are comprehended better when illustrated with examples, so let us consider one.
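
As a minimal sketch of the idea, the example below builds a confusion matrix for a binary classifier with scikit-learn and reads off the accuracy; the true and predicted labels are made up for illustration, and the same call also works for multi-class problems.

```python
# Minimal sketch: building and reading a confusion matrix for a binary
# classifier. The true/predicted labels are invented for illustration.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # actual classes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # classes predicted by the model

# Rows are actual classes, columns are predicted classes; for binary labels
# {0, 1}, ravel() yields true negatives, false positives, false negatives,
# and true positives in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")
print(f"Accuracy: {accuracy:.2f}")
```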


Fraud prediction; a challenge for machine learning algorithms

#artificialintelligence

Fraud is a billion-dollar business that is expanding rapidly year after year. Thousands of people fall victim to it. Fraud always involves a false statement, misrepresentation, or deceitful conduct. Common varieties of fraud offenses include identity theft, insurance fraud, credit/debit card fraud, and mail fraud. The PwC global economic crime survey of 2018 (PwC, 2018) found that about half of the 7,200 surveyed enterprises had already experienced fraud of some kind. This is an increase compared to the PwC survey conducted in 2016 (PwC, 2016), in which slightly more than a third of organizations surveyed had experienced economic crime.


Face masks frustrating facial recognition technology, US agency says

The Independent - Tech

A new study has found that the masks which protect people from spreading the coronavirus also have a second use: breaking facial recognition algorithms. Researchers from the National Institute of Standards and Technology have found that the best facial recognition algorithms had significantly higher error rates when trying to identify someone wearing a cloth covering. The researchers tested one-to-one matching algorithms, where a photo is compared to a different photo of the same person. This verification method is commonly used to unlock smartphones or check passports. The team drew digital masks onto the faces in a trove of border-crossing photographs and then compared those photos against another database of unmasked people seeking visas and other immigration benefits.


NIST study finds that masks defeat most facial recognition algorithms

#artificialintelligence

In a report published today by the National Institute of Standards and Technology (NIST), a physical sciences laboratory and non-regulatory agency of the U.S. Department of Commerce, researchers attempted to evaluate the performance of facial recognition algorithms on faces partially covered by protective masks. They report that the 89 commercial facial recognition algorithms from Panasonic, Canon, Tencent, and others they tested had error rates between 5% and 50% in matching digitally applied masks with photos of the same person without a mask. "With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces," Mei Ngan, a NIST computer scientist and a coauthor of the report, said in a statement. "We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind."


Interesting AI/ML Articles You Should Read This Week (July 4)

#artificialintelligence

This week I came across several articles that challenge the development and utilization of AI-based systems across several domains. I've never had to genuinely reflect on the philosophical and legal aspects of my contributions as a machine learning practitioner, but this has changed after reading some interesting articles that present the consequences of AI advancement that are happening now, and those that are yet to happen. Our lives today could look entirely different tomorrow. Would you let a machine learning model that has a failure rate of 98% and a false positive rate of 81% into production?


Threats of a Replication Crisis in Empirical Computer Science

Communications of the ACM

Andy Cockburn (andy.cockburn@canterbury.ac.nz) is a professor at the University of Canterbury, Christchurch, New Zealand, where he is head of the HCI and Multimedia Lab. Pierre Dragicevic is a research scientist at Inria, Orsay, France.