Machine Learning


Elucidating Bias, Variance, Under-fitting, and Over-fitting.

#artificialintelligence

Overfitting, underfitting, and the bias-variance tradeoff are foundational concepts in machine learning. They are important because they describe the state of a model based on its performance. The best way to understand these terms is to see them as a tradeoff between the bias and the variance of the model. Overfitting occurs when a statistical model or machine learning algorithm captures the noise in the data; intuitively, it occurs when the model or algorithm fits the training data too well.
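A minimal sketch of this tradeoff (not from the article; the data, function names, and noise level below are invented for illustration): compare a high-variance model that memorizes the training set with a high-bias model that ignores the input entirely. The memorizer reaches zero training error precisely because it has fit the noise, which is the signature of overfitting; its training error tells you nothing about how it generalizes.

```python
import random

random.seed(0)

def make_data(n):
    # y = x + noise: the "true" signal is linear, the noise is irreducible
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [x + random.gauss(0, 2.0) for x in xs]
    return xs, ys

train_x, train_y = make_data(30)
test_x, test_y = make_data(30)

def nearest_neighbor(x):
    # High-variance model: memorizes the training set, noise included
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def global_mean(x):
    # High-bias model: ignores x entirely (underfits)
    return sum(train_y) / len(train_y)

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The memorizer has zero training error...
print(mse(nearest_neighbor, train_x, train_y))  # 0.0
# ...but its test error is nonzero, because it memorized the noise
print(mse(nearest_neighbor, test_x, test_y))
print(mse(global_mean, test_x, test_y))
```

The gap between training and test error is what diagnoses overfitting here, not the training error alone.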


Responsible AI/ML Alleviates Technological Risks and Security Concerns

#artificialintelligence

Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries and solving important, real-world challenges at scale. The technology is maturing rapidly, with seemingly limitless applications. These vast opportunities carry with them a deep responsibility to build AI that works for everyone. AI applications have demonstrated their ability to automate routine work while also augmenting human capacity with new insights. However, with great power comes great responsibility.


COVID-19 smartphone app can tell if you're an asymptomatic carrier - by the way you cough - Study Finds

#artificialintelligence

As millions of people worldwide battle the symptoms of COVID-19, a group of "silent patients" may not even know they're sick and spreading the virus. Asymptomatic people, by definition, have no physical symptoms of the illnesses they carry. Researchers at the Massachusetts Institute of Technology (MIT), however, say they may be showing symptoms after all -- in the sound of their cough. Their study has produced an artificial intelligence program that can identify whether someone has coronavirus by the way their cough sounds. The researchers trained their AI model on thousands of recorded coughs from both healthy and sick volunteers.


The Beginners' Guide to the ROC Curve and AUC

#artificialintelligence

In the previous article, you learned about classification evaluation metrics such as Accuracy, Precision, Recall, and F1-Score. In this article, we will go through another important evaluation metric: the AUC-ROC score. The ROC curve (Receiver Operating Characteristic curve) is a graph showing the performance of a classification model at different probability thresholds. The ROC graph is created by plotting FPR vs. TPR, where FPR (False Positive Rate) is plotted on the x-axis and TPR (True Positive Rate) on the y-axis, for probability threshold values ranging from 0.0 to 1.0.
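The construction described above can be sketched in a few lines (a hedged illustration, not code from the article; the labels, scores, and threshold values are invented): sweep a threshold over the predicted scores, count true-positive and false-positive rates at each threshold, and obtain the AUC as the trapezoidal area under the resulting points.

```python
def roc_points(labels, scores, thresholds):
    # For each threshold t, predict positive when score >= t,
    # then count rates against the true labels.
    P = sum(labels)              # number of actual positives
    N = len(labels) - P          # number of actual negatives
    points = []
    for t in thresholds:
        tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
        points.append((fp / N, tp / P))  # (FPR on x-axis, TPR on y-axis)
    return points

def auc(points):
    # Trapezoidal area under the curve, after sorting by FPR
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
thresholds = [0.0, 0.35, 0.4, 0.8, 1.01]
pts = roc_points(labels, scores, thresholds)
print(pts)        # threshold 0.0 gives (1.0, 1.0); a high threshold gives (0.0, 0.0)
print(auc(pts))   # 0.75
```

A perfect classifier would pass through (0.0, 1.0) and score an AUC of 1.0, while random guessing traces the diagonal with an AUC of 0.5.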


Machine learning with PySpark

#artificialintelligence

In this article, I am going to share some machine learning work I have done in Spark using PySpark. Machine learning is one of the hottest applications of artificial intelligence (AI); AI is a much bigger ecosystem with many amazing applications. Machine learning, in simple terms, is the ability of a machine to learn automatically and improve from experience without being explicitly programmed. The learning process starts with observing data, then finding patterns in the data and making better decisions based on what has been learned.


Machine Learning is Conquering Explicit Programming

#artificialintelligence

Before we proceed further in this post, let us first understand what binary classification is. Let's understand this with a very simple instance. You are at home and it's lunchtime; your mom comes to you and asks if you are hungry and want to have your lunch. Your answer will be either "yes" or "no". You only have two options to reply, i.e., binary options. Let's take another example: a student who has just received his grade 12 result. The result will be either "passed" or "failed".
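The pass/fail example above can be sketched as a one-rule binary classifier (a hypothetical illustration; the function name and the passing mark of 40 are assumptions, not from the article). The point is that every input maps to exactly one of two labels:

```python
def classify_result(score, passing_mark=40):
    # Binary classification: the output space has exactly two labels
    return "passed" if score >= passing_mark else "failed"

print(classify_result(72))  # passed
print(classify_result(35))  # failed
```

Real binary classifiers (logistic regression, decision trees, and so on) learn the decision rule from data instead of hard-coding it, but the output space is the same two labels.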


Why is Python so popular among Data Scientists?

#artificialintelligence

The ability to extract insights from massive amounts of data decides your enterprise's success. This is where data scientists and analysts come in, interpreting data and deriving insights that help identify opportunities and inform strategic decisions. For effective analysis, data scientists need to be equipped with the best tools for analyzing, reporting, and visualizing data. Languages such as C, C++, Java, and JavaScript also help in working with data, so why Python? That's a tricky question to answer.


Global Big Data Conference

#artificialintelligence

While many know the UK company Ocado as an online grocery retailer, it's really one of the most innovative tech companies in the world. Ocado was founded in 2000 as an entirely online experience and therefore never had a brick-and-mortar store to serve its customers, who number 580,000 each day. Its technology expertise came about out of necessity, as it began to build the software and hardware it needed to be efficient, productive, and competitive. Today, Ocado uses artificial intelligence (AI) and machine learning in many ways throughout its business. From its founding in 2000, Ocado tried to piece together the technology it needed to succeed by purchasing products off the shelf.


How the Army plans to revolutionize tanks with artificial intelligence

#artificialintelligence

Even as the U.S. Army attempts to integrate cutting-edge technologies into its operations, many of its platforms remain fundamentally in the 20th century. The way tank crews operate their machines has gone essentially unchanged over the last 40 years. At a time when the military is enamored with robotics, artificial intelligence, and next-generation networks, operating a tank relies entirely on manual inputs from highly trained operators. "Currently, tank crews use a very manual process to detect, identify and engage targets," explained Abrams Master Gunner Sgt. "Tank commanders and gunners are manually slewing, trying to detect targets using their sensors. Once they come across a target they have to manually select the ammunition that they're going to use to service that target, lase the target to get an accurate range to it, and a few other factors."


Nvidia makes a clean sweep of MLPerf predictions benchmark for artificial intelligence

#artificialintelligence

Graphics chip giant Nvidia wiped the floor with its competition in a benchmark set of tests released Wednesday afternoon, demonstrating better performance on a host of artificial intelligence tasks. The benchmark, called MLPerf and administered by the MLPerf organization, an industry consortium, showed Nvidia achieving better speed on a variety of tasks that use neural networks, from categorizing images to recommending which products a person might like. Prediction is the part of AI where a trained neural network produces output on real data, as opposed to the training phase, when the neural network is first being refined. Benchmark results on training tasks were announced by MLPerf back in July. Many of the scores pertain to Nvidia's T4 chip, which has been on the market for some time, but even more impressive results were reported for its A100 chips, unveiled in May.