Artificial Intelligence in Nephrology: How Can Artificial Intelligence Augment Nephrologists' Intelligence?


Background: Artificial intelligence (AI) now plays a critical role in almost every area of our daily lives and academic disciplines, driven by the growth of computing power, advances in methods and techniques, and the explosion in the amount of available data; medicine is no exception. Rather than replacing clinicians, AI is augmenting clinicians' intelligence in diagnosis, prognosis, and treatment decisions. Summary: Kidney disease is a substantial medical and public health burden globally, with both acute kidney injury and chronic kidney disease causing high morbidity and mortality as well as a huge economic burden. Although existing research and applied work have contributed to more accurate prediction and a better understanding of histologic pathology, much work remains to be done and many problems remain to be solved. Key Messages: AI applications for the diagnosis and prognosis of high-prevalence, high-morbidity nephropathies in areas with inadequate medical resources need special attention; high-volume, high-quality data need to be collected and prepared; and a consensus on ethics and safety in the use of AI technologies needs to be built.

Why is high accuracy in classification not always correct?


Classification accuracy is a statistic that describes a classification model's performance: the number of correct predictions divided by the total number of predictions. It is simple to compute and to interpret, making it the most commonly used metric for assessing classifier models. But the accuracy score is not the best metric for evaluating a model in every scenario. In this article, we will discuss the reasons not to rely on the accuracy metric alone. Following are the topics to be covered.
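As a quick illustration of the problem, here is a minimal sketch (synthetic labels, assuming scikit-learn is available) in which a classifier that simply predicts the majority class still scores 95% accuracy while being useless at finding positives:

    # Why accuracy can mislead on imbalanced data: a majority-class
    # "classifier" evaluated on a synthetic, heavily imbalanced label set.
    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score

    y_true = np.array([0] * 950 + [1] * 50)   # 950 negatives, 50 positives
    y_pred = np.zeros_like(y_true)            # always predict the majority class

    print(accuracy_score(y_true, y_pred))             # 0.95 -- looks excellent
    print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- never finds a positive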

The "Hello World" of Tensorflow - KDnuggets


TensorFlow is an open-source, end-to-end machine learning framework that makes it easy to train and deploy models. Its name consists of two words: tensor and flow. A tensor is a vector or a multidimensional array, the standard way of representing data in deep learning models. Flow describes how the data moves through a graph, passing through the operations that form its nodes. TensorFlow is used for numerical computation and large-scale machine learning, bundling various algorithms together.
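To make this concrete, here is a minimal "Hello World" sketch, assuming TensorFlow 2.x is installed:

    import tensorflow as tf

    # A constant is the simplest tensor: a node whose value never changes.
    hello = tf.constant("Hello, TensorFlow!")
    print(hello.numpy().decode())        # -> Hello, TensorFlow!

    # Tensors flow through operations (the graph's nodes); here, addition.
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0, 4.0]])
    print(tf.add(a, b).numpy())          # -> [[4. 6.]]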

Analysing Fairness in Machine Learning (with Python)


It is no longer enough to build models that make accurate predictions. We also need to make sure that those predictions are fair. Doing so reduces the harm caused by biased predictions and goes a long way toward building trust in your AI systems. To correct bias, we need to start by analysing fairness in data and models; you can see a summary of the approaches we will cover below. Understanding why a model is unfair is more complicated, which is why we will first do an exploratory fairness analysis. This will help you identify potential sources of bias before you start modelling. We will then move on to measuring fairness by applying different definitions of it. We will discuss the theory behind these approaches and, along the way, apply them using Python. We will discuss key pieces of code, and you can find the full project on GitHub. You should still be able to follow the article even if you do not want to use the Python code.
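To give a flavour of what measuring fairness looks like in practice, here is a minimal sketch (synthetic data, not the project's GitHub code) that checks one common definition, demographic parity, by comparing positive-prediction rates across two groups:

    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.choice(["A", "B"], size=1000)   # protected attribute
    # Hypothetical model predictions, deliberately skewed against group B.
    y_pred = (rng.random(1000) < np.where(group == "A", 0.6, 0.4)).astype(int)

    for g in ["A", "B"]:
        rate = y_pred[group == g].mean()
        print(f"Group {g}: positive prediction rate = {rate:.2f}")

A large gap between the two rates suggests the model violates demographic parity and deserves a closer look.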

ShotSpotter: AI at its Worst


Editor's Note: It has come to our attention that several statements in this article were based on sources that have since been recanted and are factually incorrect. Court documents from the case show that ShotSpotter accurately reported the location of the gunfire in both the real-time alert and the forensic report. The initial alert was classified as a possible firework, but through the company's standard procedure of human analysis it was determined within one minute to be gunfire. The evidence that ShotSpotter provided was later withdrawn by the prosecution and had no bearing on the outcome of the case. Sixty-five-year-old Michael Williams was released last month after spending almost a year in jail on a murder charge.

A Simple Guide to Machine Learning Visualisations - KDnuggets


An important step in developing machine learning models is evaluating their performance. Depending on the type of machine learning problem you are dealing with, there is generally a choice of metrics for this step. However, simply looking at one or two numbers in isolation does not always enable us to make the right choice for model selection. For example, a single error metric gives us no information about the distribution of the errors. It does not answer questions such as: is the model wrong in a big way a small number of times, or is it producing lots of smaller errors?
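As a minimal sketch of the kind of visualisation that answers such questions (synthetic data standing in for real model predictions, assuming matplotlib is available):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    y_true = rng.normal(100, 20, size=500)
    y_pred = y_true + rng.normal(0, 5, size=500)   # stand-in predictions

    errors = y_pred - y_true
    print(f"Mean absolute error: {np.abs(errors).mean():.2f}")

    # The histogram shows whether the model makes many small errors or a
    # few large ones -- information a single aggregate metric hides.
    plt.hist(errors, bins=30, edgecolor="black")
    plt.xlabel("Prediction error")
    plt.ylabel("Count")
    plt.show()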

AI glossary: Artificial Intelligence terms - Dataconomy


The most complete list of artificial intelligence terms, as a dictionary, is here for you. Artificial intelligence is already all around us. As AI becomes increasingly prevalent in the workplace, it's more important than ever to keep up with the newest terms and use cases. Leaders in the field of artificial intelligence are well aware that it is revolutionizing business. So, how much do you know about it? You'll discover concise definitions for automation tools and phrases below. It's no surprise that the world is moving ahead quickly thanks to the wonders of artificial intelligence. Technology has brought new value and creativity to our personal and professional lives. While frightening at times, this rapid evolution, driven by AI, has also given us new phrases to add to our everyday vocabulary that we had never heard before.

I ran 80,000 simulations to investigate different p-value adjustments


However, to the surprise of approximately no one who works professionally with data, we do not live in an ideal world. A variety of pressures compel many practitioners to perform tens, hundreds, or even thousands of significance tests on the same data set. Some reasons for doing this are better than others, but independent of even the very best motivations, this practice basically breaks everyday statistics. The assurance offered by a small p-value (that chance alone would make null differences appear this distinct only 5%, 1%, or 0.1% of the time) is moot when you're playing the odds hundreds, thousands, or tens of thousands of times. A really, really big number of tests divided by a big number [or, equivalently here, multiplied by a small proportion] is still a really, really big number of false positives.
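Here is a small-scale sketch of the effect (far fewer than the article's 80,000 simulations, assuming NumPy and SciPy are available). Every test below is run on pure noise, so any "significant" result is a false positive by construction:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_tests, alpha = 1000, 0.05

    # 1,000 t-tests comparing two samples drawn from the same distribution.
    p_values = np.array([
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ])

    print(f"Unadjusted: {(p_values < alpha).sum()} false positives")   # ~50
    # Bonferroni adjustment: test each p-value against alpha / n_tests.
    print(f"Bonferroni: {(p_values < alpha / n_tests).sum()} false positives")  # ~0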

Automate your Machine Learning development pipeline with PyCaret


Data science is not easy; we all know that. Even programming requires a lot of your cycles to get fully onboarded. Don't get me wrong, I love being a developer to some extent, but it is hard. You can read and watch a ton of videos about how easy it is to get into programming, but as with everything in life, if you are not passionate, you may find some roadblocks along the way. I get it, you may be thinking, "Nice way to start a post! I'm out, dude." But let me tell you that even though becoming a data scientist is a challenge, as we become more data-centric, data-aware, and data-dependent, you need to sort these issues out to become a specialist; that's part of the journey.
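For context on what that automation looks like, here is a minimal sketch of PyCaret's low-code workflow, assuming PyCaret 3.x is installed and using one of its bundled demo datasets:

    from pycaret.datasets import get_data
    from pycaret.classification import setup, compare_models

    data = get_data("juice")   # demo dataset shipped with PyCaret

    # setup() handles preprocessing: imputation, encoding, train/test split.
    setup(data=data, target="Purchase", session_id=123)

    # compare_models() cross-validates a library of classifiers and
    # returns the best one by the default metric.
    best_model = compare_models()
    print(best_model)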