Dermatology


Does this artificial intelligence think like a human?

#artificialintelligence

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts make sense of a model's reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns. Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior.
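
The article does not describe how that aggregation works, so the sketch below is only an illustration: it assumes each example already carries a per-decision "agreement" score (how well the model's explanation matches the region a clinician would look at) and simply sorts the cases so the least trustworthy decisions surface first. The score definition, field names, and example data are all hypothetical, not the researchers' actual method.

```python
# Illustrative sketch only: the article does not describe the method's API,
# so the score definition and field names here are assumptions.
from dataclasses import dataclass

@dataclass
class Explained:
    example_id: str
    label: str          # ground-truth class
    prediction: str     # model output
    agreement: float    # assumed score: fraction of model saliency inside the human-marked region

def rank_explanations(results):
    """Aggregate per-example explanations and surface the most suspicious cases first."""
    # Sorting by agreement (ascending) puts decisions backed by the wrong evidence at the top.
    return sorted(results, key=lambda r: r.agreement)

if __name__ == "__main__":
    results = [
        Explained("img_001", "malignant", "malignant", 0.82),
        Explained("img_002", "malignant", "malignant", 0.07),  # right answer, wrong evidence
        Explained("img_003", "benign", "malignant", 0.41),
    ]
    for r in rank_explanations(results):
        print(f"{r.example_id}: agreement={r.agreement:.2f}, correct={r.label == r.prediction}")
```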


Do Humans and AI Think Alike?

#artificialintelligence

MIT researchers developed a technique that compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model's behavior and understand how the model arrives at its decisions. In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts make sense of a model's reasoning, these methods often provide insights on only one decision at a time, and each must be manually evaluated.
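
The teaser does not say how model and human reasoning are compared. One common choice, assumed here purely for illustration, is to overlap a thresholded saliency map with a human-annotated region and report intersection-over-union; the function name, threshold, and toy arrays below are not from the article.

```python
# Hypothetical illustration: compare the model's high-saliency pixels with a
# human-drawn mask via intersection-over-union. The metric choice is an assumption.
import numpy as np

def saliency_human_iou(saliency, human_mask, threshold=0.5):
    """IoU between the model's high-saliency pixels and a human-annotated region."""
    model_mask = saliency >= threshold          # binarize the saliency heatmap
    inter = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    return float(inter) / union if union else 0.0

# Toy example: the model attends to a corner patch, the clinician marked the centre lesion.
saliency = np.zeros((8, 8)); saliency[:3, :3] = 0.9
human_mask = np.zeros((8, 8), dtype=bool); human_mask[3:6, 3:6] = True
print(round(saliency_human_iou(saliency, human_mask), 3))  # 0.0 -> model relies on an unrelated region
```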


Impact of AI and Robotics in the Healthcare Industry

#artificialintelligence

AI and robotics are already at work in several healthcare establishments. In dermatology, for example, AI is being used to detect skin cancer: a technology called "MelaFind" scans the skin with infrared light, and sophisticated algorithms then evaluate the scanned data to assess how serious a lesion is. AI and robotics still need further development and continued experimentation before they become an integral part of the industry, but the rapid growth of these two technologies has the potential to transform numerous aspects of healthcare.


AI-produced images can't fix diversity issues in dermatology databases

#artificialintelligence

Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It's working on an AI program to generate synthetic images of diseases on darker skin -- and using those images for a tool that could help diagnose skin cancer. "Having real images of darker skin is the ultimate solution," says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. "Until we have that data, we need to find a way to close the gap."
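
The excerpt does not detail how the synthetic images would be folded into training, so the following is a minimal sketch under one assumption: under-represented skin-tone groups are topped up with synthetic images until the groups are roughly balanced. The file names, group labels, and balancing rule are hypothetical and do not represent the McMaster pipeline.

```python
# Minimal sketch, not the actual project pipeline: mix synthetic darker-skin images
# into a training list so that skin-tone groups are balanced. All names are made up.
import random
from collections import defaultdict

def balance_with_synthetic(real, synthetic, seed=0):
    """Top up under-represented skin-tone groups with synthetic images."""
    rng = random.Random(seed)
    by_tone = defaultdict(list)
    for path, tone in real:
        by_tone[tone].append((path, tone))
    target = max(len(items) for items in by_tone.values())   # match the largest group
    train = [item for items in by_tone.values() for item in items]
    for tone, pool in synthetic.items():
        deficit = target - len(by_tone[tone])
        if deficit > 0 and pool:
            train += [(p, tone) for p in rng.choices(pool, k=deficit)]
    rng.shuffle(train)
    return train

real = [("real_light_01.jpg", "light")] * 90 + [("real_dark_01.jpg", "dark")] * 10
synthetic = {"dark": [f"synth_dark_{i:03d}.jpg" for i in range(200)]}
train = balance_with_synthetic(real, synthetic)
print(sum(1 for _, t in train if t == "dark"), "dark-skin images out of", len(train))
```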


Setting the standard for AI in dermatology - AIMed

#artificialintelligence

Dr. Rubeta Matin, NHS Consultant Dermatologist, reveals the challenges of setting up a new national skin database to support the development of dermatological AI in the UK. It's common knowledge that the chances of survival increase dramatically if melanoma is detected and treated early. However, many algorithm-based applications that claim to identify potentially dangerous-looking pigmented lesions on the skin have not been formally and appropriately validated in intervention studies. There are also few systematic and rigorous reviews assessing the true accuracy of these skin-cancer-diagnosing algorithms, especially those tested in artificial research settings that may not be representative of the real world. Concerns like these lead dermatologists to question whether the false reassurance given by such applications may delay individuals from seeking medical advice. Last February, a new study published in the BMJ revealed that mobile applications which assess the risk of suspicious moles may not be reliable enough to detect all forms of skin cancer.
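
To make the validation concern concrete, here is a minimal sketch of the kind of accuracy reporting such reviews ask for: sensitivity and specificity on an independent test set, each with a confidence interval. The counts are invented for illustration and do not come from the article or any study.

```python
# Illustrative only: sensitivity/specificity with 95% Wilson intervals, as one might
# report when validating a skin-cancer app. The counts below are hypothetical.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

tp, fn, tn, fp = 42, 8, 180, 30            # hypothetical prospective test-set counts
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity={sens:.2f}, 95% CI={wilson_interval(tp, tp + fn)}")
print(f"specificity={spec:.2f}, 95% CI={wilson_interval(tn, tn + fp)}")
```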


Maintaining fairness across distribution shift: do we have viable solutions for real-world applications?

arXiv.org Machine Learning

Fairness and robustness are often considered as orthogonal dimensions when evaluating machine learning models. However, recent work has revealed interactions between fairness and robustness, showing that fairness properties are not necessarily maintained under distribution shift. In healthcare settings, this can mean, for example, that a model which performs fairly according to a selected metric in "hospital A" shows unfairness when deployed in "hospital B". While a nascent field has emerged to develop provably fair and robust models, it typically relies on strong assumptions about the shift, limiting its impact for real-world applications. In this work, we explore the settings in which recently proposed mitigation strategies are applicable by referring to a causal framing. Using examples of predictive models in dermatology and electronic health records, we show that real-world applications are complex and often invalidate the assumptions of such methods. Our work hence highlights the technical, practical, and engineering gaps that prevent the development of robustly fair machine learning models for real-world applications. Finally, we discuss potential remedies at each step of the machine learning pipeline.
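
As a toy illustration of the failure mode the abstract describes, the sketch below evaluates one group-fairness metric (the true-positive-rate gap, i.e. an equal-opportunity gap) on synthetic predictions from two hypothetical hospitals. The metric, groups, and numbers are assumptions made for illustration, not the paper's experiments.

```python
# Toy sketch: a model can satisfy a fairness criterion (small TPR gap between groups)
# on "hospital A" data yet violate it on "hospital B". All data below are synthetic.
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rate between two groups (equal-opportunity gap)."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean() if mask.any() else 0.0)
    return abs(tprs[0] - tprs[1])

rng = np.random.default_rng(0)
# Hospital A: predictions are equally accurate for both groups.
yA = rng.integers(0, 2, 1000); gA = rng.integers(0, 2, 1000)
predA = np.where(rng.random(1000) < 0.9, yA, 1 - yA)
# Hospital B: shifted population, model flips labels for group 1 far more often.
yB = rng.integers(0, 2, 1000); gB = rng.integers(0, 2, 1000)
flip = rng.random(1000) < np.where(gB == 1, 0.35, 0.05)
predB = np.where(flip, 1 - yB, yB)
print("TPR gap, hospital A:", round(tpr_gap(yA, predA, gA), 3))
print("TPR gap, hospital B:", round(tpr_gap(yB, predB, gB), 3))
```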


Beyond Visual Image: Automated Diagnosis of Pigmented Skin Lesions Combining Clinical Image Features with Patient Data

arXiv.org Artificial Intelligence

Among the most common types of skin cancer are basal cell carcinoma, squamous cell carcinoma and melanoma. According to the WHO (2018), between 2 and 3 million non-melanoma skin cancers and 132,000 melanoma skin cancers currently occur every year worldwide. Melanoma is by far the most dangerous form of skin cancer, causing more than 75% of all skin cancer deaths (Allen, 2016). Early diagnosis of the disease plays an important role in reducing the mortality rate, with a chance of cure greater than 90% (SBD, 2018). The diagnosis of pigmented skin lesions (PSLs) can be made by invasive and non-invasive methods. One of the most common non-invasive methods was presented by Soyer et al. (1987). The method allows the visualization of morphological structures not visible to the naked eye with the use of an instrument called a dermatoscope. Compared to clinical diagnosis alone, the use of the dermatoscope by experts makes the diagnosis of PSLs easier, increasing diagnostic sensitivity by 10-27% (Mayer et al., 1997).
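
The abstract excerpt stops before describing the model itself, so the sketch below only illustrates the idea named in the title, combining clinical image features with patient data, via simple feature concatenation before a classifier. The feature dimensions, metadata fields, and synthetic data are assumptions, not the authors' architecture.

```python
# Minimal sketch of image/patient-data fusion by concatenation, on synthetic data.
# This is not the paper's model; dimensions and metadata fields are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
image_features = rng.normal(size=(n, 64))                  # e.g. an embedding of the lesion photo
patient_data = np.column_stack([
    rng.integers(18, 90, n),                                # age
    rng.integers(0, 2, n),                                  # sex (0/1)
    rng.integers(0, 8, n),                                  # anatomical site, label-encoded
])
labels = rng.integers(0, 2, n)                              # benign vs. malignant (synthetic)

# Feature-level fusion: concatenate the two blocks, then fit a simple classifier.
X = np.hstack([image_features, patient_data])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy on synthetic data:", round(clf.score(X, labels), 2))
```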