Does This Artificial Intelligence Think Like A Human? - Liwaiwai

#artificialintelligence

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts make sense of a model's reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns. Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior.
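
The article describes aggregating, sorting, and ranking per-decision explanations. As a rough illustration only (not the MIT/IBM tool itself), the sketch below assumes each prediction already carries an agreement score between the model's explanation and human-marked evidence, and shows how such scores might be aggregated and ranked; every name and number in it is made up.

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    image_id: str
    correct: bool      # was the model's prediction right?
    agreement: float   # overlap between the model's explanation and human-marked evidence, 0..1

# Hypothetical per-decision scores; a real model would produce thousands or millions of these.
decisions = [
    ExplainedDecision("img_001", True, 0.82),
    ExplainedDecision("img_002", True, 0.11),   # right answer, but for the wrong reasons
    ExplainedDecision("img_003", False, 0.05),
    ExplainedDecision("img_004", True, 0.64),
]

# Aggregate: on average, how human-like is the evidence the model relies on?
mean_agreement = sum(d.agreement for d in decisions) / len(decisions)
print(f"mean model-human agreement: {mean_agreement:.2f}")

# Sort and rank: surface correct predictions backed by the least human-like evidence first,
# since those are the decisions most worth auditing by hand.
suspicious = sorted((d for d in decisions if d.correct), key=lambda d: d.agreement)
for rank, d in enumerate(suspicious, start=1):
    print(f"{rank}. {d.image_id} agreement={d.agreement:.2f}")
```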


Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study

#artificialintelligence

Background: Composition of tissue types within a wound is a useful indicator of its healing progression. Tissue composition is clinically used in wound healing tools (eg, Bates-Jensen Wound Assessment Tool) to assess risk and recommend treatment. However, wound tissue identification and the estimation of their relative composition is highly subjective. Consequently, incorrect assessments could be reported, leading to downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, or failure to make appropriate referrals to specialists.

Objective: This study aimed to measure inter- and intrarater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians and determine if an objective assessment of tissue types (ie, size and amount) can be achieved using deep neural networks.

Methods: A data set of 58 anonymized wound images of various types of chronic wounds from Swift Medical’s Wound Database was used to conduct the inter- and intrarater agreement study. The data set was split into 3 subsets with 50% overlap between subsets to measure intrarater agreement. In this study, 4 different tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled by the 5 wound clinicians at 1-week intervals using a browser-based image annotation tool. In addition, 2 deep convolutional neural network architectures were developed for wound segmentation and tissue segmentation and were used in sequence in the workflow. These models were trained using 465,187 and 17,000 image-label pairs, respectively. This is the largest and most diverse reported data set used for training deep learning models for wound and wound tissue segmentation. The resulting models offer robust performance in diverse imaging conditions, are unbiased toward skin tones, and could execute in near real time on mobile devices.

Results: A poor to moderate interrater agreement in identifying tissue types in chronic wound images was reported. A very poor Krippendorff α value of .014 for interrater variability when identifying epithelization was observed, whereas granulation was most consistently identified by the clinicians. The intrarater intraclass correlation (3,1), however, indicates that raters were relatively consistent when labeling the same image multiple times over a period. Our deep learning models achieved a mean intersection over union of 0.8644 and 0.7192 for wound and tissue segmentation, respectively. A cohort of wound clinicians, by consensus, rated 91% (53/58) of the tissue segmentation results to be between fair and good in terms of tissue identification and segmentation quality.

Conclusions: The interrater agreement study validates that clinicians exhibit considerable variability when identifying and visually estimating wound tissue proportion. The proposed deep learning technique provides objective tissue identification and measurements to assist clinicians in documenting the wound more accurately and could have a significant impact on wound care when deployed at scale.
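
The mean intersection-over-union figures quoted above are a standard way to score segmentation masks. Below is a minimal sketch of how mean IoU can be computed from a predicted and a ground-truth label mask; the class IDs and array sizes are illustrative and this is not the study's code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes, skipping classes absent from both masks."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = (pred == c), (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # this class does not appear in either mask
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")

# Toy masks: 0 = background, 1 = epithelial, 2 = granulation, 3 = slough, 4 = eschar
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=(256, 256))
target = rng.integers(0, 5, size=(256, 256))
print(f"mean IoU: {mean_iou(pred, target, num_classes=5):.4f}")
```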


Using artificial intelligence to diagnose cancer

#artificialintelligence

Te Herenga Waka-Victoria University of Wellington PhD graduate Dr Qurrat Ul Ain has developed an artificial intelligence programme that could help diagnose skin cancer, using just a photograph. During her PhD, Dr Qurrat Ul Ain developed a computer-aided diagnostic system that can identify certain characteristics of the disease from a photograph of a skin lesion. "Skin cancer has certain unique visual features that help to differentiate it from normal skin," Dr Qurrat Ul Ain says. "These include colour, texture, and the shape of lesions. By showing our artificial intelligence programme images of cancerous skin, we were able to teach it to identify cancer when shown other photographs." Her diagnostic system achieved a 100% accuracy rating in identifying images of melanoma based on the more than 600 images tested so far.
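
The quoted description (colour, texture, and shape of lesions, learned from labelled photographs) maps onto a classic feature-then-classify pipeline. The sketch below illustrates that general idea with hand-crafted descriptors and a random-forest classifier; it is not Dr Qurrat Ul Ain's actual system, and every function and number in it is a placeholder.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def lesion_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crude colour/texture/shape descriptors for a lesion region.

    image: H x W x 3 RGB array with values in [0, 1]; mask: H x W boolean lesion mask.
    """
    region = image[mask]                                                # pixels inside the lesion
    colour = np.concatenate([region.mean(axis=0), region.std(axis=0)])  # 6 colour statistics
    gy, gx = np.gradient(image.mean(axis=2))
    texture = np.array([np.hypot(gx, gy)[mask].mean()])                 # average edge strength ~ roughness
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    shape = np.array([mask.sum() / bbox])                               # irregular lesions fill less of their box
    return np.concatenate([colour, texture, shape])                     # 8 numbers per photograph

# Stand-in training data; in practice each row would come from an expert-labelled lesion photo.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = rng.integers(0, 2, size=600)                                        # 1 = melanoma, 0 = benign
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify a new (random) photo via the same feature extraction.
image = rng.random((128, 128, 3))
mask = np.zeros((128, 128), dtype=bool)
mask[40:90, 30:100] = True
print("predicted class:", clf.predict(lesion_features(image, mask).reshape(1, -1))[0])
```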


Do Humans and AI Think Alike?

#artificialintelligence

MIT researchers have developed a technique that compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model's behavior. In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts make sense of a model's reasoning, these methods often provide insights on only one decision at a time, and each must be evaluated manually.
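
One common way to expose a model's "reasoning" for a single decision is a saliency map, which can then be compared against the region a clinician would point to. The sketch below uses a simple input-gradient saliency and an overlap score against a human-marked mask; the model, threshold, and mask are toy placeholders, and this is not the MIT/IBM method itself.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; in practice this would be the trained lesion model under inspection.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

def input_gradient_saliency(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Per-pixel importance: gradient magnitude of the target logit w.r.t. the input image."""
    image = image.clone().requires_grad_(True)
    logit = model(image.unsqueeze(0))[0, target_class]
    logit.backward()
    return image.grad.abs().max(dim=0).values             # collapse channels -> H x W map

image = torch.rand(3, 224, 224)                            # stand-in clinical photo
saliency = input_gradient_saliency(image, target_class=1)

# Compare the pixels the model relied on with the region a clinician marked, one decision at a time.
human_mask = torch.zeros(224, 224, dtype=torch.bool)
human_mask[80:140, 90:160] = True                          # lesion region marked by a human
model_mask = saliency >= saliency.quantile(0.9)            # top 10% most influential pixels
iou = (model_mask & human_mask).sum() / (model_mask | human_mask).sum()
print(f"model-human agreement (IoU): {iou.item():.2f}")
```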


Impact of AI and Robotics in the Healthcare Industry

#artificialintelligence

AI and robotics are already at work in several healthcare establishments. In the dermatology sector, AI is being used to detect skin cancer: one technology, "MelaFind," uses infrared light to evaluate the skin, and its algorithms then analyze the scanned data to judge how serious the skin cancer is. AI and robotics still require further development and continued experimentation to become an integral part of the industry, but the widespread growth of these two emerging technologies has the potential to transform numerous aspects of healthcare.


AI-produced images can't fix diversity issues in dermatology databases

#artificialintelligence

Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It's working on an AI program to generate synthetic images of diseases on darker skin -- and using those images for a tool that could help diagnose skin cancer. "Having real images of darker skin is the ultimate solution," says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. "Until we have that data, we need to find a way to close the gap."
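
As a rough illustration of the idea of closing that gap, the sketch below tops up an imbalanced training set with synthetic images; `generate_synthetic` is a placeholder for whatever image-synthesis model would actually render conditions on darker skin, not the McMaster team's code.

```python
import random

# Placeholder records; real entries would be image files plus clinical metadata.
real_images = [
    {"file": f"real_{i}.jpg", "skin_tone": "dark" if i % 5 == 0 else "light", "label": "melanoma"}
    for i in range(500)
]

def generate_synthetic(skin_tone: str, label: str, n: int) -> list[dict]:
    """Stand-in for an image-synthesis model that renders a condition on a given skin tone."""
    return [
        {"file": f"synthetic_{skin_tone}_{i}.png", "skin_tone": skin_tone, "label": label}
        for i in range(n)
    ]

# Count how under-represented darker skin is, then top up the shortfall with synthetic images.
dark = sum(1 for r in real_images if r["skin_tone"] == "dark")
light = len(real_images) - dark
synthetic_images = generate_synthetic("dark", "melanoma", n=max(0, light - dark))

training_set = real_images + synthetic_images
random.shuffle(training_set)
print(f"real dark: {dark}, real light: {light}, synthetic added: {len(synthetic_images)}")
```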