Observational Study


Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study

#artificialintelligence

Background: Composition of tissue types within a wound is a useful indicator of its healing progression. Tissue composition is clinically used in wound healing tools (eg, Bates-Jensen Wound Assessment Tool) to assess risk and recommend treatment. However, wound tissue identification and the estimation of their relative composition are highly subjective. Consequently, incorrect assessments could be reported, leading to downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, or failure to make appropriate referrals to specialists.

Objective: This study aimed to measure inter- and intrarater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians and determine if an objective assessment of tissue types (ie, size and amount) can be achieved using deep neural networks.

Methods: A data set of 58 anonymized wound images of various types of chronic wounds from Swift Medical's Wound Database was used to conduct the inter- and intrarater agreement study. The data set was split into 3 subsets with 50% overlap between subsets to measure intrarater agreement. In this study, 4 different tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled by 5 wound clinicians at 1-week intervals using a browser-based image annotation tool. In addition, 2 deep convolutional neural network architectures were developed for wound segmentation and tissue segmentation and were used in sequence in the workflow. These models were trained using 465,187 and 17,000 image-label pairs, respectively. This is the largest and most diverse reported data set used for training deep learning models for wound and wound tissue segmentation. The resulting models offer robust performance in diverse imaging conditions, are unbiased toward skin tones, and could execute in near real time on mobile devices.

Results: A poor to moderate interrater agreement in identifying tissue types in chronic wound images was reported. A very poor Krippendorff α value of .014 for interrater variability when identifying epithelization was observed, whereas granulation was most consistently identified by the clinicians. The intrarater intraclass correlation (3,1), however, indicates that raters were relatively consistent when labeling the same image multiple times over a period. Our deep learning models achieved a mean intersection over union of 0.8644 and 0.7192 for wound and tissue segmentation, respectively. A cohort of wound clinicians, by consensus, rated 91% (53/58) of the tissue segmentation results to be between fair and good in terms of tissue identification and segmentation quality.

Conclusions: The interrater agreement study validates that clinicians exhibit considerable variability when identifying and visually estimating wound tissue proportion. The proposed deep learning technique provides objective tissue identification and measurements to assist clinicians in documenting the wound more accurately and could have a significant impact on wound care when deployed at scale.
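For readers unfamiliar with the reported metric, mean intersection over union (IoU) averages per-class overlap between predicted and reference masks. The snippet below is not the authors' evaluation code; it is a minimal NumPy sketch that assumes integer label maps with one index per tissue class.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: integer label maps of the same shape, where each pixel holds
    a class index (e.g. 0=background, 1=epithelial, 2=granulation, 3=slough,
    4=eschar). Classes absent from both maps are skipped.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class not present in either map
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: two 4x4 label maps with three classes.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 2],
                 [0, 1, 2, 2],
                 [0, 0, 2, 2]])
target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 2, 2, 2],
                   [0, 0, 2, 2]])
print(mean_iou(pred, target, num_classes=3))  # ~0.78
```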


Two more AI ethics researchers follow Timnit Gebru out of Google

Engadget

Google has lost two prominent members of its Ethical AI research group, reports Bloomberg. On Wednesday, researcher Alex Hanna and software engineer Dylan Baker left the company to join Timnit Gebru's Distributed AI Research Institute. Gebru founded the nonprofit in December following her controversial exit from the tech giant in 2020. Up until the end of that year, Gebru was one of the co-leads of Google's Ethical AI research group. After publishing a paper the company said didn't meet its bar for publication, Gebru claims Google fired her.


Predicting treatment effects from observational studies using machine learning methods: A simulation study

arXiv.org Machine Learning

Measuring treatment effects in observational studies is challenging because of confounding bias. Confounding occurs when a variable affects both the treatment and the outcome. Traditional methods such as propensity score matching estimate treatment effects by conditioning on the confounders. Recent literature has presented new methods that use machine learning to predict the counterfactuals in observational studies, which then allow treatment effects to be estimated. These studies, however, have been applied to real-world data where the true treatment effects are not known. This study aimed to evaluate the effectiveness of this counterfactual prediction method by simulating two main scenarios: with and without confounding. Each scenario also included linear and non-linear relationships between input and output data. The key feature of the simulations was that the true causal effects were known by construction. Linear regression, lasso regression, and random forest models were used to predict the counterfactuals and treatment effects, and these were compared with the true treatment effect as well as a naive treatment effect. The results show that the most important factor in whether this machine learning method performs well is the degree of non-linearity in the data. Surprisingly, for both the non-confounding and confounding scenarios, the machine learning models all performed well on the linear data sets. However, when non-linearity was introduced, the models performed very poorly. Therefore, under the conditions of this simulation study, the machine learning method performs well under linearity, even when confounding is present, but at this stage should not be trusted when non-linearity is introduced.
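The general idea of counterfactual prediction can be illustrated with a toy simulation. The sketch below is not the authors' code; it is a minimal scikit-learn example assuming a single confounder, a known true effect of 2.0, and a T-learner-style pair of outcome models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000

# Simulate a confounder x that drives both treatment assignment and outcome,
# with a known true treatment effect of 2.0.
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-x)))          # confounded treatment
y = 2.0 * t + 1.5 * x + rng.normal(scale=0.5, size=n)

# Naive estimate ignores confounding: simple difference of group means.
naive_ate = y[t == 1].mean() - y[t == 0].mean()

# Counterfactual prediction (T-learner): fit one outcome model per arm,
# predict both potential outcomes for every subject, average the difference.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    x[t == 1].reshape(-1, 1), y[t == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    x[t == 0].reshape(-1, 1), y[t == 0])
ate = (m1.predict(x[:, None]) - m0.predict(x[:, None])).mean()

print(f"true ATE = 2.0, naive = {naive_ate:.2f}, counterfactual = {ate:.2f}")
```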


Ghost in the machine or monkey with a typewriter--generating titles for Christmas research articles in The BMJ using artificial intelligence: observational study

#artificialintelligence

Objective To determine whether artificial intelligence (AI) can generate plausible and engaging titles for potential Christmas research articles in The BMJ. Design Observational study. Setting Europe, Australia, and Africa. Participants 1 AI technology (Generative Pre-trained Transformer 3, GPT-3) and 25 humans. Main outcome measures Plausibility, attractiveness, enjoyability, and educational value of titles for potential Christmas research articles in The BMJ generated by GPT-3 compared with historical controls. Results AI-generated titles were rated at least as enjoyable (159/250 responses (64%) v 346/500 responses (69%); odds ratio 0.9, 95% confidence interval 0.7 to 1.2) and attractive (176/250 (70%) v 342/500 (68%); 1.1, 0.8 to 1.4) as real control titles, although the real titles were rated as more plausible (182/250 (73%) v 238/500 (48%); 3.1, 2.3 to 4.1). The AI-generated titles overall were rated as having less scientific or educational merit than the real controls (146/250 (58%) v 193/500 (39%); 2.0, 1.5 to 2.6); this difference, however, became non-significant when humans curated the AI output (146/250 (58%) v 123/250 (49%); 1.3, 1.0 to 1.8). Of the AI-generated titles, the most plausible was “The association between belief in conspiracy theories and the willingness to receive vaccinations,” and the highest rated was “The effects of free gourmet coffee on emergency department waiting times: an observational study.” Conclusions AI can generate plausible, entertaining, and scientifically interesting titles for potential Christmas research articles in The BMJ; as in other areas of medicine, performance was enhanced by human intervention. Dataset and full reproducible code are available at .
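The odds ratios above compare the proportion of favourable ratings between real and AI-generated titles. As a rough illustration only (the paper's estimates come from its own statistical models, so a crude 2x2 calculation will not reproduce them exactly), a Wald-type odds ratio and 95% confidence interval can be computed as in the sketch below, using the plausibility counts quoted in the abstract.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table.

    a/b: events / non-events in group 1; c/d: events / non-events in group 2.
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Plausibility ratings from the abstract: real titles 182/250, AI titles 238/500.
print(odds_ratio_ci(182, 250 - 182, 238, 500 - 238))
```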


Automated deep learning-based paradigm for high-risk plaque detection in B-mode common carotid ultrasound scans: an asymptomatic Japanese cohort study

#artificialintelligence

BACKGROUND: Death due to stroke is caused by embolism of the arteries, which results from the rupture of atherosclerotic lesions in the carotid arteries. Lesion formation occurs over time, and thus early screening is recommended for asymptomatic and moderate-risk patients. Previous techniques adopted conventional or semi-automated methods and, more recently, machine learning solutions. A handful of studies have emerged based on solo deep learning (SDL) models such as the UNet architecture. METHODS: The proposed research is the first to adopt hybrid deep learning (HDL) artificial intelligence models such as SegNet-UNet.
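The paper's SegNet-UNet hybrid is not reproduced here; the sketch below is only a minimal UNet-style encoder-decoder in PyTorch, included to illustrate the general class of segmentation architecture the abstract refers to. The layer sizes and the binary plaque-mask output are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Small UNet-style encoder-decoder for per-pixel (e.g. plaque) segmentation."""
    def __init__(self, in_ch=1, num_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.pool(e1))            # 1/2 resolution
        b = self.bottleneck(self.pool(e2))       # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                     # per-pixel logits

# Forward pass on a dummy grayscale ultrasound frame (batch, channel, H, W).
model = TinyUNet()
print(model(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```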


Deep Bayesian Estimation for Dynamic Treatment Regimes with a Long Follow-up Time

arXiv.org Artificial Intelligence

Causal effect estimation for dynamic treatment regimes (DTRs) contributes to sequential decision making. However, censoring and time-dependent confounding under DTRs are challenging because the amount of observational data declines over time as the sample size shrinks, while the feature dimension grows; long-term follow-up compounds these challenges. Another challenge is the highly complex relationships between confounders, treatments, and outcomes, which cause traditional and commonly used linear methods to fail. We combine outcome regression models with treatment models for high-dimensional features using the uncensored subjects, which are few in number, and we fit deep Bayesian models as the outcome regression models to capture the complex relationships between confounders, treatments, and outcomes. The deep Bayesian models can also model uncertainty and output the prediction variance, which is essential for safety-aware applications such as self-driving cars and medical treatment design. Experimental results on medical simulations of HIV treatment show that the proposed method obtains stable and accurate dynamic causal effect estimates from observational data, especially with long-term follow-up. Our technique provides practical guidance for sequential decision making and policy making.
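The paper's own Bayesian models are not shown here. Monte Carlo dropout is one common approximation for getting a predictive mean and variance out of a deep regression model, and the sketch below illustrates that idea in PyTorch on dummy data; the network shape and usage are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Small regressor whose dropout layers stay active at prediction time."""
    def __init__(self, in_dim, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=100):
    """Monte Carlo dropout: average many stochastic forward passes.

    Returns the predictive mean and variance for each input row.
    """
    model.train()  # keep dropout active so each pass uses a different sub-network
    samples = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)

# Dummy usage on random covariates (an untrained model, for illustration only).
model = DropoutRegressor(in_dim=10)
x = torch.randn(5, 10)
mean, var = predict_with_uncertainty(model, x)
print(mean.squeeze(), var.squeeze())
```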


Google fires second AI ethics researcher following internal investigation

#artificialintelligence

Google has fired Margaret Mitchell, co-lead of the ethical AI team, after she used an automated script to look through her emails in order to find evidence of discrimination against her coworker Timnit Gebru. The news was first reported by Axios. Mitchell's firing comes one day after Google announced a reorganization to its AI teams working on ethics and fairness. Marian Croak, a vice president in the engineering organization, is now leading "a new center of expertise on responsible AI within Google Research," according to a blog post. Mitchell joined Google in 2016 as a senior research scientist, according to her LinkedIn. Two years later, she helped start the ethical AI team alongside Gebru, a renowned researcher known for her work on bias in facial recognition technology.


Formation of Social Ties Influences Food Choice: A Campus-Wide Longitudinal Study

arXiv.org Artificial Intelligence

Nutrition is a key determinant of long-term health, and social influence has long been theorized to be a key determinant of nutrition. It has been difficult to quantify the postulated role of social influence on nutrition using traditional methods such as surveys, due to the typically small scale and short duration of studies. To overcome these limitations, we leverage a novel source of data: logs of 38 million food purchases made over an 8-year period on the Ecole Polytechnique Federale de Lausanne (EPFL) university campus, linked to anonymized individuals via the smartcards used to make on-campus purchases. In a longitudinal observational study, we ask: How is a person's food choice affected by eating with someone else whose own food choice is healthy vs. unhealthy? To estimate causal effects from the passively observed log data, we control confounds in a matched quasi-experimental design: we identify focal users who at first do not have any regular eating partners but then start eating with a fixed partner regularly, and we match focal users into comparison pairs such that paired users are nearly identical with respect to covariates measured before acquiring the partner, where the two focal users' new eating partners diverge in the healthiness of their respective food choice. A difference-in-differences analysis of the paired data yields clear evidence of social influence: focal users acquiring a healthy-eating partner change their habits significantly more toward healthy foods than focal users acquiring an unhealthy-eating partner. We further identify foods whose purchase frequency is impacted significantly by the eating partner's healthiness of food choice. Beyond the main results, the work demonstrates the utility of passively sensed food purchase logs for deriving insights, with the potential of informing the design of public health interventions and food offerings.
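The study's analysis pipeline is not reproduced here; the sketch below only illustrates the difference-in-differences logic on a toy matched-pair table, with hypothetical healthiness scores before and after a focal user acquires an eating partner.

```python
import pandas as pd

# Toy matched-pair data: each row is one focal user, with an average food-
# healthiness score before and after acquiring a regular eating partner,
# and the group defined by the new partner's eating habits.
df = pd.DataFrame({
    "pair_id": [1, 1, 2, 2, 3, 3],
    "partner": ["healthy", "unhealthy"] * 3,
    "before":  [0.52, 0.55, 0.40, 0.43, 0.61, 0.60],
    "after":   [0.60, 0.54, 0.47, 0.41, 0.66, 0.58],
})

df["change"] = df["after"] - df["before"]

# Difference-in-differences: mean change for users who gained a healthy-eating
# partner minus mean change for users who gained an unhealthy-eating partner.
did = (df.loc[df.partner == "healthy", "change"].mean()
       - df.loc[df.partner == "unhealthy", "change"].mean())
print(f"difference-in-differences estimate: {did:.3f}")
```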


Predicting the disease outcome in COVID-19 positive patients through Machine Learning: a retrospective cohort study with Brazilian data

#artificialintelligence

Predicting the disease outcome in COVID-19 positive patients through Machine Learning: a retrospective cohort study with Brazilian data.


Longitudinal Study

#artificialintelligence

While longitudinal studies themselves are not a machine learning method, the data they produce lend themselves to it. Machine learning algorithms can use longitudinal data to infer trends, changes over time, and the likelihood of specific outcomes. The incorporation of deep learning has led to improved predictions of cardiovascular disease and an enriched understanding of the importance of genetic markers for assessing health risks.
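As a minimal sketch of the idea (not drawn from any particular study), repeated measurements can be collapsed into per-subject trend features and fed to a standard classifier; the data, column names, and outcome below are synthetic assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy longitudinal data: one blood-pressure reading per subject per year.
n_subjects, n_years = 200, 5
records = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_years),
    "year": np.tile(np.arange(n_years), n_subjects),
    "systolic_bp": rng.normal(125, 10, n_subjects * n_years),
})

# Collapse each subject's trajectory into simple features: mean level and
# per-year slope (the trend a model can exploit).
def slope(s):
    return np.polyfit(np.arange(len(s)), s, 1)[0]

features = records.groupby("subject")["systolic_bp"].agg(["mean", slope])

# Synthetic outcome for illustration: risk rises with level and upward trend.
risk = 0.04 * (features["mean"] - 125) + 0.8 * features["slope"]
outcome = (risk + rng.normal(scale=1.0, size=n_subjects) > 0).astype(int)

model = LogisticRegression().fit(features, outcome)
print(model.score(features, outcome))
```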