From Deception to Perception: The Surprising Benefits of Deepfakes for Detecting, Measuring, and Mitigating Bias

Liu, Yizhi, Padmanabhan, Balaji, Viswanathan, Siva

arXiv.org Artificial Intelligence

Individuals from minority groups, even with equivalent qualifications, consistently receive fewer opportunities in critical areas such as employment, education, and healthcare. Yet, empirically demonstrating the existence of such pervasive bias, let alone measuring the extent of bias or correcting it, remains a significant challenge. Over several decades, researchers have utilized a range of experimental methodologies to test for biases in real-life situations (Bertrand and Duflo 2017). Audit studies, among the earliest of such methods, match two individuals who are similar in all respects except for sensitive characteristics like race, to test decision-makers' biases (Ayres and Siegelman 1995). A significant limitation of this method, however, is the inherent impossibility of achieving an exact match between two individuals, precluding perfect comparability (Heckman 1998). Correspondence studies have emerged as a predominant experimental approach for measuring biases (Guryan and Charles 2013, Bertrand and Mullainathan 2004). They create identical fictional profiles with manipulated attributes like race to assess differential treatment. However, these studies traditionally manipulate solely textual information, which may not reflect contemporary decision-making scenarios increasingly influenced by visual cues like facial images, as seen in recent hiring processes (Acquisti and Fong 2020, Ruffle and Shtudiner 2015). This reliance on text limits their effectiveness, as modern contexts often involve multimedia elements, making it challenging to measure real-world biases accurately or correct them based on such incomplete information (Armbruster et al. 2015).


An Oversampling-enhanced Multi-class Imbalanced Classification Framework for Patient Health Status Prediction Using Patient-reported Outcomes

Yan, Yang, Chen, Zhong, Xu, Cai, Shen, Xinglei, Shiao, Jay, Einck, John, Chen, Ronald C, Gao, Hao

arXiv.org Artificial Intelligence

Patient-reported outcomes (PROs) directly collected from cancer patients being treated with radiation therapy play a vital role in assisting clinicians in counseling patients regarding likely toxicities. Precise prediction and evaluation of symptoms or health status associated with PROs are fundamental to enhancing decision-making and planning for the required services and support as patients transition into survivorship. However, the raw PRO data collected from hospitals exhibits some intrinsic challenges, such as incomplete item reports and imbalanced patient toxicities. To this end, in this study, we explore various machine learning techniques to predict patient outcomes related to health status, such as pain levels and sleep discomfort, using PRO datasets from a cancer photon/proton therapy center. Specifically, we deploy six advanced machine learning classifiers -- Random Forest (RF), XGBoost, Gradient Boosting (GB), Support Vector Machine (SVM), Multi-Layer Perceptron with Bagging (MLP-Bagging), and Logistic Regression (LR) -- to tackle a multi-class imbalanced classification problem across three prevalent cancer types: head and neck, prostate, and breast cancers. To address the class imbalance issue, we employ an oversampling strategy, adjusting the training set sample sizes through interpolation of in-class neighboring samples, thereby augmenting minority classes with synthetic samples that remain within each class's original distribution. Our experimental findings across multiple PRO datasets indicate that the RF and XGBoost methods achieve robust generalization performance, evidenced by weighted AUC and detailed confusion matrices, in categorizing outcomes as mild, intermediate, and severe post-radiation therapy. These results underscore the models' effectiveness and potential utility in clinical settings.
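The interpolation-based oversampling the abstract describes (generating new minority-class samples between a point and its in-class neighbors, in the spirit of SMOTE) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name, `k`, and the toy feature vectors are assumptions.

```python
import random

def oversample_minority(samples, target_size, k=2, seed=0):
    """SMOTE-style oversampling: create synthetic points by interpolating
    between a minority-class sample and one of its k nearest in-class
    neighbours, so new points stay within the class's feature region."""
    rng = random.Random(seed)
    synthetic = list(samples)
    while len(synthetic) < target_size:
        base = rng.choice(samples)
        # k nearest in-class neighbours of base by squared Euclidean distance
        neighbours = sorted(
            (s for s in samples if s is not base),
            key=lambda s: sum((a - b) ** 2 for a, b in zip(base, s)),
        )[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(base, nb)))
    return synthetic

# Hypothetical 2-feature minority class, grown from 3 to 6 samples
minority = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]
augmented = oversample_minority(minority, target_size=6)
```

Because each synthetic point lies on a segment between two real minority samples, the augmented set never leaves the bounding region of the original class, which is the "without deviating from the original distribution" property the abstract mentions.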


Automatic pain recognition from Blood Volume Pulse (BVP) signal using machine learning techniques

Pouromran, Fatemeh, Lin, Yingzi, Kamarthi, Sagar

arXiv.org Artificial Intelligence

Physiological responses to pain have received increasing attention among researchers for developing an automated pain recognition sensing system. Though less explored, Blood Volume Pulse (BVP) is one of the candidate physiological measures that could help objective pain assessment. In this study, we applied machine learning techniques on BVP signals to devise a non-invasive modality for pain sensing. Thirty-two healthy subjects participated in this study. First, we investigated a novel set of time-domain, frequency-domain and nonlinear dynamics features that could potentially be sensitive to pain. These include 24 features from BVP signals and 20 additional features from Inter-beat Intervals (IBIs) derived from the same BVP signals. Utilizing these features, we built machine learning models for detecting the presence of pain and its intensity. We explored different machine learning models, including Logistic Regression, Random Forest, Support Vector Machines, Adaptive Boosting (AdaBoost) and Extreme Gradient Boosting (XGBoost). Among them, we found that XGBoost offered the best model performance for both pain classification and pain intensity estimation tasks. The ROC-AUC of the XGBoost model to detect low pain, medium pain and high pain with no pain as the baseline were 80.06%, 85.81%, and 90.05%, respectively. Moreover, the XGBoost classifier distinguished medium pain from high pain with a ROC-AUC of 91%. For the multi-class classification among three pain levels, XGBoost offered the best performance with an average F1-score of 80.03%. Our results suggest that the BVP signal, together with machine learning algorithms, is a promising physiological measure for automated pain assessment. This work can contribute to accurate pain assessment, effective pain management, reducing drug-seeking behavior among patients, and addressing the national opioid crisis.
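The IBI features the abstract mentions are derived by locating successive pulse peaks in the BVP waveform and measuring the time between them. A minimal sketch, assuming a naive local-maximum peak detector and a hypothetical sampling rate; real pipelines use more robust peak detection:

```python
def inter_beat_intervals(bvp, fs):
    """Derive Inter-beat Intervals (IBIs, in seconds) from a BVP signal by
    locating local maxima above the signal mean -- a simple stand-in for
    the peak detectors used in practice. fs is the sampling rate in Hz."""
    mean = sum(bvp) / len(bvp)
    peaks = [
        i for i in range(1, len(bvp) - 1)
        if bvp[i] > mean and bvp[i] >= bvp[i - 1] and bvp[i] > bvp[i + 1]
    ]
    # time between consecutive peaks, in seconds
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic signal: one "beat" per second, sampled at 10 Hz
pulse = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
signal = pulse * 5
ibis = inter_beat_intervals(signal, fs=10)
# ibis → [1.0, 1.0, 1.0, 1.0]
```

Statistics over such an IBI series (mean, standard deviation, frequency-band power, and so on) then form the kind of heart-rate-variability-style features the study feeds to its classifiers.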


Interpret Your Care: Predicting the Evolution of Symptoms for Cancer Patients

Bhati, Rupali, Jones, Jennifer, Durand, Audrey

arXiv.org Artificial Intelligence

Cancer treatment is an arduous process for patients and causes many side-effects during and post-treatment. The treatment can affect almost all body systems and result in pain, fatigue, sleep disturbances, cognitive impairments, etc. These conditions are often under-diagnosed or under-treated. In this paper, we use patient data to predict the evolution of their symptoms such that treatment-related impairments can be prevented or their effects meaningfully ameliorated. The focus of this study is on predicting the pain and tiredness levels of a patient after their diagnosis. We implement an interpretable decision-tree-based model, LightGBM, on real-world data from 20,163 patients. The dataset has a class imbalance problem, which we address using SMOTE oversampling. Our empirical results show that the previous level of a symptom is a key indicator for prediction, and that the weighted average deviation in prediction is 3.52 for pain level and 2.27 for tiredness level.
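Since the previous level of a symptom is the key predictor here, the training data is naturally framed as lagged (history, next value) pairs per patient. A minimal sketch of that framing; the function name, the lag count, and the toy pain series are illustrative, not from the paper:

```python
def lag_features(symptom_history, n_lags=1):
    """Turn a patient's symptom time series into (features, target) pairs,
    where each target is predicted from the previous n_lags observations --
    the 'previous level' signal the study found most informative."""
    pairs = []
    for t in range(n_lags, len(symptom_history)):
        pairs.append((symptom_history[t - n_lags:t], symptom_history[t]))
    return pairs

# Hypothetical 0-10 pain scores recorded at successive visits
pain = [2, 3, 5, 4, 4, 6]
examples = lag_features(pain, n_lags=2)
# examples[0] → ([2, 3], 5)
```

Pairs built this way (optionally alongside demographic or treatment features) can be fed directly to a gradient-boosted tree model such as LightGBM.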


Intelligent Sight and Sound: A Chronic Cancer Pain Dataset

Ordun, Catherine, Cha, Alexandra N., Raff, Edward, Gaskin, Byron, Hanson, Alex, Rule, Mason, Purushotham, Sanjay, Gulley, James L.

arXiv.org Artificial Intelligence

Cancer patients experience high rates of chronic pain throughout the treatment process. Assessing pain for this patient population is a vital component of psychological and functional well-being, as it can cause a rapid deterioration of quality of life. Existing work in facial pain detection often has deficiencies in labeling or methodology that prevent it from being clinically relevant. This paper introduces the first chronic cancer pain dataset, collected as part of the Intelligent Sight and Sound (ISS) clinical trial and guided by clinicians to help ensure that model findings yield clinically relevant results. The data collected to date consists of 29 patients, 509 smartphone videos, 189,999 frames, and self-reported affective and activity pain scores adopted from the Brief Pain Inventory (BPI). Using static images and multi-modal data to predict self-reported pain levels, early models show significant gaps in the methods currently available to predict pain, leaving room for improvement. Due to the especially sensitive nature of the inherent Personally Identifiable Information (PII) of facial images, the dataset will be released under the guidance and control of the National Institutes of Health (NIH).


AI in Healthcare: AI in Pain Management, a New Application

#artificialintelligence

Artificial Intelligence has been playing a growing role in the world over the last few decades. What most people don't realize is that artificial intelligence appears in numerous forms that shape everyday life. Signing into your social media, email, car ride services, online shopping platforms, and more all involve artificial intelligence algorithms designed to improve the customer experience. AI in healthcare is growing quickly, particularly in diagnostics and treatment management. Of late, AI applications have sent huge waves across medical services, fueling a conversation about whether AI doctors will eventually supplant human doctors.


AI helps assess pain levels in people with sickle cell disease

New Scientist

AI algorithms can assess the pain that someone with sickle cell disease is experiencing by using just their vital signs. Doing so could ensure people receive the most suitable pain management therapy for their condition. "There's always a trade-off between giving people sufficient medicine to reduce the pain and giving people too much medication so that they have bad side effects or a higher risk of addiction," says Daniel Abrams at Northwestern University in Illinois. But since pain is subjective, it is difficult to measure in a standardised way. Abrams and his colleagues set out to determine whether physiological data that is already routinely taken – including body temperature, heart rate and blood pressure – could be used to devise a system that assesses pain levels in a more objective manner.


AI could make healthcare fairer--by helping us believe what patients say

MIT Technology Review

The paper looks at a specific clinical example of the disparities that exist in the treatment of knee osteoarthritis, an ailment which causes chronic pain. Assessing the severity of that pain helps doctors prescribe the right treatment, including physical therapy, medication, or surgery. This is traditionally done by a radiologist reviewing an X-ray of the patient's knee and scoring their pain on the Kellgren–Lawrence grade (KLG), which infers pain levels from the presence of different radiographic features, like the degree of missing cartilage or structural damage. But data collected by the National Institutes of Health found that doctors using this method systematically score Black patients far below the severity of pain that they say they're experiencing. Patients self-report their pain levels using a survey that asks about pain during various activities, such as fully straightening their knee.


Pain Intensity Estimation from Mobile Video Using 2D and 3D Facial Keypoints

Lee, Matthew, Kennedy, Lyndon, Girgensohn, Andreas, Wilcox, Lynn, Lee, John Song En, Tan, Chin Wen, Sng, Ban Leong

arXiv.org Machine Learning

For the more than 300 million surgeries performed worldwide every year, managing post-surgical pain is critical for successful surgical outcomes. Pain is the most prominent post-surgical concern: an estimated 86% of surgical patients in the United States experience pain after surgery, and 75% of these patients report at least moderate to extreme pain [1]. Higher postoperative pain is associated with more postoperative complications [2], indicating the importance of pain management. Furthermore, opioid analgesics are a powerful tool for managing pain but can pose risks of adverse drug events (experienced by 10% of surgical patients), leading to prolonged length of stay, high hospitalization costs, and potentially addiction [3]. Thus, regular and careful pain assessment is important for balancing pain relief against the potential side effects of powerful opioid analgesics [4]. However, one of the challenges of pain management is accurately assessing the pain level of patients. Pain is inherently subjective, and the personal experience of pain is difficult to observe and measure objectively by those not experiencing it [5] (e.g., care providers). The standard practice in clinical care requires patients to self-report their pain intensity using a numeric or visual scale, such as the popular Numerical Pain Rating Scale [6].


Researchers use AI to examine EHR data for insights into medical device effectiveness

#artificialintelligence

EHRs are a potential treasure trove of information beyond the needs and circumstances of specific patients, but who has time to pore through them to find it? It turns out, perhaps, another machine. That's the thinking behind a study published recently in npj Digital Medicine that explored using deep machine learning methods to extract information about the post-surgical safety performance of implanted medical devices, using hip replacements as a test case. To be sure, any medical device has to meet safety standards set by the FDA, but as Nigam Shah, PhD, associate professor of medicine and biomedical data science at Stanford and the study's lead author, pointed out, "The safety standards required by the FDA are for initial approval of the device's use. What we need is a scalable way -- beyond self-reporting -- to see how safe and effective these devices are in a population after years of use."