
Hypoxemia


Predicting Intraoperative Hypoxemia with Hybrid Inference Sequence Autoencoder Networks

Liu, Hanyang, Montana, Michael C., Li, Dingwen, Renfroe, Chase, Kannampallil, Thomas, Lu, Chenyang

arXiv.org Artificial Intelligence

We present an end-to-end model using streaming physiological time series to predict near-term risk for hypoxemia, a rare but life-threatening condition known to cause serious patient harm during surgery. Inspired by the fact that a hypoxemia event is defined based on a future sequence of low SpO2 (i.e., blood oxygen saturation) instances, we propose the hybrid inference network (hiNet), which makes hybrid inferences on both future low SpO2 instances and hypoxemia outcomes. hiNet integrates 1) a joint sequence autoencoder that simultaneously optimizes a discriminative decoder for label prediction, and 2) two auxiliary decoders trained for data reconstruction and forecasting, which seamlessly learn contextual latent representations that capture the transition from present states to future states. All decoders share a memory-based encoder that helps capture the global dynamics of patient measurements. On a large surgical cohort of 72,081 surgeries at a major academic medical center, our model outperforms strong baselines, including the model used by the state-of-the-art hypoxemia prediction system. With its capability to make real-time predictions of near-term hypoxemia at clinically acceptable alarm rates, hiNet shows promise in improving clinical decision making and easing the burden of perioperative care.
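The shared-encoder, multi-decoder idea in this abstract can be sketched in a few lines of numpy. Everything below is an illustrative stand-in: the window size, latent dimension, single affine encoder, and variable names are not the paper's actual memory-based encoder or sequence decoders, only a minimal shape-level sketch of three decoders sharing one latent state.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine layer: x @ w + b."""
    return x @ w + b

# Toy dimensions: 60 time steps of 4 vital-sign channels, 16-dim latent state.
T, D, H = 60, 4, 16
x = rng.normal(size=(T, D))          # one patient's physiological window

# Shared encoder (here a single affine map over the flattened window;
# the paper uses a memory-based sequence encoder).
enc_w, enc_b = rng.normal(size=(T * D, H)), np.zeros(H)
z = np.tanh(linear(x.reshape(-1), enc_w, enc_b))   # latent representation

# Three decoders share the latent state z:
recon = linear(z, rng.normal(size=(H, T * D)), np.zeros(T * D))  # reconstruct the past window
forecast = linear(z, rng.normal(size=(H, 10)), np.zeros(10))     # forecast next 10 SpO2 values
logit = linear(z, rng.normal(size=(H, 1)), np.zeros(1))          # hypoxemia label score
risk = 1.0 / (1.0 + np.exp(-logit))                              # sigmoid -> probability

print(recon.shape, forecast.shape, float(risk))
```

In training, the reconstruction and forecast losses would regularize the shared latent state that the discriminative head uses, which is the "hybrid inference" structure the abstract describes.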


Achieving fairness in medical devices

Science

The hardware or software that operates medical devices can be biased. A biased device is one that operates in a manner that disadvantages certain demographic groups and contributes to health inequity; reducing bias is one way of increasing fairness in the operation of a medical device. Initiatives to promote fairness are growing rapidly in a range of technical disciplines, but not rapidly enough in medical engineering. Although computer science companies terminate lucrative but biased facial recognition systems, biased medical devices continue to be sold as commercial products. It is important to address bias in medical devices now. This can be achieved by studying where and how bias arises, and that understanding can inform mitigation strategies. Bias in medical devices can be divided into three broad forms (see the figure). A medical device can exhibit physical bias, where physical principles are biased against certain demographics. Once data are collected, computational bias, which pertains to the distribution, processing, and computation of the data used to operate a device, must be considered. Subsequent implementation in clinical settings can lead to interpretation bias, where clinical staff or other users may interpret device outputs differently based on demographics.

The physical working principle of a medical device is biased when it exhibits an undesirable performance variation across demographic groups. An example of physical bias occurs in optical biosensors that use light to monitor vital signs. A pulse oximeter uses two colors of light (one near-infrared, the other visible) to measure blood oxygenation. Through the pulse oximeter, it is possible to diagnose occult hypoxemia: low levels of arterial oxygen saturation that are not detectable from symptoms.
However, a recent study found that Black patients had about three times the frequency of undiagnosed occult hypoxemia as measured by pulse oximeters (1). Dark skin tones respond differently to these wavelengths of light, particularly visible light. Because hypoxemia relates to mortality, such a biased medical device could lead to disparate mortality outcomes for Black and dark-skinned patients.

Physical bias is not restricted to skin color. For example, the mechanical design of implants for hip replacement exhibits a potentially troubling gender disparity. The three-dimensional models used to design hip-joint implants sometimes do not account for the distinct bone structure of female hips (2). This could lead to alignment issues and relatively poor outcomes for affected females, and was one motivation for the development of gender-specific implants. Fortunately, physical challenges can also be addressed through unexpected technical innovation, as in the example of the remote plethysmograph. This device measures heart rate through visual changes in skin color. Because visual cues are biased, researchers developed an alternative approach that uses motion cues to estimate heart rate. Because motions are visible on the surface of the skin, the technique is less biased by subsurface melanin content (3). With the goal of promoting fairness, an exciting technical direction of studying motion cues instead of color cues has been advanced.

[Figure: Measuring fairness. Fairness can be quantified based on ϵ-bias; fairness is maximized when ϵ = 0, achieving a state of 0-bias. Graphic: N. Desai/Science]

Computational workflows are becoming more tightly coupled with devices, which increases the number of entry points where computational bias can enter medical technologies. One aspect of computational bias is dataset bias.
Consider the following example from x-ray imaging. Diagnostic algorithms can learn patterns from x-ray imaging datasets of thoracic conditions. However, these imaging datasets often contain a surprising imbalance in which females are underrepresented: despite sample sizes of more than 100,000 images, frequently used chest x-ray databases are ∼60% male and ∼40% female (4). This imbalance worsens the quality of diagnosis for female patients. A solution is to ensure that datasets are balanced. Somewhat unexpectedly, balancing the gender representation to 50% female boosts diagnostic performance not only for females but also for males (4).

Despite best efforts, demographic balancing of a dataset might not be possible, for example, because of conditions that present more often in one sex than the other. In such cases, where balancing a dataset is truly infeasible, transfer learning can be used as a step toward a longer-term solution (5): design parameters can be repurposed from task A (based on a balanced dataset) to task B (with an unbalanced dataset). In the future, it might be possible to balance a dataset using a human digital twin: a computational model that can be programmed to reflect a desired race, sex, or morphological trait.

Another form of computational bias is algorithm bias, where the mathematics of data processing disadvantages certain groups. Software algorithms can now process video streams to detect the spontaneous blink rate of a human subject, which is helpful in diagnosing a variety of neurological disorders, including Parkinson's disease (6) and Tourette syndrome (7). Unfortunately, traditional image-processing systems have particular difficulty detecting blinks in Asian individuals (8). The use of such poorly designed and biased algorithms (9) could produce or exacerbate health disparities between racial groups.
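The dataset-balancing fix discussed above can be sketched with simple undersampling of the majority group. The dataset, labels, and function name below are hypothetical; real pipelines would balance at the patient level and may prefer oversampling or reweighting instead.

```python
import random

random.seed(0)

# A hypothetical imaging dataset with the ~60/40 male/female imbalance
# described above (records here carry only a sex marker, for brevity).
dataset = ["M"] * 600 + ["F"] * 400

def balance_by_undersampling(records, key=lambda r: r):
    """Undersample majority groups so every group is equally represented."""
    groups = {}
    for r in records:
        groups.setdefault(key(r), []).append(r)
    n = min(len(g) for g in groups.values())     # size of the smallest group
    balanced = []
    for g in groups.values():
        balanced.extend(random.sample(g, n))     # draw n records per group
    return balanced

balanced = balance_by_undersampling(dataset)
print(len(balanced), balanced.count("M"), balanced.count("F"))  # 800 400 400
```

Undersampling discards data, which is why the transfer-learning and digital-twin routes mentioned above matter when the minority group is small.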
Interpretation bias occurs when a medical device is subject to biased inference from its readings. An example of a misinterpreted medical device is the spirometer, which measures lung capacity. The interpretation of spirometry data creates unfairness because certain ethnic groups, such as Black or Asian people, are assumed to have lower lung capacity than white people: 15% lower for Black people and about 5% lower for Asian people. This assumption is based on earlier studies that may have incorrectly estimated innate lung capacity (10). Unfortunately, these "correction factors," based on questionable assumptions, are applied to the interpretation of spirometer data. For example, before "correction," a Black person's lung capacity might be measured as lower than that of a white person. After "correction" to a smaller baseline, treatment plans would prioritize the white person: because a Black person is expected to have lower lung capacity, their measured capacity must fall much further below a white person's before the reduction is considered a priority.

[Figure: Bias in medical devices. A device can be biased if its design disadvantages certain groups on the basis of their physical attributes, such as skin color. For example, pulse oximeters (see the photo) detect changes in light passed through skin and are less effective in people with dark skin. Computational techniques are biased if training datasets are not representative of the population. Interpretation of results may be biased according to demographic groups, for example, with the use of "correction factors." Credit: N. Desai/Science]

However well intentioned, errors in "correction" for race (or sex) can disadvantage the very groups they seek to protect. In the spirometer example, the device designers conflated a racial group's healthy lung capacity with its average lung capacity.
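The arithmetic of a race "correction factor" can be made concrete with a toy calculation. The measured and predicted volumes, the 65% treatment threshold, and the function name below are all illustrative, not clinical values; the point is only how scaling the predicted baseline flips a prioritization decision.

```python
def corrected_percent_predicted(measured_fev1, predicted_fev1, race_factor=1.0):
    """Percent of predicted lung function after a race 'correction'.

    race_factor < 1.0 lowers the predicted baseline for a group (e.g. 0.85
    or ~0.95, per the 15% and 5% reductions described above), which inflates
    that group's apparent percent-of-predicted score.
    """
    return 100.0 * measured_fev1 / (predicted_fev1 * race_factor)

# Two hypothetical patients with identical measured lung function (2.4 L)
# against the same uncorrected predicted value (4.0 L):
white_score = corrected_percent_predicted(2.4, 4.0, race_factor=1.0)   # 60.0
black_score = corrected_percent_predicted(2.4, 4.0, race_factor=0.85)  # ~70.6

# Suppose treatment is prioritized below 65% of predicted: the white patient
# qualifies while the Black patient, with the same lungs, does not.
print(round(white_score, 1), round(black_score, 1))
```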
This assumption does not account for socioeconomic distinctions across races: individuals who live near motorways exhibit reduced lung capacity, and these individuals are often from disadvantaged ethnic groups. The spirometer is just one of several examples of systemic racism in medicine (11).

If our society desires fair medical devices, it must reward a fair approach to innovation. It is inspiring to observe the speed at which the artificial intelligence (AI) community has recognized fairness in its endeavors. Journals can encourage authors to address the societal implications of their technologies and to include a "broader impacts" statement that is considered in peer review; this has already been introduced at an AI journal to encourage consideration of the diversity of potential users of software (12). Fairness research in AI is increasingly garnering scholarly acclaim. For example, a seminal report highlighted the widespread problem of bias in face recognition, finding that darker-skinned females are misclassified at rates up to 34.7%, whereas the maximum error rate for lighter-skinned males is only 0.8% (13). In response to fairness concerns, action is being taken: Amazon Inc. recently banned the use of its facial-recognition products by police until bias concerns can be resolved. There is still a long way to go in addressing bias in AI, but some of the lessons learned can be repurposed for medical devices.

A "fairness" statement for the evaluation of medical-device studies could use the three categories of bias as a rubric: physical bias, computational bias, and interpretation bias. A medical-device study does not need to be perfectly unbiased to be reported. Indeed, it may not always be possible to remove all sources of bias. For example, an oximeter reliant on an optical sensor is likely to remain biased against dark skin (1).
The fairness statement can consist of technical explanations for how attempts to mitigate bias failed and suggest technical compensations for disadvantaged groups (e.g., collecting additional data points for dark-skinned people). This is consistent with the introduction of "positive biases," where race-aware and gender-aware methodologies are explicitly designed to counteract negative bias (14). Additionally, the inclusion of fairness metrics in medical-device studies could be considered. Choosing the right fairness metric for an algorithm is a quantitatively challenging computer science exercise (15) and can be abstracted here as "ϵ-bias," where ϵ quantifies the degree of bias across subgroups. For example, 0-bias would be seen as perfectly fair. Achieving 0-bias on its own is trivial: simply return a measurement that is consistently useless across demographics. The real problem is to maximize performance while minimizing ϵ-bias. This may present a Pareto trade-off, in which maximizing performance and minimizing bias are objectives at odds with each other. A Pareto curve can quantitatively display how changing the device configuration varies the balance between performance and fairness (see the graph). Such analyses might be a useful inclusion in medical-device studies.

Achieving fairness in medical devices is a key piece of the puzzle, but a piece nonetheless. Even a fair medical device could be used by a clinical provider who has conscious or subconscious bias, and even a device that is fair from an engineering perspective might be inaccessible to a range of demographic groups for socioeconomic reasons. Several open questions remain. What is an acceptable trade-off between device performance and fairness? How can biases that are not easy to predict, or not easy to observe at scale, be dealt with? Race and sex are also part of human biology.
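A minimal sketch of the ϵ-bias idea, assuming accuracy as the performance measure and the worst-case gap across subgroups as ϵ; other formalizations are possible, and the group names and numbers below are taken from the face-recognition audit cited above purely for illustration.

```python
def epsilon_bias(accuracy_by_group):
    """Worst-case performance gap across demographic subgroups.

    0.0 means the device performs identically for every group (0-bias).
    """
    scores = list(accuracy_by_group.values())
    return max(scores) - min(scores)

# Error rates from the audit cited above: lighter-skinned males ~0.8% error,
# darker-skinned females up to 34.7%.
accuracy = {"lighter-skinned male": 1 - 0.008, "darker-skinned female": 1 - 0.347}
print(round(epsilon_bias(accuracy), 3))  # 0.339

# A trivially '0-bias' device that is uniformly useless: this is why the real
# objective is to maximize performance while minimizing ϵ, a Pareto trade-off.
useless = {"group A": 0.5, "group B": 0.5}
print(epsilon_bias(useless))  # 0.0
```

Sweeping a device configuration and plotting (mean accuracy, ϵ) pairs traces the Pareto curve described above.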
How can positive biases be properly encoded into medical-device design? Diversity and inclusion have gained increasing attention, and the era of fair medical devices is only just beginning.

References
1. M. W. Sjoding et al., N. Engl. J. Med. 383, 2477 (2020).
2. C. W. Hartman et al., Semin. Arthroplasty 20, 62 (2009).
3. G. Balakrishnan, F. Durand, J. Guttag, in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2013), pp. 3430–3437.
4. A. J. Larrazabal, N. Nieto, V. Peterson, D. H. Milone, E. Ferrante, Proc. Natl. Acad. Sci. U.S.A. 117, 12592 (2020).
5. S. Jabbour et al., in Proceedings of the Fifth Machine Learning for Healthcare Conference, F. Doshi-Velez et al., Eds. (Proceedings of Machine Learning Research, 2020), pp. 750–782.
6. R. Sandyk, Int. J. Neurosci. 51, 99 (1990).
7. C. N. Karson et al., J. Nerv. Ment. Dis. 173, 566 (1985).
8. J. Zou, L. Schiebinger, Nature 559, 324 (2018).
9. Z. Obermeyer et al., Science 366, 447 (2019).
10. L. Braun, Breathing Race into the Machine: The Surprising Career of the Spirometer from Plantation to Genetics (Univ. of Minnesota Press, 2014).
11. A. H. Wingfield, Science 369, 351 (2020).
12. B. Hecht et al., "It's time to do something: Mitigating the negative impacts of computing through a change to the peer review process," ACM Future of Computing Blog, 29 March 2018.
13. J. Buolamwini, T. Gebru, in Proceedings of the Conference on Fairness, Accountability and Transparency, S. A. Friedler, C. Wilson, Eds. (Proceedings of Machine Learning Research, 2018), pp. 77–91.
14. D. Cirillo et al., NPJ Digit. Med. 3, 81 (2020).
15. J. Kleinberg, S. Mullainathan, M. Raghavan, in Proceedings of the Eighth Innovations in Theoretical Computer Science Conference, C. H. Papadimitriou, Ed. (Schloss Dagstuhl, 2017), pp. 43:1–43:23.

Acknowledgments: I thank P. Chari, L. Jalilian, K. Kabra, M. Savary, M. Majmudar, and the Engineering 87 class at UCLA for constructive feedback. I am supported by a National Science Foundation CAREER grant (IIS-2046737), a Google Faculty Award, and a Sony Imaging Young Faculty Award.


Deep Transfer Learning for Physiological Signals

Chen, Hugh, Lundberg, Scott, Erion, Gabe, Kim, Jerry H., Lee, Su-In

arXiv.org Machine Learning

Deep learning is increasingly common in healthcare, yet transfer learning for physiological signals (e.g., temperature, heart rate) is underexplored. Here, we present a straightforward yet performant framework for transferring knowledge about physiological signals. Our framework, called PHASE (PHysiologicAl Signal Embeddings), i) learns deep embeddings of physiological signals and ii) predicts adverse outcomes based on the embeddings. PHASE is the first instance of deep transfer learning in a cross-hospital, cross-department setting for physiological signals. We show that PHASE's per-signal (one for each signal) LSTM embedding functions confer a number of benefits, including improved performance, successful transference between hospitals, and lower computational cost.


Researchers Unveil AI System That Predicts Problems During Surgery

#artificialintelligence

University of Washington (UW) researchers have developed an artificial intelligence (AI) system that uses patient data to predict whether patients are at risk of abnormally low blood oxygen (hypoxemia) during surgery. The Prescience system also provides users with real-world explanations to support its predictions. In collaboration with physicians, UW's Su-In Lee and colleagues trained Prescience on about 50,000 patient files so that the program could analyze data such as patient age and weight to calculate the likelihood of hypoxemia before surgery. The system also uses real-time data during surgery to predict when patients are in danger of hypoxemia, and a new AI model helps Prescience provide doctors with a concise description of the prediction's underlying factors.


Hybrid Gradient Boosting Trees and Neural Networks for Forecasting Operating Room Data

Chen, Hugh, Lundberg, Scott, Lee, Su-In

arXiv.org Machine Learning

Time series data constitutes a distinct and growing problem in machine learning. As the corpus of time series data grows larger, deep models that simultaneously learn features and classify with those features can be intractable or suboptimal. In this paper, we present feature learning via long short-term memory (LSTM) networks and prediction via gradient boosting trees (XGB). Focusing on the consequential setting of electronic health record data, we predict the occurrence of hypoxemia five minutes into the future based on past features. We make two observations: 1) LSTM networks are effective at capturing long-term dependencies based on a single feature, and 2) gradient boosting trees are capable of tractably combining a large number of features, including static features like height and weight. With these observations in mind, we generate features by performing "supervised" representation learning with LSTM networks. Augmenting the original XGB model with these features gives significantly better performance than either individual method.
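The feature-extraction half of this hybrid pipeline can be sketched with a hand-rolled LSTM cell in numpy. The dimensions and weights below are random stand-ins for a trained, supervised network, and the gradient-boosting stage is only indicated in a comment; this is a shape-level sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_features(series, H=8):
    """Run one (randomly initialized) LSTM cell over a 1-D signal and
    return the final hidden state as a feature vector.

    In the hybrid scheme described above, one such network is trained per
    signal with supervision; the resulting features are then concatenated
    with static features (height, weight, ...) and fed to gradient
    boosting trees.
    """
    D = 1
    # One combined weight matrix for the input, forget, cell, and output gates.
    W = rng.normal(scale=0.1, size=(D + H, 4 * H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    for x_t in series:
        gates = np.concatenate([[x_t], h]) @ W + b
        i, f, g, o = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                    # hidden-state output
    return h

spo2 = 0.97 + 0.01 * rng.normal(size=60)   # one minute of toy SpO2 readings
features = lstm_features(spo2)
print(features.shape)  # (8,)
```

Because each signal gets its own small embedding network, the downstream tree model sees a fixed-length feature vector per signal regardless of the raw window length.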


Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning

Erion, Gabriel, Chen, Hugh, Lundberg, Scott M., Lee, Su-In

arXiv.org Machine Learning

We use a deep learning model trained only on a patient's blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access to all the data recorded in a modern operating room. We also provide a simple way to visualize the reason why a patient's risk is low or high by assigning weight to the patient's past blood oxygen values. This work has the potential to provide cutting-edge clinical decision support in low-resource settings, where rates of surgical complication and death are substantially greater than in high-resource areas.
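One generic way to assign weight to past readings is occlusion: replace each past value with a normal one and measure how the risk score changes. The toy risk function and all numbers below are illustrative assumptions, not the paper's actual model or visualization method.

```python
def risk_from_spo2(window):
    """Toy risk score: fraction of recent readings below 92% SpO2,
    weighted toward the most recent readings (illustrative only)."""
    n = len(window)
    weights = [(i + 1) / n for i in range(n)]
    return sum(w for v, w in zip(window, weights) if v < 92) / sum(weights)

def occlusion_weights(window, baseline=98):
    """Weight each past reading by how much replacing it with a normal
    value (98%) changes the risk score: a generic attribution technique."""
    base = risk_from_spo2(window)
    deltas = []
    for i in range(len(window)):
        occluded = list(window)
        occluded[i] = baseline       # pretend this reading had been normal
        deltas.append(base - risk_from_spo2(occluded))
    return deltas

window = [97, 96, 91, 90, 95]        # two low readings mid-window
weights = occlusion_weights(window)
print([round(w, 3) for w in weights])
```

The two low readings receive positive weight, with the more recent one weighted higher, which is the kind of per-reading attribution a clinician-facing display could plot over the SpO2 trace.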