
Small company beats Elon Musk's Neuralink in race to test brain chips in humans


A small company developing an implantable brain-computer interface to help treat conditions like paralysis has received the go-ahead from the Food and Drug Administration (FDA) to kick off clinical trials of its flagship device later this year. New York-based Synchron announced Wednesday it has received FDA approval to begin an early feasibility study of its Stentrode implant later this year at Mount Sinai Hospital with six human subjects. The study will examine the safety and efficacy of its motor neuroprosthesis in patients with severe paralysis, with the hope that the device will allow them to use brain data to "control digital devices and achieve improvements in functional independence." "Patients begin using the device at home soon after implantation and may wirelessly control external devices by thinking about moving their limbs. The system is designed to facilitate better communication and functional independence for patients by enabling daily tasks like texting, emailing, online commerce and accessing telemedicine," the company said in a release.

The Question Medical AI Can't Answer


Artificial intelligence (AI) is at an inflection point in health care. A 50-year span of algorithm and software development has produced some powerful approaches to extracting patterns from big data. For example, deep-learning neural networks have been shown to be effective for image analysis, resulting in the first FDA-approved AI-aided diagnosis of an eye disease called diabetic retinopathy, using only photos of a patient's eye. However, the application of AI in the health care domain has also revealed many of its weaknesses, outlined in a recent guidance document from the World Health Organization (WHO). The document covers a lengthy list of topics, each as important as the next: responsible, accountable, inclusive, equitable, ethical, unbiased, responsive, sustainable, transparent, trustworthy and explainable AI.

New York company gets jump on Elon Musk's Neuralink with brain-computer interface in clinical trials

Daily Mail - Science & tech

Elon Musk might be well positioned in space travel and electric vehicles, but the world's second-richest person is taking a backseat when it comes to a brain-computer interface (BCI). New York-based Synchron announced Wednesday that it has received approval from the Food and Drug Administration to begin clinical trials of its Stentrode motor neuroprosthesis - a brain implant that, it is hoped, could ultimately be used to cure paralysis. The FDA approved Synchron's Investigational Device Exemption (IDE) application, according to a release, paving the way for an early feasibility study of Stentrode to begin later this year at New York's Mount Sinai Hospital. The study will analyze the safety and efficacy of the device, smaller than a matchstick, in six patients with severe paralysis. Meanwhile, Musk has been touting Neuralink, his brain-implant startup, for several years--most recently showing a video of a monkey with the chip playing Pong using only signals from its brain.

Importance of AI and Machine Learning in the Healthcare Ecosystem


Healthcare innovation has helped healthcare providers offer better care and unlock new ways to enhance treatment for larger population groups. Technology advancements such as Artificial Intelligence and machine learning can offer innovative solutions to the healthcare sector by improving care delivery options and automating tasks to reduce administrative burden. The Healthcare Innovation Forum discusses how machine learning and AI have revolutionized healthcare through efficient data analysis that facilitates decision-making. By integrating the power of AI and machine learning, the healthcare ecosystem can benefit greatly through automation of manual tasks, analysis of large datasets to improve health outcomes, and lower healthcare costs. According to Business Insider, 30% of healthcare costs are related to administrative and operational tasks.

Healthcare Is Ailing. AI Can Help.


This year has shown, with startling clarity, how crucial a high-functioning healthcare system is not only to the well-being of populations but to the functioning of a society as a whole. Nearly every aspect of our modern lives depends on a healthy, able populace, and that's why now, while we have this clarity of insight, is the time to invest in the technological upgrades to these systems that have long been on the horizon. We need to create a digital network of patient information. We need to begin incorporating robotics into patient care, minimizing risks to both patient and provider. And we need to start using the awesome power of artificial intelligence (AI) and machine learning (ML) properly, implementing them at the forefront of diagnostics to seek out patterns and to identify previously unseen markers that could save not just one life but thousands.

Beware explanations from AI in health care


Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions ([ 1 ][1]). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion ([ 2 ][2]). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake ([ 3 ][3], [ 4 ][4]). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions ([ 5 ][5]). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable. It is important to first distinguish explainable from interpretable AI/ML. These are two very different types of algorithms with different ways of dealing with the problem of opacity—that AI predictions generated from a black box undermine trust, accountability, and uptake of AI. A typical AI/ML task requires constructing an algorithm that can take a vector of inputs (for example, pixel values of a medical image) and generate an output pertaining to, say, disease occurrence (for example, cancer diagnosis).
The algorithm is trained on past data with known labels, which means that the parameters of a mathematical function that relate the inputs to the output are estimated from that data. When we refer to an algorithm as a “black box,” we mean that the estimated function relating inputs to outputs is not understandable at an ordinary human level (owing to, for example, the function relying on a large number of parameters, complex combinations of parameters, or nonlinear transformations of parameters). Interpretable AI/ML (which is not the subject of our main criticism) does roughly the following: Instead of using a black-box function, it uses a transparent (“white-box”) function that is in an easy-to-digest form, for example, a linear model whose parameters correspond to additive weights relating the input features and the output or a classification tree that creates an intuitive rule-based map of the decision space. Such algorithms have been described as intelligible ([ 6 ][6]) and decomposable ([ 7 ][7]). The interpretable algorithm may not be immediately understandable by everyone (even a regression requires a bit of background on linear relationships, for example, and can be misconstrued). However, the main selling point of interpretable AI/ML algorithms is that they are open, transparent, and capable of being understood with reasonable effort. Accordingly, some scholars argue that, under many conditions, only interpretable algorithms should be used, especially when they are used by governments for distributing burdens and benefits ([ 8 ][8]). However, requiring interpretability would create an important change to ML as it is being done today—essentially that we forgo deep learning altogether and whatever benefits it may entail. Explainable AI/ML is very different, even though both approaches are often grouped together. 
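To make the white-box idea concrete, here is a minimal sketch of an interpretable model: an ordinary least-squares linear fit whose additive weights are themselves the explanation, known and fixed ex ante for all inputs. The feature names and the data-generating rule below are invented purely for illustration; they are not from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data only: three hypothetical clinical features and a
# continuous risk score generated from a known linear rule plus noise.
features = ["age", "blood_pressure", "biomarker"]
X = rng.normal(size=(200, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Interpretable ("white-box") AI/ML: the fitted function itself is the
# explanation -- additive weights that apply identically to every input.
A = np.column_stack([X, np.ones(len(X))])   # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, weight in zip(features, w):
    print(f"{name}: {weight:+.2f}")
```

Here a reader can audit the decision rule directly from the coefficients, which is exactly the property a black-box model lacks.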
Explainable AI/ML, as the term is typically used, does roughly the following: Given a black-box model that is used to make predictions or diagnoses, a second explanatory algorithm finds an interpretable function that closely approximates the outputs of the black box. This second algorithm is trained by fitting the predictions of the black box and not the original data, and it is typically used to develop the post hoc explanations for the black-box outputs and not to make actual predictions because it is typically not as accurate as the black box. The explanation might, for instance, be given in terms of which attributes of the input data in the black-box algorithm matter most to a specific prediction, or it may offer an easy-to-understand linear model that gives similar outputs as the black-box algorithm for the same given inputs ([ 4 ][4]). Other models, such as so-called counterfactual explanations or heatmaps, are also possible ([ 9 ][9], [ 10 ][10]). In other words, explainable AI/ML ordinarily finds a white box that partially mimics the behavior of the black box, which is then used as an explanation of the black-box predictions. Three points are important to note: First, the opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one. Second, the white box approximation to the black box cannot be perfect, because if it were, there would be no difference between the two. It also focuses not on accuracy but on fitting the black box, often only locally. Finally, the explanations provided are post hoc. This is unlike interpretable AI/ML, where the explanation is given using the exact same function that is responsible for generating the output and is known and fixed ex ante for all inputs.
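The two-stage setup just described can be sketched in a few lines. The "black box" below is only a stand-in nonlinear function (an assumption for illustration, not a real deep model); the point is that the linear surrogate is fit to the black box's predictions rather than to the original labels, and that its fit is imperfect by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "black box": an opaque nonlinear function of three features,
# playing the role of, say, a deep net (illustrative assumption only).
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2])

X = rng.normal(size=(500, 3))

# Post hoc explainable AI/ML: fit a transparent linear surrogate to the
# black box's *predictions*, not to the original training labels.
y_bb = black_box(X)
A = np.column_stack([X, np.ones(len(X))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y_bb, rcond=None)

# The surrogate only approximates the black box: its R^2 against the
# black box's outputs is strictly below 1 (the quadratic term is missed).
resid = y_bb - A @ coef
r2 = 1.0 - resid.var() / y_bb.var()
print("surrogate weights:", coef[:3].round(2))
print("fit to black box (R^2):", round(r2, 2))
```

Because the surrogate misses the quadratic term entirely, its weights tell a tidy linear story about a model that actually behaves quite differently.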
A substantial proportion of AI/ML-based medical devices that have so far been cleared or approved by the US Food and Drug Administration (FDA) use noninterpretable black-box models, such as deep learning ([ 1 ][1]). This may be because black-box models are deemed to perform better in many health care applications, which are often of massively high dimensionality, such as image recognition or genetic prediction. Whatever the reason, to require an explanation of black-box AI/ML systems in health care at present entails using post hoc explainable AI/ML models, and this is what we caution against here. Explainable algorithms are a relatively recent area of research, and much of the focus of tech companies and researchers has been on the development of the algorithms themselves—the engineering—and not on the human factors affecting the final outcomes. The prevailing argument for explainable AI/ML is that it facilitates user understanding, builds trust, and supports accountability ([ 3 ][3], [ 4 ][4]). Unfortunately, current explainable AI/ML algorithms are unlikely to achieve these goals—at least in health care—for several reasons.

### Ersatz understanding

Explainable AI/ML (unlike interpretable AI/ML) offers post hoc algorithmically generated rationales of black-box predictions, which are not necessarily the actual reasons behind those predictions or related causally to them. Accordingly, the apparent advantage of explainability is a “fool's gold” because post hoc rationalizations of a black box are unlikely to contribute to our understanding of its inner workings. Instead, we are likely left with the false impression that we understand it better. We call the understanding that comes from post hoc rationalizations “ersatz understanding.” And unlike interpretable AI/ML where one can confirm the quality of explanations of the AI/ML outcomes ex ante, there is no such guarantee for explainable AI/ML.
It is not possible to ensure ex ante that for any given input the explanations generated by explainable AI/ML algorithms will be understandable by the user of the associated output. By not providing understanding in the sense of opening up the black box, or revealing its inner workings, this approach is not guaranteed to improve trust or to allay underlying moral, ethical, or legal concerns. There are some circumstances where the problem of ersatz understanding may not be an issue. For example, researchers may find it helpful to generate testable hypotheses through many different approximations to a black-box algorithm to advance research or improve an AI/ML system. But this is a very different situation from regulators requiring AI/ML-based medical devices to be explainable as a precondition of their marketing authorization.

### Lack of robustness

For an explainable algorithm to be trusted, it needs to exhibit some robustness. By this, we mean that the explainability algorithm should ordinarily generate similar explanations for similar inputs. However, for a very small change in input (for example, in a few pixels of an image), an approximating explainable AI/ML algorithm might produce very different and possibly competing explanations, with such differences not being necessarily justifiable or understood even by experts. A doctor using such an AI/ML-based medical device would naturally question that algorithm.

### Tenuous connection to accountability

It is often argued that explainable AI/ML supports algorithmic accountability. If the system makes a mistake, the thought goes, it will be easier to retrace our steps and delineate what led to the mistake and who is responsible. Although this is generally true of interpretable AI/ML systems, which are transparent by design, it is not true of explainable AI/ML systems because the explanations are post hoc rationales, which only imperfectly approximate the actual function that drove the decision.
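The lack-of-robustness concern discussed above can be illustrated with a toy example (an assumed setup, not from the article): one common post hoc explanation assigns each feature a saliency score via finite differences, and for a black box with a sharp nonlinearity, two nearby inputs can receive wildly different saliency for the same feature.

```python
import numpy as np

# Stand-in black box with a sharp nonlinearity in feature 0
# (illustrative assumption, not a real medical model).
def black_box(x):
    return np.tanh(50.0 * x[0]) + 0.1 * x[1]

# A common post hoc explanation: per-feature saliency estimated by
# central finite differences around the given input.
def saliency(x, eps=1e-4):
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (black_box(x + d) - black_box(x - d)) / (2 * eps)
    return g

# Two nearby inputs: feature 0 shifts by only 0.1 ...
s1 = saliency(np.array([0.0, 0.5]))
s2 = saliency(np.array([0.1, 0.5]))

# ... yet the saliency attributed to feature 0 collapses, so the
# "which feature mattered" story is not robust to small input changes.
print("saliency near 0.0:", s1.round(3))
print("saliency near 0.1:", s2.round(3))
```

A clinician shown these two explanations for near-identical patients would reasonably doubt the explanation method, not just the model.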
In this sense, explainable AI/ML systems can serve to obfuscate our investigation into a mistake rather than help us to understand its source. The relationship between explainability and accountability is further attenuated by the fact that modern AI/ML systems rely on multiple components, each of which may be a black box in and of itself, thereby requiring a fact finder or investigator to identify, and then combine, a sequence of partial post hoc explanations. Thus, linking explainability to accountability may prove to be a red herring. Explainable AI/ML systems not only are unlikely to produce the benefits usually touted for them but also come with additional costs (as compared with interpretable systems or with using black-box models alone without attempting to rationalize their outputs).

### Misleading in the hands of imperfect users

Even when explanations seem credible, or nearly so, combining them with the prior beliefs of imperfectly rational users may drive those users further away from a real understanding of the model. For example, the average user is vulnerable to narrative fallacies, where users combine and reframe explanations in misleading ways. The long history of medical reversals—the discovery that a medical practice did not work all along, either failing to achieve its intended goal or carrying harms that outweighed the benefits—provides examples of the risks of narrative fallacy in health care. Relatedly, explanations in the form of deceptively simple post hoc rationales can engender a false sense of (over)confidence. This can be further exacerbated through users' inability to reason with probabilistic predictions, which AI/ML systems often provide ([ 11 ][11]), or the users' undue deference to automated processes ([ 2 ][2]). All of this is made more challenging because explanations have multiple audiences, and it would be difficult to generate explanations that are helpful for all of them.
### Underperforming in at least some tasks

If regulators decide that the only algorithms that can be marketed are those whose predictions can be explained with reasonable fidelity, they thereby limit the system's developers to a certain subset of AI/ML algorithms. For example, highly nonlinear models that are harder to approximate in a sufficiently large region of the data space may thus be prohibited under such a regime. This will be fine in cases where complex models—like deep learning or ensemble methods—do not particularly outperform their simpler counterparts (characterized by fairly structured data and meaningful features, such as predictions based on relatively few patient medical records) ([ 8 ][8]). But in others, especially in cases with massively high dimensionality—such as image recognition or genetic sequence analysis—limiting oneself to algorithms that can be explained sufficiently well may unduly limit model complexity and undermine accuracy. If explainability should not be a strict requirement for AI/ML in health care, what then? Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness—in particular, how does it perform in the hands of its intended users? To accomplish this, regulators should place more emphasis on well-designed clinical trials, at least for some higher-risk devices, and less on whether the AI/ML system can be explained ([ 12 ][12]). So far, most AI/ML-based medical devices have been cleared by the FDA through the 510(k) pathway, requiring only that substantial equivalence to a legally marketed (predicate) device be demonstrated, without usually requiring any clinical trials ([ 13 ][13]). Another approach is to provide individuals added flexibility when they interact with a model—for example, by allowing them to request AI/ML outputs for variations of inputs or with additional data.
This encourages buy-in from the users and reinforces the model's robustness, which we think is more intimately tied to building trust. This is a different approach from providing insight into a model's inner workings. Such interactive processes are not new in health care, and their design may depend on the specific application. One example of such a process is the use of computer decision aids for shared decision-making for antenatal counseling at the limits of gestational viability. A neonatologist and the prospective parents might use the decision aid together in such a way to show how various uncertainties will affect the “risk:benefit ratios of resuscitating an infant at the limits of viability” ([ 14 ][14]). This reflects a phenomenon for which there is growing evidence—that allowing individuals to interact with an algorithm reduces “algorithmic aversion” and makes them more willing to accept the algorithm's predictions ([ 2 ][2]).

### From health care to other settings

Our argument is targeted particularly to the case of health care. This is partly because health care applications tend to rely on massively high-dimensional predictive algorithms, where loss of accuracy is particularly likely if one insists on good black-box approximations with simple enough explanations, and expertise levels vary. Moreover, the costs of misclassifications and potential harm to patients are relatively higher in health care compared with many other sectors. Finally, health care traditionally has multiple ways of demonstrating the reliability of a product or process, even in the absence of explanations. This is true of many FDA-approved drugs. We might think of medical AI/ML as more like a credence good, where the epistemic warrant for its use is trust in someone else rather than an understanding of how it works.
For example, many physicians may be quite ignorant of the underlying clinical trial design or results that led the FDA to believe that a certain prescription drug was safe and effective, but their knowledge that it has been FDA-approved and that other experts further scrutinize it and use it supplies the necessary epistemic warrant for trusting the drug. But insofar as other domains share some of these features, our argument may apply more broadly and hold some lessons for regulators outside health care as well.

### When interpretable AI/ML is necessary

Health care is a vast domain. Many AI/ML predictions are made to support diagnosis or treatment. For example, Biofourmis's RhythmAnalytics is a deep neural network architecture trained on electrocardiograms to predict more than 15 types of cardiac arrhythmias ([ 15 ][15]). In cases like this, accuracy matters a lot, and understanding is less important when a black box achieves higher accuracy than a white box. Other medical applications, however, are different. For example, imagine an AI/ML system that uses predictions about the extent of a patient's kidney damage to determine who will be eligible for a limited number of dialysis machines. In cases like this, when there are overarching concerns of justice—that is, concerns about how we should fairly allocate resources—ex ante transparency about how the decisions are made can be particularly important or required by regulators. In such cases, the best standard would be to simply use interpretable AI/ML from the outset, with clear predetermined procedures and reasons for how decisions are taken. In such contexts, even if interpretable AI/ML is less accurate, we may prefer to trade off some accuracy as the price we pay for procedural fairness. We argue that the current enthusiasm for explainability in health care is likely overstated: Its benefits are not what they appear, and its drawbacks are worth highlighting.
For health AI/ML-based medical devices at least, it may be preferable not to treat explainability as a hard-and-fast requirement but to focus on their safety and effectiveness. Health care professionals should be wary of explanations that are provided to them for black-box AI/ML models. They should strive to understand AI/ML systems to the extent possible and educate themselves about how AI/ML is transforming the health care landscape, but requiring explainable AI/ML seldom contributes to that end.

1. S. Benjamens, P. Dhunnoo, B. Meskó, NPJ Digit. Med. 3, 118 (2020).
2. B. J. Dietvorst, J. P. Simmons, C. Massey, Manage. Sci. 64, 1155 (2018).
3. A. F. Markus, J. A. Kors, P. R. Rijnbeek, J. Biomed. Inform. 113, 103655 (2021).
4. M. T. Ribeiro, S. Singh, C. Guestrin, in KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2016), pp. 1135–1144.
5. S. Gerke, T. Minssen, I. G. Cohen, in Artificial Intelligence in Healthcare, A. Bohr, K. Memarzadeh, Eds. (Elsevier, 2020), pp. 295–336.
6. Y. Lou, R. Caruana, J. Gehrke, in KDD '12: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2012), pp. 150–158.
7. Z. C. Lipton, ACM Queue 16, 1 (2018).
8. C. Rudin, Nat. Mach. Intell. 1, 206 (2019).
9. D. Martens, F. Provost, Manage. Inf. Syst. Q. 38, 73 (2014).
10. S. Wachter, B. Mittelstadt, C. Russell, Harv. J. Law Technol. 31, 841 (2018).
11. R. M. Hamm, S. L. Smith, J. Fam. Pract. 47, 44 (1998).
12. S. Gerke, B. Babic, T. Evgeniou, I. G. Cohen, NPJ Digit. Med. 3, 53 (2020).
13. U. J. Muehlematter, P. Daniore, K. N. Vokinger, Lancet Digit. Health 3, e195 (2021).
14. U. Guillen, H. Kirpalani, Semin. Fetal Neonatal Med. 23, 25 (2018).
15. Biofourmis, RhythmAnalytics (2020).

Acknowledgments: We thank S. Wachter for feedback on an earlier version of this manuscript. All authors contributed equally to the analysis and drafting of the paper. Funding: S.G. and I.G.C. were supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). I.G.C. was also supported by Diagnosing in the Home: The Ethical, Legal, and Regulatory Challenges and Opportunities of Digital Home Health, a grant from the Gordon and Betty Moore Foundation (grant agreement number 9974). Competing interests: S.G. is a member of the Advisory Group–Academic of the American Board of Artificial Intelligence in Medicine. I.G.C. serves as a bioethics consultant for Otsuka on their Abilify MyCite product. I.G.C. is a member of the Illumina ethics advisory board. I.G.C. serves as an ethics consultant for Dawnlight. The authors declare no other competing interests.

AI Imaging specialist closes $66m funding round


Aidoc, a provider of artificial intelligence (AI) solutions for medical imaging, has announced a $66 million investment, bringing its total funding to $140 million. This Series C round, led by General Catalyst, follows a surge in demand for Aidoc's AI-driven solutions, including the largest clinical deployment of AI in healthcare through its partnership with Radiology Partners. Aidoc co-founder and CEO Elad Walach said: "This investment comes after significant milestones; expanding our product lines, doubling our FDA clearances and quadrupling our customer base. We are experiencing a huge expansion, which is also a direct result of C-level executives adopting an AI strategy and integrating our platform as a must-have solution across clinical pathways. It is truly rewarding – and a great responsibility – to be the trusted partner of the most innovative health systems and physician practices across the globe." A pioneer in healthcare AI, Aidoc offers FDA-cleared solutions that analyse medical images for critical conditions and trigger actionable alerts directly in the imaging workflow, supporting medical specialists in reducing turnaround time and improving quality of care.

FDA clears Carrot's smoking cessation sensor to be used without doctor oversight


Digital smoking-cessation company Carrot has landed an FDA expanded use indication in a new 510(k) clearance for its connected breath sensor that can detect a user's exposure to cigarette smoke. The new indication allows the tool, called the Pivot Carbon Monoxide Breath Sensor, to be purchased over the counter and used without the oversight of a doctor. Users can blow into the fob-sized sensor to get a reading of their carbon monoxide level. "This is a significant breakthrough in smoking cessation," Dr. David S. Utley, Carrot CEO, said in a statement. "The emergence of an over-the-counter breath sensor that can help people quit tobacco is comparable to when consumer-grade glucose meters became available, empowering people in their own diabetes care."

The Data Dilemma and Its Impact on AI in Healthcare and Life Sciences


There is no greater challenge for healthcare and life science organizations than ensuring that their digital transformation, along with better data management, will improve patient outcomes, increase operational efficiency and productivity, and deliver better financial results. The drivers of healthcare and life sciences' transition from data-rich to data-driven are not new and include the race to manage cost and improve quality. Newer drivers include the growth of at-risk contracting for providers, the threat of care delivery disruption by the retail industry, and, in drug discovery, the challenge of balancing speed to market with costs. Health and life science industries are data rich. IDC estimates that, on average, approximately 270 GB of healthcare and life science data will be created for every person in the world in 2020. Value comes from transforming data into insights, coupled with establishing a data-driven culture.

Go Ahead, A.I. -- Surprise Us


Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me and, in particular, more knowledgeable about one of my favorite topics -- artificial intelligence (A.I.), particularly for healthcare. With the WHO releasing its "first global report" on A.I. -- Ethics & Governance of Artificial Intelligence for Health -- and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic. My prediction: it's not going to work out quite like we expect, and it probably shouldn't. "Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm," Dr Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement. WHO's six proposed principles are all valid points, but, as we're already learning, easier to propose than to ensure.