Collaborating Authors

Public Health

Initiative aims to spur innovation by connecting, analyzing data bases - Indianapolis Business Journal


Fueled by a $36 million grant from Lilly Endowment Inc., the Central Indiana Corporate Partnership has launched an initiative called AnalytiXIN to promote innovations in data science throughout Indiana. The initiative will build connections between Indiana's manufacturing and life sciences companies and the university researchers who can help them use artificial intelligence and advanced data analytics to tackle big challenges, such as reducing a factory's carbon footprint or improving worker health. "This is one way to ensure early that these kinds of critical collaborations are happening," said David Johnson, president and CEO of the Indianapolis-based Central Indiana Corporate Partnership. About half of the $36 million will be used to hire university-level data-science researchers, some of whom will be based at 16 Tech in Indianapolis. The other half will go toward the creation of "data lakes," or large data sets built from information from multiple contributors.

Optellum, Johnson & Johnson Collaborate on AI-Powered Lung-Health Initiative


This announcement accelerates Optellum's market entry, building on its FDA clearance earlier this year, deployments underway at hospitals in the US, and ongoing clinical trials in the United Kingdom. Optellum's software identifies and tracks at-risk patients and assigns a Lung Cancer Prediction score to lung nodules, small lesions frequently detected in chest computed tomography (CT) scans that may or may not be cancerous. The Optellum AI will be used to drive accurate early diagnosis and optimal treatment decisions, with the aim of treating patients earlier, potentially at a pre-cancerous stage, and increasing survival rates.

Should Parents Stock Up on At-Home COVID Tests?


He's 11 years old, and until he can receive his shots, Gronvall has been using at-home COVID-19 test kits to determine whether his sniffles are more than allergies or a slight cold. The test swabs are longer than a Q-tip but easier on the nasal cavity than a flu diagnostic or the original "brain swab" used to test for COVID since early in the pandemic. "There's often a lot of stuff coming out of their nose," Gronvall said of her kids, with a slight chuckle, when we talked recently. As an associate professor at the Johns Hopkins Bloomberg School of Public Health, Gronvall knows the importance of testing. "We can't all rely on everybody being extra scrupulous and paying attention to all of the COVID restrictions," she said.

AI Scientist


Paige is a software company helping pathologists and clinicians make faster, more informed diagnostic and treatment decisions by mining decades of data from the world's experts in cancer care. We are leading a digital transformation in pathology by leveraging advanced artificial intelligence (AI) technology to create value for the oncology clinical team. We are the first company to develop clinical-grade AI tools for the pathologist, which resulted in our receiving FDA breakthrough designation for our first product. Paige has also received FDA clearance for our digital viewer, FullFocus. We have also established multiple relationships with biopharma, laboratory, and equipment manufacturers that enable Paige to develop an ecosystem ready to help patients receive better diagnoses and treatment.

Fauci says 'hopefully' making young kids wear masks won't have 'lasting negative impact'

FOX News

Lara Logan and Justin Goodman join 'Fox News Primetime' to weigh in on the report that the NIAID, under Fauci's direction, performed painful experiments on dogs.

White House Chief Health Adviser Dr. Anthony Fauci said Monday that "hopefully" making young kids wear face masks won't have any "lasting negative impact" on them. During an interview with conservative radio host Hugh Hewitt, Dr. Fauci said it's important to keep an "open mind" about masking after the Centers for Disease Control and Prevention recommended that unvaccinated children ages 2 and older wear masks and that students wear masks in all K-12 schools, regardless of vaccination status, in light of the rapid spread of the COVID-19 delta variant.

"It's not comfortable, obviously, for children to wear masks, particularly the younger children," he said. "But you know, what we're starting to see, Hugh, and I think it's going to unfold even more as the weeks go by, that this virus not only is so extraordinarily transmissible, but we're starting to see pediatric hospitals get more and more younger people and kids, not only numerically, but what seems to be more severe disease."

"Now we're tracking that, the CDC is tracking that really very carefully, so it's going to be a balance, that we would feel very badly if we all of a sudden said, OK, kids, don't wear masks, then you find out retrospectively that this virus in a very, very strange and unusual way is really hitting kids really hard," he continued. "But hopefully, this will be a temporary thing, temporary enough that it doesn't have any lasting negative impact on them."

Hewitt pushed back, citing a Sunday editorial by The Wall Street Journal, titled "The Case Against Masks for Children," which argues that long-term masking can cause physical and developmental issues in children and that there's little evidence to back up a mandate.
"Facial expressions are integral to human connection, particularly for younger children who are only learning how to signal fear, confusion and happiness," Hewitt said. "Covering a child's face mutes these nonverbal forms of communication and can result in robotic and emotionless interactions."

How FDA Regulates Artificial Intelligence in Medical Products


Health care organizations are using artificial intelligence (AI)--which the U.S. Food and Drug Administration defines as "the science and engineering of making intelligent machines"--for a growing range of clinical, administrative, and research purposes. This AI software can, for example, help health care providers diagnose diseases, monitor patients' health, or assist with rote functions such as scheduling patients. Although AI offers unique opportunities to improve health care and patient outcomes, it also comes with potential challenges. AI-enabled products, for example, have sometimes resulted in inaccurate, even potentially harmful, recommendations for treatment. These errors can be caused by unanticipated sources of bias in the information used to build or train the AI, inappropriate weight given to certain data points analyzed by the tool, and other flaws. The regulatory framework governing these tools is complex. FDA regulates some--but not all--AI-enabled products used in health care, and the agency plays an important role in ensuring the safety and effectiveness of those products under its jurisdiction. The agency is currently considering how to adapt its review process for AI-enabled medical devices that can evolve rapidly in response to new data, sometimes in ways that are difficult to foresee. This brief describes current and potential uses of AI in health care settings and the challenges these technologies pose, outlines how and under what circumstances they are regulated by FDA, and highlights key questions that will need to be addressed to ensure that the benefits of these devices outweigh their risks.

The Question Medical AI Can't Answer


Artificial intelligence (AI) is at an inflection point in health care. A 50-year span of algorithm and software development has produced some powerful approaches to extracting patterns from big data. For example, deep-learning neural networks have been shown to be effective for image analysis, resulting in the first FDA-approved AI-aided diagnostic for an eye disease called diabetic retinopathy, using only photos of a patient's eye. However, the application of AI in the health care domain has also revealed many of its weaknesses, outlined in a recent guidance document from the World Health Organization (WHO). The document covers a lengthy list of topics, each of which is as important as the last: responsible, accountable, inclusive, equitable, ethical, unbiased, responsive, sustainable, transparent, trustworthy and explainable AI.

New York company gets jump on Elon Musk's Neuralink with brain-computer interface in clinical trials

Daily Mail - Science & tech

Elon Musk might be well positioned in space travel and electric vehicles, but the world's second-richest person is taking a backseat when it comes to a brain-computer interface (BCI). New York-based Synchron announced Wednesday that it has received approval from the Food and Drug Administration to begin clinical trials of its Stentrode motor neuroprosthesis, a brain implant it is hoped could ultimately be used to cure paralysis, beating Musk's Neuralink to a crucial benchmark. The FDA approved Synchron's Investigational Device Exemption (IDE) application, according to a release, paving the way for an early feasibility study of Stentrode to begin later this year at New York's Mount Sinai Hospital. The study will analyze the safety and efficacy of the device, which is smaller than a matchstick, in six patients with severe paralysis. Meanwhile, Musk has been touting Neuralink, his brain-implant startup, for several years--most recently showing a video of a monkey with the chip playing Pong using only signals from its brain.

Beware explanations from AI in health care


Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

It is important to first distinguish explainable from interpretable AI/ML. These are two very different types of algorithms with different ways of dealing with the problem of opacity—that AI predictions generated from a black box undermine trust, accountability, and uptake of AI. A typical AI/ML task requires constructing an algorithm that can take a vector of inputs (for example, pixel values of a medical image) and generate an output pertaining to, say, disease occurrence (for example, cancer diagnosis).
The algorithm is trained on past data with known labels, which means that the parameters of a mathematical function that relates the inputs to the output are estimated from that data. When we refer to an algorithm as a “black box,” we mean that the estimated function relating inputs to outputs is not understandable at an ordinary human level (owing to, for example, the function relying on a large number of parameters, complex combinations of parameters, or nonlinear transformations of parameters).

Interpretable AI/ML (which is not the subject of our main criticism) does roughly the following: Instead of using a black-box function, it uses a transparent (“white-box”) function that is in an easy-to-digest form, for example, a linear model whose parameters correspond to additive weights relating the input features and the output, or a classification tree that creates an intuitive rule-based map of the decision space. Such algorithms have been described as intelligible (6) and decomposable (7). The interpretable algorithm may not be immediately understandable by everyone (even a regression requires a bit of background on linear relationships, for example, and can be misconstrued). However, the main selling point of interpretable AI/ML algorithms is that they are open, transparent, and capable of being understood with reasonable effort. Accordingly, some scholars argue that, under many conditions, only interpretable algorithms should be used, especially when they are used by governments for distributing burdens and benefits (8). However, requiring interpretability would create an important change to ML as it is being done today—essentially that we forgo deep learning altogether and whatever benefits it may entail. Explainable AI/ML is very different, even though the two approaches are often grouped together.
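The interpretable, white-box approach described here can be made concrete with a small sketch: a logistic regression whose fitted weights are themselves the explanation. This is an illustrative toy using scikit-learn and synthetic data (the feature names are hypothetical stand-ins), not any actual clinical model.

```python
# Minimal sketch of an interpretable ("white-box") model: a logistic
# regression whose fitted coefficients directly explain its predictions.
# Synthetic data only; the feature names below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["age", "biomarker_a", "biomarker_b", "biomarker_c"]

model = LogisticRegression().fit(X, y)

# The model IS its explanation: one additive weight per input feature,
# fixed ex ante and identical for every prediction the model makes.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Because the same weights generate every output, the explanation and the decision function coincide, which is exactly the property the post hoc approaches discussed next lack.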
Explainable AI/ML, as the term is typically used, does roughly the following: Given a black-box model that is used to make predictions or diagnoses, a second explanatory algorithm finds an interpretable function that closely approximates the outputs of the black box. This second algorithm is trained by fitting the predictions of the black box, not the original data, and it is typically used to develop post hoc explanations for the black-box outputs rather than to make actual predictions, because it is typically not as accurate as the black box. The explanation might, for instance, be given in terms of which attributes of the input data in the black-box algorithm matter most to a specific prediction, or it may offer an easy-to-understand linear model that gives similar outputs as the black-box algorithm for the same given inputs (4). Other models, such as so-called counterfactual explanations or heatmaps, are also possible (9, 10). In other words, explainable AI/ML ordinarily finds a white box that partially mimics the behavior of the black box, which is then used as an explanation of the black-box predictions.

Three points are important to note: First, the opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one. Second, the white-box approximation to the black box cannot be perfect, because if it were, there would be no difference between the two. It also focuses not on accuracy but on fitting the black box, often only locally. Finally, the explanations provided are post hoc. This is unlike interpretable AI/ML, where the explanation is given using the exact same function that is responsible for generating the output and is known and fixed ex ante for all inputs.
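The surrogate-fitting procedure described above can be sketched as follows: a black box is trained on the data, and a transparent model is then fitted to the black box's predictions rather than the original labels, with "fidelity" measuring how closely it mimics the black box. This is a minimal global-surrogate illustration with scikit-learn and synthetic data, not a production explainability tool.

```python
# Sketch of post hoc "explainable AI": a transparent surrogate is fitted
# to the *predictions* of a black box, not to the original labels. The
# surrogate approximates the black box; it is not the decision-maker.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
bb_preds = black_box.predict(X)  # surrogate targets: black-box outputs

# A shallow tree is readable, but only approximates the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

# "Fidelity": how often the white box agrees with the black box.
fidelity = (surrogate.predict(X) == bb_preds).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
```

Because the surrogate is depth-limited, its fidelity generally falls short of 100%, which concretely illustrates the second point above: a perfect white-box approximation would make the black box redundant.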
A substantial proportion of AI/ML-based medical devices that have so far been cleared or approved by the US Food and Drug Administration (FDA) use noninterpretable black-box models, such as deep learning (1). This may be because black-box models are deemed to perform better in many health care applications, which are often of massively high dimensionality, such as image recognition or genetic prediction. Whatever the reason, to require an explanation of black-box AI/ML systems in health care at present entails using post hoc explainable AI/ML models, and this is what we caution against here. Explainable algorithms have been a relatively recent area of research, and much of the focus of tech companies and researchers has been on the development of the algorithms themselves—the engineering—and not on the human factors affecting the final outcomes. The prevailing argument for explainable AI/ML is that it facilitates user understanding, builds trust, and supports accountability (3, 4). Unfortunately, current explainable AI/ML algorithms are unlikely to achieve these goals—at least in health care—for several reasons.

### Ersatz understanding

Explainable AI/ML (unlike interpretable AI/ML) offers post hoc algorithmically generated rationales of black-box predictions, which are not necessarily the actual reasons behind those predictions or related causally to them. Accordingly, the apparent advantage of explainability is a “fool's gold” because post hoc rationalizations of a black box are unlikely to contribute to our understanding of its inner workings. Instead, we are likely left with the false impression that we understand it better. We call the understanding that comes from post hoc rationalizations “ersatz understanding.” And unlike interpretable AI/ML, where one can confirm the quality of explanations of the AI/ML outcomes ex ante, there is no such guarantee for explainable AI/ML.
It is not possible to ensure ex ante that, for any given input, the explanations generated by explainable AI/ML algorithms will be understandable by the user of the associated output. By not providing understanding in the sense of opening up the black box, or revealing its inner workings, this approach does not guarantee to improve trust and allay any underlying moral, ethical, or legal concerns. There are some circumstances where the problem of ersatz understanding may not be an issue. For example, researchers may find it helpful to generate testable hypotheses through many different approximations to a black-box algorithm to advance research or improve an AI/ML system. But this is a very different situation from regulators requiring AI/ML-based medical devices to be explainable as a precondition of their marketing authorization.

### Lack of robustness

For an explainable algorithm to be trusted, it needs to exhibit some robustness. By this, we mean that the explainability algorithm should ordinarily generate similar explanations for similar inputs. However, for a very small change in input (for example, in a few pixels of an image), an approximating explainable AI/ML algorithm might produce very different and possibly competing explanations, with such differences not being necessarily justifiable or understood even by experts. A doctor using such an AI/ML-based medical device would naturally question that algorithm.

### Tenuous connection to accountability

It is often argued that explainable AI/ML supports algorithmic accountability. If the system makes a mistake, the thought goes, it will be easier to retrace our steps and delineate what led to the mistake and who is responsible. Although this is generally true of interpretable AI/ML systems, which are transparent by design, it is not true of explainable AI/ML systems, because the explanations are post hoc rationales, which only imperfectly approximate the actual function that drove the decision.
In this sense, explainable AI/ML systems can serve to obfuscate our investigation into a mistake rather than help us to understand its source. The relationship between explainability and accountability is further attenuated by the fact that modern AI/ML systems rely on multiple components, each of which may be a black box in and of itself, thereby requiring a fact finder or investigator to identify, and then combine, a sequence of partial post hoc explanations. Thus, linking explainability to accountability may prove to be a red herring. Explainable AI/ML systems not only are unlikely to produce the benefits usually touted of them but also come with additional costs (as compared with interpretable systems or with using black-box models alone without attempting to rationalize their outputs).

### Misleading in the hands of imperfect users

Even when explanations seem credible, or nearly so, when combined with the prior beliefs of imperfectly rational users, they may still drive the users further away from a real understanding of the model. For example, the average user is vulnerable to narrative fallacies, where users combine and reframe explanations in misleading ways. The long history of medical reversals—the discovery that a medical practice did not work all along, either failing to achieve its intended goal or carrying harms that outweighed the benefits—provides examples of the risks of narrative fallacy in health care. Relatedly, explanations in the form of deceptively simple post hoc rationales can engender a false sense of (over)confidence. This can be further exacerbated through users' inability to reason with probabilistic predictions, which AI/ML systems often provide (11), or the users' undue deference to automated processes (2). All of this is made more challenging because explanations have multiple audiences, and it would be difficult to generate explanations that are helpful for all of them.
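The robustness concern raised above, that an explainability algorithm should give similar explanations for similar inputs, can be probed with a toy check: fit a LIME-style local linear surrogate around an input and around a slightly perturbed copy, then compare which feature each explanation ranks as most important. The sampling scheme, models, and parameters here are illustrative assumptions, not a reference implementation of any published method.

```python
# Toy robustness check for post hoc explanations: compare the top-ranked
# feature of a local linear surrogate fitted near an input x with the
# top-ranked feature near a barely perturbed copy of x.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_explanation(x, n_samples=200, scale=0.5):
    """Index of the top feature in a linear surrogate fitted near x."""
    neighbors = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    probs = black_box.predict_proba(neighbors)[:, 1]  # black-box outputs
    surrogate = LinearRegression().fit(neighbors, probs)
    return int(np.argmax(np.abs(surrogate.coef_)))

x = X[0]
x_perturbed = x + rng.normal(0.0, 0.01, size=x.size)  # tiny input change
print("explanations agree:", local_explanation(x) == local_explanation(x_perturbed))
```

Because each explanation depends on a fresh random neighborhood sample, two runs on nearly identical inputs can disagree, which is the fragility the section describes.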
### Underperforming in at least some tasks

If regulators decide that the only algorithms that can be marketed are those whose predictions can be explained with reasonable fidelity, they thereby limit the system's developers to a certain subset of AI/ML algorithms. For example, highly nonlinear models that are harder to approximate in a sufficiently large region of the data space may thus be prohibited under such a regime. This will be fine in cases where complex models—like deep learning or ensemble methods—do not particularly outperform their simpler counterparts (characterized by fairly structured data and meaningful features, such as predictions based on relatively few patient medical records) (8). But in others, especially in cases with massively high dimensionality—such as image recognition or genetic sequence analysis—limiting oneself to algorithms that can be explained sufficiently well may unduly limit model complexity and undermine accuracy.

If explainability should not be a strict requirement for AI/ML in health care, what then? Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness—in particular, how does it perform in the hands of its intended users? To accomplish this, regulators should place more emphasis on well-designed clinical trials, at least for some higher-risk devices, and less on whether the AI/ML system can be explained (12). So far, most AI/ML-based medical devices have been cleared by the FDA through the 510(k) pathway, requiring only that substantial equivalence to a legally marketed (predicate) device be demonstrated, without usually requiring any clinical trials (13). Another approach is to provide individuals added flexibility when they interact with a model—for example, by allowing them to request AI/ML outputs for variations of inputs or with additional data.
This encourages buy-in from users and reinforces the model's robustness, which we think is more intimately tied to building trust. This is a different approach from providing insight into a model's inner workings. Such interactive processes are not new in health care, and their design may depend on the specific application. One example of such a process is the use of computer decision aids for shared decision-making in antenatal counseling at the limits of gestational viability. A neonatologist and the prospective parents might use the decision aid together in such a way as to show how various uncertainties will affect the “risk:benefit ratios of resuscitating an infant at the limits of viability” (14). This reflects a phenomenon for which there is growing evidence—that allowing individuals to interact with an algorithm reduces “algorithmic aversion” and makes them more willing to accept the algorithm's predictions (2).

### From health care to other settings

Our argument is targeted particularly to the case of health care. This is partly because health care applications tend to rely on massively high-dimensional predictive algorithms, where loss of accuracy is particularly likely if one insists on good black-box approximations with simple enough explanations, and where expertise levels vary. Moreover, the costs of misclassifications and potential harm to patients are relatively higher in health care than in many other sectors. Finally, health care traditionally has multiple ways of demonstrating the reliability of a product or process, even in the absence of explanations. This is true of many FDA-approved drugs. We might think of medical AI/ML as more like a credence good, where the epistemic warrant for its use is trust in someone else rather than an understanding of how it works.
For example, many physicians may be quite ignorant of the underlying clinical trial design or results that led the FDA to believe that a certain prescription drug was safe and effective, but their knowledge that it has been FDA-approved, and that other experts further scrutinize and use it, supplies the necessary epistemic warrant for trusting the drug. But insofar as other domains share some of these features, our argument may apply more broadly and hold some lessons for regulators outside health care as well.

### When interpretable AI/ML is necessary

Health care is a vast domain. Many AI/ML predictions are made to support diagnosis or treatment. For example, Biofourmis's RhythmAnalytics is a deep neural network architecture trained on electrocardiograms to predict more than 15 types of cardiac arrhythmias (15). In cases like this, accuracy matters a lot, and understanding is less important when a black box achieves higher accuracy than a white box. Other medical applications, however, are different. For example, imagine an AI/ML system that uses predictions about the extent of a patient's kidney damage to determine who will be eligible for a limited number of dialysis machines. In cases like this, when there are overarching concerns of justice—that is, concerns about how we should fairly allocate resources—ex ante transparency about how the decisions are made can be particularly important or required by regulators. In such cases, the best standard would be to simply use interpretable AI/ML from the outset, with clear predetermined procedures and reasons for how decisions are taken. In such contexts, even if interpretable AI/ML is less accurate, we may prefer to trade off some accuracy, the price we pay for procedural fairness.

We argue that the current enthusiasm for explainability in health care is likely overstated: Its benefits are not what they appear, and its drawbacks are worth highlighting.
For health AI/ML-based medical devices at least, it may be preferable not to treat explainability as a hard-and-fast requirement but to focus on their safety and effectiveness. Health care professionals should be wary of explanations that are provided to them for black-box AI/ML models. They should strive to better understand AI/ML systems to the extent possible and educate themselves about how AI/ML is transforming the health care landscape, but requiring explainable AI/ML seldom contributes to that end.

1. S. Benjamens, P. Dhunnoo, B. Meskó, NPJ Digit. Med. 3, 118 (2020).
2. B. J. Dietvorst, J. P. Simmons, C. Massey, Manage. Sci. 64, 1155 (2018).
3. A. F. Markus, J. A. Kors, P. R. Rijnbeek, J. Biomed. Inform. 113, 103655 (2021).
4. M. T. Ribeiro, S. Singh, C. Guestrin, in KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2016), pp. 1135–1144.
5. S. Gerke, T. Minssen, I. G. Cohen, in Artificial Intelligence in Healthcare, A. Bohr, K. Memarzadeh, Eds. (Elsevier, 2020), pp. 295–336.
6. Y. Lou, R. Caruana, J. Gehrke, in KDD '12: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2012), pp. 150–158.
7. Z. C. Lipton, ACM Queue 16, 1 (2018).
8. C. Rudin, Nat. Mach. Intell. 1, 206 (2019).
9. D. Martens, F. Provost, Manage. Inf. Syst. Q. 38, 73 (2014).
10. S. Wachter, B. Mittelstadt, C. Russell, Harv. J. Law Technol. 31, 841 (2018).
11. R. M. Hamm, S. L. Smith, J. Fam. Pract. 47, 44 (1998).
12. S. Gerke, B. Babic, T. Evgeniou, I. G. Cohen, NPJ Digit. Med. 3, 53 (2020).
13. U. J. Muehlematter, P. Daniore, K. N. Vokinger, Lancet Digit. Health 3, e195 (2021).
14. U. Guillen, H. Kirpalani, Semin. Fetal Neonatal Med. 23, 25 (2018).
15. Biofourmis, RhythmAnalytics (2020).

Acknowledgments: We thank S. Wachter for feedback on an earlier version of this manuscript. All authors contributed equally to the analysis and drafting of the paper. Funding: S.G. and I.G.C. were supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). I.G.C. was also supported by Diagnosing in the Home: The Ethical, Legal, and Regulatory Challenges and Opportunities of Digital Home Health, a grant from the Gordon and Betty Moore Foundation (grant agreement number 9974). Competing interests: S.G. is a member of the Advisory Group–Academic of the American Board of Artificial Intelligence in Medicine. I.G.C. serves as a bioethics consultant for Otsuka on their Abilify MyCite product. I.G.C. is a member of the Illumina ethics advisory board. I.G.C. serves as an ethics consultant for Dawnlight. The authors declare no other competing interests.

AI Imaging specialist closes $66m funding round


Aidoc, a provider of artificial intelligence (AI) solutions for medical imaging, has announced a $66 million investment, bringing its total funding to $140 million. The Series C round, led by General Catalyst, follows a surge in demand for Aidoc's AI-driven solutions, including the largest clinical deployment of AI in healthcare through its partnership with Radiology Partners. Aidoc co-founder and CEO Elad Walach said: "This investment comes after significant milestones; expanding our product lines, doubling our FDA clearances and quadrupling our customer base. We are experiencing a huge expansion, which is also a direct result of C-level executives adopting an AI strategy and integrating our platform as a must-have solution across clinical pathways. It is truly rewarding – and a great responsibility – to be the trusted partner of the most innovative health systems and physician practices across the globe." A pioneer in healthcare AI, Aidoc builds FDA-cleared solutions that analyse medical images for critical conditions and trigger actionable alerts directly in the imaging workflow, supporting medical specialists in reducing turnaround time and improving quality of care.
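The workflow described above — an AI model scores an image for critical findings, and anything above a confidence threshold becomes a prioritized alert for the radiologist — can be sketched in a few lines of Python. This is purely an illustrative assumption about how such a triage step might look; the `Finding` type, the 0.8 threshold, and the example conditions are invented for the sketch and do not reflect Aidoc's actual product or API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A model-detected condition in a scan, with its confidence score."""
    condition: str
    probability: float

def triage(findings, threshold=0.8):
    """Keep findings above the alert threshold, most urgent (highest score) first."""
    alerts = [f for f in findings if f.probability >= threshold]
    return sorted(alerts, key=lambda f: f.probability, reverse=True)

if __name__ == "__main__":
    findings = [
        Finding("intracranial hemorrhage", 0.93),
        Finding("pulmonary embolism", 0.41),
        Finding("cervical spine fracture", 0.86),
    ]
    # Only the two high-confidence findings would surface as workflow alerts.
    for alert in triage(findings):
        print(f"ALERT: {alert.condition} ({alert.probability:.0%})")
```

The key design point such systems share is that the model does not replace the read: it reorders the worklist so likely-critical cases are seen sooner, which is where the turnaround-time reduction mentioned above comes from.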