Huge protein structure database could transform biology
Earlier this month, two groups unveiled the culmination of years of work by computer scientists, biologists, and physicists: advanced modeling programs that can predict the precise 3D atomic structures of proteins. Last week, the biggest payoff of that work arrived. One team used its newly minted artificial intelligence (AI) programs to solve the structures of 350,000 proteins from humans and 20 model organisms, such as Escherichia coli bacteria, yeast, and fruit flies, all mainstays of biological research. In the coming months, the group says it plans to expand its efforts to all cataloged proteins—some 100 million molecules.

“It's pretty overwhelming,” says John Moult, a protein folding expert at the University of Maryland, Shady Grove, who runs a biennial competition called the Critical Assessment of protein Structure Prediction (CASP). Moult says structural biologists have dreamed for decades that accurate computer models would one day augment slow, painstaking experimental methods, such as x-ray crystallography, that map protein shapes with extreme precision. “I never thought the dream would come true,” Moult says.

The computer model, called AlphaFold, is the work of researchers at DeepMind, a U.K. AI company owned by Alphabet, the parent company of Google. In fall 2020, AlphaFold swept the CASP competition, tallying a median accuracy score of 92.4 out of 100 for its predicted structures, well ahead of the next closest competitor (Science, 4 December 2020, p. [1144][1]). But because DeepMind researchers didn't reveal AlphaFold's underlying computer code, other teams were left frustrated, unable to build on the progress.

That began to change this month (Science, 16 July, p. [262][2]). On 15 July, researchers led by Minkyung Baek and David Baker at the University of Washington, Seattle, reported online in Science that they had created a competing system: a highly accurate protein structure prediction program called RoseTTAFold, which they released publicly. The same day, Nature rushed out details of AlphaFold in a paper by DeepMind researchers led by Demis Hassabis and John Jumper. Both programs use AI to spot folding patterns in vast databases of solved protein structures. The programs compute the most likely structure of unknown proteins by applying those patterns while also considering basic physical and biological rules governing how neighboring amino acids in a protein interact.

In their paper, Baek and Baker used RoseTTAFold to create a structure database of hundreds of G protein–coupled receptors, a class of common drug targets. Now, DeepMind researchers report in Nature that they have amassed 350,000 predicted structures—more than twice as many as experimenters have solved in decades of work. The structures for which the researchers report high confidence cover nearly 44% of all human proteins. AlphaFold determined that many of the remaining human proteins are “disordered,” meaning they do not adopt a single fixed shape. Such disordered proteins may ultimately adopt a structure when they bind to a protein partner, Baker says. They may also naturally adopt multiple conformations, says David Agard, a structural biologist at the University of California, San Francisco.

A database of DeepMind's new protein predictions, assembled with collaborators at the European Molecular Biology Laboratory (EMBL), is freely accessible online. “It's fantastic they have made this available,” Baker says.
“It will really increase the pace of research.” Because the 3D structure of a protein largely dictates its function, the DeepMind library is apt to help biologists sort out how thousands of unknown proteins do their jobs. “We at EMBL believe this will be transformative to understanding how life works,” says the lab's director general, Edith Heard. “This will be one of the most important data sets since the mapping of the human genome,” adds Ewan Birney, director of EMBL's European Bioinformatics Institute.

DeepMind collaborators say that by making it possible to quickly assess how a change in a protein's sequence alters its structure and function, AlphaFold has already spurred the development of novel enzymes for breaking down plastic waste. It has also prompted efforts to better target parasitic diseases. The impacts aren't likely to stop there. The predictions will help experimentalists who solve structures, Baek says. Data from x-ray crystallography and cryo–electron microscopy experiments can be difficult to interpret, Baek and others say, and having a model can help pinpoint the correct structure. “In the short term, it will boost structure determination efforts,” she predicts. “And over time it will also slowly replace [experimental] structural determination efforts.”

If that happens, structural biologists won't find themselves out of work. Baker notes that both experimental and computational scientists are already beginning to turn their efforts to the more complex challenge of understanding exactly which proteins interact with one another and what molecular changes happen during these interactions. The new tools will “reset the field,” Baker says. “It's a very exciting time.”

[1]: http://www.sciencemag.org/content/370/6521/1144
[2]: http://www.sciencemag.org/content/373/6552/262
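For readers who want to explore the new resource directly, below is a minimal sketch of fetching one predicted model from the public AlphaFold database. The host name, the file-naming scheme (UniProt accession plus a model-version suffix), and the example accession are assumptions based on the public site rather than details given in this article, and the version suffix in particular may change over time.

```python
# Sketch: download one predicted structure from the public AlphaFold DB.
# Assumptions (not confirmed by the article): the database is hosted at
# alphafold.ebi.ac.uk and serves PDB files named by UniProt accession;
# the file-version suffix below may differ as the database is updated.
import requests

def fetch_alphafold_model(uniprot_id: str, version: int = 1) -> str:
    """Return the PDB-format text of a predicted model, if available."""
    url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v{version}.pdb"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # P69905 is human hemoglobin subunit alpha, used here only as an example.
    pdb_text = fetch_alphafold_model("P69905")
    print(pdb_text.splitlines()[0])  # header line of the model file
```

The returned file is plain PDB text, so it can be inspected directly or opened in any standard structure viewer.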
Human-wildlife conflict under climate change
Human-wildlife conflict—defined here as direct interactions between humans and wildlife with adverse outcomes—costs the global economy billions of dollars annually, threatens human lives and livelihoods, and is a leading cause of biodiversity loss (1). These clashes largely stem from the co-occurrence of humans and wildlife seeking limited resources in shared landscapes, and they often have unforeseen consequences. For example, large carnivore species like leopards may prey upon livestock and disrupt human livelihoods, leading to retaliatory killings that can drive wildlife decline, zoonotic disease outbreaks, and child labor practices (2). As dire as these conflicts have been, climate change is intensifying human-wildlife conflict by exacerbating resource scarcity and forcing people and wildlife to share increasingly crowded spaces. Consequently, human-wildlife conflict is rising in frequency and severity, but the complex connections among the climate, ecological, and social dynamics contributing to the heightened conflict have yet to be fully appreciated.

Figure: Warming temperatures have driven animals to human-dominated areas in search of food. Increased attacks on livestock can spur retaliatory killing of predators. A sheep corral in the Himalayas is covered with wire to protect against attacks from snow leopards. Photo: Nick Garbutt/Minden Pictures

Both extreme climate events and directional climate change have the potential to alter the dynamics of human-wildlife conflict. Acute climate events can cause rapid changes in resource availability that drive strong behavioral and spatial responses in animals and people, leading to increased co-occurrence and competition. In terrestrial systems, droughts in particular have intensified some of the most visible conflicts. For example, from 1986 to 1988, a severe drought in India brought about by an extreme El Niño led to a sharp decline in vegetation productivity; loss of food drove elephants to new human-dominated areas, which led to rapid increases in crop damage and fatal attacks on people (3). The same drought event in India saw a marked increase in livestock losses to lions, and human fatalities from lion attacks rose by more than 600% in one region, to 6.7 deaths per year, following the drought (3). More recently, in 2018, a prolonged drought in Botswana saw some of the highest incidences of livestock depredation by large carnivores on record, compounding drought-induced food and economic insecurity in agricultural and pastoral communities (4).

Similar connections between climate events and conflicts are occurring in marine systems. For instance, anomalously warm water temperatures off the South African coast drove changes in prey availability that displaced great white sharks into areas of high human use; the increase in spatial overlap between people and sharks led to a nearly fourfold increase in shark attacks within a single year (5). A similar increase in spatial overlap that resulted in heightened conflict occurred in 2014 to 2016 off the US West Coast, when an intense marine heat wave drove changes in both large-whale distributions and fisheries management, leading to an unprecedented number of whale entanglements in fishing gear (6). Not only did these entanglements cause high rates of whale mortality, but subsequent management restrictions have threatened millions of dollars in lost fishery revenue.
Although extreme climate events often create dramatic conflicts, long-term warming is also producing conflicts with interconnected consequences for people and wildlife. In a notable example, over a 30-year period in Canada's Hudson Bay, human–polar bear conflicts involving property damage, life-threatening encounters, or bear killings have more than tripled as sea ice has declined and polar bears have spent more time on land (7). In the Himalayas, warming-induced vegetation changes at high elevations have driven the bharal, or blue sheep, to lower elevations, where they forage on crops, which affects the livelihoods of local subsistence agricultural producers. Simultaneously, the redistribution of bharal has drawn their primary predator, the snow leopard, to lower elevations, leading to increased livestock depredation and retaliatory killing of leopards (8). In other examples, crop foraging (9), livestock depredation (10) or competition (11), and human-wildlife encounters (12) are inversely correlated with interannual rainfall as a result of reduced food and water availability, and declining rainfall trends in parts of the globe continue to create more frequent and intense conflicts (13). Even as climate change restricts resource availability in many contexts, climate-driven expansion of the human footprint further forces people and animals to share spaces and can create new conflicts—for example, agricultural expansion into previously unproductive or inaccessible areas is significantly associated with rises in human-wildlife conflict (9).

By investigating the interrelated consequences of climate change for wildlife and human populations, we can better anticipate undesired outcomes and identify how human interventions can mitigate cascading ecological and social dynamics. Climate impacts on human-wildlife conflict do not act in isolation—among other factors, socioeconomic drivers such as land-use change and demographic processes such as rising human populations or changes in predator and prey populations play major roles in determining the frequency, scale, and distribution of conflicts (1). Thus, illuminating and ultimately addressing the interconnections between climate change and human-wildlife conflict requires a coupled socioecological systems approach, drawing from fields as diverse as ecology, global change biology, human demography, political science, public policy, history, and economics.

Although the impact of climate change on human-wildlife conflict has arguably received relatively little research attention, governmental bodies are increasingly recognizing this phenomenon and developing forward-looking policies that explicitly incorporate climate into the management of certain conflicts (3, 4). For example, the state of California in the US recently implemented a Risk Assessment and Mitigation Program that assimilates climatic, oceanographic, biological, and economic indices to inform dynamic fisheries management and reduce the risk of whale entanglements (6). Knowledge of climate impacts on human-wildlife conflict can also aid long-term planning efforts and public outreach. For instance, livestock compensation programs, one of the most widely implemented tools to mitigate human-carnivore conflict, could plan funding allocations to anticipate higher spending in years with anomalous climate conditions.
Furthermore, given early warning from climate predictions or emerging efforts to predict human-wildlife conflicts using artificial intelligence (14), governments or nongovernmental organizations can educate and warn the public about possible increased interactions with wildlife (12). As climate change continues to drive both increased climate variability and directional change (15), climate-driven human-wildlife conflict can be expected to be a recurring challenge. To protect wildlife and humans alike, it is vital that a diverse body of researchers and institutions consider the role of a changing climate in shaping the complex socioecological dynamics of conflict.

1. P. J. Nyhus, Annu. Rev. Environ. Resour. 41, 143 (2016).
2. J. S. Brashares, L. R. Prugh, C. J. Stoner, C. W. Epps, in Trophic Cascades, J. Terborgh, J. A. Estes, Eds. (Island Press, 2010), pp. 221–240.
3. J. R. Bhatt, A. Das, K. Shanker, Eds., Biodiversity and Climate Change: An Indian Perspective (Ministry of Environment, Forest and Climate Change, Government of India, New Delhi, 2018), pp. 1–138.
4. Botswana Vulnerability Assessment Committee (Botswana Ministry of Local Government and Rural Development, 2019).
5. B. K. Chapman, D. McPhee, Ocean Coast. Manage. 133, 72 (2016).
6. J. A. Santora et al., Nat. Commun. 11, 536 (2020).
7. L. Towns et al., Polar Biol. 32, 1529 (2009).
8. A. Aryal et al., Theor. Appl. Climatol. 115, 517 (2013).
9. J. M. Mukeka, J. O. Ogutu, E. Kanga, E. Røskaft, Glob. Ecol. Conserv. 18, e00620 (2019).
10. M. Schiess-Meier, S. Ramsauer, T. Gabanapelo, B. König, J. Wildl. Manage. 71, 1267 (2007).
11. S. P. Vargas et al., Oryx 55, 275 (2021).
12. C. S. Zack et al., Wildl. Soc. Bull. 31, 517 (2003).
13. J. M. Mukeka et al., Hum. Wildl. Interact. 14, 255 (2020).
14. P. Variyar, Can Artificial Intelligence Predict Human-Wildlife Conflict? (Wildlife Conservation Trust, 2021); www.wildlifeconservationtrust.org/can-artificial-intelligence-predict-human-wildlife-conflict/.
15. D. Coumou, S. Rahmstorf, Nat. Clim. Chang. 2, 491 (2012).

Acknowledgments: I thank K. Gaynor, A. McInturff, E. Pikitch, and J. Samhouri for valuable discussions and comments.
A fast link between face perception and memory in the temporal pole
Explicit semantic information in the brain is generated by gradually stripping away the specific context in which an item is embedded. A particularly striking example of such explicit representations is face-specific neurons. Landi et al. report the properties of neurons in a small region of the monkey anterior temporal cortex that respond to the sight of familiar faces. These cells respond to the internal features of familiar faces but not of unknown faces. Some of these responses are highly selective, reliably responding to only one face out of a vast number of other stimuli. These findings will advance our understanding of where and how semantic memories are stored in the brain. Science, abi6671, this issue p. [581][1]

The question of how the brain recognizes the faces of familiar individuals has been important throughout the history of neuroscience. Cells linking visual processing to person memory have been proposed but not found. Here, we report the discovery of such cells through recordings from an area in the macaque temporal pole identified with functional magnetic resonance imaging. These cells responded to faces that were personally familiar. They responded nonlinearly to stepwise changes in face visibility and detail, and holistically to face parts, reflecting key signatures of familiar face recognition. They discriminated between familiar identities as fast as a general face identity area. The discovery of these cells establishes a new pathway for the fast recognition of familiar individuals.

[1]: /lookup/doi/10.1126/science.abi6671
Type 1 diabetes glycemic management: Insulin therapy, glucose monitoring, and automation
Despite innovations in insulin therapy since its discovery, most patients living with type 1 diabetes do not achieve sufficient glycemic control to prevent complications, and they experience hypoglycemia, weight gain, and a major self-care burden. Promising pharmacological advances in insulin therapy include the refinement of extremely rapid insulin analogs, alternate insulin-delivery routes, liver-selective insulins, add-on drugs that enhance insulin effect, and glucose-responsive insulin molecules. The greatest future impact will come from combining these pharmacological solutions with existing automated insulin delivery methods that integrate insulin pumps and glucose sensors. These systems will use algorithms enhanced by machine learning, supplemented by technologies that include activity monitors and sensors for other key metabolites, such as ketones. The future challenges facing clinicians and researchers will be those of access and broad clinical implementation.
Retinal waves prime visual motion detection by simulating future optic flow
As a mouse runs forward across the forest floor, the scenery that it passes flows backwards. Ge et al. show that the developing mouse retina practices in advance for what the eyes must later process as the mouse moves. Spontaneous waves of retinal activity flow in the same pattern as would be produced days later by actual movement through the environment. This patterned, spontaneous activity refines the responsiveness of cells in the brain's superior colliculus, which receives neural signals from the retina to process directional information. Science, abd0830, this issue p. [eabd0830][1]

### INTRODUCTION

Fundamental circuit features of the mouse visual system emerge before the onset of vision, allowing the mouse to perceive objects and detect visual motion immediately upon eye opening. How the mouse visual system achieves self-organization by the time of eye opening, without structured external sensory input, is not well understood. In the absence of sensory drive, the developing retina generates spontaneous activity in the form of propagating waves. Past work has shown that spontaneous retinal waves provide the correlated activity necessary to refine the development of gross topographic maps in downstream visual areas, such as retinotopy and eye-specific segregation, but it is unclear whether waves also convey information that instructs the development of higher-order visual response properties, such as direction selectivity, at eye opening.

### RATIONALE

Spontaneous retinal waves exhibit stereotyped, changing spatiotemporal patterns throughout development. To characterize the spatiotemporal properties of waves during development, we used one-photon wide-field calcium imaging of retinal axons projecting to the superior colliculus in awake neonatal mice. We identified a consistent propagation bias that occurred during a transient developmental window shortly before eye opening. Using quantitative analysis, we investigated whether the directionally biased retinal waves conveyed ethological information relevant to future visual inputs. To understand the origin of directional retinal waves, we used pharmacological, optogenetic, and genetic strategies to identify the retinal circuitry underlying the propagation bias. Finally, to evaluate the role of directional retinal waves in visual system development, we used pharmacological and genetic strategies to chronically manipulate wave directionality and used two-photon calcium imaging to measure responses to visual motion in the midbrain superior colliculus immediately after eye opening.

### RESULTS

We found that spontaneous retinal waves in mice exhibit a distinct propagation bias in the temporal-to-nasal direction during a transient window of development (postnatal day 8 to day 11). The spatial geometry of directional wave flow aligns strongly with the optic flow pattern generated by forward self-motion, a dominant natural optic flow pattern after eye opening. We identified an intrinsic asymmetry in the retinal circuit that enforced the wave propagation bias, involving the same circuit elements necessary for motion detection in the adult retina, specifically asymmetric inhibition from starburst amacrine cells through γ-aminobutyric acid type A (GABAA) receptors.
Finally, manipulation of directional retinal waves, through either the chronic delivery of gabazine to block GABAergic inhibition or the starburst amacrine cell–specific mutation of the FRMD7 gene, impaired the development of responses to visual motion in superior colliculus neurons downstream of the retina.

### CONCLUSION

Our results show that spontaneous activity in the developing retina prior to vision onset is structured to convey essential information for the development of visual response properties before the onset of visual experience. Spontaneous retinal waves simulate future optic flow patterns produced by forward motion through space, due to an asymmetric retinal circuit that has an evolutionarily conserved link with motion detection circuitry in the mature retina. Furthermore, the ethologically relevant information relayed by directional retinal waves enhances the development of higher-order visual function in the downstream visual system prior to eye opening. These findings provide insight into the activity-dependent mechanisms that regulate the self-organization of brain circuits before sensory experience begins.

Figure: Origin and function of directional retinal waves. (A) Imaging of retinal axon activity reveals a propagation bias in spontaneous retinal waves (scale bar, 500 μm). (B) Cartoon depiction of wave flow vectors projected onto visual space. Vectors (black arrows) align with the optic flow pattern (red arrows) generated by forward self-motion. (C) Asymmetric GABAergic inhibition in the retina mediates wave directionality. (D) Developmental manipulation of wave directionality disrupts direction-selective responses in downstream superior colliculus neurons at eye opening.

The ability to perceive and respond to environmental stimuli emerges in the absence of sensory experience. Spontaneous retinal activity prior to eye opening guides the refinement of retinotopy and eye-specific segregation in mammals, but its role in the development of higher-order visual response properties remains unclear. Here, we describe a transient window in neonatal mouse development during which the spatial propagation of spontaneous retinal waves resembles the optic flow pattern generated by forward self-motion. We show that wave directionality requires the same circuit components that form the adult direction-selective retinal circuit and that chronic disruption of wave directionality alters the development of direction-selective responses of superior colliculus neurons. These data demonstrate how the developing visual system patterns spontaneous activity to simulate ethologically relevant features of the external world and thereby instruct self-organization.

[1]: /lookup/doi/10.1126/science.abd0830
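To make the geometric claim concrete, the following sketch computes the radial optic flow template produced by pure forward self-motion and scores how well a field of wave-propagation vectors aligns with it. The template, the toy noise level, and the cosine alignment score are illustrative assumptions, not the authors' analysis pipeline.

```python
# Sketch: compare a field of wave-propagation vectors with the radial
# optic-flow template produced by forward self-motion. Illustrative only;
# the alignment score is a simple mean cosine similarity, not the paper's method.
import numpy as np

def forward_flow_template(xs, ys):
    """Optic flow for pure forward translation: unit vectors radiating
    outward from the focus of expansion at the origin."""
    flow = np.stack([xs, ys], axis=-1)            # radial directions
    norms = np.linalg.norm(flow, axis=-1, keepdims=True)
    return flow / np.maximum(norms, 1e-9)

def alignment_score(wave_vecs, template_vecs):
    """Mean cosine similarity between measured and template vectors."""
    norms = np.linalg.norm(wave_vecs, axis=-1, keepdims=True)
    w = wave_vecs / np.maximum(norms, 1e-9)
    return float(np.mean(np.sum(w * template_vecs, axis=-1)))

# Toy example: a noisy radial field should score near 1 (strong alignment).
grid = np.linspace(-1, 1, 21)
xs, ys = np.meshgrid(grid, grid)
template = forward_flow_template(xs, ys)
noisy = template + 0.2 * np.random.default_rng(0).normal(size=template.shape)
print(f"alignment score: {alignment_score(noisy, template):.2f}")
```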
News at a glance
### Astronomy

The Hubble Space Telescope ended a monthlong hiatus on 16 July when operators successfully switched a failed control system to backup devices. The trouble started on 13 June when Hubble's payload computer, which controls its instruments, halted, and the main spacecraft computer put all the astronomical instruments in safe mode. Operators were unable to restart the payload computer, and switching memory modules—which they initially thought were at fault—didn't wake the telescope. They tested and ruled out problems in other devices before zeroing in on a power control unit. NASA called in retired staff to help devise a fix for the 31-year-old telescope, which involved remotely switching to a spare power control unit and other backup hardware for managing the instruments and their data. The agency practiced and checked the repair on the ground for 2 weeks before executing it. After powering up all the hardware, Hubble returned to work on 17 July and has already beamed back new images. NASA says it expects Hubble to continue for many years.

### Conservation

A new automated alert system can help veterinarians get a jump on investigating disease outbreaks and disasters afflicting wildlife. Researchers at the University of California, Davis, and colleagues used a machine learning algorithm to scan case reports of sick and dead wildlife submitted to a database by wildlife clinics and rehabilitation centers in the United States and other countries. The researchers used data from 3081 reports filed from California to train the algorithm to detect patterns of species suffering common symptoms. The software is designed to identify unusual events in one of 12 clinical categories, such as mass starvation or an oil spill. The algorithm assigned the correct category to 83% of cases examined, including ones from an outbreak of neurological disease in California brown pelicans and red-throated loons, the research team reported last week in the Proceedings of the Royal Society B. The system could help wildlife officials more quickly detect developing problems and confirm specific causes.

### Public health

Reflecting another toll of the coronavirus pandemic, 23 million children missed routine vaccinations in 2020, the most since 2009 and 19% more than in 2019, the World Health Organization (WHO) and UNICEF said last week. As many as 17 million didn't receive any childhood vaccine at all. The pandemic led to closures or cutbacks at vaccination clinics, and lockdowns prevented parents and their children from reaching them, the groups reported. In addition, 57 mass vaccination campaigns for non–COVID-19 diseases in 66 countries were postponed. Childhood vaccination rates decreased across all WHO regions, with the Southeast Asian and eastern Mediterranean regions particularly affected. In India, more than 3 million children missed a first dose of the diphtheria, tetanus, and pertussis vaccine, more than double the number in 2019. “We [are] leaving children at risk from devastating but preventable diseases like measles, polio, or meningitis,” says WHO Director-General Tedros Adhanom Ghebreyesus.

### Climate policy

As part of the run-up to the U.N. climate summit in November, the European Union and China last week announced plans to follow through on commitments to curb their carbon emissions.
The European proposal, which must be approved by the bloc's member states, would steeply increase the price of carbon dioxide (CO2) emissions; eliminate new gas-powered cars by 2035; require 38% of all energy to come from renewables by 2030, up from a previous goal of 32%; and impose tariffs on goods from countries that have not acted on climate change. (Democratic lawmakers in the United States proposed a similar tariff this week.) Meanwhile, China on 16 July launched a carbon trading scheme for power plants that instantly created the world's largest carbon market, triple the European Union's in size. China's plan incentivizes plants to lower CO2 emissions by allowing more efficient facilities to sell some of their reductions to less efficient ones. Although some observers call the plan weak because it covers a relatively small portion of China's emissions, it could be expanded to eventually incorporate three-fourths of the country's emissions from all sources.

### Public health

When temperatures soar, workers and their employers need to take heed: Hot weather led to 20,000 more injuries annually in California between 2001 and 2018, according to a novel analysis of 11 million workers' compensation claims. Economist Jisung Park at the University of California, Los Angeles, and colleagues classified work-related injuries by ZIP code and looked up local temperatures on the day each was recorded. They found increases of between 5% and 15% in claims, depending on the temperature and occupation, compared with those filed on a typical cooler day, defined as one with a temperature of 16°C. Few claims were attributed directly to heat, but the injuries connected to higher temperatures—such as falls and mishandling equipment—may have resulted because the heat made workers woozy, the researchers reported to Congress last week and in a preprint on the SSRN server. But mitigation may be possible: Heat-related injury claims declined after 2005, when California began to require shade, water, and breaks for outdoor workers—in industries such as construction, utilities, and farming—whenever temperatures exceeded 35°C.

### Research integrity

Both the United Kingdom and the United States last week announced new high-level bodies to provide guidance on research integrity—but both lack the powers that many whistleblowers say are critical, such as independently investigating complaints of wrongdoing and pulling grant funding from institutions that fail to conduct misconduct probes properly. The umbrella funding body UK Research and Innovation (UKRI) launched the Committee on Research Integrity, which plans to operate for 3 years and accelerate existing projects in this area. The U.S. National Academies of Sciences, Engineering, and Medicine (NASEM) unveiled the Strategic Council for Research Excellence, Integrity, and Trust, which will have members from the U.S. National Institutes of Health and National Science Foundation. Unlike UKRI, NASEM does not fund researchers, so it cannot set policies on how to handle misconduct allegations. But it could promote integrity in other ways—for instance, by pushing for a central repository where researchers report their financial conflicts of interest, says Marcia McNutt, president of the National Academy of Sciences and an ex officio member of the new panel.
### Microbiology

Sifting through DNA in the mud of her backyard, a geomicrobiologist discovered what may be the longest known extrachromosomal sequence, which includes genes from a variety of microbes—prompting her son to propose naming it after Star Trek's Borg, cybernetic aliens that assimilate humans. Jill Banfield of the University of California, Berkeley, was searching for viruses that infect archaea, a type of microbe often found in places devoid of oxygen. The 1-million-base-pair strand of DNA contains genes known to help archaea metabolize methane, suggesting the fragment might exist inside the microbes but outside their normal chromosome, the research team wrote in a preprint posted on 10 July on the bioRxiv server. Scanning a public microbial DNA database, the authors identified 23 possible Borgs, with many of the same characteristics, in other U.S. locations. The Borgs' role remains murky, but they may provide another example of DNA that can hop between an organism's chromosomes or between organisms, helping species adapt to changes in their environment.
Brain signals 'speak' for person with paralysis
A man unable to speak after a stroke has produced sentences through a system that reads electrical signals from speech production areas of his brain, researchers report this week. The approach has previously been used in nondisabled volunteers to reconstruct spoken or imagined sentences. But this first demonstration in a person who is paralyzed “tackles really the main issue that was left to be tackled—bringing this to the patients that really need it,” says Christian Herff, a computer scientist at Maastricht University who was not involved in the new work.

The participant had a stroke more than a decade ago that left him with anarthria—an inability to control the muscles involved in speech. Because his limbs are also paralyzed, he communicates by selecting letters on a screen using small movements of his head, producing roughly five words per minute. To enable faster, more natural communication, neurosurgeon Edward Chang of the University of California, San Francisco, tested an approach that uses a computational model known as a deep-learning algorithm to interpret patterns of brain activity in the sensorimotor cortex, a brain region involved in producing speech (Science, 4 January 2019, p. [14][1]). The approach has so far been tested in volunteers who have electrodes surgically implanted for nonresearch reasons, such as to monitor epileptic seizures. In the new study, Chang's team temporarily removed a portion of the participant's skull and laid a thin sheet of electrodes smaller than a credit card directly over his sensorimotor cortex.

To “train” a computer algorithm to associate brain activity patterns with the onset of speech and with particular words, the team needed reliable information about what the man intended to say and when. So the researchers repeatedly presented one of 50 words on a screen and asked the man to attempt to say it on cue. Once the algorithm was trained with data from the individual word task, the man tried to read sentences built from the same set of 50 words, such as “Bring my glasses, please.” To improve the algorithm's guesses, the researchers added a processing component called a natural language model, which uses common word sequences to predict the likely next word in a sentence. With that approach, the system got only about 25% of the words in a sentence wrong, they report this week in The New England Journal of Medicine. That's “pretty impressive,” says Stephanie Riès-Cornou, a neuroscientist at San Diego State University. (The error rate for chance performance would be 92%.)

Because the brain reorganizes over time, it wasn't clear that speech production areas would give interpretable signals after more than 10 years of anarthria, notes Anne-Lise Giraud, a neuroscientist at the University of Geneva. The signals' preservation “is surprising,” she says. And Herff says the team made a “gigantic” step by generating sentences as the man was attempting to speak rather than from previously recorded brain data, as most studies have done.

With the new approach, the man could produce sentences at a rate of up to 18 words per minute, Chang says. That's roughly comparable to the speed achieved with another brain-computer interface, described in Nature in May. That system decoded individual letters from activity in a brain area responsible for planning hand movements while a person who was paralyzed imagined handwriting.
These speeds are still far from the 120 to 180 words per minute typical of conversational English, Riès-Cornou notes, but they far exceed what the participant can achieve with his head-controlled device. The system isn't ready for use in everyday life, Chang notes. Future improvements will include expanding its repertoire of words and making it wireless, so the user isn't tethered to a computer roughly the size of a minifridge. [1]: http://www.sciencemag.org/content/363/6422/14
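The decoding pipeline described above, per-word neural evidence combined with a model of likely word sequences, has the same shape as a classic Viterbi decode. The sketch below illustrates that combination on a toy four-word vocabulary; the probabilities, the bigram model, and the vocabulary are invented for illustration and are not the team's actual algorithm, which used a 50-word vocabulary and a deep-learning classifier.

```python
# Sketch: combine per-word classifier probabilities with a bigram language
# model via Viterbi decoding, loosely analogous to the hybrid approach the
# article describes. All numbers are illustrative assumptions.
import numpy as np

vocab = ["bring", "my", "glasses", "please"]
V = len(vocab)

# p_neural[t, w]: classifier's probability that word w was attempted at step t.
p_neural = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.2, 0.6, 0.1],
    [0.1, 0.1, 0.2, 0.6],
])

# Bigram language model p(next | prev), nearly deterministic for the demo
# (exact normalization is not essential for an argmax-based toy).
p_bigram = np.full((V, V), 0.05)
for prev, nxt in [(0, 1), (1, 2), (2, 3)]:
    p_bigram[prev, nxt] = 0.85

def viterbi(p_obs, p_trans):
    """Most likely word sequence given observation and transition probs."""
    T, V = p_obs.shape
    logp = np.log(p_obs[0])                  # uniform prior over first word
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(p_trans) + np.log(p_obs[t])[None, :]
        back[t] = scores.argmax(axis=0)      # best predecessor per word
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):            # trace predecessors backward
        path.append(int(back[t][path[-1]]))
    return [vocab[w] for w in reversed(path)]

print(" ".join(viterbi(p_neural, p_bigram)))  # -> "bring my glasses please"
```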
A sustainable use of space
Last month, at the G7 Leaders' Summit in Cornwall, United Kingdom, the leading industrial nations addressed the sustainable and safe use of space, making space debris a priority and calling on other nations to follow suit. This is good news because space is becoming increasingly congested, and strong political will is needed for the international space community to start using space sustainably and preserve the orbital environment for the space activities of future generations.

There are more than 28,000 routinely tracked objects orbiting Earth. The vast majority (85%) are space debris that no longer serve a purpose. These debris objects are dominated by fragments from the approximately 560 known breakups, explosions, and collisions of satellites or rocket bodies, which have left behind an estimated 900,000 objects larger than 1 cm and a staggering 130 million objects larger than 1 mm in commercially and scientifically valuable Earth orbits.

Today's active satellite infrastructure provides a multitude of critical services to modern society, including communication, weather, navigation, and Earth-monitoring missions. Its loss would severely damage modern society. Furthermore, a new era in space has just started, driven by commercial, low-latency broadband services that rely on large constellations of satellites in low Earth orbit. These will revolutionize connectivity on the ground and in the air. However, they will also increase space traffic: The satellites to be launched over the next 5 years will surpass the number launched globally over the entire history of spaceflight. Congestion in space is only going to get worse.

It is apparent that debris mitigation strategies—defined two decades ago by experts in the world's leading space agencies—are ever more important. They aim to prevent explosive breakups by venting residual energy from space systems at the end of their missions and to “dispose” of a space object through a final maneuver that causes it to reenter Earth's atmosphere. Although these strategies are widely recognized, dozens of large space objects are still stranded every year in critical orbital regions where they will remain for several hundred years. And an average of eight fragmentation events occur in orbit annually, adding more pollution and increasing the likelihood of more collisions. Operations in space are themselves facing the burden of increasingly frequent evasive maneuvers to avoid losing a mission. In the most densely populated orbital altitudes, space objects are receiving dozens of collision warnings per day, of which only the most critical can be acted on. The number of such alerts will grow as large constellations of satellites come online.

Another important facet of the debris problem is the risk on Earth from reentering objects. Between 100 and 200 metric tons of human-made hardware reenter Earth's atmosphere every year in an uncontrolled fashion, and heat-resistant materials, like titanium or stainless steel, can survive the harsh reentry conditions.

Progress can be made by advancing technology to ensure spaceflight safety. For example, the European Space Agency's Space Safety Programme is developing solutions that make disposal and energy passivation actions more fail-safe. “Deorbiting kits” will provide redundant propulsion and communication to ensure disposal of a spacecraft even after it ceases to function.
A new field of “design-to-demise” aims to replace critical components with less heat-resistant materials to limit their chance of reaching the ground upon reentry. In addition, a more systematic deployment of ground-based laser tracking could increase the accuracy of space surveillance data and consequently limit the number of collision avoidance alerts. Laser power could even transfer a small amount of momentum to objects to prevent collisions. On top of that, missions such as ClearSpace-1 will aim to remove targeted debris through robotic capture.

An internationally binding regime for the management of debris and space traffic is still pending. Thus far, space missions have been supervised at the national level only, and states have been encouraged to translate the nonbinding space debris guidelines into national regulations. Space, however, is a commonly used resource with limited capacity. International harmonization of space traffic would be required for an efficient and interference-free use of space. The coordinated use of the available radio frequencies could serve as a template. Furthermore, the implementation of space debris mitigation requirements should be tracked, following internationally binding principles. New and affordable technical solutions might stimulate more ambitious steps in international regulation to preserve space for the spacefarers of tomorrow.
Estimating epidemiologic dynamics from cross-sectional viral load distributions
During the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, polymerase chain reaction (PCR) tests were generally reported only as binary positive or negative outcomes. However, these test results contain a great deal more information than that. As viral load declines exponentially, the PCR cycle threshold (Ct) increases linearly. Hay et al. developed an approach for extracting epidemiological information from the Ct values obtained from PCR tests used in surveillance in a variety of settings (see the Perspective by Lopman and McQuade). Although there are challenges to relying on single Ct values for individual-level decision-making, even a limited aggregation of data from a population can reveal the trajectory of the pandemic. Therefore, across a population, an increase in aggregated Ct values indicates that a decline in cases is occurring. Science, abh0635, this issue p. [eabh0635][1]; see also abj4185, p. [280][2]

### INTRODUCTION

Current approaches to epidemic monitoring rely on case counts, test positivity rates, and reported deaths or hospitalizations. These metrics, however, provide a limited and often biased picture as a result of testing constraints, unrepresentative sampling, and reporting delays. Random cross-sectional virologic surveys can overcome some of these biases by providing snapshots of infection prevalence but currently offer little information on the epidemic trajectory without sampling across multiple time points.

### RATIONALE

We develop a new method that uses information inherent in cycle threshold (Ct) values from reverse transcription quantitative polymerase chain reaction (RT-qPCR) tests to robustly estimate the epidemic trajectory from multiple, or even a single, cross section of positive samples. Ct values are related to viral loads, which depend on the time since infection; Ct values are generally lower when the time between infection and sample collection is short. Despite variation across individuals, samples, and testing platforms, Ct values provide a probabilistic measure of time since infection. We find that the distribution of Ct values across positive specimens at a single time point reflects the epidemic trajectory: A growing epidemic will necessarily have a high proportion of recently infected individuals with high viral loads, whereas a declining epidemic will have more individuals with older infections and thus lower viral loads. Because of these changing proportions, the epidemic trajectory or growth rate should be inferable from the distribution of Ct values collected in a single cross section, and multiple successive cross sections should enable identification of the longer-term incidence curve. Moreover, understanding the relationship between sample viral loads and epidemic dynamics provides additional insight into why viral loads from surveillance testing may appear higher for emerging viruses or variants and lower for outbreaks that are slowing, even absent changes in individual-level viral kinetics.

### RESULTS

Using a mathematical model for population-level viral load distributions calibrated to known features of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral load kinetics, we show that the median and skewness of Ct values in a random sample change over the course of an epidemic.
By formalizing this relationship, we demonstrate that Ct values from a single random cross section of virologic testing can estimate the time-varying reproductive number of the virus in a population, which we validate using data collected from comprehensive SARS-CoV-2 testing in long-term care facilities. Using a more flexible approach to modeling infection incidence, we also develop a method that can reliably estimate the epidemic trajectory in even more-complex populations, where interventions may be implemented and relaxed over time. This method performed well in estimating the epidemic trajectory in the state of Massachusetts using routine hospital admissions RT-qPCR testing data—accurately replicating estimates from other sources for the entire state.

### CONCLUSION

This work provides a new method for estimating the epidemic growth rate and a framework for robust epidemic monitoring using RT-qPCR Ct values that are often simply discarded. By deploying single or repeated (but small) random surveillance samples and making the best use of the semiquantitative testing data, we can estimate epidemic trajectories in real time and avoid biases arising from nonrandom samples or changes in testing practices over time. Understanding the relationship between population-level viral loads and the state of an epidemic reveals important implications and opportunities for interpreting virologic surveillance data. It also highlights the need for such surveillance, as these results show how to use it most informatively.

Figure: Ct values reflect the epidemic trajectory and can be used to estimate incidence. (A and B) Whether an epidemic has rising or falling incidence will be reflected in the distribution of times since infection (A), which in turn affects the distribution of Ct values in a surveillance sample (B). (C) These values can be used to assess whether the epidemic is rising or falling and to estimate the incidence curve.

Estimating an epidemic's trajectory is crucial for developing public health responses to infectious diseases, but case data used for such estimation are confounded by variable testing practices. We show that the population distribution of viral loads observed under random or symptom-based surveillance—in the form of cycle threshold (Ct) values obtained from reverse transcription quantitative polymerase chain reaction testing—changes during an epidemic. Thus, Ct values from even limited numbers of random samples can provide improved estimates of an epidemic's trajectory. Combining data from multiple such samples improves the precision and robustness of this estimation. We apply our methods to Ct values from surveillance conducted during the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic in a variety of settings and offer alternative approaches for real-time estimates of epidemic trajectories for outbreak management and response.

[1]: /lookup/doi/10.1126/science.abh0635
[2]: /lookup/doi/10.1126/science.abj4185
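The core intuition, that a growing epidemic skews a random sample toward recent, low-Ct infections, can be illustrated in a few lines of simulation. In the hedged sketch below, the piecewise-linear Ct trajectory, the noise level, and the growth rates are illustrative stand-ins, not the calibrated kinetics model from the paper.

```python
# Sketch of the paper's core intuition: the distribution of Ct values in a
# random cross section shifts with the epidemic growth rate. The viral-kinetics
# curve (fast fall to a peak, slow linear rise in Ct) is an illustrative
# assumption, not the calibrated model from the study.
import numpy as np

rng = np.random.default_rng(1)

def ct_given_age(age_days):
    """Toy Ct trajectory: Ct drops to ~20 by day 5, then rises ~1 per day."""
    peak_day, peak_ct, baseline_ct = 5.0, 20.0, 40.0
    falling = baseline_ct - (baseline_ct - peak_ct) * age_days / peak_day
    rising = peak_ct + (age_days - peak_day) * 1.0
    ct = np.where(age_days <= peak_day, falling, rising)
    return np.clip(ct + rng.normal(0, 2, size=age_days.shape), 10, baseline_ct)

def sample_cts(growth_rate, n=5000, horizon=35):
    """Times since infection are exponentially tilted by the growth rate:
    growing epidemics are dominated by recent (high-viral-load) infections."""
    ages = np.arange(horizon)
    weights = np.exp(-growth_rate * ages)      # incidence grows as e^(r*t)
    drawn = rng.choice(ages, size=n, p=weights / weights.sum()).astype(float)
    cts = ct_given_age(drawn)
    return cts[cts < 40]                       # keep detectable samples only

for r, label in [(0.1, "growing"), (-0.1, "declining")]:
    cts = sample_cts(r)
    print(f"{label:9s} epidemic: median Ct = {np.median(cts):.1f}")
```

Running it shows a lower median Ct while the toy epidemic grows and a higher median while it declines, the same signal the authors formalize to estimate the reproductive number.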
Beware explanations from AI in health care
Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for the diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

It is important to first distinguish explainable from interpretable AI/ML. These are two very different types of algorithms with different ways of dealing with the problem of opacity—that AI predictions generated from a black box undermine trust, accountability, and uptake of AI. A typical AI/ML task requires constructing an algorithm that can take a vector of inputs (for example, pixel values of a medical image) and generate an output pertaining to, say, disease occurrence (for example, cancer diagnosis). The algorithm is trained on past data with known labels, which means that the parameters of a mathematical function that relate the inputs to the output are estimated from those data. When we refer to an algorithm as a “black box,” we mean that the estimated function relating inputs to outputs is not understandable at an ordinary human level (owing to, for example, the function relying on a large number of parameters, complex combinations of parameters, or nonlinear transformations of parameters).

Interpretable AI/ML (which is not the subject of our main criticism) does roughly the following: Instead of using a black-box function, it uses a transparent (“white-box”) function that is in an easy-to-digest form, for example, a linear model whose parameters correspond to additive weights relating the input features and the output, or a classification tree that creates an intuitive rule-based map of the decision space. Such algorithms have been described as intelligible (6) and decomposable (7). The interpretable algorithm may not be immediately understandable by everyone (even a regression requires a bit of background on linear relationships, for example, and can be misconstrued). However, the main selling point of interpretable AI/ML algorithms is that they are open, transparent, and capable of being understood with reasonable effort. Accordingly, some scholars argue that, under many conditions, only interpretable algorithms should be used, especially when they are used by governments for distributing burdens and benefits (8).
However, requiring interpretability would create an important change to ML as it is practiced today—essentially, that we forgo deep learning altogether and whatever benefits it may entail.

Explainable AI/ML is very different, even though the two approaches are often grouped together. Explainable AI/ML, as the term is typically used, does roughly the following: Given a black-box model that is used to make predictions or diagnoses, a second explanatory algorithm finds an interpretable function that closely approximates the outputs of the black box. This second algorithm is trained by fitting the predictions of the black box, not the original data, and it is typically used to develop post hoc explanations for the black-box outputs rather than to make actual predictions, because it is typically not as accurate as the black box (a toy sketch of this surrogate-fitting procedure appears below). The explanation might, for instance, be given in terms of which attributes of the input data in the black-box algorithm matter most to a specific prediction, or it may offer an easy-to-understand linear model that gives similar outputs as the black-box algorithm for the same given inputs (4). Other models, such as so-called counterfactual explanations or heatmaps, are also possible (9, 10). In other words, explainable AI/ML ordinarily finds a white box that partially mimics the behavior of the black box, which is then used as an explanation of the black-box predictions.

Three points are important to note: First, the opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one. Second, the white-box approximation to the black box cannot be perfect, because if it were, there would be no difference between the two; it also aims not at accuracy but at fitting the black box, often only locally. Finally, the explanations provided are post hoc. This is unlike interpretable AI/ML, where the explanation is given by the exact same function that is responsible for generating the output and is known and fixed ex ante for all inputs.

A substantial proportion of AI/ML-based medical devices that have so far been cleared or approved by the US Food and Drug Administration (FDA) use noninterpretable black-box models, such as deep learning (1). This may be because black-box models are deemed to perform better in many health care applications, which often involve massively high-dimensional data, such as image recognition or genetic prediction. Whatever the reason, requiring an explanation of black-box AI/ML systems in health care at present entails using post hoc explainable AI/ML models, and this is what we caution against here.

Explainable algorithms have been a relatively recent area of research, and much of the focus of tech companies and researchers has been on the development of the algorithms themselves—the engineering—and not on the human factors affecting the final outcomes. The prevailing argument for explainable AI/ML is that it facilitates user understanding, builds trust, and supports accountability (3, 4). Unfortunately, current explainable AI/ML algorithms are unlikely to achieve these goals—at least in health care—for several reasons.

### Ersatz understanding

Explainable AI/ML (unlike interpretable AI/ML) offers post hoc, algorithmically generated rationales of black-box predictions, which are not necessarily the actual reasons behind those predictions or related causally to them.
Three points are important to note. First, the opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one. Second, the white-box approximation to the black box cannot be perfect; if it were, there would be no difference between the two. Nor is the approximation optimized for accuracy on the underlying data: it is optimized for fidelity to the black box, often only locally. Third, the explanations provided are post hoc. This is unlike interpretable AI/ML, where the explanation is given by the exact same function that generates the output, a function that is known and fixed ex ante for all inputs.

A substantial proportion of the AI/ML-based medical devices that have so far been cleared or approved by the US Food and Drug Administration (FDA) use noninterpretable black-box models, such as deep learning (1). This may be because black-box models are deemed to perform better in many health care applications, which often involve massively high-dimensional data, such as image recognition or genetic prediction. Whatever the reason, requiring an explanation of black-box AI/ML systems in health care at present entails using post hoc explainable AI/ML models, and this is what we caution against here.

Explainable algorithms are a relatively recent area of research, and much of the focus of tech companies and researchers has been on developing the algorithms themselves (the engineering), not on the human factors that shape the final outcomes. The prevailing argument for explainable AI/ML is that it facilitates user understanding, builds trust, and supports accountability (3, 4). Unfortunately, current explainable AI/ML algorithms are unlikely to achieve these goals, at least in health care, for several reasons.

### Ersatz understanding

Explainable AI/ML (unlike interpretable AI/ML) offers post hoc, algorithmically generated rationales of black-box predictions, which are not necessarily the actual reasons behind those predictions or causally related to them. Accordingly, the apparent advantage of explainability is “fool's gold”: post hoc rationalizations of a black box are unlikely to contribute to our understanding of its inner workings. Instead, we are likely left with the false impression that we understand it better. We call the understanding that comes from post hoc rationalizations “ersatz understanding.” And unlike interpretable AI/ML, where one can confirm the quality of explanations of the AI/ML outcomes ex ante, there is no such guarantee for explainable AI/ML: it is not possible to ensure ex ante that, for any given input, the explanations generated will be understandable by the user of the associated output. Because it does not provide understanding in the sense of opening up the black box or revealing its inner workings, this approach is not guaranteed to improve trust or to allay underlying moral, ethical, or legal concerns. There are some circumstances in which ersatz understanding may not be a problem. For example, researchers may find it helpful to generate testable hypotheses through many different approximations to a black-box algorithm in order to advance research or improve an AI/ML system. But this is a very different situation from regulators requiring AI/ML-based medical devices to be explainable as a precondition of their marketing authorization.

### Lack of robustness

For an explainable algorithm to be trusted, it needs to exhibit some robustness. By this we mean that the explainability algorithm should ordinarily generate similar explanations for similar inputs. However, for a very small change in input (for example, in a few pixels of an image), an approximating explainable AI/ML algorithm might produce very different, and possibly competing, explanations, and such differences may not be justifiable or understood even by experts. A doctor using such an AI/ML-based medical device would naturally question that algorithm. The sketch below illustrates the concern.
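As a hedged illustration, the following sketch fits local linear surrogates, in the spirit of LIME, around two nearly identical inputs. Both the tiny input shift and the resampling of the random neighborhood can change the reported feature weights; the model, data, and neighborhood parameters are all assumptions for illustration.

```python
# Sketch of the robustness concern: local linear surrogates fit around two
# nearly identical inputs can yield different feature weights, both because
# the input moved slightly and because the neighborhood is resampled.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (np.tanh(X[:, 0] * X[:, 1]) + X[:, 2] ** 2 > 0.5).astype(int)
black_box = RandomForestClassifier(random_state=2).fit(X, y)

def local_explanation(x, n=200, scale=0.1):
    """Fit a linear surrogate to the black box in a small neighborhood of x."""
    neighborhood = x + rng.normal(scale=scale, size=(n, x.size))
    probs = black_box.predict_proba(neighborhood)[:, 1]
    return LinearRegression().fit(neighborhood, probs).coef_

x = X[0]
x_nearby = x + 1e-3  # an almost indistinguishable input
print(np.round(local_explanation(x), 2))
print(np.round(local_explanation(x_nearby), 2))  # weights can differ noticeably
```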
### Tenuous connection to accountability

It is often argued that explainable AI/ML supports algorithmic accountability: if the system makes a mistake, the thought goes, it will be easier to retrace our steps and delineate what led to the mistake and who is responsible. Although this is generally true of interpretable AI/ML systems, which are transparent by design, it is not true of explainable AI/ML systems, whose explanations are post hoc rationales that only imperfectly approximate the function that actually drove the decision. In this sense, explainable AI/ML systems can obfuscate our investigation into a mistake rather than help us understand its source. The relationship between explainability and accountability is further attenuated by the fact that modern AI/ML systems rely on multiple components, each of which may be a black box in and of itself, thereby requiring a fact finder or investigator to identify, and then combine, a sequence of partial post hoc explanations. Thus, linking explainability to accountability may prove to be a red herring.

Explainable AI/ML systems not only are unlikely to produce the benefits usually touted of them but also come with additional costs, as compared with interpretable systems or with using black-box models alone without attempting to rationalize their outputs.

### Misleading in the hands of imperfect users

Even when explanations seem credible, or nearly so, combining them with the prior beliefs of imperfectly rational users may still drive those users further away from a real understanding of the model. For example, the average user is vulnerable to narrative fallacies, in which explanations are combined and reframed in misleading ways. The long history of medical reversals (the discovery that a medical practice did not work all along, either failing to achieve its intended goal or carrying harms that outweighed the benefits) provides examples of the risks of narrative fallacy in health care. Relatedly, explanations in the form of deceptively simple post hoc rationales can engender a false sense of (over)confidence. This can be further exacerbated by users' inability to reason with probabilistic predictions, which AI/ML systems often provide (11), or by users' undue deference to automated processes (2). All of this is made more challenging because explanations have multiple audiences, and it would be difficult to generate explanations that are helpful for all of them.

### Underperforming in at least some tasks

If regulators decide that the only algorithms that can be marketed are those whose predictions can be explained with reasonable fidelity, they thereby limit developers to a certain subset of AI/ML algorithms. For example, highly nonlinear models that are harder to approximate over a sufficiently large region of the data space may be prohibited under such a regime. This will be fine in cases where complex models, such as deep learning or ensemble methods, do not particularly outperform their simpler counterparts, cases characterized by fairly structured data and meaningful features, such as predictions based on relatively few patient medical records (8). But in others, especially in cases of massively high dimensionality, such as image recognition or genetic sequence analysis, limiting oneself to algorithms that can be explained sufficiently well may unduly limit model complexity and undermine accuracy.

If explainability should not be a strict requirement for AI/ML in health care, what then? Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness, in particular: How does it perform in the hands of its intended users? To accomplish this, regulators should place more emphasis on well-designed clinical trials, at least for some higher-risk devices, and less on whether the AI/ML system can be explained (12). So far, most AI/ML-based medical devices have been cleared by the FDA through the 510(k) pathway, which requires only that substantial equivalence to a legally marketed (predicate) device be demonstrated, usually without any clinical trials (13).

Another approach is to give individuals added flexibility when they interact with a model, for example, by allowing them to request AI/ML outputs for variations of inputs or with additional data. This encourages buy-in from users and tests the model's robustness, which we think is more closely tied to building trust; it is a different approach from trying to provide insight into a model's inner workings. Such interactive processes are not new in health care, and their design may depend on the specific application. One example is the use of computer decision aids for shared decision-making in antenatal counseling at the limits of gestational viability. A neonatologist and the prospective parents might use the decision aid together to show how various uncertainties affect the “risk:benefit ratios of resuscitating an infant at the limits of viability” (14). This reflects a phenomenon for which there is growing evidence: allowing individuals to interact with an algorithm reduces “algorithmic aversion” and makes them more willing to accept the algorithm's predictions (2).
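A what-if interaction of this kind could look roughly like the following sketch, in which a user sweeps one input of a hypothetical risk model and watches the prediction respond. The model, features, and helper function are invented for illustration.

```python
# Sketch of an interactive "what-if" query: rather than receiving a post hoc
# rationale, the user varies one input and observes how the model's predicted
# risk responds. The model, features, and helper are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 3))                  # three hypothetical clinical inputs
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)
model = GradientBoostingClassifier(random_state=3).fit(X, y)

def what_if(x, feature, values):
    """Predicted risk as a single feature is swept across candidate values."""
    variants = np.tile(x, (len(values), 1))
    variants[:, feature] = values
    return model.predict_proba(variants)[:, 1]

patient = X[0].copy()
sweep = np.linspace(-2, 2, 5)                  # counterfactual values for input 0
for v, p in zip(sweep, what_if(patient, 0, sweep)):
    print(f"input_0 = {v:+.1f} -> predicted risk {p:.2f}")
```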
### From health care to other settings

Our argument is targeted particularly at health care. This is partly because health care applications tend to rely on massively high-dimensional predictive algorithms, for which loss of accuracy is particularly likely if one insists on good black-box approximations with simple enough explanations, and because the expertise of users varies widely. Moreover, the costs of misclassification and the potential harm to patients are higher in health care than in many other sectors. Finally, health care traditionally has multiple ways of demonstrating the reliability of a product or process, even in the absence of explanations. This is true of many FDA-approved drugs. We might think of medical AI/ML as more like a credence good, where the epistemic warrant for its use is trust in someone else rather than an understanding of how it works. For example, many physicians may be quite ignorant of the underlying clinical trial design or results that led the FDA to conclude that a certain prescription drug was safe and effective, but their knowledge that it has been FDA-approved, and that other experts further scrutinize and use it, supplies the necessary epistemic warrant for trusting the drug. Insofar as other domains share some of these features, however, our argument may apply more broadly and hold lessons for regulators outside health care as well.

### When interpretable AI/ML is necessary

Health care is a vast domain. Many AI/ML predictions are made to support diagnosis or treatment. For example, Biofourmis's RhythmAnalytics is a deep neural network architecture trained on electrocardiograms to predict more than 15 types of cardiac arrhythmias (15). In cases like this, accuracy matters a great deal, and understanding matters less when a black box achieves higher accuracy than a white box. Other medical applications, however, are different. Imagine, for example, an AI/ML system that uses predictions about the extent of a patient's kidney damage to determine who will be eligible for a limited number of dialysis machines. When there are overarching concerns of justice, that is, concerns about how we should fairly allocate resources, ex ante transparency about how decisions are made can be particularly important or may be required by regulators. In such cases, the best standard is simply to use interpretable AI/ML from the outset, with clear, predetermined procedures and reasons for how decisions are taken. In such contexts, even if interpretable AI/ML is less accurate, we may prefer to trade off some accuracy as the price we pay for procedural fairness.
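One way to picture such ex ante transparency is a fixed, published scoring rule with a predetermined cutoff, as in the sketch below. The features, weights, and threshold are hypothetical placeholders, not a clinical proposal.

```python
# Sketch of an ex ante transparent allocation rule: the score is a fixed,
# published additive formula and the eligibility threshold is predetermined,
# so every decision can be audited before and after it is made. The features,
# weights, and threshold are hypothetical placeholders.

WEIGHTS = {"egfr_decline": 0.6, "creatinine_rise": 0.3, "comorbidity_index": 0.1}
THRESHOLD = 0.5  # cutoff fixed before any individual case is decided

def eligibility_score(patient: dict) -> float:
    """Additive score; every term is inspectable by patients and regulators."""
    return sum(weight * patient[feature] for feature, weight in WEIGHTS.items())

patient = {"egfr_decline": 0.8, "creatinine_rise": 0.4, "comorbidity_index": 0.2}
score = eligibility_score(patient)
print(f"score = {score:.2f} -> eligible: {score >= THRESHOLD}")
```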
We argue that the current enthusiasm for explainability in health care is likely overstated: its benefits are not what they appear, and its drawbacks are worth emphasizing. For AI/ML-based medical devices at least, it may be preferable not to treat explainability as a hard-and-fast requirement but to focus instead on safety and effectiveness. Health care professionals should be wary of explanations that are provided to them for black-box AI/ML models. They should strive to understand AI/ML systems to the extent possible and to educate themselves about how AI/ML is transforming the health care landscape, but requiring explainable AI/ML seldom contributes to that end.

1. S. Benjamens, P. Dhunnoo, B. Meskó, NPJ Digit. Med. 3, 118 (2020).
2. B. J. Dietvorst, J. P. Simmons, C. Massey, Manage. Sci. 64, 1155 (2018).
3. A. F. Markus, J. A. Kors, P. R. Rijnbeek, J. Biomed. Inform. 113, 103655 (2021).
4. M. T. Ribeiro, S. Singh, C. Guestrin, in KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2016), pp. 1135–1144.
5. S. Gerke, T. Minssen, I. G. Cohen, in Artificial Intelligence in Healthcare, A. Bohr, K. Memarzadeh, Eds. (Elsevier, 2020), pp. 295–336.
6. Y. Lou, R. Caruana, J. Gehrke, in KDD '12: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, 2012), pp. 150–158.
7. Z. C. Lipton, ACM Queue 16, 1 (2018).
8. C. Rudin, Nat. Mach. Intell. 1, 206 (2019).
9. D. Martens, F. Provost, Manage. Inf. Syst. Q. 38, 73 (2014).
10. S. Wachter, B. Mittelstadt, C. Russell, Harv. J. Law Technol. 31, 841 (2018).
11. R. M. Hamm, S. L. Smith, J. Fam. Pract. 47, 44 (1998).
12. S. Gerke, B. Babic, T. Evgeniou, I. G. Cohen, NPJ Digit. Med. 3, 53 (2020).
13. U. J. Muehlematter, P. Daniore, K. N. Vokinger, Lancet Digit. Health 3, e195 (2021).
14. U. Guillen, H. Kirpalani, Semin. Fetal Neonatal Med. 23, 25 (2018).
15. Biofourmis, RhythmAnalytics (2020); www.biofourmis.com/solutions/.

Acknowledgments: We thank S. Wachter for feedback on an earlier version of this manuscript. All authors contributed equally to the analysis and drafting of the paper. Funding: S.G. and I.G.C. were supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). I.G.C. was also supported by Diagnosing in the Home: The Ethical, Legal, and Regulatory Challenges and Opportunities of Digital Home Health, a grant from the Gordon and Betty Moore Foundation (grant agreement number 9974). Competing interests: S.G. is a member of the Advisory Group–Academic of the American Board of Artificial Intelligence in Medicine. I.G.C. serves as a bioethics consultant for Otsuka on their Abilify MyCite product. I.G.C. is a member of the Illumina ethics advisory board. I.G.C. serves as an ethics consultant for Dawnlight. The authors declare no other competing interests.