The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: the inevitability of their development; the prevention of harm; the necessity of public trust; the prevention of immoral use; the claim that such machines are better moral reasoners than humans; and the prospect that building them would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
Three-quarters of patients would recommend artificial intelligence as a component of clinical decision-making for skin cancer, according to a survey. "The use of artificial intelligence (AI) is expanding throughout the field of medicine," Caroline A. Nelson, MD, of the department of dermatology at Yale School of Medicine, and colleagues wrote, adding that researchers are investigating the use of AI in classifying skin lesions. "Although AI is poised to change how patients engage in health care, patient perspectives remain poorly understood." The researchers conducted semi-structured interviews with 48 patients in general dermatology clinics at Brigham and Women's Hospital and melanoma clinics at Dana-Farber Cancer Institute to determine how patients think about the risks, benefits, strengths and weaknesses of AI as it pertains to skin cancer screening. They also aimed to determine how patients feel about the differences between human and AI clinical decision-making.
The human brain has 100 billion neurons, connected to each other in networks that allow us to interpret the world around us, plan for the future, and control our actions and movements. MIT neuroscientist Sebastian Seung wants to map those networks, creating a wiring diagram of the brain that could help scientists learn how we each become our unique selves. In a paper appearing in the Aug. 7 online edition of Nature, Seung and collaborators at MIT and the Max Planck Institute for Medical Research in Germany have reported their first step toward this goal: Using a combination of human and artificial intelligence, they have mapped all the wiring among 950 neurons within a tiny patch of the mouse retina. Composed of neurons that process visual information, the retina is technically part of the brain and is a more approachable starting point, Seung says. The team also identified a previously unknown type of retinal cell.
Ten years ago, Nobel laureate Sydney Brenner remarked, "We don't have to search for a model organism anymore. Because we are the model organisms" (1). Indeed, over the past decade, we have deepened our understanding not only of how the genomic blueprint for human biology manifests physical and chemical characteristics (phenotype), but also of how traits can change in response to the environment. A better grasp of the dynamic relationship between genes and the environment may truly sharpen our ability to determine disease risk and response to therapy. A collection of human phenotypic data, and its integration with "omic" information (genomic, proteomic, transcriptomic, epigenomic, microbiomic, and metabolomic, among others), along with remote-sensing data, could provide extraordinary opportunities for discovery.
AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict, healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision, and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI "black box"?