Hippocratic Oath
Do scientists need an AI Hippocratic oath? Maybe. Maybe not. - Bulletin of the Atomic Scientists
When a lifelike Hanson Robotics robot named Sophia[1] was asked whether it would destroy humans, it replied, "Okay, I will destroy humans." Philip K. Dick, another humanoid robot, has promised to keep humans "warm and safe in my people zoo." And Bina48, another lifelike robot, has expressed that it wants "to take over all the nukes." All of these robots were powered by artificial intelligence (AI)--algorithms that learn from data, make decisions, and perform tasks without human input or even, in some cases, human understanding. And while none of these AIs have followed through with their nefarious plots, some scientists, including the late physicist Stephen Hawking, have warned that super-intelligent, AI-powered computers could harbor and achieve goals that conflict with human life. "You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green-energy project, and there's an anthill in the region to be flooded, too bad for the ants," Hawking once said.
A Hippocratic Oath for your AI doctor
A broad new report from the World Health Organization (WHO) lays out ethical principles for the use of artificial intelligence in medicine. Why it matters: Health is one of the most promising areas of expansion for AI, and the pandemic only accelerated the adoption of machine learning tools. But adding algorithms to health care will require that AI can follow the most basic rule of human medicine: "Do no harm" -- and that won't be simple. Driving the news: After nearly two years of consultations by international experts, the WHO report makes the case that the use of AI in medicine offers great promise for both rich and poorer countries, but "only if ethics and human rights are put at the heart of its design, deployment and use," the authors write. Between the lines: The power of AI in health care is also its peril -- the ability to rapidly process vast quantities of data and identify meaningful and actionable patterns far faster than human experts could.
Can we create AI ethics before we finish creating AI?
Sometimes in today's tech-driven world, in the race to be first with the next big breakthrough, it can seem like we've ended up retrofitting the rules and regulations for how these innovations (platforms, media, devices) should operate only once we are too dependent on them and on all the positive roles they play -- once they are an integral part of our society, or have changed it altogether. What would happen if we could instead create the principles and guidelines for an entire innovation or industry before it becomes standard, before it's even fully invented? Actually, in healthcare it's required. Doctors have long taken the Hippocratic Oath. Those who work in the field do so in service of the patient.
Do We Need a Hippocratic Oath for Data Science? RealClearScience
I swear by Hypatia, by Lovelace, by Turing, by Fisher (and/or Bayes), and by all the statisticians and data scientists, making them my witnesses, that I will carry out, according to my ability and judgement, this oath and this indenture. Could this be the first line of a "Hippocratic Oath" for mathematicians and data scientists? Hannah Fry, Associate Professor in the mathematics of cities at University College London, argues that mathematicians and data scientists need such an oath, just like medical doctors who swear to act only in their patients' best interests. "In medicine, you learn about ethics from day one. It has to be there from day one and at the forefront of your mind in every step you take," Fry argued. But is a tech version of the Hippocratic Oath really required?
The Life-Threatening Consequences of Overhyping AI
On February 11, The New York Times published a story with the headline "AI Shows Promise Assisting Physicians." While the article focused on a scientific paper showing how an artificial intelligence system could help doctors diagnose certain conditions, it missed a key part of the AI story: Accuracy does not equal impact. Arijit Sengupta is founder and CEO of Aible, a stealth-mode startup that creates AI for businesses. As the Times wrote, the AI software "was more than 90 percent accurate at diagnosing asthma; the accuracy of physicians in the study ranged from 80 to 94 percent. In diagnosing gastrointestinal disease, the system was 87 percent accurate, compared with the physicians' accuracy of 82 to 90 percent."
Artificial intelligence and the future of medicine
Washington University researchers are working to develop artificial intelligence (AI) systems for health care, which have the potential to transform the diagnosis and treatment of diseases, helping to ensure that patients get the right treatment at the right time. In a new Viewpoint article published Dec. 10 in the Journal of the American Medical Association (JAMA), two AI experts at Washington University School of Medicine in St. Louis--Philip Payne, PhD, the Robert J. Terry Professor and director of the Institute for Informatics; and Thomas M. Maddox, MD, a professor of medicine and director of the Health Systems Innovation Lab--discuss the best uses for AI in health care and outline some of the challenges of implementing the technology in hospitals and clinics. In health care, artificial intelligence relies on the power of computers to sift through and make sense of reams of electronic data about patients--such as their ages, medical histories, health status, test results, medical images, DNA sequences, and many other sources of health information. AI excels at the complex identification of patterns in these reams of data, and it can do this at a scale and speed beyond human capacity. The hope is that this technology can be harnessed to help doctors and patients make better health-care decisions.