The wife of a missing man who was found by a police drone, submerged up to his armpits in mud, said it was "a miracle" he was alive. A major search was launched for Peter Pugh, 75, from Brancaster, Norfolk, after he disappeared following a beach walk on Saturday at 17:10 BST. It was only when the drone was sent up on Sunday that Mr Pugh was spotted in a muddy creek at Titchwell Marshes. Police said the technology was key to the rescue operation. Mr Pugh's wife, Felicity, said her husband, who is still in hospital in King's Lynn being treated for hypothermia, was "slightly bemused" by what had happened.
"In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity," Hawking said this week at the opening of the Leverhulme Centre for the Future of Intelligence in Cambridge, England. "We do not yet know which." The LCFI, which opened Monday, is part of the University of Cambridge's Centre for the Study of Existential Risk. Its goal is to answer some of the biggest questions facing the rapidly advancing field, including what it all means and how to keep AI from killing us. In the past, Hawking has warned that AI could end mankind.
Everywhere you look now, some form of artificial intelligence is appearing. Whether it's to make a process more efficient or to keep humans safe and away from danger, robots are creeping in at every chance they get, and this is expected to continue for many years to come. Now, a new centre has been launched in Cambridge, England, that will study AI more closely, along with the implications that come with these marvelous machines. The Centre for the Future of Intelligence (CFI) has one aim: "to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades." It's a collaboration between four top universities: Cambridge, Oxford, Imperial, and Berkeley, with the full backing and support of the Leverhulme Trust.
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: the inevitability of their development; the prevention of harm; the necessity of public trust; the prevention of immoral use; the claim that such machines are better moral reasoners than humans; and the prospect that building them would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, the paper demonstrates that the critiqued reasons are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.