From information to I, Robot: the reality of AI ethics

#artificialintelligence

But Vallor – a leading American scholar of the ethics of data and artificial intelligence, shortly to move to Edinburgh University – reckons we should be concerned with military robots because of the people who may control them. Science fiction writers have fretted for decades about the moral philosophy of smart robots. What we have not done so much, at least in popular culture, is think about the ethics of the dumb humans who will suddenly have control of vast amounts of artificial intelligence. That is where thinkers like Vallor come in.


AI's Ethical Imperative @ Stanford Computer Forum – R&D

#artificialintelligence

The Stanford Computer Forum provides a platform to discuss the latest advancements in computer science and engineering. At its April meeting, I realized that we have come a long way since Asimov's three laws of robotics. The questions surrounding robotics have moved well beyond the dystopian vision of robotic overlords and apocalyptic notions of cyborgs, toward the more practical matter of augmenting society. Isaac Asimov, the master of the science-fiction genre and author of the Foundation series, devised three laws to protect humans in their interactions with robots. At the Computer Forum meetup, guest speaker Shannon Vallor, Professor and Department Chair of Philosophy at Santa Clara University, spoke about artificial intelligence's ethical imperative and how to humanize machine values.


Can Artificial Intelligence Increase Our Morality?

#artificialintelligence

In discussions of AI ethics, there's a lot of talk of designing "ethical" algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans' morality, our own capacity to behave virtuously? That's the subject of a talk on "AI and Moral Self-Cultivation" given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on "Character, Social Connections and Flourishing in the 21st Century," hosted by Templeton World Charity Foundation, in Nassau, The Bahamas.



INFLUENCE - Why better tech requires better humans

#artificialintelligence

Here at Thwaites we are lucky enough to have not one but two offices – our Shoreditch HQ, and our Northern home at The Federation in Manchester. There we share co-working space with lots of brilliant digital and tech firms that have signed up to a pledge outlining a broad set of values that chime with ours – to be open, honest and ethical. As well as providing a great space to work, The Federation in Manchester also gives us access to excellent talks by leading speakers from around the world – most recently The Federation Presents series, which explored ethics in the tech industry and wider society. Naturally, one of the topics that has arisen (more than once) is AI and the seemingly boundless scope of machine learning. Yet despite the many ways in which intelligent systems can transform our lives for the better, there is still an underlying mistrust.