ethical black box


A Simulated real-world upper-body Exoskeleton Accident and Investigation

Winfield, Alan, Webb, Nicola, Etoundi, Appolinaire, Derval, Romain, Salvini, Pericle, Jirotka, Marina

arXiv.org Artificial Intelligence

This paper describes the enactment and investigation of a simulated (mock) accident involving an upper-body exoskeleton. The accident scenario is enacted by role-playing volunteers, one of whom wears the exoskeleton. Following the mock accident, investigators - also volunteers - interview both the subject of the accident and relevant witnesses. The investigators then consider the witness testimony alongside robot data logged by the ethical black box in order to address three key questions: What happened? Why did it happen? And how can we make changes to prevent the accident from happening again? This simulated accident scenario is one of a series we have run as part of the RoboTIPS project, whose overall aim is to develop and test both processes and technologies to support social robot accident investigation.


A draft open standard for an Ethical Black Box

Robohub

About 5 years ago we proposed that all robots should be fitted with the robot equivalent of an aircraft Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box (EBB). We argued that an ethical black box will play a key role in the processes of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility. Since then, within the RoboTIPS project, we have developed and tested several model EBBs, including one for an e-puck robot that I wrote about in this blog, and another for the MIRO robot. With some experience under our belts, we have now drafted an Open Standard for the EBB for social robots – initially as a paper submitted to the International Conference on Robots Ethics and Standards.
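The EBB described above continuously records sensor readings and relevant internal status data. As a rough illustration of what one such record might look like, here is a minimal sketch; the field names and the JSON Lines file layout are my own assumptions, not taken from the draft open standard:

```python
import json
import time


def ebb_record(sensor_readings, internal_status):
    """Build one timestamped EBB record.

    Field names here are illustrative assumptions,
    not those of the draft open standard.
    """
    return {
        "timestamp": time.time(),       # when this cycle was captured
        "sensors": sensor_readings,     # e.g. IR, bumper, camera summaries
        "status": internal_status,      # relevant internal state, e.g. mode
    }


def append_record(path, record):
    # One record per line (JSON Lines): a partially written log
    # remains readable up to the last complete line.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only, line-oriented log is a natural choice for a recorder that must survive the very failures it is meant to document.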


How to investigate when a robot causes an accident – and why it's important that we do

Oxford Comp Sci

Robots are featuring more and more in our daily lives. They can be incredibly useful (bionic limbs, robotic lawnmowers, or robots which deliver meals to people in quarantine), or merely entertaining (robotic dogs, dancing toys, and acrobatic drones). Imagination is perhaps the only limit to what robots will be able to do in the future. What happens, though, when robots don't do what we want them to – or do it in a way that causes harm? For example, what happens if a bionic arm is involved in a driving accident?




Back to Robot Coding part 2: the ethical black box

Robohub

In the last few days I started some serious coding - my first for 20 years, in fact, since I built the software for the BRL LinuxBots. My coding project is to start building an ethical black box (EBB) or, to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually the EBB is very simple: it is a data logger - the robot equivalent of an aircraft Flight Data Recorder, or an automotive Event Data Recorder. Nearly five years ago I made the case, with Marina Jirotka, that all robots (and AIs) should be fitted with an EBB as standard.
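Since the EBB is, conceptually, just a data logger with a bounded memory, a minimal sketch in Python might look like the following. Everything here - the class name, the field names, the capacity - is an illustrative assumption, not the actual RoboTIPS implementation:

```python
import json
import time
from collections import deque


class EthicalBlackBox:
    """Minimal EBB sketch: a fixed-capacity logger that keeps the most
    recent robot records, analogous to a flight data recorder.
    All names and fields are illustrative assumptions."""

    def __init__(self, capacity=1000):
        # A bounded deque silently discards the oldest record once
        # full, mirroring a flight recorder's rolling window.
        self.records = deque(maxlen=capacity)

    def log(self, sensors, actuators, status):
        self.records.append({
            "timestamp": time.time(),
            "sensors": sensors,      # e.g. proximity or joint readings
            "actuators": actuators,  # e.g. commanded motor values
            "status": status,        # internal state / decision label
        })

    def dump(self):
        # Serialise the whole window for an accident investigator.
        return json.dumps(list(self.records), indent=2)


# Usage: log one control cycle, then export for investigation.
ebb = EthicalBlackBox(capacity=3)
ebb.log({"proximity": 0.12}, {"left_motor": 0.5}, "avoiding obstacle")
report = ebb.dump()
```

The rolling window is the key design choice: like its aviation counterpart, the recorder need only preserve the period leading up to an incident, not the robot's entire operating history.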


Experts Want Robots to Have an "Ethical Black Box" That Explains Their Decision-Making

#artificialintelligence

Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of human-centred computing at Oxford University, believe robots should be fitted with an "ethical black box". This would be the ethics equivalent of the aviation safety measure of the same name, designed to track a pilot's decisions and enable investigators to follow those actions in the event of an accident. As robots leave the controlled settings of factories and laboratories to interact more with humans, safety measures of this nature will become increasingly important. Winfield and Jirotka argue that robotics firms should emulate the example of the aviation industry, which owes its safety record not just to technology and design, but also to stringent safety protocols and accident investigation. That industry introduced both black boxes and cockpit voice recorders so that accident investigators could determine the causes of crashes and draw critical lessons in prevention and safety.


Robots should be equipped with an ethical black box

Daily Mail - Science & tech

Science fiction is littered with nightmare visions of robots turning against their creators, but a proposed device could help us keep track of their decision making. Intelligent machines of the future could be equipped with recorders, similar to the black box on an aircraft, that would capture their ethical behaviour. This 'ethical black box' would allow experts to analyse what went wrong in the event of an accident or malfunction. While it may not be enough to stop a robot from harming a human, albeit accidentally, it could prove useful in helping to avoid repeat incidents.