
Collaborating Authors

There's Still Work to Do Addressing Ethics in Autonomous Vehicles

#artificialintelligence

There's a significant flaw in the way programmers are currently addressing ethical concerns related to artificial intelligence (AI) and autonomous vehicles (AVs). Namely, existing approaches don't account for the fact that people might try to use the AVs to do something bad. For example, imagine that there is an autonomous vehicle with no passengers and it is about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian. Most discussions of ethics in this scenario focus on whether the autonomous vehicle's AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own, according to Veljko Dubljević, an assistant professor in the Science, Technology & Society program at North Carolina State University.


AI for self-driving cars doesn't account for crime - Futurity

#artificialintelligence

Existing approaches to artificial intelligence for self-driving cars don't account for the fact that people might try to use the autonomous vehicles to do something bad, researchers report. For example, let's say that there is an autonomous vehicle with no passengers and it's about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian. Most discussions of ethics in this scenario focus on whether the autonomous vehicle's AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own. "Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that," says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward. "For example, what if the five people in the car are terrorists?


The ethics of autonomous automobiles

#artificialintelligence

For instance, let's say that there's an autonomous car with no passengers and it's about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian. Most discussions of ethics in this scenario focus on whether the autonomous car's AI should be selfish (protecting the car and its cargo) or utilitarian (choosing the action that harms the fewest people). "Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that," says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward. "For example, what if the five people in the car are terrorists? And what if they are deliberately taking advantage of the AI's programming to kill the nearby pedestrian or hurt other people? Then you might want the autonomous car to hit the car with five passengers. "In other words, the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn't account for malicious intent.


Talking About How We Talk About the Ethics of Artificial Intelligence

#artificialintelligence

If you want to understand how people are thinking (and feeling) about new technologies, it's important to understand how media outlets are thinking (and writing) about new technologies. A recent analysis of how journalists have dealt with the ethics of artificial intelligence (AI) suggests that reporters are doing a good job of grappling with a complex set of questions – but there's room for improvement. To learn more about the work, why they did it, and why it's important, we talked to the researchers who did the work: Veljko Dubljević, corresponding author of the paper and an assistant professor of philosophy at NC State; Allen Coin, co-author of the paper and a web strategist at NC State, who is earning a master's degree in liberal studies with a concentration in science, technology and society; and Leila Ouchchy, first author of the paper and a former undergrad at NC State. The paper, "AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media," was published in the journal AI & Society on March 29. The Abstract: This paper focuses, in part, on ethical issues related to AI technologies that people would use in their daily lives.

