Complex accident, clear responsibility
Allocating responsibility for autonomous-driving accidents is a difficult open problem in the field. Because of the complexity of autonomous-driving technology, most research on accident responsibility has remained at the theoretical level, yet real accidents demand a proven and fair solution. To address this gap, this study proposes a multi-subject responsibility-allocation optimization method based on the RCModel (Risk Chain Model), which analyzes each actor's responsibility from a technical perspective and promotes a more reasonable and fair allocation of responsibility.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- North America > United States > Arizona > Maricopa County > Tempe (0.04)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
Why we need clear boundaries and guidelines for AI
In recent news, AI has not come off well. IBM and Microsoft announced that they will stop selling facial recognition technology, one area of AI, with immediate effect. The real-life implications can be devastating: while such systems may be well trained to identify white faces, they often fail to differentiate Black faces. When used by law enforcement, this can lead to false accusations against People of Color. Critical press coverage, a lack of transparency, and unethical business practices have fueled distrust of emerging technologies around the world.
- Information Technology (0.37)
- Education (0.36)
Has AI Storytelling Become Myopic? Where Does Researchers' Responsibility Lie
A researcher recently laid out a controversial proposal to add a step to peer review for journals and conferences that would examine the societal consequences of any computer science research. In an interview published in Nature, Brent Hecht, an assistant professor at Northwestern University, director of the People, Space, and Algorithms Research Group, and chair of the ACM Future of Computing Academy, said that "peer reviewers must ensure that researchers consider negative societal consequences of their work." He also strongly believes the review process should require researchers to assess how their technology could be used in the future; if a paper lacks such an analysis, the journal should reject it. In March 2018, Hecht wrote a proposal titled It's Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process, in which he argued that the research community currently presents only the benefits of its work, as if research could have no negative impact on society.