Autonomous vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers, and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithmic decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs' perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss the steps taken to address these issues, highlight existing research gaps, and argue for mitigating these issues through the design of AV algorithms and of policies and regulations, so that AVs' benefits for smart and sustainable cities can be fully realised.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
The Matrix reached US cinemas just over 20 years ago and articulated society's fear of the power of artificial intelligence (AI) and its potential to overpower the human. The film taps into an ongoing human anxiety around technology and our ability to control it, best epitomised by Mary Shelley's 19th-century trope of Frankenstein's monster: the notion that we may well lose control of our own creations as we strive to push the boundaries of science. The human relationship with technology remains a fraught one, but there is little question that AI has the potential to be revolutionary. A McKinsey Global Institute study reported that in 2016 alone, between $8bn and $12bn was invested in the development of AI worldwide, and Goldstein Research predicts that by 2023, AI will be a $14bn industry. While few of us yet use driverless cars or interact regularly with the animated robots of another science fiction story, I, Robot, AI is nonetheless beginning to affect our daily lives.
Artificial intelligence (AI) is a technology that is increasingly being used in society and the economy worldwide, and its deployment is expected to become more prevalent in the coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this is accompanied by disquiet over problematic and dangerous implementations of AI, or indeed even AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine, and criminal justice. These developments have led to concerns about whether, and how, AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India, and the US.
With leaders increasingly seeing artificial intelligence (AI) as a driver of the next great economic expansion, a fear of missing out is spreading around the globe. Numerous nations have developed AI strategies to advance their capabilities through investment, incentives, talent development, and risk management. As AI's importance to the next generation of technology grows, many leaders worry that they will be left behind and will not share in the gains. There is a growing realization of AI's importance, including its ability to provide competitive advantage and change work for the better. A majority of global early adopters say that AI technologies are especially important to their business success today, a belief that is strengthening. A majority also say they are using AI technologies to move ahead of their competition, and that AI empowers their workforce. AI success depends on getting the execution right. Organizations often must excel at a wide range of practices to ensure AI success, including developing a strategy, pursuing the right use cases, building a data foundation, and cultivating a strong ability to experiment. These capabilities are critical now because, as AI becomes even easier to consume, the window for competitive differentiation will likely shrink. Early adopters from different countries display varying levels of AI maturity, enthusiasm, and experience: some are pursuing AI vigorously, while others are taking a more cautious approach.
However, in recent years symbolic AI has been complemented and sometimes replaced by (deep) neural networks and machine learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. That debate has primarily focused on principles (the 'what' of AI ethics: beneficence, non-maleficence, autonomy, justice, and explicability) rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on machine learning, but it is hoped that the results of this research will be readily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.