The convergence of vast amounts of big data, the speed and scale of cloud computing platforms, and the advancement of sophisticated machine learning algorithms has given birth to an array of innovations in Artificial Intelligence (AI). Applications that benefit from the implementation of AI systems in the public sector include the food supply chain, energy, and environmental management. The benefits that AI systems bring to society are grand, and so are the challenges and worries. The learning curve of these evolving technologies implies miscalculations and mistakes, which can result in unanticipated harmful impacts. We are living in times when the possibility of harm from AI systems must be recognized and addressed quickly. Identifying the potential risks posed by AI systems therefore means that a plan of measures to counteract them has to be adopted as soon as possible.
As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how current responsibility ascription practices might be adapted to meet such new technologies [19, 33]. A prominent viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap. These systems' autonomy challenges human control over them, while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent, scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37] -- e.g., they might "have a body to kick," yet they "have no soul to damn."
AI holds fantastic opportunities for large and small-to-medium organisations alike, and businesses are right to embrace them. Be it to improve back-office operations, maximise marketing efforts, or deploy predictive technologies to allocate resources more efficiently, algorithms have a lot to offer, and we are already seeing many organisations deploying AI systems. In talking with industry as well as policy makers, I notice that we all seem to share the same belief: innovation and ethics can go hand in hand. In fact, many believe that businesses that can utilise data, and do so ethically, have a clear competitive advantage. But how do we turn ethics into practice?
As machines become increasingly accurate and intelligent, we humans will need to sharpen our skills. One of your primary responsibilities as a Learning & Development leader is to sharpen your own skills and ensure that you empower the workforce to develop the four skills critical to thriving in 2030. I have compiled a series of articles titled eLearning Skills 2030 to explore the essential skills that will help you future-proof your career and lead your team. This article explores the skill of Digital Ethics: why it is critical, and what actionable steps you can take today to improve it. According to Brian Patrick Green, director of Technology Ethics at Santa Clara University, technology or digital ethics refers to applying ethical thinking and acting to the practical concerns of technology.
The starting point for any discussion about AI will almost certainly focus on how we should use it, and the advantages and disadvantages it could bring. Google's Sundar Pichai recently suggested AI could be used to help solve human problems -- a noble goal. How we use it to solve these problems, and ultimately how successful we will be, is going to depend on our ethics.