Artificial Intelligence (AI) is the software engine driving the Fourth Industrial Revolution. Its effects can be seen in homes, businesses and political processes. In its embodied form as robots, it will soon be driving cars, stocking warehouses and caring for the young and elderly. AI holds the promise of solving some of society's most pressing issues, but it also presents challenges: inscrutable "black box" algorithms, unethical use of data and potential job displacement. As rapid advances in machine learning (ML) expand the scope and scale of AI's deployment across daily life, and because the technology can learn and change on its own, multistakeholder collaboration is needed to build trust through accountability, transparency, privacy and impartiality.
Any new technology brings both advantages and risks. Where AI in financial institutions is concerned, many of the methods and skills used to counter the risks derive from already familiar safeguards. No technology should be adopted unless it has been thoroughly tested, the expected outcomes are understood by everyone involved, and systems are continuously monitored to confirm that those outcomes are being delivered. None of that is unique to AI; in many cases it will have been standard practice when adopting any new technology in the past. Likewise, it should come as little surprise that where training is required, it may be needed at every level of an institution.
Artificial intelligence (AI) is increasingly utilised across society and the economy worldwide, and its deployment is set to become more prevalent in the coming years. AI is ever more embedded in our lives, supplementing our pervasive use of digital technologies. But this growth is accompanied by disquiet over problematic and dangerous implementations of AI, or indeed AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have raised concerns about whether, and how, AI systems adhere and will continue to adhere to ethical standards. Those concerns have stimulated a global conversation on AI ethics and have prompted actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. These developments form the basis of our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
If a business isn't using AI, it is probably either claiming to use it or claiming that it is about to start any day now. Whatever problem your company faces, a solution powered by decision intelligence, machine learning or some other form of AI seems to be on offer. Yet beneath the marketing hype, many businesses can indeed benefit from this technology, provided they take the time to learn what it can (and can't) do for them and to understand the potential pitfalls. In essence, AI enables its users to do useful things with a large pool of data, for instance fishing out insights without tying up the time of data scientists. Data is therefore fundamental to AI.
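To make "fishing out insights" concrete, here is a toy sketch (the function name, data and threshold are all hypothetical, not from the source): a simple z-score test that automatically surfaces anomalies in a pool of figures, the kind of routine insight-extraction that would otherwise consume a data scientist's time.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mu) > threshold * sigma]

# Hypothetical daily sales figures with one obvious spike
daily_sales = [102, 98, 105, 99, 101, 240, 97, 103]
print(flag_anomalies(daily_sales))  # flags index 5, the spike
```

Real tooling is far more sophisticated, but the principle is the same: the value comes from the data, and the algorithm only automates the looking.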
AI governance is needed to monitor and evaluate algorithms for return on investment (ROI), bias, risk, effectiveness and other factors. Unfortunately, this process is rarely given due importance. CIOs acknowledge that enterprises often fail to take proper measures when drafting AI governance strategies, even though such measures are especially needed to monitor bias, risk and ROI. Enterprise leaders attribute this gap chiefly to the near-total lack of coordination around AI projects within an organization.
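As an illustration of what one automatable governance check might look like (a sketch under assumptions; the group names, sample data and the 0.8 "four-fifths" threshold are not from the source), the snippet below monitors a model's predictions for disparate positive-outcome rates across groups and flags cases for review.

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def disparate_impact(preds_by_group):
    """Ratio of the lowest group's positive-prediction rate to the
    highest. A value below ~0.8 is a common red flag for review."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical model outputs, keyed by demographic group
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25.0% positive
}

ratio = disparate_impact(preds)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for governance review")
```

A check like this is only useful if someone owns it: the coordination gap the paragraph describes means such metrics are often computed once, if at all, rather than tracked continuously.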