Artificial intelligence (AI) is the software engine that drives the Fourth Industrial Revolution. Its effects can be seen in homes, businesses and political processes. In its embodied form as robots, it will soon be driving cars, stocking warehouses and caring for the young and elderly. AI holds the promise of solving some of society's most pressing issues, but it also presents challenges such as inscrutable "black box" algorithms, unethical use of data and potential job displacement. As rapid advances in machine learning (ML) increase the scope and scale of AI's deployment across all aspects of daily life, and as the technology can learn and change on its own, multistakeholder collaboration is required to ensure accountability, transparency, privacy and impartiality, and thereby to build trust.
Any new technology brings both advantages and risks. Where AI in financial institutions is concerned, many of the methods and skills used to counter the risks derive from already familiar safeguards. No technology should be adopted unless it has been thoroughly tested, everyone understands the outcomes to expect, and systems are constantly monitored to ensure that those outcomes are being delivered. That is not unique to AI and, in many cases, will have been standard practice when adopting any new technology in the past. Similarly, it should come as little surprise that, where training is required, it may be needed at all levels within an institution.
Artificial intelligence (AI) is increasingly being utilised in society and the economy worldwide, and its use is expected to become more prevalent in the coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is accompanied by disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether, and how, AI systems do and will adhere to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for the research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
If a business isn't using AI, then it's either claiming to use it or claiming that it's about to start any day now. Whatever problem your company is facing, a solution powered by decision intelligence, machine learning or some other form of AI seems to be on offer. Yet beneath the marketing hype, the truth is that many businesses can indeed benefit from this technology – if they take the time to learn what it can (and can't) do for them and to understand the potential pitfalls. In essence, AI enables its users to do useful things with a large pool of data – for instance, to fish out insights without tying up the time of data scientists. Data is therefore fundamental to AI.
Mila – Quebec Artificial Intelligence Institute and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have joined forces on a book entitled Missing links in AI governance. Focused on the need for better governance of AI, the book comprises 18 chapters written by academics, civil society representatives, innovators and policymakers. It explores themes such as the influence of AI on Indigenous and LGBTI communities, the necessary inclusion of all countries in global governance, and the use of AI to support innovation for socially beneficial purposes, and it maps out possible ways to foster AI development that is ethical, inclusive and respectful of human rights. The authors also warn against the use of AI in potentially harmful contexts, such as autonomous weapons or the manipulation of digital content for social destabilization, and deplore the increasing centralization of decision-making power in the development of AI systems, the biases embedded in those systems, and the industry's lack of transparency and accountability.