AI not gaining ground in HR functions: Arvind Gupta of KPMG explains - ET CIO

#artificialintelligence

Arvind Gupta, Partner and Head, Management Consulting, KPMG in India, explains why HR functions are still reluctant to use AI, how workplace culture and digital transformation affect each other, and the CIO's role in all of this. Edited excerpts: What factors are causing reluctance in adopting AI in HR functions? One of the main challenges driving reluctance is that employee data is not held in a single location. In most cases, it is distributed across many different data sets, and the absence of even one set from the analytics can lead to a badly skewed estimate. Another challenge is that the world of HR is not one where black-and-white decisions work.


Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility and the ethical, legal, societal, and policy issues involved. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.


Auditing Algorithms for Bias

#artificialintelligence

In 1971, philosopher John Rawls proposed a thought experiment to understand the idea of fairness: the veil of ignorance. What if, he asked, we could erase our brains so we had no memory of who we were -- our race, our income level, our profession, anything that may influence our opinion? Who would we protect, and who would we serve with our policies? The veil of ignorance is a philosophical exercise for thinking about justice and society. But it can be applied to the burgeoning field of artificial intelligence (AI) as well.


Responsible AI takes more than good intentions - TechHQ

#artificialintelligence

Last month, 42 countries signed up to the OECD's common artificial intelligence (AI) principles. Just before that, the European Commission published its own ethics guidelines for trustworthy AI. In fact, to date, there has been a huge amount of work on ethical AI principles, guidelines and standards across different organizations, including IEEE, ISO and the Partnership on AI. On top of these principles, there is a growing body of work in the fairness, accountability, and transparency machine-learning community, with an expanding set of solutions for tackling bias from a quantitative perspective. Organizations and governments alike clearly recognize the importance of designing ethics into AI; there is no doubt about that.
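The quantitative side of this work can be made concrete with a small illustration. Below is a minimal sketch (not drawn from the article, with hypothetical names throughout) of one of the simplest fairness measures used in that community: demographic parity difference, the gap in positive-outcome rates between two groups.

```python
from typing import Sequence


def demographic_parity_difference(
    predictions: Sequence[int],
    groups: Sequence[str],
    group_a: str,
    group_b: str,
) -> float:
    """Gap in positive-prediction rates between group_a and group_b.

    A value near 0 suggests parity on this metric; a large absolute
    value flags a disparity worth auditing further.
    """
    def rate(g: str) -> float:
        # Positive-outcome rate within group g (guard against empty groups).
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))

    return rate(group_a) - rate(group_b)


# Toy data: a model that approves 80% of group "A" but only 40% of group "B".
preds = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps, "A", "B"))  # 0.8 - 0.4 ≈ 0.4
```

As the article's title suggests, metrics like this are only one ingredient: quantitative checks have to sit alongside principles, guidelines and governance to add up to responsible AI.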


Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust AI, which means being able to understand what the system is doing and why. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluating the moral bounds of an AI system based on monitoring its inputs and outputs. We place a "glass box" around the system by mapping moral values into explicit, verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box, we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the system's compliance with different interpretations of the same value. These advantages will have an impact on the well-being of AI system users at large, building their trust and providing them with concrete knowledge of how systems adhere to moral values.
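To make the abstract concrete, here is a minimal sketch of the glass-box idea under our own assumptions (all names are hypothetical; this is not the authors' implementation): abstract values are expressed as verifiable predicates (norms) over inputs and outputs, and a wrapper checks every call against them without inspecting the system's internals.

```python
from typing import Any, Callable, Dict, List, Tuple

# A norm pairs a human-readable description with a verifiable predicate.
Norm = Tuple[str, Callable[[Any], bool]]


class GlassBox:
    """Wraps an opaque system and enforces norms on its inputs and outputs."""

    def __init__(
        self,
        system: Callable[[Any], Any],
        input_norms: List[Norm],
        output_norms: List[Norm],
    ) -> None:
        self.system = system
        self.input_norms = input_norms
        self.output_norms = output_norms

    def __call__(self, x: Any) -> Any:
        for desc, holds in self.input_norms:
            if not holds(x):
                raise ValueError(f"input violates norm: {desc}")
        y = self.system(x)  # the system itself stays a black box
        for desc, holds in self.output_norms:
            if not holds(y):
                raise ValueError(f"output violates norm: {desc}")
        return y


# Example: the value "non-discrimination" interpreted as the concrete norm
# "decisions must not use a 'gender' field". The model is a stand-in for any
# opaque system, from a deep neural network to an agent-based one.
def loan_model(applicant: Dict[str, Any]) -> bool:
    return applicant["income"] > 30_000


boxed = GlassBox(
    loan_model,
    input_norms=[("no gender field in input", lambda a: "gender" not in a)],
    output_norms=[("decision is boolean", lambda d: isinstance(d, bool))],
)
print(boxed({"income": 45_000}))  # True; a gendered input would be rejected
```

Because the checks touch only inputs and outputs, swapping in a different interpretation of the same value is just a matter of supplying a different list of norms.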