Grilling the answers: How businesses need to show how AI decides

#artificialintelligence

Show your working: generations of mathematics students have grown up with this mantra. Getting the right answer is not enough. To get top marks, students must demonstrate how they got there. Now, machines need to do the same. As artificial intelligence (AI) is used to make decisions affecting employment, finance or justice, as opposed to which film a consumer might want to watch next, the public will insist it explains its working.
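
To make the idea concrete, here is a minimal sketch of what "showing the working" can look like for a simple linear decision model. The feature names, weights, and applicant values are entirely hypothetical and chosen only to illustrate how each input's contribution to a decision can be surfaced.

```python
# Minimal sketch of "showing the working" for a linear decision model.
# All feature names and weights are hypothetical, for illustration only.

FEATURES = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = -0.1

def explain(applicant: dict) -> None:
    contributions = {f: w * applicant[f] for f, w in FEATURES.items()}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    print(f"decision: {decision} (score={score:+.2f})")
    # Show the working: how much each input pushed the score up or down.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:15s} {c:+.2f}")

explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5})
```

Even this toy example makes the point: alongside the decision itself, the system reports which inputs drove it and in which direction, which is the kind of account the public is likely to demand.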


The often underestimated piece to successful Artificial Intelligence

#artificialintelligence

The first generation of AI systems has absorbed human biases. Among the many disturbing cases of biased AI systems producing discriminatory outcomes, the most heart-breaking involved unfairly lengthened prison sentences, unfair credit card decisions, and skewed home appraisals. So, how does bias get into AI systems? Largely through the historical data and human design choices these systems learn from. While this is by no means an excuse, it does point to the key problem: almost no focus was given to ensuring the moral, social, and responsible aspects of AI, often termed Ethical AI. A 2019 Gartner study predicted that by 2022, 30% of companies would invest in explainable, ethical AI, up from almost none in 2019.
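
One common way to surface this kind of bias in a deployed system is to compare favourable-outcome rates across groups. The sketch below shows the computation for a demographic parity gap; the predictions and group labels are invented purely to illustrate it.

```python
# Hedged sketch: comparing positive-outcome rates across two groups
# (demographic parity). The data below is made up for illustration only.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable outcome
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = positive_rate("a") - positive_rate("b")
print(f"group a rate: {positive_rate('a'):.2f}")
print(f"group b rate: {positive_rate('b'):.2f}")
print(f"demographic parity gap: {gap:+.2f}")  # far from 0 suggests disparity
```

A gap far from zero is a signal to investigate, not a verdict; but routinely computing such audits is one concrete piece of the Ethical AI work that has so often been neglected.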


Building Trust in AI through Transparency and Governance

#artificialintelligence

There is thus a great need to define inputs, outputs, and their interactive relationships clearly. Inevitably, technologists tend to code fairness as a narrowly defined, modular property of the machine learning system. However, fairness is neither a well-defined nor a universally applicable concept to begin with: it has to be understood within a particular social context. Abstracting away this context is thus an abstraction error. With this error present, an AI system's interpretation, and hence its quantification, of fairness becomes ineffective, inaccurate, and misguided when the system is introduced into varying societal settings.
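
The abstraction error can be made concrete with a small sketch. Below, the same invented set of predictions satisfies one common fairness definition (demographic parity) while violating another (equal opportunity); the labels and predictions are fabricated solely to make the point.

```python
# Illustrative sketch of the abstraction error: "fairness" coded as a single
# modular metric can pass one definition and fail another on the same system.
# Labels and predictions are invented solely to make the point.

data = {
    "group_a": {"y_true": [1, 1, 0, 0], "y_pred": [1, 1, 0, 0]},
    "group_b": {"y_true": [1, 1, 1, 0], "y_pred": [1, 0, 0, 1]},
}

def positive_rate(d):          # used by demographic parity
    return sum(d["y_pred"]) / len(d["y_pred"])

def true_positive_rate(d):     # used by equal opportunity
    hits = sum(1 for t, p in zip(d["y_true"], d["y_pred"]) if t == 1 and p == 1)
    return hits / sum(d["y_true"])

for name, metric in [("demographic parity", positive_rate),
                     ("equal opportunity", true_positive_rate)]:
    a, b = metric(data["group_a"]), metric(data["group_b"])
    print(f"{name}: group_a={a:.2f} group_b={b:.2f} gap={a - b:+.2f}")
```

The same predictions pass one definition of fairness and fail another, which is precisely why treating fairness as a single modular property abstracts away the social context that determines which definition matters.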


A Framework for Ethical AI at the United Nations

arXiv.org Artificial Intelligence

This paper aims to provide an overview of the ethical concerns in artificial intelligence (AI) and the framework that is needed to mitigate those risks, and to suggest a practical path to ensure the development and use of AI at the United Nations (UN) aligns with our ethical values. The overview discusses how AI is an increasingly powerful tool with potential for good, albeit one with a high risk of negative side-effects that go against fundamental human rights and UN values. It explains the need for ethical principles for AI aligned with principles for data governance, as data and AI are tightly interwoven. It explores different ethical frameworks that exist and tools such as assessment lists. It recommends that the UN develop a framework consisting of ethical principles, architectural standards, assessment methods, tools and methodologies, and a policy to govern the implementation and adherence to this framework, accompanied by an education program for staff.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green argues strongly in his book The Smart Enough City, incorporating technology into city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and on how to design them. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future research directions in these domains, and how future work can fill current gaps and lead to better solutions.
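
The robustness challenge the abstract mentions can be illustrated with a toy example: a small, targeted change to the input flips a linear classifier's decision. The weights, inputs, and perturbation size below are hypothetical; real attacks and defenses are far more involved.

```python
# Toy illustration of the robustness challenge: a tiny, targeted change to
# the input flips a linear classifier's decision. Weights and inputs are
# hypothetical; real adversarial attacks and defenses are far more involved.

weights = [0.9, -0.6, 0.3]
x = [0.5, 0.7, 0.4]            # original input, e.g. normalized sensor readings

def score(v):
    return sum(w * xi for w, xi in zip(weights, v))

eps = 0.12
# Perturb each feature slightly in the direction that lowers the score
# (opposite the sign of its weight), as in a fast-gradient-style attack.
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

print(f"clean score: {score(x):+.3f} -> {'positive' if score(x) > 0 else 'negative'}")
print(f"adversarial: {score(x_adv):+.3f} -> {'positive' if score(x_adv) > 0 else 'negative'}")
```

In a smart-city setting, where such models may gate safety-critical decisions, this fragility is exactly why the paper treats security and robustness as prerequisites for, not afterthoughts to, deployment.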