Are you being overcharged by clever AI? Watchdog looks at whether algorithms hurt competition

ZDNet

To stop algorithms from charging unfair prices when we shop online, the UK's competition watchdog is launching a new investigation into the ways that AI systems might harm consumers – an issue that, the organization says, affects most of us in our everyday lives yet has so far lacked in-depth research and analysis. While a lot of attention has focused on algorithmic harms in general, the Competition and Markets Authority (CMA) suggested that little work has been done in the specific area of consumer and competition harms, and reported that almost no research on the topic exists in the UK. There is particularly little insight into the ways that automated systems tailor prices, shopping options or rankings to each individual's online behavior, often leading to consumers paying more than they should. For this reason, the CMA has asked academics and industry to submit evidence about the potential harms caused by the misuse of algorithms, and is launching a program called "Analyzing Algorithms", which could even help identify specific firms that are violating consumers' rights, so that cases can be taken forward if needed. Kate Brand, director of data science at the CMA, said: "We want to receive as much information as possible from stakeholders in academia, the competition community, firms, civil society and third sector organizations in order to understand where the harm is occurring and what the most effective regulatory approach is to protect consumers in the future."
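To make the kind of practice under scrutiny concrete, here is a minimal, hypothetical sketch of behavior-based price tailoring in Python. Every signal, weight, and name below is invented for illustration; it does not describe any real retailer's system or any method documented by the CMA.

```python
# Hypothetical sketch of personalized pricing: a base price adjusted by
# signals inferred from a shopper's online behavior. All signals and
# weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class ShopperProfile:
    repeat_visits: int        # how often they returned to the product page
    device_is_premium: bool   # e.g. inferred from the user agent
    compared_prices: bool     # whether they visited price-comparison sites

def personalized_price(base_price: float, profile: ShopperProfile) -> float:
    """Adjust a base price using behavioral signals (illustrative only)."""
    multiplier = 1.0
    if profile.repeat_visits > 3:
        multiplier += 0.05   # repeated interest read as higher willingness to pay
    if profile.device_is_premium:
        multiplier += 0.03   # device type used as a crude income proxy
    if profile.compared_prices:
        multiplier -= 0.04   # price-sensitive shoppers get a discount
    return round(base_price * multiplier, 2)

# Two shoppers see different prices for the same product:
keen_shopper = ShopperProfile(repeat_visits=5, device_is_premium=True, compared_prices=False)
savvy_shopper = ShopperProfile(repeat_visits=1, device_is_premium=False, compared_prices=True)
print(personalized_price(100.0, keen_shopper))   # 108.0
print(personalized_price(100.0, savvy_shopper))  # 96.0
```

Even a toy rule like this shows why the CMA wants evidence: two shoppers can be quoted different prices for the same product based solely on inferred willingness to pay, and neither can see the rule that produced the difference.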


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on what kinds of cities are worth living in and designing for. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, makes this case in his book Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations and pitfalls of research in these domains, and outlines future directions that can fill the current gaps and lead to better solutions.


Rights for robots: why we need better AI regulation

#artificialintelligence

We live in a world where humans aren't the only ones that have rights. In the eyes of the law, artificial entities have a legal persona too: corporations, partnerships and nation states have many of the same rights and responsibilities as human beings. With rapidly evolving technologies, is it time our legal system considered a similar status for artificial intelligence (AI) and robots? "AI is already impacting most aspects of our lives. Given its pervasiveness, how this technology is developed is raising profound legal and ethical questions that need to be addressed," says Julian David, chief executive of industry body techUK.


The trouble with AI: Why we need new laws to stop algorithms ruining our lives

ZDNet

Stronger action needs to be taken to stop technologies like facial recognition from being used to violate fundamental human rights, because the ethics charters currently adopted by businesses and governments won't cut it, warns a new report from digital rights organization Access Now. The past few years have seen "ethical AI" become a hot topic, with requirements such as oversight, safety, privacy, transparency, and accountability being added to codes of conduct for private and public organizations alike. In fact, the proportion of organizations that have an AI ethics charter jumped from 5% in 2019 to 45% in 2020. The EU's guidelines for "Trustworthy AI" have informed many of these documents; in addition, the European bloc recently published a white paper on artificial intelligence presenting a so-called "European framework for AI", with ethics at its core. How much real change has happened as a result of those ethical guidelines is up for debate.


Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

arXiv.org Artificial Intelligence

In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice drawing on concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms as artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling the relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.


AI virtues -- The missing link in putting AI ethics into practice

arXiv.org Artificial Intelligence

Several seminal ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, widespread criticism has pointed out a lack of practical realization of these principles. AI ethics subsequently underwent a practical turn, but without deviating from the principled approach and the many shortcomings associated with it. This paper proposes a different approach. It defines four basic AI virtues, namely justice, honesty, responsibility and care, all of which represent specific motivational settings that constitute the very precondition for ethical decision making in the AI field. Moreover, it defines two second-order AI virtues, prudence and fortitude, that bolster the achievement of the basic virtues by helping to overcome bounded ethicality, that is, the many hidden psychological forces that impair ethical decision making and that have hitherto been completely disregarded in AI ethics. Lastly, the paper describes measures for successfully cultivating the mentioned virtues in organizations engaged in AI research and development.


AI Governance for Businesses

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) governance regulates the exercise of authority and control over the management of AI. It aims at leveraging AI through the effective use of data and the minimization of AI-related cost and risk. While topics such as AI governance and AI ethics are thoroughly discussed on a theoretical, philosophical, societal and regulatory level, there is limited work on AI governance targeted at companies and corporations. This work views AI products as systems in which key functionality is delivered by machine learning (ML) models leveraging (training) data. We derive a conceptual framework by synthesizing literature on AI and related fields such as ML. Our framework decomposes AI governance into the governance of data, (ML) models and (AI) systems along four dimensions. It relates to existing IT and data governance frameworks and practices, and can be adopted by practitioners and academics alike. For practitioners, the synthesis of research papers, practitioner publications and publications of regulatory bodies provides a valuable starting point for implementing AI governance; for academics, the paper highlights a number of areas of AI governance that deserve more attention.
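As a rough illustration of the decomposition into data, model, and system governance, the sketch below shows how separate governance records for each layer might be captured in code. The field names are our own illustrative assumptions, not the paper's framework, and the paper's four dimensions are not reproduced here.

```python
# A minimal sketch of separate governance records for data, ML models,
# and AI systems. All field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataGovernanceRecord:
    dataset_name: str
    owner: str                # accountable person or team
    lawful_basis: str         # e.g. "consent", "contract"
    retention_days: int

@dataclass
class ModelGovernanceRecord:
    model_name: str
    training_data: list[str]  # names of governed datasets used for training
    validation_metric: str    # acceptance criterion the model must meet
    approved_by: str

@dataclass
class SystemGovernanceRecord:
    system_name: str
    models: list[str]         # governed models the system deploys
    risk_tier: str            # e.g. "low", "high"
    incident_contact: str

# Example: a model record referencing a governed dataset by name.
churn_model = ModelGovernanceRecord(
    model_name="churn-predictor-v2",
    training_data=["crm_contacts_2020"],
    validation_metric="AUC >= 0.80",
    approved_by="model-review-board",
)
```

The point of separating the layers is traceability: a system record points at governed models, which in turn point at governed datasets, so accountability can be followed down the chain.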


Operationalizing AI Ethics Principles

Communications of the ACM

Artificial intelligence (AI) has become a part of our everyday lives, from healthcare to law enforcement. AI-related ethical challenges have grown apace, ranging from algorithmic bias and data privacy to transparency and accountability. As a direct reaction to these growing ethical concerns, organizations have been publishing their own AI principles for ethical practice (over 100 sets and counting). However, the proliferation of these mostly vaguely formulated principles has not proven helpful in guiding practice. Only by operationalizing AI principles for ethical practice can we help computer scientists, developers, and designers spot and think through ethical issues and recognize when a complex ethical issue requires in-depth expert analysis.
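As one illustration of what operationalization can look like, the sketch below turns the abstract principle of fairness into a single check a developer can actually run: the demographic parity difference, i.e. the gap in positive-outcome rates between groups. This is one possible metric under simplifying assumptions, not a complete fairness audit and not a method prescribed by the article.

```python
# One way a vague principle like "fairness" can be operationalized into a
# concrete, runnable check: the demographic parity difference, i.e. the gap
# in positive-outcome rates across groups. A sketch of a single possible
# metric, not a complete fairness audit.

def demographic_parity_difference(outcomes, groups):
    """Return the largest gap in positive-outcome rate across groups.

    outcomes: iterable of 0/1 decisions (1 = favorable outcome)
    groups:   iterable of group labels, aligned with outcomes
    """
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives the favorable outcome 75% of the time,
# group "b" only 25%, so the parity gap is 0.5.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
members   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, members))  # 0.5
```

A build pipeline could fail whenever this gap exceeds an agreed threshold, which is exactly the kind of concrete, checkable obligation that vague principle statements lack.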


What are the contours of the EU legislation envisaged by MEPs around artificial intelligence? - Actu IA

#artificialintelligence

MEPs are currently working on the legislation to be adopted around artificial intelligence (AI). From innovation, access to data and the protection of citizens to ethics, research and legal, social and economic issues, the impacts of the future regulation are numerous and central for citizens, administrations and businesses alike. So what are the contours of the EU legislation envisaged by MEPs on artificial intelligence? This is the question Parliament has set out to answer. Artificial intelligence plays a major role in the digital transformation of our societies.