Collaborating Authors

Floridi


Epistemic Trade-Off: An Analysis of the Operational Breakdown and Ontological Limits of "Certainty-Scope" in AI

Immediato, Generoso

arXiv.org Artificial Intelligence

The recently published "certainty-scope" conjecture offers a compelling insight into the inherent trade-off present within artificial intelligence (AI) systems. As general research, this investigation remains vital as a philosophical undertaking and a potential guide for directing AI investments, design, and deployment, especially in safety-critical and mission-critical domains where risk levels are substantially elevated. While maintaining intellectual coherence, its formalization ultimately consolidates this insight into a suspended epistemic truth, which resists operational implementation within practical systems. This paper argues that the conjecture's objective to furnish insights for engineering design and regulatory decision-making is limited by two fundamental factors: first, its dependence on incomputable constructs and its failure to capture the generality factors of AI, rendering it practically unimplementable and unverifiable; second, its foundational ontological assumption of AI systems as self-contained epistemic entities, distancing it from the complex and dynamic socio-technical environments where knowledge is co-constructed. We conclude that this dual breakdown -- an epistemic closure deficit and an embeddedness bypass -- hinders the conjecture's transition to a practical and actionable framework suitable for informing and guiding AI deployments. In response, we point towards a possible framing of the epistemic challenge, emphasizing the inherent epistemic burdens of AI within complex human-centric domains.

Keywords: artificial intelligence (AI), AI governance, algorithmic information theory (AIT), certainty-scope trade-off, complex systems, computability & operationalization, epistemic entanglement, epistemic certainty, hybrid AI systems, information theory, Kolmogorov complexity, risk-based assurance, safety-critical AI, socio-technical systems, verification and validation (V&V).


Towards a Measure Theory of Semantic Information

Coghill, George M.

arXiv.org Artificial Intelligence

A classic account of the quantification of semantic information is that of Bar-Hillel and Carnap. Their account proposes an inverse relation between the informativeness of a statement and its probability. However, their approach assigns the maximum informativeness to a contradiction, which Floridi refers to as the Bar-Hillel-Carnap paradox. He developed a novel theory founded on a distance metric and parabolic relation, designed to remove this paradox. Unfortunately, his approach does not succeed in that aim. In this paper I critique Floridi's theory of strongly semantic information on its own terms and show where it succeeds and fails. I then present a new approach based on the unit circle (a relation that has been the basis of theories from basic trigonometry to quantum theory). This is used, by analogy with von Neumann's quantum probability, to construct a measure space for informativeness that meets all the requirements stipulated by Floridi and removes the paradox. In addition, while contradictions and tautologies have zero informativeness, it is found that messages which are contradictory to each other are equally informative. The utility of this is explained by means of an example.


Which symbol grounding problem should we try to solve?

Müller, Vincent C.

arXiv.org Artificial Intelligence

Müller, Vincent C. (2015), 'Which symbol grounding problem should we try to solve?', Journal of Experimental and Theoretical Artificial Intelligence, 27 (1). Floridi and Taddeo propose a condition of "zero semantic commitment" for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to rethink what the problem is and what role the 'goals' in a system play in formulating the problem.


Moral Agency in Silico: Exploring Free Will in Large Language Models

Porter, Morgan S.

arXiv.org Artificial Intelligence

This study investigates the potential of deterministic systems, specifically large language models (LLMs), to exhibit the functional capacities of moral agency and compatibilist free will. We develop a functional definition of free will grounded in Dennett's compatibilist framework, building on an interdisciplinary theoretical foundation that integrates Shannon's information theory, Dennett's compatibilism, and Floridi's philosophy of information. This framework emphasizes the importance of reason-responsiveness and value alignment in determining moral responsibility rather than requiring metaphysical libertarian free will. Shannon's theory highlights the role of processing complex information in enabling adaptive decision-making, while Floridi's philosophy reconciles these perspectives by conceptualizing agency as a spectrum, allowing for a graduated view of moral status based on a system's complexity and responsiveness. Our analysis of LLMs' decision-making in moral dilemmas demonstrates their capacity for rational deliberation and their ability to adjust choices in response to new information and identified inconsistencies. Thus, they exhibit features of a moral agency that align with our functional definition of free will. These results challenge traditional views on the necessity of consciousness for moral responsibility, suggesting that systems with self-referential reasoning capacities can instantiate degrees of free will and moral reasoning in artificial and biological contexts. This study proposes a parsimonious framework for understanding free will as a spectrum that spans artificial and biological systems, laying the groundwork for further interdisciplinary research on agency and ethics in the artificial intelligence era.


Auditing of AI: Legal, Ethical and Technical Approaches

Mökander, Jakob

arXiv.org Artificial Intelligence

AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society's topical collection on Auditing of AI, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers' governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available (and complementary) approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.


Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry

Mökander, Jakob, Sheth, Margi, Gersbro-Sundler, Mimmi, Blomgren, Peder, Floridi, Luciano

arXiv.org Artificial Intelligence

While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.


The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems

Mökander, Jakob, Sheth, Margi, Watson, David, Floridi, Luciano

arXiv.org Artificial Intelligence

Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things should be sorted so that their grouping will promote successful actions for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that attempts to classify AI systems found in previous literature use one of three mental models. The Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics. The Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose. And the Matrix, i.e., a multi-dimensional classification of systems that takes various aspects into account, such as context, data input, and decision-model. Each of these models for classifying AI systems comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the conceptual tools needed to operationalise AI governance in practice.


The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?

Mökander, Jakob, Juneja, Prathm, Watson, David, Floridi, Luciano

arXiv.org Artificial Intelligence

On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).


AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps

Gao, Di Kevin, Haverly, Andrew, Mittal, Sudip, Wu, Jiming, Chen, Jingdao

arXiv.org Artificial Intelligence

Artificial intelligence (AI) ethics has emerged as a burgeoning yet pivotal area of scholarly research. This study conducts a comprehensive bibliometric analysis of the AI ethics literature over the past two decades. The analysis reveals a discernible tripartite progression, characterized by an incubation phase, followed by a subsequent phase focused on imbuing AI with human-like attributes, culminating in a third phase emphasizing the development of human-centric AI systems. The study then presents seven key AI ethics issues, encompassing the Collingridge dilemma, the AI status debate, challenges associated with AI transparency and explainability, privacy protection complications, considerations of justice and fairness, concerns about algocracy and human enfeeblement, and the issue of superintelligence. Finally, it identifies two notable research gaps in AI ethics regarding the large ethics model (LEM) and AI identification, and extends an invitation for further scholarly research.