Transparency and Accountability


Accountability of Generative AI: Exploring a Precautionary Approach for "Artificially Created Nature"

Nakao, Yuri

arXiv.org Artificial Intelligence

The rapid development of generative artificial intelligence (AI) technologies raises concerns about the accountability of sociotechnical systems. Current generative AI systems rely on complex mechanisms that make it difficult even for experts to fully trace the reasons behind their outputs. This paper first examines existing research on AI transparency and accountability and argues that transparency is not a sufficient condition for accountability but can contribute to its improvement. We then argue that if generative AI cannot be made transparent, the technology becomes "artificially created nature" in a metaphorical sense, and we suggest using a precautionary-principle approach to consider its risks. Finally, we propose that a platform for citizen participation is needed to address the risks of generative AI.


Trustworthy AI Using Confidential Federated Learning

Communications of the ACM

The artificial intelligence (AI) revolution is reshaping industries and transforming the way we live, work, and interact with technology. From AI chatbots and personalized recommendation systems to autonomous vehicles navigating city streets, AI-powered innovations are emerging everywhere. As businesses and organizations harness AI to streamline operations, optimize processes, and drive innovation, the potential for economic growth and societal advancement is immense. Amid this rapid progress, however, it is critical to ensure AI's trustworthiness. Trustworthy AI systems must exhibit certain characteristics, such as reliability, fairness, transparency, accountability, and robustness. Only then can AI systems be depended upon to operate ethically and effectively without causing harm or discrimination.


Documentation Practices of Artificial Intelligence

Arnold, Stefan, Yesilbas, Dilara, Gröbner, Rene, Riedelbauch, Dominik, Horn, Maik, Weinzierl, Sven

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) faces persistent challenges in terms of transparency and accountability, which require rigorous documentation. Through a literature review of documentation practices, we provide an overview of prevailing trends, persistent issues, and the multifaceted interplay of factors influencing documentation. Our examination of key characteristics, such as scope, target audiences, support for multimodality, and level of automation, highlights a dynamic evolution in documentation practices, underscored by a shift towards more holistic, engaging, and automated documentation.


The flawed algorithm at the heart of Robodebt

AIHub

Australia's Royal Commission into the Robodebt Scheme has published its findings. Various unnamed individuals are referred for potential civil or criminal investigation, but the report's publication is a timely reminder of the dangers posed by automated decision-making systems, and of how the best way to mitigate their risks is to instil a strong culture of ethics and systems of accountability in our institutions. The so-called Robodebt scheme was touted to save billions of dollars by using automation and algorithms to identify welfare fraud and overpayments. In the end, it serves as a salient lesson in the dangers of replacing human oversight and judgement with automated decision-making. It reminds us that the basic method was not merely flawed but illegal; that the scheme was premised on treating welfare recipients as cheats rather than as some of society's most vulnerable; and that it lacked both transparency and oversight.


The Rise of Explainable AI and its Implications for Transparency and Accountability

#artificialintelligence

Artificial intelligence (AI) has rapidly become an essential part of modern society, powering everything from virtual assistants to medical diagnosis tools. As AI systems have become more advanced, the lack of transparency and accountability has become an increasingly pressing concern. This is where explainable AI (XAI) comes in. XAI is a new approach to developing AI systems that prioritise transparency and accountability by making their decision-making processes understandable to humans. This article explores the rise of explainable AI and its implications.


Locked AI: The Dangers of Closed Source Code in the Age of Artificial Intelligence

#artificialintelligence

OpenAI has been known for its mission to develop and promote artificial intelligence in a safe and ethical manner. However, the organization recently announced that it will no longer be open-sourcing its AI code. This decision has raised concerns about the potential dangers of limiting access to AI research and development. One of the biggest dangers of not open-sourcing AI code is decreased transparency and accountability. Open-sourcing code allows other researchers to verify the accuracy and safety of AI models, which can lead to improvements and prevent the deployment of harmful systems. Without it, there is less transparency and accountability in the development of AI models, which could lead to unintended consequences and the deployment of unsafe systems.


The "100% Human" Creation Declaration

#artificialintelligence

We've all heard them: 24 carat gold, 100% Florida orange juice, 100% all natural, 100% made in the USA. Much of what we consume, from food to data, is qualified in some way to help us gain insight into what we're consuming. Sometimes the qualification is directly about things like ingredients; other times it's more about the social and political implications. But the rise of machine learning and natural language processing has led to advanced language models such as GPT (Generative Pre-trained Transformer) and raised important questions about creativity and ownership, among others. These models are capable of generating human-like text, making it difficult to distinguish between text written by a human and text generated by a machine.


10 Books to get ahead of the curve of ChatGPT and the future of AI

#artificialintelligence

This book, written by philosopher Nick Bostrom, examines the potential consequences of creating superintelligent AI, including the potential risks and benefits. Bostrom discusses the potential dangers of creating an AI that surpasses human intelligence, including the possibility that such an AI could develop goals that are incompatible with human values. He also discusses the steps we can take to ensure that the development of AI follows a positive trajectory, including the importance of designing AI systems with appropriate values and goals.

In this book, physicist Max Tegmark explores the future of AI and its potential to transform humanity. He discusses the ways in which AI could potentially enhance or replace human abilities, and the implications of such a scenario for employment, education, and daily life.


Blockchain and AI: A Disruptive Alliance

#artificialintelligence

AI and Blockchain are among the most influential drivers of innovation today, spreading unabated and carrying a distinct degree of technological complexity and multi-dimensional business implications. The collaborative applicability of the two technologies has yet to be fully realized, but it is beginning to spur profound changes in numerous aspects of our lives, from financial transactions and autonomous vehicles to engaging assistants and newfound ownership of our data; the ball has only begun rolling. Blockchain is hindered by issues of security, scalability, and efficiency, while AI remains plagued by concerns about trustworthiness, explainability, and privacy. Terms like AI and machine learning are often thrown around as synonymous, but there are distinct differences. Machine learning, not the first to reach the conceptual stage but the first to be developed, refers to algorithms that parse data, learn from that data, and then apply the newfound knowledge to make informed decisions.


How to fix the EU Artificial Intelligence Act

#artificialintelligence

The European Union is getting back to work after the summer break, and one of the key files on everyone's mind is the EU Artificial Intelligence Act (AIA). Over the summer, the European Commission held a consultation on the AIA that received 304 responses, with everyone from the usual Big Tech players down to the Council of European Dentists having their say. Access Now submitted a response to the consultation in August that outlined a number of key issues that need to be addressed in the next stages of the legislative process. If you want to regulate something, you need to define it properly; if not, you're creating problematic loopholes. Unfortunately, the definitions of emotion recognition (Article 3(34)) and biometric categorisation (Article 3(35)) in the current draft of the EU Artificial Intelligence Act are technically flawed.