Taking Principles Seriously: A Hybrid Approach to Value Alignment in Artificial Intelligence

Journal of Artificial Intelligence Research

An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," which is an attempt to derive "ought" from "is," and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles. This article is part of the special track on AI and Society.
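The abstract only gestures at the formalism. As a rough, hypothetical sketch in the spirit of the paper (the symbols Perm, C, and A below are assumptions for illustration, not the authors' own notation), a generalization-style deontological principle in quantified modal logic might read:

    % Illustrative sketch only, not the paper's exact formalization.
    % Perm(A): the action plan A is ethically permissible.
    % C(x):    agent x has the same reasons for adopting A.
    % The consequent is the "test proposition": it must be possible,
    % as an empirical matter, for every agent with those reasons to act on A.
    \[ \mathrm{Perm}(A) \;\rightarrow\; \Diamond\, \forall x \,\bigl( C(x) \rightarrow A(x) \bigr) \]

On this reading, the principle itself is justified by ethical reasoning, while empirical VA supplies the evidence for whether the test proposition on the right-hand side actually holds.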


Taking Principles Seriously: A Hybrid Approach to Value Alignment

arXiv.org Artificial Intelligence

An important step in the development of value alignment (VA) systems in AI is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," which is an attempt to derive "ought" from "is," and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles.


Who will your algorithm harm next? Why businesses need to start thinking about evil AI now

ZDNet

From Google's commitment to never pursue AI applications that might cause harm, to Microsoft's "AI principles", through IBM's defense of fairness and transparency in all algorithmic matters: big tech is promoting a responsible AI agenda, and it seems companies large and small are following their lead. While in 2019 a mere 5% of organizations had come up with an ethics charter that framed how AI systems should be developed and used, the proportion jumped to 45% in 2020. Key words such as "human agency", "governance", "accountability" or "non-discrimination" are becoming central components of many companies' AI values. The concept of responsible technology, it would seem, is slowly making its way from the conference room and into the boardroom. This renewed interest in ethics, despite the topic's complex and often abstract dimensions, has been largely motivated by various pushes from both governments and citizens to regulate the use of algorithms.


Towards an Ethical Framework in the Complex Digital Era

arXiv.org Artificial Intelligence

Since modernity, ethics has been progressively fragmented into specific communities of practice. The digital revolution enabled by AI and data is bringing ethical wicked problems at the crossroads of technology and behavior. As digital platforms connect us globally, the need for a comprehensive and constructive ethical framework is emerging. The unequal structure of the global system means that dynamic changes and systemic problems hit hardest those who are most vulnerable. Ethical frameworks based only on the individual level are no longer sufficient. A new ethical vision must comprise an understanding of the scales and complex interconnections of social systems. Many of these systems are internally fragile and very sensitive to external factors and threats, which leads to unethical situations that require systemic solutions. The large-scale nature of digital technology, which expands globally, also has an impact at the individual level, risking making human beings more homogeneous, predictable, and ultimately controllable. To preserve the core of humanity, ethics must take a stand to preserve and promote individual rights, uniqueness, and cultural heterogeneity, tackling the negative trends and impacts of digitalization. Only by combining human-centered and collectiveness-oriented digital development will it be possible to construct new social models and human-machine interactions that are ethical. This vision requires science to enhance ethical frameworks and principles with actionable insights into the relationships and properties of social systems that may not be evident and need to be quantified and understood to be solved. Artificial intelligence is both a risk and an opportunity for ethical development, so we need a conceptual construct that drives us toward a better digitalized world.


NXP launches AI Ethics initiative

#artificialintelligence

With secure, power-efficient edge computing and AI, everyday devices not only sense their environments, but also interpret, analyze, and act in real time on the data collected. In a new whitepaper entitled The Morals of Algorithms, the company details its comprehensive framework for AI principles: non-maleficence, human autonomy, explicability, continued attention & vigilance, and privacy and security by design. These principles are rooted in NXP's corporate values, ethical guidelines, and a long tradition of building some of the world's most sophisticated secure devices. The AI framework evolved as a result of a cross-company collaboration, including inputs and insights from engineering and customer-facing teams around the world. NXP is at the vanguard of the AI revolution with a portfolio of microcontrollers (MCUs) and processors optimized for machine learning applications "at the edge" of networks, including thermostats, security systems, car sensors, robots, industrial automation systems, and other devices, making them not only intelligent but also faster, more flexible, and more secure.


How to Approach Ethical AI Implementation?

#artificialintelligence

"Why does AI need to have moral agency?" Because the level of autonomy in an AI system has reached human level "cognition". AI can perform human liked tasks with "intelligence" and no supervision. It can learn from the real world experience through data to execute tasks to achieve its intended purpose. As opposed to standard programming methods, AI doesn't use fixed algorithm to perform a tasks, but has the ability to decide what task to execute under diverse circumstances and sometimes beyond human capabilities and understanding. In other words, it has the level of autonomy and intelligence to be human-like. We must recognised by now that AI has the power to change the course of humanity either for the greater good or for worse. It would be foolish and irresponsible for any government to take an unregulated capitalistic approach to let this technology advance unrestricted based on market forces.


Businesses, policymakers 'misaligned' on what ethical AI really means

#artificialintelligence

From autonomous vehicles to virtual assistants, artificial intelligence is becoming increasingly present in our daily lives, and yet we are really just at the beginning of the curve. Powerful and transformative though the technology is, applications dealing with vast amounts of data are already triggering unease in the public, and the continued adoption of this game-changing technology must be balanced with heightened scrutiny of policy, regulation, and ethics. The need for more stringent oversight is demonstrated by the increasing reliance we place on this technology in our daily lives; in the case of driverless cars, we'd be placing our lives in the hands of AI. But it is also demonstrated by its use in businesses and organizations. Flaws or incompleteness in the data used by facial recognition systems in law enforcement, for example, can lead to racial profiling or misidentification of suspects, or, at best, add to the sense of an invasive surveillance culture.


Is AI A Force For Good? Interview With Branka Panic, Founder And Executive Director At AI For Peace

#artificialintelligence

Increasingly, organizations across many industries and geographies are building and deploying machine learning models and incorporating artificial intelligence into a variety of their products and offerings. However, as they put AI capabilities into systems that we interact with on a daily basis, it becomes increasingly important to make sure these systems behave in a way that is beneficial to the public. When creating AI systems, organizations should also consider the ethical and moral implications, to make sure that AI is being created with good intentions. Policymakers who want to understand and leverage AI's potential and impact need to take a holistic view of the issues. This includes things like the intentions behind AI systems, as well as potential unintended consequences and actions of AI systems.


Efforts to understand impact of AI on society put pressure on biometrics industry to sort out priorities, role

#artificialintelligence

Companies involved in face biometrics and other artificial intelligence applications have not come to a consensus on which ethical principles to prioritize, which may cause problems for them as policymakers move to set regulations, according to a new report from EY. Facial recognition check-ins for venues such as airports, hotels, and banks, and law enforcement surveillance, including the use of face biometrics, are two of a dozen specific use cases considered in the study. The report, 'Bridging AI's trust gaps', developed by EY in collaboration with The Future Society, suggests that companies developing and providing AI technologies are misaligned with policymakers, which is creating new risks for them. Third parties may have a role to play in bridging the trust gap, such as with an equivalent to 'organic' or 'fairtrade' labels, EY argues. For biometric facial recognition, 'fairness and avoiding bias' is the top priority for policymakers, followed by 'privacy and data rights' and 'transparency'. Among companies, privacy and data rights tops the list, followed by 'safety and security' and then transparency.