
ethics in artificial intelligence


Virtual IEAI Speaker Series - Artificial Intelligence Is Necessarily Irresponsible with Prof. Dr. Joanna Bryson - Institute for Ethics in Artificial Intelligence

#artificialintelligence

Exceptional times require exceptional solutions. Due to the current lockdown, the IEAI has decided to hold its May Speaker Series with Prof. Dr. Joanna Bryson virtually via Zoom. With its Speaker Series, the TUM Institute for Ethics in Artificial Intelligence is inviting experts from all over the world to talk about ethics and governance of AI. The May session of the TUM IEAI Speaker Series will take place on 14 May 2020, 10am (CEST), virtually via Zoom. We will send out the link to all registered attendees one day prior to the event.


Publications and Reports - Institute for Ethics in Artificial Intelligence

#artificialintelligence

"AI4People On Good AI Governance – 14 Priority Actions, a S.M.A.R.T. Model of Governance, and a Regulatory Toolbox" presented to the European Parliament and the European Commission on 6 November 2019 "Why addressing ethical questions in AI will benefit organizations" published by the Capgemini Research Institute on 5 July 2019 "Key Ethical Challenges in the European Medical Information Framework" in: Minds and Machines, Journal for Artificial Intelligence, Philosophy and Cognitive Science, 2018 Ethik in KI und Robotik (English title: An Introduction to Ethics in AI and Robotics) was published in October 2019 by these four distinguished authors.


Thinking About 'Ethics' in the Ethics of AI – Idees

#artificialintelligence

Therefore, it is essential, in thinking about 'ethics', to look beyond the capacities for ethical decision-making and action, and the moments of ethical choice and action, and into the background of values and the stories behind the choice and action. Similar arguments have been made to affirm the role of social and relational contexts in limiting ethical choices and shaping moral outcomes, and thus the importance of accounting for them in our ethical reflection.



TUM Institute for Ethics in Artificial Intelligence officially opened

#artificialintelligence

TUM has been studying the complex interactions of science, technology and society since 2012 through the work of the Munich Center for Technology in Society (MCTS), which was established under the 2012 Excellence Initiative. As part of the MCTS, the TUM Institute for Ethics in Artificial Intelligence (IEAI) will focus on the ethical implications of artificial intelligence. The US company Facebook is supporting this TUM initiative with a 6.5 million euro donation that is not subject to any conditions or expectations. At today's opening symposium for the Institute for Ethics in Artificial Intelligence (IEAI) at TUM, Dorothee Bär, the Federal Government Commissioner for Digital Affairs, said: "To some extent, machine learning algorithms are already playing a role in choosing the news articles we read. But the possible applications extend far beyond that, for example into such areas as medical diagnostics. These far-reaching technological changes raise many ethical issues. It is a good thing that TUM is getting involved in addressing these issues."


European Union Parliament Releases Guidelines On Ethics In Artificial Intelligence - Data Protection - European Union

#artificialintelligence

On September 19, 2019, the European Parliament Research Service (EPRS) released a paper, European Union (EU) Guidelines on Ethics in Artificial Intelligence (AI): Context and Implementation (the "Paper"), to shed light on the ethical rules that were established under the EU Guidelines on Ethics in AI (the "Guidelines"). The Guidelines, which are nonbinding, were published in April 2019 after the European Parliament was directed to update and complement the existing Union legal framework with guiding ethical principles that are based on a "human-centric" approach to AI. The Paper aims to provide guidance on the key ethical requirements that are recommended in the Guidelines when designing, developing, implementing or using AI products and services to promote trustworthy, ethical and robust AI systems. The Paper also identifies some implementation challenges and possible future EU action, while also calling for certain actions, including clarifying the Guidelines, fostering the adoption of ethical standards and adopting legally binding instruments to set common rules on transparency. Of note, the Guidelines highlight that all AI stakeholders must comply with the General Data Protection Regulation (GDPR) principles and advise the AI community to guarantee that privacy and personal data are protected, both when building and when running AI systems, to afford citizens full control over their data.


Making AI Systems Fairer Will Require Time, Guidelines

#artificialintelligence

Christoph Lütge, director of the Institute for Ethics in Artificial Intelligence at Germany's Technical University of Munich, said there is "a chance that these AI systems might be fairer eventually, but they will need guidelines." In January, the Institute for Ethics in Artificial Intelligence was established at Germany's Technical University of Munich (TUM), with initial funding from a five-year, $7.5-million grant from Facebook. The Institute has issued its first call for proposals, and an advisory board was recently appointed. The Institute's director, Christoph Lütge, holds the Peter Löscher Chair in Business Ethics at TUM. Lütge recently spoke about ethics in artificial intelligence (AI) generally, and the new Institute specifically. Can you give an example of the type of ethical question in AI that the Institute might be dealing with?


Ethics in Artificial Intelligence

#artificialintelligence

Ethics in machine learning is what comes to mind when we imagine a worst-case scenario in the context of artificial intelligence. As examples, we can think of HAL 9000 from '2001: A Space Odyssey', Skynet from the 'Terminator' films, or, more recently, Ultron from the 'Avengers'. Sadly, the main thinking behind most of these depicted self-aware artificial intelligences is that they become sentient with the sole purpose of destroying the human race to ensure their own survival. While such a scenario is not impossible, we are thankfully a long way off from that type of dystopian future. There are, however, pressing ethical matters in AI that we need to be considering right now.


Home - Institute for Ethics in Artificial Intelligence

#artificialintelligence

In early 2019, the Technical University of Munich (TUM) announced the founding of the TUM Institute for Ethics in Artificial Intelligence (IEAI), an independent body for research projects that deepen the university's examination of the social relevance of technical innovation. A foremost priority of the new institute will be the generation of ethical guidelines for the development and implementation of Artificial Intelligence. TUM has long been a driving force in researching the mutual interactions of science, technology and society and has made "Human-Centered Engineering" a central point in its strategic guidelines. The IEAI is integrated in the Munich Center for Technology in Society, which was founded by TUM in 2012 and is now considered one of Germany's leading centers for scientific and technical research.


Facebook Funds New AI Ethics Institute at Technical University of Munich

#artificialintelligence

Facebook has partnered with the Technical University of Munich, Germany, to create an Institute of Ethics in Artificial Intelligence. Facebook has teamed with the Technical University of Munich (TUM) in Germany to establish an Institute of Ethics in Artificial Intelligence (AI) via a five-year, $7.5 million grant. Said TUM's Christoph Lütge, "We want to supply guidelines for the identification and answer of ethical questions of artificial intelligence...for the responsible use of the technology in society and the economy." The Institute will apply evidence-based research to explore issues at the core of human values and how they interweave with emerging technology, with central issues like privacy, trust, inclusion, and bias paramount. Lütge said, "We will also deal with transparency and accountability...or with rights and autonomy in human decision-making in situations of human-AI interaction."