The ACM Committee on Professional Ethics is updating the ACM Code of Ethics and Professional Conduct (the Code). It was last changed in 1992. Beginning on page 7 of this issue there is an article that outlines the motivations for the update, describes the update process, includes the first draft of the update, and invites you to take part in this important project. While the draft addresses changes in technology and society, here I reflect on a different view: how those changes motivate a call to action for the profession. There have been significant changes to the profession of computing in the last 25 years.
AI holds fantastic opportunities for organisations large and small, and businesses are right to embrace them. Be it to improve back-office operations, maximise marketing efforts or deploy predictive technologies to allocate resources more efficiently, algorithms have a lot to offer, and many organisations are already deploying AI systems. Talking with industry as well as policy makers, I notice that we all seem to share the same belief: that innovation and ethics can go hand in hand. In fact, many believe that businesses that can utilise data, and do so ethically, have a clear competitive advantage. But how do we turn ethics into practice?
Here you will learn how to identify and manage ethical risks connected with AI development and implementation. You will also come to understand how the drawbacks of AI affect society, and will be able to assess your individual and corporate responsibilities. But before we start, let me introduce myself. I have a Master's degree in Gender Studies from Charles University, where I wrote my thesis on ethical chatbot design.
Recently, a group of faculty and students gathered at New York University before the annual FAT* conference to discuss the promises and challenges of teaching data science ethics, and to learn from one another's experiences in the classroom. This blog post is the first of two summarizing the discussions at that workshop. There is general agreement that data science ethics should be taught, but less consensus about what its goals should be or how they should be pursued. Because the field is so nascent, there is substantial room for innovative thinking about what data science ethics ought to mean. In some respects, its goal may be the creation of "future citizens" of data science who are invested in the welfare of their communities and the world, and who understand the social and political role of data science therein.
JUDY WOODRUFF: The last day has seen a rise in both concern over and defense of President-elect Trump's Cabinet nominees after news that some of them have not completed ethics reviews. LISA DESJARDINS: The president-elect walked out of Trump Tower with a business leader, Jack Ma of the Chinese e-commerce giant Alibaba, but his words were about politics and his Cabinet nominees. DONALD TRUMP (R), President-Elect: I think they will all pass. LISA DESJARDINS: That after Trump met with a key ally, Senate Majority Leader Mitch McConnell, who dismissed concerns about vetting. MITCH MCCONNELL, Majority Leader: Yes, everybody will be properly vetted, as they have been in the past, and I'm hopeful that we will get up to six or seven picks of the national security team in place on day one.