AI holds fantastic opportunities for large and small-to-medium organisations alike, and businesses are right to embrace them. Whether improving back-office operations, maximising marketing efforts, or deploying predictive technologies to allocate resources more efficiently, algorithms have a lot to offer, and many organisations are already deploying AI systems. Talking with industry as well as policymakers, I notice that we all seem to share the same belief: that innovation and ethics can go hand in hand. In fact, many believe that businesses that can utilise data, and do so ethically, have a clear competitive advantage. But how do we turn ethics into practice?
Here you will learn how to identify and manage the ethical risks connected with AI development and implementation. You will also come to understand how the drawbacks of AI affect society, and you will assess your individual and corporate responsibilities. But before we start, let me introduce myself. I hold a Master's degree in Gender Studies from Charles University, where I wrote my thesis on ethical chatbot design.
Recently, a group of faculty and students gathered at New York University before the annual FAT* conference to discuss the promises and challenges of teaching data science ethics, and to learn from one another's experiences in the classroom. This blog post is the first of two summarizing the discussions at that workshop. There is general agreement that data science ethics should be taught, but less consensus about what its goals should be or how they should be pursued. Because the field is so nascent, there is substantial room for innovative thinking about what data science ethics ought to mean. In some respects, its goal may be the creation of "future citizens" of data science who are invested in the welfare of their communities and the world, and who understand the social and political role of data science therein.
JUDY WOODRUFF: The last day has seen a rise in both concern over and defense of President-elect Trump's Cabinet nominees after news that some of them have not completed ethics reviews. LISA DESJARDINS: The president-elect walked out of Trump Tower with a business leader, Jack Ma of the Chinese e-commerce giant Alibaba, but his words were about politics and his Cabinet nominees. DONALD TRUMP (R), President-Elect: I think they will all pass. LISA DESJARDINS: That after Trump met with a key ally, Senate Majority Leader Mitch McConnell, who dismissed concerns about vetting. MITCH MCCONNELL, Majority Leader: Yes, everybody will be properly vetted, as they have been in the past, and I'm hopeful that we will get up to six or seven picks of the national security team in place on day one.
Entrepreneur Evanna Hu joins me to discuss our rapidly evolving relationship with data. The explosion of data streams and technologies that offload challenging analysis to machines is creating interesting national security challenges and opportunities at lightning speed. Evanna and her company, Omelas, seek to leverage these technologies to address hard national security problems (such as countering violent extremism) where metrics have been difficult to acquire or generate. While opportunity abounds, so do potential ethical dilemmas. It's impossible to discuss the potential of these technologies without considering the countless ways that they could be abused.