Within Microsoft, O'Brien is trying to expand the community of people focused on ethics. He sent me a collection of materials suggesting that many people are already engaged. The materials include, for example, a blog post on Microsoft guidelines for developing responsible conversational bots (no doubt occasioned by the off-the-rails comments from the Tay AI chatbot research project in 2016). They also include a presentation and article on intelligible models in health care, and an internal project for a "learning door" that recognizes (with opt-in) who is coming in and out of Microsoft buildings. O'Brien said he works closely with Smith's legal team and also has a matrixed reporting relationship to Eric Horvitz, technical fellow and director at Microsoft Research Labs.
One of the essential phrases necessary to understand AI in 2019 has to be "ethics washing." Put simply, ethics washing -- also called "ethics theater" -- is the practice of fabricating or exaggerating a company's interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes "AI for good" initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other. Accusations of ethics washing have been lobbed at the biggest AI companies in the world, as well as startups. The most high-profile example this year may have been Google's external AI ethics panel, which devolved into a PR nightmare and was disbanded after about a week.
In June 2015, the education ministry sent shock waves through Japan's academic humanities community when it issued a notice urging national universities to restructure their humanities departments and shift their focus to fields with greater social demand. Since then, the value of humanities studies at state-funded institutions has been a hotly debated topic, with some questioning the need for such classes in the modern age. But Karen O'Brien, head of the humanities division at the prestigious University of Oxford, said the studies are indispensable and play a crucial role in today's society. "Humanity is fundamental to understanding in a really deep way who we are, how we have come to be who we are, and to think really deeply about how we address the practical, ethical and historical problems in the modern age," O'Brien said during a recent interview in Tokyo, which she visited to promote Oxford to Japanese students and to meet its alumni. "Without a scholarly humanity perspective on all of those issues, we run the risk of going into accelerating future and technologies … without that deeper understanding and awareness," she said.
How can you do good with data? The ethical and legal principles surrounding data and its use -- from information to analytics and insight and beyond, into data science and artificial intelligence (AI) -- are global in nature, even if the laws are not. Regardless of whether your company is local or has offices around the world, if you're using data (and you probably are), it's important to know how to handle it properly, what to consider, and how to do good with it. In short, data professionals today need both frameworks and methods to achieve optimal results while being good stewards of their critical role in society. Corporations, governments, and individuals have powerful tools in analytics and AI to create real-world outcomes.
James Wolfe, left, the former security director for the Senate Intelligence Committee, and New York Times reporter Ali Watkins, right. Federal investigators had seized years' worth of Watkins' email and phone records as part of a leak probe into Wolfe. Government documents were leaked to the press. A reporter's communications were seized by the government without her knowledge. And a former Senate aide was charged with "lying repeatedly to investigators about his contacts with three reporters."