The issue of corporate ethics is never far from the business media headlines. Take the troubles embroiling former Nissan chair Carlos Ghosn, or the accounting problems at Patisserie Valerie in the UK, to name just two recent examples. Despite the best intentions and efforts of policymakers, legislators, boards and professional consultants, the corporate scandals keep coming. Now, to further complicate matters, the latest developments in the digital revolution are adding a new dimension to the challenge of ensuring companies and their executives behave responsibly. Ioannis Ioannou, Associate Professor of Strategy and Entrepreneurship at London Business School, and Sam Baker, Monitor Deloitte Partner, suggest that, while the widespread introduction of AI and machine learning technologies can be a force for good, without the right approach there is a risk that the corporate ethics waters become even murkier.
Hosted by Antony Walker, Deputy CEO, TechUK
Dame Colette Bowe, Trustee, Nuffield Foundation
Hetan Shah, Executive Director, The Royal Statistical Society
Rachel Coldicutt, CEO, Doteveryone
Francesca Rossi, AI Ethics, IBM
Nigel Houlden, Head of Technology Policy, Information Commissioner's Office
12:10 – 13:00

The current global digital ethics debate comes at a time when businesses are focused on complying with the European General Data Protection Regulation (GDPR). Compared to hard regulation, ethics can sound academic and ethereal, disconnected from the practical realities of running and growing a business, something businesses do day in, day out. Thinking about the ethical implications of innovation in new technology can sound difficult and daunting, a mire to get bogged down in. But when it comes to AI, sound ethical decisions are also likely to be sound business decisions.
So, hands up who was woken up by Alexa this morning? Or now has Google Home finding their favourite radio station for them? Or had fun over the holidays trying to get Siri to tell them a joke? Artificial intelligence is now more accessible and becoming mainstream. The rapid development and evolution of AI technologies, while unleashing opportunities for business and communities across the world, have prompted a number of important overarching questions that go beyond the walls of academia and hi-tech research centres in Silicon Valley.
It sounds like a script from the Netflix futuristic dystopia Black Mirror. Chatbots now ask: "How can I help you?" The reply typed in return: "Are you human?" "Of course I am human," comes the response. "But how do I know you're human?" The so-called Turing Test, in which people question a machine's ability to imitate human intelligence, is happening right now.
After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC, a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing. How did things go so wrong? And can Google put them right?