Agenda - CogX London 2018

#artificialintelligence

Hosted by Antony Walker, Deputy CEO, TechUK; Dame Colette Bowe, Trustee, Nuffield Foundation; Hetan Shah, Executive Director, The Royal Statistical Society; Rachel Coldicutt, CEO, Doteveryone; Francesca Rossi, AI Ethics, IBM; and Nigel Houlden, Head of Technology Policy, Information Commissioner's Office. 12:10–13:00.

The current global digital ethics debate comes at a time when businesses are focused on complying with the European General Data Protection Regulation (GDPR). Compared to hard regulation, ethics can sound academic and ethereal, disconnected from the practical realities of running and growing a business, something businesses do day in, day out. Thinking about the ethical implications of innovation in new technology can sound difficult and daunting, a mire to get bogged down in. But when it comes to AI, sound ethical decisions are also likely to be sound business decisions.


Comment: 'We can't leave Silicon Valley to solve AI's ethical issues'

#artificialintelligence

So, hands up who was woken up by Alexa this morning? Or now has Google Home finding their favourite radio station for them? Or had fun over the holidays trying to get Siri to tell them a joke? Artificial intelligence is now more accessible and becoming mainstream. The rapid development and evolution of AI technologies, while unleashing opportunities for businesses and communities across the world, has prompted a number of important overarching questions that go beyond the walls of academia and the high-tech research centres of Silicon Valley.


AI Ethics - Could the UK become a leader of ethical AI?

#artificialintelligence

It sounds like a script from the Netflix futuristic dystopia Black Mirror. Chatbots now ask: "How can I help you?" The reply typed in return: "Are you human?" "Of course I am human," comes the response. "But how do I know you're human?" The so-called Turing Test, in which people question a machine's ability to imitate human intelligence, is happening right now.


Hey Google, sorry you lost your ethics council, so we made one for you

MIT Technology Review

After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC, a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing. How did things go so wrong? And can Google put them right?


Do new technologies take ethics out of healthcare?

#artificialintelligence

As such, even though these technologies bring huge potential and opportunities, they still need to be closely monitored. The University of New South Wales Research Ethics and Compliance Support Director Dr Ted Rohr told HITNA that issues around ethics arise when those in healthcare access data from medical records for research, for example. "Ethics is all about deciding whether the use of technology is appropriate and is used for public good. For example, AI has its positives, but it can be misused. So, having an ethical framework allows the proper use of medical databases for research and experiments with patients using devices," he said.