Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare
Procter, Rob | Tolmie, Peter | Rouncefield, Mark
The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this paper, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human-Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings to show how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline some candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of this for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.
Towards Detecting Rumours in Social Media
Zubiaga, Arkaitz (University of Warwick) | Liakata, Maria (University of Warwick) | Procter, Rob (University of Warwick) | Bontcheva, Kalina (University of Sheffield) | Tolmie, Peter (University of Warwick)
This is especially the case in emergency situations, where the spread of a false rumour can have dangerous consequences. For instance, in a situation where a hurricane is hitting a region, or a terrorist attack occurs in a city, access to accurate information is crucial for finding out how to stay safe and for maximising citizens' wellbeing. This is even more important in cases where users tend to pass on false information more often than real facts, as occurred with Hurricane Sandy in 2012 (Zubiaga and Ji 2014). Hence, identifying rumours within a social media stream can be of great help for the development of tools that prevent the spread of inaccurate information. In this paper, we introduce a methodology for collecting and annotating rumours circulating in social media as an event unfolds. This methodology consists of three main steps: (i) collection of (source) tweets posted during an emergency situation, sampling in such a way that it is manageable for human assessment, while generating a good number of rumourous tweets from multiple stories, (ii) collection of conversations associated with each of the source tweets, which includes a set of replies discussing the source tweet, and (iii) collection of human annotations on the tweets sampled. We provide a definition of a rumour which informs the annotation process. Our definition draws on definitions from different sources, including dictionaries.
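To make the three-step methodology concrete, below is a minimal Python sketch of the collection pipeline. Everything in it is a hypothetical stand-in rather than the authors' actual tooling: `fetch_source_tweets` and `fetch_replies` mimic calls to a social media API but return toy data, a command-line prompt substitutes for annotation tooling, and plain random sampling replaces the paper's strategy for surfacing rumourous tweets from multiple stories.

```python
# Sketch of the three-step rumour-collection methodology: (i) collect and
# sample source tweets, (ii) collect their reply conversations, and
# (iii) gather human annotations. All helpers here are hypothetical.
import random
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Tweet:
    tweet_id: str
    text: str
    replies: List["Tweet"] = field(default_factory=list)
    annotation: Optional[str] = None  # e.g. "rumour" / "non-rumour"


def fetch_source_tweets(event: str) -> List[Tweet]:
    """Hypothetical stand-in: tweets posted during an emergency event."""
    return [Tweet("1", f"Unconfirmed: bridge closed during {event}"),
            Tweet("2", f"Official update on {event} from city council")]


def fetch_replies(source: Tweet) -> List[Tweet]:
    """Hypothetical stand-in: the conversation replying to a source tweet."""
    return [Tweet(f"{source.tweet_id}-r1", "Is there a source for this?")]


def collect_conversations(event: str, sample_size: int,
                          seed: int = 0) -> List[Tweet]:
    # Step (i): collect source tweets and sample them down to a volume
    # that is manageable for human assessment.
    sources = fetch_source_tweets(event)
    sampled = random.Random(seed).sample(sources,
                                         min(sample_size, len(sources)))
    # Step (ii): collect the conversation (set of replies) for each
    # sampled source tweet.
    for source in sampled:
        source.replies = fetch_replies(source)
    return sampled


def annotate(conversations: List[Tweet]) -> None:
    # Step (iii): human annotation of the sampled tweets; a command-line
    # prompt stands in for purpose-built annotation tooling.
    for source in conversations:
        label = input(f"Rumour? [y/n] {source.text!r} ")
        source.annotation = ("rumour" if label.strip().lower() == "y"
                             else "non-rumour")


if __name__ == "__main__":
    threads = collect_conversations("hurricane", sample_size=2)
    annotate(threads)
    for t in threads:
        print(t.tweet_id, t.annotation, len(t.replies), "replies")
```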