
Teaching Responsible Data Science: Charting New Pedagogical Territory

arXiv.org Artificial Intelligence

Although numerous ethics courses are available, with many focusing specifically on technology and computer ethics, pedagogical approaches employed in these courses rely exclusively on texts rather than on software development or data analysis. Technical students often consider these courses unimportant and a distraction from the "real" material. To develop instructional materials and methodologies that are thoughtful and engaging, we must strive for balance: between texts and coding, between critique and solution, and between cutting-edge research and practical applicability. Finding such balance is particularly difficult in the nascent field of responsible data science (RDS), where we are only starting to understand how to interface between the intrinsically different methodologies of engineering and social sciences. In this paper we recount a recent experience in developing and teaching an RDS course to graduate and advanced undergraduate students in data science. We then dive into an area that is critically important to RDS -- transparency and interpretability of machine-assisted decision-making, and tie this area to the needs of emerging RDS curricula. Recounting our own experience, and leveraging literature on pedagogical methods in data science and beyond, we propose the notion of an "object-to-interpret-with". We link this notion to "nutritional labels" -- a family of interpretability tools that are gaining popularity in RDS research and practice. With this work we aim to contribute to the nascent area of RDS education, and to inspire others in the community to come together to develop a deeper theoretical understanding of the pedagogical needs of RDS, and contribute concrete educational materials and methodologies that others can use. All course materials are publicly available at https://dataresponsibly.github.io/courses.


Classroom Technology Is Indoctrinating Students Into A Culture Of Surveillance

#artificialintelligence

Are you disturbed by news of China's social credit system, whereby citizens are tracked, their actions graded, and this quantitative data is then used to score their integrity and determine what jobs and other privileges they have access to? Then what if I told you there was a similar system at play in K-12 schools all over the US (and much of the rest of the world) that parents and school administrators apply to children? According to EdSurge, "Class Dojo is an online behavior management system intended to foster positive student behaviors and classroom culture. Students earn 'Dojo Points' based on their classroom conduct." The software's website claims that it is "actively used in 95% of all K-8 schools in the U.S. and 180 countries" with "1 in 6 U.S. families with a child under 14" using ClassDojo every day.


An Algorithmic Equity Toolkit for Technology Audits by Community Advocates and Activists

arXiv.org Artificial Intelligence

A wave of recent scholarship documenting the discriminatory harms of algorithmic systems has spurred widespread interest in algorithmic accountability and regulation. Yet effective accountability and regulation are stymied by a persistent lack of resources supporting public understanding of algorithms and artificial intelligence. Through interactions with a US-based civil rights organization and their coalition of community organizations, we identify a need for (i) heuristics that aid stakeholders in distinguishing between types of analytic and information systems in lay language, and (ii) risk assessment tools for such systems that begin by making algorithms more legible. The present work delivers a toolkit to achieve these aims. This paper both presents the Algorithmic Equity Toolkit (AEKit) as an artifact and details how our participatory process shaped its design. Our work fits within human-computer interaction scholarship as a demonstration of the value of HCI methods and approaches to problems in the area of algorithmic transparency and accountability.


Creation and Evaluation of a Pre-tertiary Artificial Intelligence (AI) Curriculum

arXiv.org Artificial Intelligence

Contributions: The Chinese University of Hong Kong (CUHK)-Jockey Club AI for the Future Project (AI4Future) co-created an AI curriculum for pre-tertiary education and evaluated its efficacy. While AI is conventionally taught at the tertiary level, our co-creation process successfully developed a curriculum that has been used in secondary school teaching in Hong Kong and received positive feedback. Background: AI4Future is a cross-sector project that engages five major partners: the CUHK Faculty of Engineering and Faculty of Education, Hong Kong secondary schools, the government, and the AI industry. A team of 14 professors with expertise in engineering and education collaborated with 17 principals and teachers from 6 secondary schools to co-create the curriculum. This team formation bridges the gap between researchers in engineering and education and practitioners in the education context. Research Questions: What are the main features of the curriculum content developed through the co-creation process? Would the curriculum significantly improve the students' perceived competence in, as well as attitude and motivation towards, AI? What are the teachers' perceptions of the co-creation process that aims to accommodate and foster teacher autonomy? Methodology: This study adopted a mix of quantitative and qualitative methods and involved 335 student participants. Findings: 1) the learning resources exhibited two main features, 2) the students perceived greater competence and developed a more positive attitude toward learning AI, and 3) the co-creation process generated a variety of resources which enhanced the teachers' knowledge of AI and fostered the teachers' autonomy in bringing the subject matter into their classrooms.


Understanding artificial intelligence ethics and safety

arXiv.org Artificial Intelligence

A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, 'Using AI in the Public Sector,' identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.