Porayska-Pomsta
A Manifesto for a Pro-Actively Responsible AI in Education
The field of AIED, as defined by the work conducted under the auspices of the International Society of Artificial Intelligence in Education, has been built on big and well-intentioned ambitions to understand, devise and scale up best learning and teaching practices to as many students as possible. This ambition has been bolstered most notably by the Bloom (1984) studies, which are still routinely cited throughout the AIED literature as a key justification and motivation for the field. It has bootstrapped much of the work within the field and has spurred in-depth research examining how specific populations of students learn, what the prerequisites (cognitive, affective, and pedagogic) for successful learning are, and how AIED technologies might be designed to help develop and capitalise on such prerequisites. Personalisation through adaptivity of assessment and feedback (used in this article in the broad sense of pedagogical support) remains at the heart of the work conducted by AIED researchers, regardless of their specific areas of specialisation or their philosophical or epistemological perspectives. This is why, to date, the AIED community has repeatedly voted to retain its long-debated connection with the wider field of AI: a domain like AIED insofar as it shares the central paradigm of adaptive agent technologies, but unlike AIED in that it aims to emulate human capacities only to the extent that doing so is useful to a given application's success in achieving its specific goals.
The Ethics of AI in Education
Kaska Porayska-Pomsta, Wayne Holmes, Selena Nemorin
The advent of big data, and of Artificial Intelligence (AI) applications that collect and consume such data, has led to fundamental questions about the ethics of AI designs and to efforts to highlight and safeguard against potential harms caused by the deployment of AI across diverse domains of application. Typically, the questions raised relate to the trustworthiness of AI as agent technologies that operate autonomously or semi-autonomously in human environments and that have the ability to alter human behaviour. Other questions concern the role that AI may play, now and in the future, in either resolving or amplifying pre-existing social biases and any resulting harms. Specifically, Ethical AI, as an emergent area of AI research and policy, has been spurred by revelations of AI applications (usually unintentionally) promoting and amplifying many of the discriminatory and oppressive practices and assumptions that underpin pre-existing social and institutional systems, e.g., historical biases against non-dominant populations, against users characterised by some divergence from the so-called cognitive or physical 'norm', or against those who are socio-economically disadvantaged (Crawford, 2017a; Madaio et al., 2022; Porayska-Pomsta and Rajendran, 2019; Williamson, Eynon, Knox & Davis, in this volume). Numerous examples of AI bias are well-documented and rehearsed throughout the emergent ethics of AI literature and in the hundreds of policy reports about AI ethics and governance that have been published to date (cf.
From Algorithm Worship to the Art of Human Learning: Insights from a 50-Year Journey of AI in Education
Over the past decade, there have been increasing proclamations from diverse stakeholders that humanity is at an inflection point due to advances in Artificial Intelligence (AI) technologies (e.g., Crawford, 2017). The general public is conditioned by this messaging to expect big (though so far largely non-descript) changes to our lives, including to the way that we learn and teach. Warnings have also been articulated regarding whether and how AI might fundamentally change the way we perceive reality, how we form our beliefs, or how we interact with one another (Bostrom, 2017). More recently, questions have started to emerge about AI's transformative potential (for better or worse) for our functioning at neurocognitive, socio-emotional, individual and collective levels (UNESCO, 2022; Pedro et al., 2019; Porayska-Pomsta, 2023), along with concerns regarding the ethical implications of using AI to support human decision-making in contexts that are both high-stakes (e.g., medical diagnoses or student assessment) and relatively low-stakes (e.g., selecting movies on streaming sites). Such hope-fear rhetoric is also present in the context of AI applications for supporting human learning in formal and informal contexts. Recent hopes for AI in education (AIED) largely relate to delivering learning at scale across different geographical and cultural contexts, especially in light of growing global teacher shortages and diminishing funding for education in many countries (UNESCO, 2023). These hopes are increasingly used to fuel politically and market-motivated discourse about the need to 'release teachers from tedious tasks' such as standardised assessments so that they can focus on the 'things that matter' (Gentile et al., 2023), or to justify narrowing formal education curricula mainly to STEM subjects.