Collaborating Authors

 Lindley, Joseph


When Discourse Stalls: Moving Past Five Semantic Stopsigns about Generative AI in Design Research

arXiv.org Artificial Intelligence

It has been roughly three years since the open-source release of Stable Diffusion ignited a Generative AI (GenAI) boom [Bengesi et al., 2023]. The proliferation of these technologies has since reshaped design practice and research. From early ideation to final implementation, these developments have significantly altered how design work is conceived, conducted, and evaluated [Hou et al., 2024]. This essay examines the critical juncture at which the design research community finds itself, seeking to understand and shape these developments while grappling with their implications for creative practice, design education, and professional identities. Popular discourse around GenAI often centers on simplified, unequivocal narratives: AI as a threat to humanity, as a solution to global challenges, as a force of disruption, or as a replacement for humans [Gilardi et al., 2024]. While these narratives have sparked debate and interest, they can function as "semantic stopsigns"--conceptual framings that oversimplify complex issues, providing an illusion of resolution that hinders deeper inquiry [LessWrong Community, n.d., Lifton, 1961]. For instance, claims like "AI is unreliable" can lead to outright dismissal of its potential,


Responding to Generative AI Technologies with Research-through-Design: The Ryelands AI Lab as an Exploratory Study

arXiv.org Artificial Intelligence

Generative AI technologies demand new practical and critical competencies, which call on design to respond to and foster. We present an exploratory study guided by Research-through-Design, in which we partnered with a primary school to develop a constructionist curriculum centered on students interacting with a generative AI technology. We provide a detailed account of the design of and outputs from the curriculum and learning materials, finding centrally that the reflexive and prolonged 'hands-on' approach led to a co-development of students' practical and critical competencies. From the study, we contribute guidance for designing constructionist approaches to generative AI technology education, further arguing to do so with 'critical responsivity.' We then discuss how HCI researchers may leverage constructionist strategies in designing interactions with generative AI technologies, and suggest that Research-through-Design can play an important role as a 'rapid response methodology' capable of reacting to fast-evolving, disruptive technologies such as generative AI.


On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI

arXiv.org Artificial Intelligence

Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed to give developers a framework for releasing AI assets while requiring their users to mitigate negative applications. As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses. Notable models licensed with behavioral-use clauses include BLOOM and LLaMA 2 (language), Stable Diffusion (image), and GRID (robotics). This paper explores why and how these licenses have been adopted, and why and how they have been adapted to fit particular use cases. We use a mixed-methods approach of qualitative interviews, clustering of license clauses, and quantitative analysis of license adoption. Based on this evidence we take the position that responsible AI licenses need standardization to avoid confusing users or diluting their impact. At the same time, customization of behavioral restrictions is also appropriate in some contexts (e.g., medical domains). We advocate for "standardized customization" that can meet users' needs and can be supported via tooling.


The Entoptic Field Camera as Metaphor-Driven Research-through-Design with AI Technologies

arXiv.org Artificial Intelligence

Artificial intelligence (AI) technologies are widely deployed in smartphone photography, and prompt-based image synthesis models have rapidly become commonplace. In this paper, we describe a Research-through-Design (RtD) project which explores this shift in the means and modes of image production via the creation and use of the Entoptic Field Camera. Entoptic phenomena usually refer to perceptions of floaters or bright blue dots stemming from the physiological interplay of the eye and brain. We use the term entoptic as a metaphor to investigate how the material interplay of data and models in AI technologies shapes human experiences of reality. Through our case study using first-person design and a field study, we offer implications for critical, reflective, more-than-human, and ludic design to engage AI technologies; conceptualise an RtD research space which contributes to AI literacy discourses; and outline a research trajectory concerning the materiality and design affordances of AI technologies.