
Development of Application-Specific Large Language Models to Facilitate Research Ethics Review

Mann, Sebastian Porsdam, Jiehao, Joel Seah, Latham, Stephen R., Savulescu, Julian, Aboy, Mateo, Earp, Brian D.

arXiv.org Artificial Intelligence

Institutional review boards (IRBs) play a crucial role in ensuring the ethical conduct of human subjects research, but face challenges including inconsistency, delays, and inefficiencies. We propose the development and implementation of application-specific large language models (LLMs) to facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on IRB-specific literature and institutional datasets, and equipped with retrieval capabilities to access up-to-date, context-relevant information. We outline potential applications, including pre-review screening, preliminary analysis, consistency checking, and decision support. While addressing concerns about accuracy, context sensitivity, and human oversight, we acknowledge remaining challenges such as over-reliance on AI and the need for transparency. By enhancing the efficiency and quality of ethical review while maintaining human judgment in critical decisions, IRB-specific LLMs offer a promising tool to improve research oversight. We call for pilot studies to evaluate the feasibility and impact of this approach.
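The pre-review screening and retrieval capabilities described above can be sketched concretely. The snippet below is a hypothetical illustration only: the retrieve and generate callables, the checklist items, and the Finding record are assumptions standing in for an institution's own retrieval index and fine-tuned IRB-specific LLM; it does not reflect the authors' actual system.

```python
# Hypothetical pre-review screening helper for an IRB-specific LLM.
# `retrieve` and `generate` are assumed interfaces, not a real library's API:
# `retrieve(query)` would return institution-specific guidance passages from a
# retrieval index, and `generate(prompt)` would call the fine-tuned model.

from dataclasses import dataclass


@dataclass
class Finding:
    item: str       # checklist item, e.g. "informed consent procedure described"
    status: str     # "present", "missing", or "unclear"
    evidence: str   # the model's answer, including any quoted protocol passage


# Illustrative checklist; a real IRB would substitute its own required elements.
CHECKLIST = [
    "informed consent procedure described",
    "risk/benefit assessment provided",
    "data privacy and storage plan specified",
    "vulnerable populations addressed",
]


def pre_review_screen(protocol_text, retrieve, generate):
    """Flag missing or unclear protocol elements before human review."""
    findings = []
    for item in CHECKLIST:
        guidance = "\n".join(retrieve(item))  # up-to-date institutional policy
        prompt = (
            f"Institutional guidance:\n{guidance}\n\n"
            f"Protocol:\n{protocol_text}\n\n"
            f"Does the protocol address: '{item}'? Answer with one word "
            "('present', 'missing', or 'unclear'), then quote the relevant passage."
        )
        answer = generate(prompt)
        first_word = answer.strip().split()[0].strip(".,'\"").lower() if answer.strip() else "unclear"
        findings.append(Finding(item, first_word, answer))
    # Output is advisory only: every finding is routed to a human IRB reviewer.
    return findings
```

Keeping the output advisory, with every flag routed to a human reviewer, mirrors the paper's emphasis on maintaining human judgment in critical decisions.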


Charting Ethical Tensions in Multispecies Technology Research through Beneficiary-Epistemology Space

Benford, Steve, Mancini, Clara, Chamberlain, Alan, Schneiders, Eike, Castle-Green, Simon, Fischer, Joel, Kucukyilmaz, Ayse, Salimbeni, Guido, Ngo, Victor, Barnard, Pepita, Adams, Matt, Tandavanitj, Nick, Farr, Ju Row

arXiv.org Artificial Intelligence

While ethical challenges are widely discussed in HCI, far less is reported about the ethical processes that researchers routinely navigate. We reflect on a multispecies project that negotiated an especially complex ethical approval process. Cat Royale was an artist-led exploration of creating an artwork to engage audiences in exploring trust in autonomous systems. The artwork took the form of a robot that played with three cats. Gaining ethical approval required an extensive dialogue with three Institutional Review Boards (IRBs) covering computer science, veterinary science and animal welfare, raising tensions around the welfare of the cats, perceived benefits and appropriate methods, and reputational risk to the University. To reveal these tensions, we introduce the beneficiary-epistemology space, which makes explicit who benefits from research (humans or animals) and the underlying epistemologies. Positioning projects and IRBs in this space can help clarify tensions and highlight opportunities to recruit additional expertise.


MobileTL: On-device Transfer Learning with Inverted Residual Blocks

Chiang, Hung-Yueh, Frumkin, Natalia, Liang, Feng, Marculescu, Diana

arXiv.org Artificial Intelligence

Transfer learning on edge devices is challenging due to limited on-device resources. Existing work addresses this issue by training a subset of parameters or adding model patches. Developed with inference in mind, Inverted Residual Blocks (IRBs) split a convolutional layer into depthwise and pointwise convolutions, leading to more stacked layers, e.g., convolution, normalization, and activation layers. Though efficient for inference, IRBs require additional activation maps to be stored in memory when training the weights of convolution layers and the scales of normalization layers. As a result, their high memory cost prohibits training IRBs on resource-limited edge devices, making them unsuitable for on-device transfer learning. To address this issue, we present MobileTL, a memory- and computationally efficient on-device transfer learning method for models built with IRBs. MobileTL trains the shifts of the internal normalization layers to avoid storing activation maps for the backward pass. It also approximates the backward computation of the activation layers (e.g., Hard-Swish and ReLU6) as a signed function, which enables storing a binary mask instead of activation maps for the backward pass. MobileTL fine-tunes a few top blocks (those closest to the output) rather than propagating the gradient through the whole network, reducing the computation cost. Our method reduces memory usage by 46% and 53% for MobileNetV2 and V3 IRBs, respectively. For MobileNetV3, we observe a 36% reduction in floating-point operations (FLOPs) when fine-tuning 5 blocks, while incurring only a 0.6% accuracy reduction on CIFAR10. Extensive experiments on multiple datasets demonstrate that our method is Pareto-optimal (best accuracy under given hardware constraints) compared to prior work in transfer learning for edge devices.
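The abstract's binary-mask idea can be illustrated with a small PyTorch sketch. This is a minimal illustration rather than the authors' implementation: it uses ReLU6, for which a mask of active units already yields the exact gradient, whereas MobileTL additionally approximates the Hard-Swish backward with a signed function. Note that PyTorch boolean tensors occupy one byte per element, so a packed bitmask would be needed to realize the full one-bit-per-element saving.

```python
import torch


class ReLU6BinaryMask(torch.autograd.Function):
    """ReLU6 whose backward pass needs only a boolean mask per element,
    rather than the full-precision input activation map."""

    @staticmethod
    def forward(ctx, x):
        mask = (x > 0) & (x < 6)       # where the true ReLU6 gradient is nonzero
        ctx.save_for_backward(mask)    # store the mask instead of the activation
        return torch.clamp(x, 0.0, 6.0)

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask      # pass the gradient only through active units


if __name__ == "__main__":
    x = torch.randn(4, 8, requires_grad=True)
    y = ReLU6BinaryMask.apply(x)
    y.sum().backward()
    # For this piecewise-linear case the mask-based gradient is exact.
    print(torch.allclose(x.grad, ((x > 0) & (x < 6)).float()))
```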


Understanding AI research ethics as a collective problem

#artificialintelligence

To understand why this matters, we need to examine the traditional ethical gateway, the Institutional Review Board (IRB), which is typically synonymous with 'research ethics' in the United States. The primary difference between the ESR and IRBs is that IRBs are expressly disallowed from addressing concerns about harms to society and instead focus only on harms to human participants in research. A large proportion of AI research does not directly engage human subjects, which means many IRBs decline to review AI research, and a significant number of projects never undergo any ethical review.


If Your Company Uses AI, It Needs an Institutional Review Board

#artificialintelligence

Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now -- prompted by the increasing frequency of headlines about biased algorithms, black box models, and privacy violations -- boards, C-suites, and data and AI leaders have realized it's an issue for which they need a strategic approach. A solution is hiding in plain sight. Other industries have already found ways to deal with complex ethical quandaries quickly, effectively, and in a way that can be easily replicated. Instead of trying to reinvent this process, companies need to adopt and customize one of health care's greatest inventions: the Institutional Review Board, or IRB. Most discussions of AI ethics follow the same flawed formula, consisting of three moves, each of which is problematic from the perspective of an organization that wants to mitigate the ethical risks associated with AI. Here's how these conversations tend to go. First, companies move to identify AI ethics with "fairness" in AI, or sometimes more generally, "fairness, equity, and inclusion."


Human computation requires and enables a new approach to ethical review

Vepřek, Libuše Hannah, Seymour, Patricia, Michelucci, Pietro

arXiv.org Artificial Intelligence

With humans increasingly serving as computational elements in distributed information processing systems, and in consideration of the profit-driven motives and potential inequities that might accompany the emerging thinking economy [1], we recognize the need for establishing a set of related ethics to ensure the fair treatment and wellbeing of online cognitive laborers and the conscientious use of the capabilities to which they contribute. Toward this end, we first describe human-in-the-loop computing in the context of the new concerns it raises that are not addressed by traditional ethical research standards. We then describe shortcomings in the traditional approach to ethical review and introduce a dynamic approach for sustaining an ethical framework that can continue to evolve within the rapidly shifting context of disruptive new technologies.