
Liaozhai through the Looking-Glass: On Paratextual Explicitation of Culture-Bound Terms in Machine Translation

Shen, Sherrie, Wang, Weixuan, Birch, Alexandra

arXiv.org Artificial Intelligence

The faithful transfer of contextually-embedded meaning continues to challenge contemporary machine translation (MT), particularly in the rendering of culture-bound terms--expressions or concepts rooted in specific languages or cultures, resisting direct linguistic transfer. Existing computational approaches to explicitating these terms have focused exclusively on in-text solutions, overlooking the paratextual apparatus of footnotes and endnotes employed by professional translators. In this paper, we formalize Genette's (1987) theory of paratexts from literary and translation studies to introduce the task of paratextual explicitation for MT. We construct a dataset of 560 expert-aligned paratexts from four English translations of the classical Chinese short story collection Liaozhai and evaluate LLMs with and without reasoning traces on choice and content of explicitation. Experiments across intrinsic prompting and agentic retrieval methods establish the difficulty of this task, with human evaluation showing that LLM-generated paratexts improve audience comprehension, though they remain considerably less effective than translator-authored ones. Beyond model performance, statistical analysis reveals that even professional translators vary widely in their use of paratexts, suggesting that cultural mediation is inherently open-ended rather than prescriptive. Our findings demonstrate the potential of paratextual explicitation in advancing MT beyond linguistic equivalence, with promising extensions to monolingual explanation and personalized adaptation.


Gaze-Aware AI: Mathematical modeling of epistemic experience of the Marginalized for Human-Computer Interaction & AI Systems

Hatti, Omkar Suresh

arXiv.org Artificial Intelligence

The proliferation of artificial intelligence provides an opportunity to create psychological spaciousness in society. Spaciousness is defined as the ability to hold diverse interpersonal interactions; it forms the basis for vulnerability, which leads to authenticity, prosocial behaviors, and thus societal harmony. This paper attempts to quantify the human conditioning to subconsciously modify authentic self-expression to fit the norms of the dominant culture. Gaze is explored across various marginalized and intersectional groups, using concepts from postmodern philosophy and psychology. The effects of gaze are studied by analyzing a few redacted Reddit posts, presented for discussion rather than endorsement. A mathematical formulation for the Gaze Pressure Index (GPI)-Diff composite metric is presented to model the analysis of two sets of conversational spaces in relation to one another. The outcome includes an equation for training Large Language Models (LLMs), the working mechanism of AI products such as ChatGPT, and an argument, based on the equation, for affirming and inclusive HCI. The argument is supported by a few principles of neuroplasticity, the brain's lifelong capacity to rewire.


Saved from the shredder, Alan Turing's papers sell for $627,000

Popular Science

Breakthroughs, discoveries, and DIY tips sent every weekday. A trove of forgotten papers penned by famed World War II codebreaker Alan Turing has sold for the record-setting price of $627,000. But the June 17 auction almost never happened. At one point, the long-lost archival materials from the father of modern computer science were nearly pulverized by a paper shredder. Alan Turing was many things during his brief and ultimately tragic life: renowned mathematician, computer theorist, marathon runner, philosopher, and an invaluable codebreaker.


Sociotechnical Effects of Machine Translation

Moorkens, Joss, Way, Andy, Lankford, Séamus

arXiv.org Artificial Intelligence

While the previous chapters have shown how machine translation (MT) can be useful, in this chapter we discuss some of the associated side-effects and risks, and how they might be mitigated. With the move to neural MT and approaches using Large Language Models (LLMs), there is an associated impact on climate change, as the models built by multinational corporations are massive. They are hugely expensive to train, consume large amounts of electricity, and emit huge volumes of CO2 to boot. However, smaller models which still perform to a high level of quality can be built with much lower carbon footprints, and tuning pre-trained models avoids the need to train from scratch. We also discuss the possible detrimental effects of MT on translators and other users. The topics of copyright and ownership of data are discussed, as well as ethical considerations on data and MT use. Finally, we show how, if done properly, using MT in crisis scenarios can save lives, and we provide a method for how this might be done.


The Original Turing Test Was a Drag Show

Slate

ChatGPT can now easily pass any Turing test, a measure of successful A.I. proposed by a founder of computer science, Alan Turing. But contemporary Turing tests leave out the most interesting part of Turing's original test: the gender-bending. I can usually spot A.I. writing in my students' work by the overuse of words like "delve," but the accuracy of artificial intelligence is impossible to deny. A.I. is being integrated into every aspect of our written culture, from news sources to classrooms to medicine. But in 1950, Turing's ideas about A.I. were prescient, creative, and, when I read them, surprisingly queer.


If the Sources Could Talk: Evaluating Large Language Models for Research Assistance in History

Garcia, Giselle Gonzalez, Weilbach, Christian

arXiv.org Artificial Intelligence

The recent advent of powerful Large Language Models (LLMs) provides a new conversational form of inquiry into historical memory (or, in this case, training data). We show that by augmenting such LLMs with vector embeddings from highly specialized academic sources, a conversational methodology can be made accessible to historians and other researchers in the Humanities. Concretely, we evaluate and demonstrate how LLMs can assist researchers while they examine a customized corpus of different types of documents, including but not limited to: (1) primary sources, (2) secondary sources written by experts, and (3) the combination of the two. Compared to established search interfaces for digital catalogues, such as metadata and full-text search, we evaluate the richer conversational style of LLMs on two main types of tasks: (1) question-answering, and (2) extraction and organization of data. We demonstrate that LLMs' semantic retrieval and reasoning abilities on problem-specific tasks can be applied to large textual archives that have not been part of their training data. Therefore, LLMs can be augmented with sources relevant to specific research projects, and can be queried privately by researchers.
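The embedding-augmented workflow described in this abstract can be sketched roughly as follows. This is a minimal, self-contained illustration, not the authors' actual pipeline: the `embed` function here is a toy bag-of-words stand-in for a learned embedding model, and the mini-corpus is hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a learned embedding model: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Hypothetical mini-corpus mixing primary and secondary sources.
corpus = [
    "Letter from the colonial archive describing the 1763 treaty negotiations.",
    "Secondary analysis of treaty negotiations by a modern historian.",
    "Unrelated shipping manifest listing cargo and ports of call.",
]

passages = retrieve("What do the sources say about treaty negotiations?", corpus)
# The retrieved passages would then be prepended to the LLM prompt as context,
# so the model answers from the researcher's corpus rather than its training data.
prompt = "Answer using only these sources:\n" + "\n".join(passages)
```

A production system would replace `embed` with a neural embedding model and a vector index, but the retrieve-then-prompt structure is the same.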


Enactive Artificial Intelligence: Subverting Gender Norms in Robot-Human Interaction

Hipolito, Ines, Winkle, Katie, Lie, Merete

arXiv.org Artificial Intelligence

This paper introduces Enactive Artificial Intelligence (eAI) as an intersectional, gender-inclusive stance towards AI. AI design is an enacted human sociocultural practice that reflects human culture and values, and unrepresentative AI design could lead to social marginalisation. Section 1, drawing from radical enactivism, outlines embodied cultural practices. Section 2 explores how intersectional gender intertwines with technoscience as a sociocultural practice. Section 3 focuses on subverting gender norms in the specific case of Robot-Human Interaction in AI. Finally, Section 4 identifies four vectors of ethics: explainability, fairness, transparency, and auditability for adopting an intersectionality-inclusive stance in developing gender-inclusive AI and subverting existing gender norms in robot design.


UM scholar publishes book on regulating artificial intelligence

#artificialintelligence

MACAU, August 24 - Rostam J Neuwirth, head of the Department of Global Legal Studies of the University of Macau (UM) Faculty of Law, has published a new book titled 'The EU Artificial Intelligence Act: Regulating Subliminal AI Systems'. Through exploring legal, ethical, and scientific issues related to artificial intelligence (AI), the book aims to show how cognitive, technological, and legal questions are intrinsically interwoven and to stimulate a transdisciplinary and transnational global debate between students, academics, practitioners, policymakers, and citizens. The book has been published by the British publisher Routledge. It contextualises the future regulation of AI as proposed by the European Union, specifically addressing the regulatory challenges relating to the planned prohibition of the use of AI systems that deploy subliminal techniques to manipulate the human mind and alter human behaviour. Subliminal perception usually refers to perception received below the threshold of awareness, such as images flashed quickly before the eyes or background music embedded with hidden messages, and these external stimuli can affect people without their being aware of it. In this respect, Prof Neuwirth points out that the convergence of AI with various related technologies, such as brain–computer interfaces, functional magnetic resonance imaging, robotics, and big data, already allows for 'mind reading' or 'dream hacking' through brain spyware, as well as other practices that intrude on cognition and the right to freedom of thought.


Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management

Ashwin, Agnew, William, Pajaro, Juan, Subramonian, Arjun

arXiv.org Artificial Intelligence

AI, machine learning, and data science methods are already pervasive in our society and technology, affecting all of our lives in many subtle ways. Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a direct result of this exclusion, these technologies have long histories of being less useful or even harmful to minoritized groups. This infuriating history illustrates that industry cannot be trusted to self-regulate and why trust in commercial AI systems and development has been lost. We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist, non-exploitative participatory design principles and strong, outside, and continual monitoring and testing. We additionally explain the importance of considering aspects of trustworthiness beyond just transparency, fairness, and accountability; specifically, to consider justice and shifting power to the disempowered as core values of any trustworthy AI system. Creating trustworthy AI starts by funding, supporting, and empowering groups like Queer in AI so the field of AI has the diversity and inclusion to credibly and effectively develop trustworthy AI. Through our years of work and advocacy, we have developed expert knowledge around whether and how gender, sexuality, and other aspects of identity should be used in AI systems and how harms along these lines should be mitigated. Based on this, we discuss a gendered approach to AI, and further propose a queer epistemology and analyze the benefits it can bring to AI.


British firm starts trials of psychedelic drug to treat depression

Daily Mail - Science & tech

People suffering from depression could soon have a new treatment, in the form of a drug based on a common psychedelic substance found in plants, developers claim. The first patient dosing of the drug, based on the compound DMT (N,N-Dimethyltryptamine), is being given to 'healthy-brained' first-time drug users in a clinical trial that will examine the impact the substance has on the brain. If successful, the second stage of the trial will see the team experiment with different dosing levels and strategies and eventually treat people with depression. It works by sending the patient on a hallucinogenic trip that acts to 'break down' blockages in the mind, which can then be restored with a course of therapy. British biotech firm Small Pharma are running the trial, and CEO Peter Rands told MailOnline it had the potential to help people not supported by current drugs.