Collaborating Authors

huth


Query Augmentation by Decoding Semantics from Brain Signals

Ye, Ziyi, Zhan, Jingtao, Ai, Qingyao, Liu, Yiqun, de Rijke, Maarten, Lioma, Christina, Ruotsalo, Tuukka

arXiv.org Artificial Intelligence

Query augmentation is a crucial technique for refining semantically imprecise queries. Traditionally, query augmentation relies on extracting information from initially retrieved, potentially relevant documents. If the quality of the initially retrieved documents is low, the effectiveness of query augmentation is limited as well. We propose Brain-Aug, which enhances a query by incorporating semantic information decoded from brain signals. Brain-Aug generates the continuation of the original query using a prompt constructed with brain signal information and a ranking-oriented inference approach. Experimental results on fMRI (functional magnetic resonance imaging) datasets show that Brain-Aug produces semantically more accurate queries, leading to improved document ranking performance. The improvement brought by brain signals is particularly notable for ambiguous queries.
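The pipeline the abstract describes can be illustrated with a toy sketch: a prompt is built from the query plus brain-decoded terms, a continuation is appended to the query, and a ranking score is recomputed. Everything below is a hypothetical stand-in, not the paper's code: `decoded_terms` substitutes for the actual fMRI decoder's output, and `rank_score` is a toy lexical-overlap score substituting for a real ranking model.

```python
# Hypothetical sketch of Brain-Aug-style query augmentation.
# `decoded_terms` stands in for semantics decoded from brain signals.

def build_prompt(query, decoded_terms):
    """Construct a continuation prompt from the query plus brain-decoded terms."""
    context = ", ".join(decoded_terms)
    return f"Context: {context}\nQuery: {query}\nContinuation:"

def augment_query(query, continuation):
    """Append the generated continuation to the original query."""
    return f"{query} {continuation}".strip()

def rank_score(query_terms, doc_terms):
    """Toy lexical-overlap score standing in for a ranking model."""
    return len(set(query_terms) & set(doc_terms)) / max(len(doc_terms), 1)

prompt = build_prompt("jaguar speed", ["animal", "savanna", "predator"])
# In the real system an LLM generates the continuation from `prompt`;
# here we hard-code one for illustration.
augmented = augment_query("jaguar speed", "of the animal in the wild")

doc = "the jaguar is the fastest big cat animal in the wild".split()
base = rank_score("jaguar speed".split(), doc)   # ambiguous query, low overlap
aug = rank_score(augmented.split(), doc)          # augmented query, higher overlap
```

The toy score rises for the augmented query, mirroring the abstract's claim that decoded semantics help most when the original query is ambiguous (here, "jaguar" the animal vs. the car).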


Language Generation from Brain Recordings

Ye, Ziyi, Ai, Qingyao, Liu, Yiqun, Zhang, Min, Lioma, Christina, Ruotsalo, Tuukka

arXiv.org Artificial Intelligence

Generating human language through non-invasive brain-computer interfaces (BCIs) has the potential to unlock many applications, such as serving disabled patients and improving communication. To date, however, language generation via BCIs has succeeded only within a classification setup, selecting pre-generated sentence continuation candidates with the most likely cortical semantic representation. Inspired by recent research that revealed associations between the brain and large computational language models, we propose a generative language BCI that utilizes the capacity of a large language model (LLM) jointly with a semantic brain decoder to directly generate language from functional magnetic resonance imaging (fMRI) input. The proposed model can generate coherent language sequences aligned with the semantic content of perceived visual or auditory language stimuli, without prior knowledge of any pre-generated candidates. We compare the language generated by the presented model with a random control, a pre-generated language selection approach, and a standard LLM, which generates common coherent text based solely on next-word likelihood learned from statistical language training data. The proposed model is found to generate language that is more aligned with the semantic stimuli in response to which the brain input is sampled. Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
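The joint scoring idea can be sketched as follows: each candidate next word is scored by its LM log-probability plus a weighted semantic similarity between the word's embedding and a vector decoded from fMRI. This is a minimal illustration under made-up numbers; the probabilities, two-dimensional embeddings, and the weight `lam` are assumptions, not the paper's model.

```python
import math

# Toy sketch of jointly scoring next-word candidates with a language
# model and a semantic brain decoder. All values here are synthetic.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def next_word(lm_probs, word_vecs, brain_vec, lam=1.0):
    """Pick the candidate maximizing LM log-prob + lam * brain similarity."""
    return max(
        lm_probs,
        key=lambda w: math.log(lm_probs[w]) + lam * cosine(word_vecs[w], brain_vec),
    )

lm_probs = {"dog": 0.5, "car": 0.4, "violin": 0.1}  # LM alone prefers "dog"
word_vecs = {"dog": [1, 0], "car": [0, 1], "violin": [0.9, 0.1]}
brain_vec = [0, 1]  # decoded semantics point toward "car"
```

With `lam=0` the choice reduces to the standard LLM baseline (pure next-word likelihood); with `lam=1` the decoded brain semantics steer generation toward the perceived stimulus, which is the comparison the abstract reports.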


Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey)

Oota, Subba Reddy, Gupta, Manish, Bapi, Raju S., Jobard, Gael, Alexandre, Frederic, Hinaut, Xavier

arXiv.org Artificial Intelligence

How does the brain represent different modes of information? Can we design a system that automatically understands what the user is thinking? Such questions can be answered by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic research in cognitive science and neuroscience. Encoding models aim to automatically generate fMRI brain representations given a stimulus. They have several practical applications in evaluating and diagnosing neurological conditions and thus also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have recently been proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a brief summary and discussion about future trends. Given the large amount of recently published work in the 'computational cognitive neuroscience' community, we believe that this survey nicely organizes the plethora of work and presents it as a coherent story.
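A standard baseline in this encoding literature is a linear map from stimulus features to voxel responses, commonly fit with ridge regression. The sketch below shows that setup on synthetic data; the feature dimensions, voxel counts, and noise level are assumptions for illustration, not values from any dataset in the survey.

```python
import numpy as np

# Minimal ridge-regression encoding model: map stimulus feature vectors
# to fMRI voxel responses. Data here is synthetic.

def fit_encoder(X, Y, alpha=1.0):
    """Closed-form ridge weights: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                      # stimulus embeddings
W_true = rng.standard_normal((5, 3))                   # ground-truth voxel weights
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))   # noisy voxel responses

W = fit_encoder(X, Y, alpha=0.1)
pred = X @ W
# Encoding models are typically evaluated by per-voxel correlation
# between predicted and observed responses on held-out data.
r = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]
```

Decoding models invert this direction, regressing from `Y` back to the stimulus features; the same ridge machinery applies with `X` and `Y` swapped.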


Scientists can now read your MIND: AI turns people's thoughts into text in real-time

Daily Mail - Science & tech

Mind-reading technology can now transcribe people's thoughts in real-time based on the blood flow in their brain. A study put three people in MRI machines and got them to listen to stories. For the first time, researchers claim, they produced a rolling text of people's thoughts, and not just single words or sentences, without using a brain implant. The mind-reading technology did not exactly replicate the stories, but captured the main points. The breakthrough raises concerns about 'mental privacy' as it could be the first step in being able to eavesdrop on others' thoughts.


AI makes non-invasive mind-reading possible by turning thoughts into text

The Guardian

An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person's thoughts to be read non-invasively for the first time. The decoder could reconstruct speech with uncanny accuracy while people listened to a story – or even silently imagined one – using only fMRI scan data. Previous language decoding systems have required surgical implants, and the latest advance raises the prospect of new ways to restore speech in patients struggling to communicate due to a stroke or motor neurone disease. Dr Alexander Huth, a neuroscientist who led the work at the University of Texas at Austin, said: "We were kind of shocked that it works as well as it does. I've been working on this for 15 years … so it was shocking and exciting when it finally did work."


Artificial Intelligence Is Sophisticated, Autonomous, and Maybe Dangerous--Here's Why

#artificialintelligence

Artificial intelligence (AI) has the potential to cause global disaster, but humans can take steps to prevent calamity, experts say. A new survey finds that more than a third of scientists doing AI research believe that the decisions made by AIs could trigger a debacle as bad as or worse than nuclear war. The research highlights growing concern that AI could have unintended dangerous effects, even as it produces many benefits. The way to stop computers from harming people? "Develop principles for safe and trustworthy AI," Michael Huth, the head of the Department of Computing at Imperial College London and a senior researcher at the AI company Xayn, told Lifewire in an email interview. "This is already happening for deep learning applications, such as image classification, where one can verify, in principle, whether small input distortions can make the AI model misclassify a commercial airplane as an attacking fighter jet."


Brain's 'atlas' of words revealed

BBC News

Scientists in the US have mapped out how the brain organises language. Their "semantic atlas" shows how, for example, one region of the brain activates in response to words about clothing and appearance. The researchers found that these maps were quite similar across the small number of individuals in the study, even down to minor details. The work, by a team at the University of California, Berkeley, is published in the journal Nature. It had previously been proposed that information about words' meaning was represented in a group of brain regions known as the semantic system.