Wearable BMI With VR Aims To Help People With Motor Dysfunction, Paralysis

International Business Times

A team of scientists and researchers has developed a new wearable brain-machine interface (BMI) system that aims to improve the quality of life of people with paralysis, motor dysfunction or even those who are fully conscious but can't communicate or move. The international, multi-institutional team, led by Woon-Hong Yeo at the Georgia Institute of Technology, created a device that combines wireless soft scalp electronics and virtual reality in a brain-machine interface system. The device enables users to imagine an action and control a robotic arm or wheelchair wirelessly. The team described the new motor imagery-based brain-machine interface system in a paper published in the journal Advanced Science on July 17. "The major advantage of this system to the user, compared to what currently exists, is that it is comfortable to wear, and doesn't have any wires," Yeo, an associate professor at the George W. Woodruff School of Mechanical Engineering, said, as reported by Science Daily. The team designed a portable EEG system that pairs imperceptible microneedle electrodes with soft wireless circuits to enhance signal acquisition.
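The article stops at the hardware description; as a rough, purely illustrative sketch of the motor-imagery control loop such a system implies (decode an imagined action from a short EEG window, then issue the matching wireless command), the Python snippet below uses a dummy decoder. Every name and parameter in it is hypothetical rather than drawn from the paper.

```python
import numpy as np

COMMANDS = ["left", "right", "forward", "stop"]   # imagined-action classes

def decode_intent(eeg_window, model):
    """Map one EEG window (channels x samples) to a command index."""
    # Toy feature vector: per-channel signal variance over the window.
    features = eeg_window.var(axis=1).reshape(1, -1)
    return int(model.predict(features)[0])

class DummyDecoder:
    """Stand-in for a trained motor-imagery classifier (hypothetical)."""
    def predict(self, features):
        return [int(features.sum()) % len(COMMANDS)]

# One pass of the control loop: decode a 1 s window from the wireless EEG
# cap, then send the matching command to the wheelchair or robotic arm.
model = DummyDecoder()
window = np.random.default_rng(0).standard_normal((32, 250))  # 32 ch x 250 samples
command = COMMANDS[decode_intent(window, model)]
print("sending wireless command:", command)   # e.g., over a Bluetooth LE link
```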


Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

arXiv.org Artificial Intelligence

Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences by using MAR devices to provide universal access to digital content. Over the past 20 years, a number of MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: 1) MAR applications; 2) MAR visualisation techniques adaptive to user mobility and contexts; 3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and 4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.


One year on: How AI can supercharge the healthcare of the future

#artificialintelligence

As we approach one year since the first national lockdown in the UK, it is clear that Covid-19 is still putting enormous pressure on our healthcare system. Indeed, the NHS reported in January that a record 4.46 million people were on the waiting list for routine treatments and operations, and a recent study by the British Medical Association found that almost 60% of doctors are suffering from some form of anxiety or depression. The path to recovering from this healthcare fallout will not be easy; however, when thinking about how we could alleviate this pressure in the future, emerging artificial intelligence (AI) technologies may be the answer. The World Health Organisation (WHO) predicts a shortfall of around 9.9 million healthcare professionals worldwide by 2030, even as the economy is expected to create 40 million new health-sector jobs over the same period. With larger, aging populations and increasingly complex healthcare demands, the strain on health workers will continue for the foreseeable future – so how can AI alleviate it?


Artificial Intelligence and Virtual Reality Can Accelerate Covid Vaccine Development

#artificialintelligence

In this global health emergency, the medical sector is searching for new technologies to screen for and control the spread of the COVID-19 (coronavirus) pandemic. Artificial intelligence is one such innovation: it can track the spread of the virus, identify high-risk patients, and help control the disease in real time. It can also predict mortality risk by analyzing patients' historical data. Artificial intelligence can assist in fighting the virus through medical assistance, population screening, recommendations on infection control, and medical alerts. As an evidence-based medical tool, this technology has the potential to improve the planning, treatment, and reported outcomes of COVID-19 patients.


'Swapping bodies' changes a person's personality, study reveals

The Independent - Tech

Swapping bodies with another person would have a profound effect on the subject's behaviour and even their personality, a new study has revealed. Scientists at the Karolinska Institutet in Sweden devised a way for people to experience the effect of swapping bodies through a perceptual illusion, in order to understand the relationship between a person's psychological and physical sense of self. They found that when pairs of friends "switched bodies", each friend's personality became more like the other's. "Body swapping is not a domain reserved for science fiction anymore," said Pawel Tacikowski, a postdoctoral researcher at the institute and lead author of the study. To create the illusion that the study's subjects had switched bodies, Dr Tacikowski and his team fitted them with virtual reality goggles showing live feeds of the other person's body from a first-person perspective.


Emotion-robust EEG Classification for Motor Imagery

arXiv.org Machine Learning

Developments in Brain Computer Interfaces (BCIs) are empowering those with severe physical afflictions through their use in assistive systems. A common method of achieving this is Motor Imagery (MI), which maps brain signals to specific commands. Electroencephalography (EEG) is preferred for recording brain signal data because it is non-invasive. Despite their potential utility, MI-BCI systems are still confined to research labs, largely owing to their lack of robustness. As hypothesized by two teams during Cybathlon 2016, one particular source of vulnerability is a sharp change in the subject's state of emotional arousal. This work aims to make MI-BCI systems resilient to such emotional perturbations. To do so, subjects are exposed to high and low arousal-inducing virtual reality (VR) environments before EEG data are recorded. The advent of COVID-19 compelled us to modify our methodology: instead of training machine learning algorithms to classify emotional arousal, we classify subjects who serve as proxies for each state. Additionally, MI models are trained for each subject instead of each arousal state. As training subjects to use an MI-BCI can be an arduous and time-consuming process, reducing this variability and increasing robustness could considerably accelerate the acceptance and adoption of assistive technologies powered by BCIs.
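The abstract does not name the classifiers used; a standard baseline for the per-subject motor-imagery models it describes is common spatial patterns (CSP) followed by linear discriminant analysis. The sketch below runs that baseline on simulated epochs with MNE and scikit-learn; the data shapes and labels are placeholders, not the study's data.

```python
import numpy as np
from mne.decoding import CSP                      # spatial filtering for MI-EEG
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical per-subject data: EEG epochs shaped (n_trials, n_channels,
# n_samples) with binary motor-imagery labels (e.g., left vs. right hand).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 32, 512))
y = rng.integers(0, 2, size=120)

# One model per subject, as the abstract describes: CSP features -> LDA.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),       # log-variance of spatial components
    ("lda", LinearDiscriminantAnalysis()),
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")
```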


Prediction of Human Empathy based on EEG Cortical Asymmetry

arXiv.org Artificial Intelligence

Humans constantly interact with digital devices that disregard their feelings. However, the synergy between humans and technology can be strengthened if the technology is able to distinguish and react to human emotions. Models that rely on unconscious indications of human emotions, such as (neuro)physiological signals, hold promise for personalizing feedback and adapting the interaction. The current study adopted a predictive approach to studying human emotional processing based on brain activity. More specifically, we investigated whether self-reported human empathy can be predicted from EEG cortical asymmetry in different areas of the brain. Several types of predictive models, i.e. multiple linear regression analyses as well as binary and multiclass classification, were evaluated. Results showed that the lateralization of brain oscillations at specific frequency bands is an important predictor of self-reported empathy scores. Additionally, prominent classification performance was found during the resting state, which suggests that emotional stimulation is not required for accurate prediction of empathy -- as a personality trait -- based on EEG data. Our findings not only contribute to the general understanding of the mechanisms of empathy, but also give a better grasp of the advantages of applying a predictive approach, compared to hypothesis-driven studies, in neuropsychological research. More importantly, our results could be employed in the development of brain-computer interfaces that assist people who have difficulty expressing or recognizing emotions.
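Cortical asymmetry in this literature is typically operationalized as the difference in log band power between homologous right- and left-hemisphere electrodes (for example F4 versus F3 in the alpha band). The abstract does not give the exact formulation, so the sketch below is a generic illustration on simulated resting-state data; the channel pair, band limits and regression setup are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

FS = 256                                   # sampling rate in Hz (assumed)
ALPHA = (8.0, 13.0)                        # alpha band in Hz

def band_log_power(signal, fs, band):
    """Log of mean PSD within a frequency band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[mask].mean())

# Simulated resting-state EEG for a homologous electrode pair (left F3 /
# right F4) plus self-reported empathy scores -- all hypothetical.
rng = np.random.default_rng(1)
n_subjects = 40
left = rng.standard_normal((n_subjects, FS * 60))    # 60 s per subject
right = rng.standard_normal((n_subjects, FS * 60))
empathy = rng.uniform(20, 80, size=n_subjects)

# Asymmetry index per subject: log power(right) - log power(left).
asym = np.array([
    band_log_power(r, FS, ALPHA) - band_log_power(l, FS, ALPHA)
    for l, r in zip(left, right)
]).reshape(-1, 1)

# The paper used multiple linear regression; a single-predictor version here.
r2 = cross_val_score(LinearRegression(), asym, empathy, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2.mean():.2f}")
```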


DeFINE: Delayed Feedback based Immersive Navigation Environment for Studying Goal-Directed Human Navigation

arXiv.org Artificial Intelligence

With the advent of consumer-grade products for presenting an immersive virtual environment (VE), there is a growing interest in utilizing VEs for testing human navigation behavior. However, preparing a VE still requires a high level of technical expertise in computer graphics and virtual reality, posing a significant hurdle to embracing the emerging technology. To address this issue, this paper presents Delayed Feedback based Immersive Navigation Environment (DeFINE), a framework that allows for easy creation and administration of navigation tasks within customizable VEs via intuitive graphical user interfaces and simple settings files. Importantly, DeFINE has a built-in capability to provide performance feedback to participants during an experiment, a feature that is critically missing in other similar frameworks. To demonstrate the usability of DeFINE from both experimentalists' and participants' perspectives, a case study was conducted in which participants navigated to a hidden goal location with feedback that differentially weighted speed and accuracy of their responses. In addition, the participants evaluated DeFINE in terms of its ease of use, required workload, and proneness to induce cybersickness. Results showed that the participants' navigation performance was affected differently by the types of feedback they received, and they rated DeFINE highly in the evaluations, validating DeFINE's architecture for investigating human navigation in VEs. With its rich out-of-the-box functionality and great customizability due to open-source licensing, DeFINE makes VEs significantly more accessible to many researchers.
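DeFINE's actual settings files are not reproduced here; as a minimal sketch of the differentially weighted speed/accuracy feedback the case study describes, the function below combines trial time and goal error into a single score, with a weight deciding which component dominates. The function name, constants and scaling are all hypothetical.

```python
def trial_score(time_s, error_m, w_speed=0.5,
                max_time_s=60.0, max_error_m=5.0):
    """Combine speed and accuracy into one feedback score in [0, 100].

    w_speed controls the trade-off: 1.0 rewards only fast trials,
    0.0 rewards only accurate ones. All constants are illustrative.
    """
    speed_term = max(0.0, 1.0 - time_s / max_time_s)
    accuracy_term = max(0.0, 1.0 - error_m / max_error_m)
    return 100.0 * (w_speed * speed_term + (1.0 - w_speed) * accuracy_term)

# The same trial scored under speed-weighted vs. accuracy-weighted feedback.
print(trial_score(20.0, 1.0, w_speed=0.8))   # favours fast responses
print(trial_score(20.0, 1.0, w_speed=0.2))   # favours accurate responses
```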


Virtual Reality to Study the Gap Between Offline and Real-Time EMG-based Gesture Recognition

arXiv.org Machine Learning

Within sEMG-based gesture recognition, a chasm exists in the literature between the offline accuracy and the real-time usability of a classifier. This gap stems mainly from the four main dynamic factors in sEMG-based gesture recognition: gesture intensity, limb position, electrode shift and transient changes in the signal. These factors are hard to include within an offline dataset, as each of them exponentially increases the number of segments to be recorded. On the other hand, online datasets are biased towards the sEMG-based algorithms that provide feedback to the participants, limiting the usability of such datasets as benchmarks. This paper proposes a virtual reality (VR) environment and a real-time experimental protocol with which the four main dynamic factors can more easily be studied. During the online experiment, gesture recognition feedback is provided through a Leap Motion camera, enabling the proposed dataset to be re-used to compare future sEMG-based algorithms. Twenty able-bodied persons took part in this study, completing three to four sessions over a period spanning 14 to 21 days. Finally, TADANN, a new transfer learning-based algorithm, is proposed for long-term gesture classification and significantly (p<0.05) outperforms fine-tuning a network.
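The abstract pits TADANN against plain fine-tuning without detailing either; the sketch below shows the fine-tuning baseline in PyTorch, assuming a small 1-D CNN pre-trained on earlier sessions whose feature extractor is frozen while the classification head adapts to a new session. The architecture, shapes and file name are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sEMG gesture network: a small 1-D CNN feature extractor
# followed by a linear head over the gesture classes.
class EMGNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x))

model = EMGNet()
# model.load_state_dict(torch.load("pretrained.pt"))  # weights from earlier sessions

# Fine-tuning baseline: freeze the feature extractor, adapt only the head
# to a new recording session (electrode shift, transient signal changes, ...).
for p in model.features.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 8, 200)                  # batch of 200-sample sEMG windows
y = torch.randint(0, 11, (64,))              # gesture labels for the new session
for _ in range(10):                          # a few adaptation steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```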


A.I. and virtual reality can determine neurosurgeon expertise with 90 per cent accuracy

#artificialintelligence

Machine learning-guided virtual reality simulators can help neurosurgeons develop the skills they need before they step into the operating room, according to a recent study. Research from the Neurosurgical Simulation and Artificial Intelligence Learning Centre at the Montreal Neurological Institute and Hospital (The Neuro) and McGill University shows that machine learning algorithms can accurately assess the capabilities of neurosurgeons during virtual surgery, demonstrating that virtual reality simulators using artificial intelligence can be powerful tools in surgeon training. Fifty participants were recruited from four stages of neurosurgical training: neurosurgeons, fellows and senior residents, junior residents, and medical students. They performed 250 complex tumour resections using NeuroVR, a virtual reality surgical simulator developed by the National Research Council of Canada and distributed by CAE, which recorded all instrument movements at 20-millisecond intervals. From this raw data, a machine learning algorithm derived performance measures such as instrument position and force applied, as well as outcomes such as the amount of tumour removed and blood loss, which could predict the level of expertise of each participant with 90 per cent accuracy.
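The study's exact performance measures are only summarised here; as a hedged illustration of the overall recipe (turn raw 20 ms instrument samples into summary features, then classify training stage), the sketch below uses simulated trajectories and a random forest. Every feature and constant is an illustrative stand-in, not the study's metric set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 50  # samples recorded at 20 ms intervals -> 50 per second

def trial_features(pos, force):
    """Summary metrics from one simulated resection.

    pos:   (n_samples, 3) instrument tip position
    force: (n_samples,)   applied force
    """
    step = np.diff(pos, axis=0)
    path_len = np.linalg.norm(step, axis=1).sum()     # total tip travel
    speed = np.linalg.norm(step, axis=1) * FS
    return np.array([path_len, speed.mean(), speed.std(),
                     force.mean(), force.max()])

# Hypothetical dataset: 250 trials, labels 0-3 for the four training stages
# (medical student, junior resident, senior resident/fellow, neurosurgeon).
rng = np.random.default_rng(2)
X = np.stack([
    trial_features(rng.standard_normal((3000, 3)).cumsum(0),
                   np.abs(rng.standard_normal(3000)))
    for _ in range(250)
])
y = rng.integers(0, 4, size=250)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())        # expertise-level accuracy
```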