
Collaborating Authors

Yang, Hyung-Jeong


Conditional Diffusion Model for Longitudinal Medical Image Generation

arXiv.org Artificial Intelligence

Alzheimer's disease progresses slowly and involves complex interactions among various biological factors. Longitudinal medical imaging data can capture this progression over time. However, longitudinal data frequently suffer from issues such as missing scans due to patient dropout, irregular follow-up intervals, and varying lengths of observation periods. To address these issues, we designed a diffusion-based model that generates 3D longitudinal medical images from a single magnetic resonance imaging (MRI) scan. The model is conditioned on a source MRI and a time-visit encoding, enabling control over the change between source and target images. Experimental results indicate that the proposed method generates higher-quality images than competing methods.
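To make the conditioning mechanism concrete, the sketch below shows one common way such conditioning is implemented: the source MRI is concatenated channel-wise with the noisy target volume, while the diffusion step and the visit-time gap are embedded and passed to the denoiser. This is a hypothetical PyTorch sketch under assumptions; ConditionalDenoiser, TinyBackbone, visit_gap, and all layer sizes are illustrative and not the authors' released code.

import math
import torch
import torch.nn as nn

class SinusoidalEmbedding(nn.Module):
    """Standard sinusoidal embedding, reused here for both the diffusion step and the visit-time gap."""
    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        half = self.dim // 2
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / (half - 1))
        args = t.float().unsqueeze(1) * freqs.unsqueeze(0)
        return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

class TinyBackbone(nn.Module):
    """Placeholder standing in for a 3D U-Net denoiser; it only demonstrates the expected interface."""
    def __init__(self, emb_dim: int):
        super().__init__()
        self.conv_in = nn.Conv3d(2, 16, kernel_size=3, padding=1)   # 2 channels: noisy target + source MRI
        self.emb_to_bias = nn.Linear(emb_dim, 16)
        self.conv_out = nn.Conv3d(16, 1, kernel_size=3, padding=1)  # predicted noise for the target scan

    def forward(self, x, emb):
        h = torch.relu(self.conv_in(x))
        h = h + self.emb_to_bias(emb)[:, :, None, None, None]       # inject the combined embedding
        return self.conv_out(h)

class ConditionalDenoiser(nn.Module):
    """Predicts noise for the target scan given the noisy target, the source MRI,
    the diffusion step, and the time gap between the two visits."""
    def __init__(self, backbone: nn.Module, emb_dim: int = 128):
        super().__init__()
        self.backbone = backbone
        self.step_emb = SinusoidalEmbedding(emb_dim)
        self.visit_emb = SinusoidalEmbedding(emb_dim)
        self.emb_proj = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, noisy_target, source_mri, diffusion_step, visit_gap):
        x = torch.cat([noisy_target, source_mri], dim=1)             # channel-wise conditioning, (B, 2, D, H, W)
        emb = torch.cat([self.step_emb(diffusion_step),
                         self.visit_emb(visit_gap)], dim=-1)
        emb = self.emb_proj(emb)                                     # (B, emb_dim)
        return self.backbone(x, emb)

model = ConditionalDenoiser(TinyBackbone(emb_dim=128), emb_dim=128)
noise_pred = model(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32),
                   diffusion_step=torch.tensor([10]), visit_gap=torch.tensor([6]))

In this reading, the visit-time embedding plays the same role as the diffusion-step embedding, which is one straightforward way to let a single model cover irregular follow-up intervals.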


DCTM: Dilated Convolutional Transformer Model for Multimodal Engagement Estimation in Conversation

arXiv.org Artificial Intelligence

Conversational engagement estimation is posed as a regression problem, entailing the identification of the favorable attention and involvement of the participants in the conversation. This task is a crucial pursuit for gaining insight into humans' interaction dynamics and behavior patterns within a conversation. In this research, we introduce a dilated convolutional Transformer for modeling and estimating human engagement in the MULTIMEDIATE 2023 competition. Our proposed system surpasses the baseline models, exhibiting a noteworthy 7% improvement on the test set and 4% on the validation set. Moreover, we evaluate different modality fusion mechanisms and show that, for this type of data, a simple concatenation method with self-attention fusion gains the best performance.

Engagement is the process by which two (or more) participants establish, maintain, and end their perceived connection to each other during an interaction [30]. Participant engagement stands as a key factor within the multifaceted dynamics of conversation, wielding significant influence over the quality and effectiveness of the interaction. However, while it is natural for humans to discern the attentiveness of conversation counterparts, it remains a difficult task for a machine to apprehend [23]. Therefore, automatically estimating engagement degrees has become a primary challenge for both affective computing and group behavior analysis. The significance of this task has been increasingly recognized by researchers.
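The fusion strategy described above lends itself to a short illustration: per-modality feature sequences are concatenated and then refined by dilated temporal convolutions and a self-attention encoder before a regression head. The PyTorch sketch below is a hypothetical reconstruction under assumptions; the class name DilatedConvTransformerRegressor, the layer sizes, and the audio/video feature dimensions are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

class DilatedConvTransformerRegressor(nn.Module):
    """Concatenation fusion of two modality streams, followed by dilated temporal
    convolutions, a self-attention encoder, and a per-frame regression head."""
    def __init__(self, in_dim: int, hidden: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # 1D convolutions with growing dilation widen the temporal receptive field.
        self.dilated_convs = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, dilation=1, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=4, padding=4),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(hidden, 1)  # one engagement score per time step

    def forward(self, audio_feat, video_feat):
        # Simple concatenation fusion along the feature dimension: (B, T, audio_dim + video_dim).
        x = torch.cat([audio_feat, video_feat], dim=-1)
        x = self.dilated_convs(x.transpose(1, 2)).transpose(1, 2)  # (B, T, hidden)
        x = self.encoder(x)                                        # self-attention over time
        return self.head(x).squeeze(-1)                            # (B, T)

model = DilatedConvTransformerRegressor(in_dim=64 + 128)
scores = model(torch.randn(2, 100, 64), torch.randn(2, 100, 128))  # (2, 100) engagement scores

The appeal of concatenation-plus-self-attention fusion in this setting is that the attention layers can learn cross-modal weighting without a dedicated cross-attention module.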


Mental Workload Estimation with Electroencephalogram Signals by Combining Multi-Space Deep Models

arXiv.org Artificial Intelligence

The human brain is in a continuous state of activity during both work and rest. Mental activity is a daily process, and when the brain is overworked, it can have negative effects on human health. In recent years, great attention has been paid to the early detection of mental health problems because it can help prevent serious health issues and improve quality of life. Several signals are used to assess mental state, but the electroencephalogram (EEG) is widely used by researchers because of the large amount of information it provides about the brain. This paper aims to classify mental workload into three states and to estimate its level on a continuous scale. Our method combines models from multiple representation spaces (time and frequency domains) to achieve the best results for mental workload estimation. In the time-domain approach, we use Temporal Convolutional Networks, and in the frequency domain, we propose a new architecture called the Multi-Dimensional Residual Block, which combines residual blocks.
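As a rough illustration of combining time- and frequency-domain branches, the sketch below pairs a dilated-convolution branch over raw EEG (standing in for a Temporal Convolutional Network) with a residual-block branch over band-power features, then fuses them for three-class workload classification and continuous-level regression. The names (MultiSpaceEEGModel, ResidualBlock1d), channel counts, and band-power input layout are assumptions for illustration, not the authors' Multi-Dimensional Residual Block.

import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    """Plain residual block over 1D feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.block(x))

class MultiSpaceEEGModel(nn.Module):
    """Fuses a time-domain convolutional branch (raw EEG) with a frequency-domain
    residual branch (band-power features) for workload classification and regression."""
    def __init__(self, n_channels: int, n_bands: int, n_classes: int = 3):
        super().__init__()
        # Time-domain branch: dilated 1D convolutions over the raw EEG signal.
        self.time_branch = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, dilation=1, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, dilation=2, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Frequency-domain branch: residual blocks over per-band power features.
        self.freq_branch = nn.Sequential(
            nn.Conv1d(n_bands, 64, kernel_size=3, padding=1),
            ResidualBlock1d(64),
            ResidualBlock1d(64),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)  # three discrete workload states
        self.regressor = nn.Linear(128, 1)           # continuous workload level

    def forward(self, raw_eeg, band_power):
        # raw_eeg: (B, n_channels, T); band_power: (B, n_bands, n_channels)
        t = self.time_branch(raw_eeg).squeeze(-1)
        f = self.freq_branch(band_power).squeeze(-1)
        fused = torch.cat([t, f], dim=-1)            # (B, 128)
        return self.classifier(fused), self.regressor(fused)

model = MultiSpaceEEGModel(n_channels=32, n_bands=5)
state_logits, workload_level = model(torch.randn(2, 32, 512), torch.randn(2, 5, 32))

Fusing pooled features from both branches is one simple way to let the discrete three-state classifier and the continuous estimator share a common representation.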