Chen, Ying-Jung
Large Cognition Model: Towards Pretrained EEG Foundation Model
Chen, Chi-Sheng, Chen, Ying-Jung, Tsai, Aidan Hung-Wen
Electroencephalography (EEG) provides a non-invasive window into brain activity, offering valuable insights for neurological research, brain-computer interfaces (BCIs), and clinical diagnostics. However, the development of robust machine learning models for EEG analysis is hindered by the scarcity of large-scale, well-annotated datasets and the inherent variability of EEG signals across subjects and recording conditions. Inspired by the success of foundation models in natural language processing and computer vision, we propose the Large Cognition Model (LCM), a transformer-based foundation model designed to generalize across diverse EEG datasets and downstream tasks. Unlike traditional approaches, our architecture demonstrates strong generalization across datasets and tasks even without pretraining, surpassing some existing universal EEG models on specific downstream applications. LCM leverages large-scale self-supervised learning to capture universal EEG representations, enabling efficient fine-tuning for applications such as cognitive state decoding, disease classification, and neurofeedback systems. The architecture integrates temporal and spectral attention mechanisms, optimizing the model's ability to extract meaningful features from raw EEG signals. Extensive evaluations demonstrate that LCM outperforms state-of-the-art approaches across multiple EEG benchmarks, exhibiting strong cross-subject and cross-task generalization. Our findings highlight the potential of pretrained EEG foundation models to accelerate advances in neuroscience, personalized medicine, and BCI technology.
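To make the temporal-and-spectral attention idea concrete, here is a minimal PyTorch sketch of a dual-attention encoder block: one attention pass over the time axis within each channel, and one over the channel axis at each time step. The shapes, layer names, and the reading of "spectral" as a channel-axis pass are illustrative assumptions, not the authors' actual LCM architecture.

# Minimal sketch of a dual-attention EEG encoder block (assumptions, not the LCM design).
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spectral_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, channels, time, d_model) token grid from patched raw EEG
        b, c, t, d = x.shape
        # Temporal attention: attend across time steps within each channel
        xt = x.reshape(b * c, t, d)
        xt = xt + self.temporal_attn(xt, xt, xt)[0]
        x = self.norm1(xt.reshape(b, c, t, d))
        # "Spectral" attention: attend across channels at each time step
        # (an assumed interpretation; the paper may use frequency-band tokens)
        xs = x.permute(0, 2, 1, 3).reshape(b * t, c, d)
        xs = xs + self.spectral_attn(xs, xs, xs)[0]
        return self.norm2(xs.reshape(b, t, c, d).permute(0, 2, 1, 3))

Stacking several such blocks and adding a masked-reconstruction objective would give a self-supervised pretraining setup of the kind the abstract describes.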
Optimizing Supply Chain Networks with the Power of Graph Neural Networks
Chen, Chi-Sheng, Chen, Ying-Jung
Graph Neural Networks (GNNs) have emerged as powerful tools for modeling complex relational data, offering strong capabilities in tasks such as forecasting and optimization. This study investigates the application of GNNs to demand forecasting within supply chain networks using the SupplyGraph dataset, a benchmark for graph-based supply chain analysis. By leveraging advanced GNN methodologies, we improve forecasting accuracy, uncover latent dependencies, and address the temporal complexities inherent in supply chain operations. Comparative analyses demonstrate that the proposed GNN-based models significantly outperform simpler baselines, including Multilayer Perceptrons (MLPs) and plain Graph Convolutional Networks (GCNs), particularly in single-node demand forecasting tasks. The integration of graph representation learning with temporal data highlights GNNs' potential to transform predictive capabilities for inventory management, production scheduling, and logistics optimization. This work underscores the pivotal role of forecasting in supply chain management and provides a robust framework for advancing research and applications in this domain.
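As a rough illustration of single-node demand forecasting on a product graph, the following sketch uses PyTorch Geometric's GCNConv. The feature sizes, the meaning of node features (a window of past demand), and the forecasting head are placeholders, not the paper's exact SupplyGraph pipeline.

# Minimal sketch of GNN-based demand forecasting (assumed setup, not the paper's pipeline).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class DemandGNN(nn.Module):
    def __init__(self, n_features, hidden=64, horizon=1):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, horizon)  # demand for the next `horizon` steps

    def forward(self, x, edge_index):
        # x: (num_products, n_features) node features, e.g. a past-demand window
        # edge_index: (2, num_edges) supply-chain relations between products
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h)  # (num_products, horizon) per-node forecasts

Training would minimize, say, MSE between the per-node forecast and the next observed demand, so that each product's prediction benefits from its neighbors in the supply-chain graph.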
Science Time Series: Deep Learning in Hydrology
He, Junyang, Chen, Ying-Jung, Idamekorala, Anushka, Fox, Geoffrey
This research is part of a systematic study of scientific time series. In the last three years, hundreds of papers have described over fifty new deep-learning models for time series analysis. These mainly focus on the key aspect of time dependence, whereas some scientific time series are more complex: there are multiple locations, each with multiple observed and target time-dependent streams and multiple exogenous (known) properties that are either constant or time-dependent. Here, we analyze hydrology time series using the CAMELS and Caravan global datasets on catchment rainfall and runoff. Together, these have up to 6 observed streams and up to 209 static parameters defined at each of about 8000 locations. The analysis is fully open source, with a Jupyter Notebook running on Google Colab covering both an LSTM-based analysis and the data-engineering preprocessing. Our goal is to investigate the importance of exogenous data, which we examine using eight different choices of exogenous inputs on representative hydrology tasks. Increasing the exogenous information significantly improves the data representation, with the mean square error decreasing to 60% of its initial value in the largest dataset examined. We present initial results for other deep-learning architectures, where approaches that can use the full observed and exogenous observations outperform less flexible methods, including foundation models. Using the natural annual periodic exogenous time series produces the largest impact, but the static and other periodic exogenous streams are also important. Our analysis is intended to be valuable as an educational resource and benchmark.
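A minimal sketch of how exogenous inputs can be appended to an LSTM rainfall-runoff model, in the spirit of the setup above: the counts of 6 observed streams and 209 static parameters come from the abstract, while the layer sizes and the choice of 3 time-varying exogenous channels (e.g. annual periodic signals) are illustrative assumptions.

# Minimal sketch of an LSTM with exogenous inputs (illustrative, not the paper's code).
import torch
import torch.nn as nn

class ExogLSTM(nn.Module):
    def __init__(self, n_observed=6, n_exog=3, n_static=209, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_observed + n_exog + n_static, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predict runoff at each time step

    def forward(self, observed, exog, static):
        # observed: (batch, time, n_observed)  e.g. rainfall and runoff history
        # exog:     (batch, time, n_exog)      e.g. annual periodic signals
        # static:   (batch, n_static)          catchment attributes, repeated over time
        t = observed.size(1)
        static_seq = static.unsqueeze(1).expand(-1, t, -1)
        x = torch.cat([observed, exog, static_seq], dim=-1)
        h, _ = self.lstm(x)
        return self.head(h)  # (batch, time, 1) runoff prediction

Varying which of the exogenous and static channels are concatenated into x is one simple way to realize the eight exogenous-input choices the abstract compares.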