Collaborating Authors

 Velichko, Andrei


Entropy-statistical approach to phase-locking detection of pulse oscillations: application for the analysis of biosignal synchronization

arXiv.org Artificial Intelligence

In this study, a new method for analyzing synchronization in oscillator systems is proposed, using as an example the modeled dynamics of a circuit of two resistively coupled pulse oscillators. The dynamic characteristic of synchronization is fuzzy entropy (FuzzyEn), calculated from a time series composed of the ratios of the numbers of pulse periods (subharmonic ratio, SHR) during phase-locking intervals. Low entropy values indicate strong synchronization, whereas high entropy values suggest weak synchronization between the two oscillators. This method effectively visualizes the synchronized modes of the circuit using entropy maps of synchronization states. Additionally, a classification of synchronization states is proposed based on the dependence of FuzzyEn on the length of the embedding vectors of the SHR time series. An extension of this method to non-relaxation (non-spike) signals is illustrated using the example of phase-phase coupling of rhythms in the local field potential of the rat hippocampus. The entropy-statistical approach, using rational fractions and pulse signal forms, makes this method promising for analyzing biosignal synchronization and for implementing the algorithm on mobile digital platforms.
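
As a rough illustration of the core measure, the sketch below computes FuzzyEn for a short series in Python (the language of the group's published implementations). The SHR values and the parameter choices (m, r, n) are hypothetical; the paper's exact settings may differ.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy (FuzzyEn) of a 1-D series x.
    m: embedding dimension, r: tolerance as a fraction of the std,
    n: fuzzy exponent. Returns ln(phi_m) - ln(phi_{m+1})."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)

    def phi(dim):
        # embedding vectors with their own baseline (mean) removed
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs -= vecs.mean(axis=1, keepdims=True)
        # Chebyshev distances between all pairs of vectors
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)      # fuzzy (exponential) similarity
        np.fill_diagonal(sim, 0.0)       # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

# hypothetical SHR series: pulse-period ratios over phase-locking intervals
shr = [1/2, 1/2, 1/3, 1/2, 2/3, 1/2, 1/2, 1/3, 1/2, 2/3]
print(fuzzy_entropy(shr))
```

A series dominated by a single rational SHR value would give a low output (strong locking), while a mix of ratios raises the entropy, matching the interpretation in the abstract.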


Entropy-based machine learning model for diagnosis and monitoring of Parkinson's Disease in smart IoT environment

arXiv.org Artificial Intelligence

The study presents the concept of a computationally efficient machine learning (ML) model for diagnosing and monitoring Parkinson's disease (PD) in an Internet of Things (IoT) environment using resting-state EEG signals (rs-EEG). We computed different types of entropy from EEG signals and found that Fuzzy Entropy performed best in diagnosing and monitoring PD using rs-EEG. We also investigated different combinations of signal frequency ranges and EEG channels to accurately diagnose PD. Finally, with a small number of features (11 features), we achieved a maximum classification accuracy (ARKF) of ~99.9%. The most informative frequency range of the EEG signals was identified, and we found that high classification accuracy depends on the low-frequency signal components (0-4 Hz). Moreover, the most informative signals were received mainly from the right hemisphere of the head (channels F8, P8, T8, and FC6). Furthermore, we assessed the accuracy of PD diagnosis using three different lengths of EEG data (150-1000 samples), since reducing the input data lowers the computational complexity. As a result, we achieved a maximum mean accuracy of 99.9% for a sample length (LEEG) of 1000 (~7.8 seconds), 98.2% for LEEG = 800 (~6.2 seconds), and 79.3% for LEEG = 150 (~1.2 seconds). By reducing the number of features and the segment lengths, the computational cost of classification can be reduced, so that lower-performance smart ML sensors can be used in IoT environments to enhance human resilience to PD.
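
The pipeline described here (low-frequency filtering, entropy features from a few right-hemisphere channels, classification) could be sketched as follows, reusing the fuzzy_entropy function from the sketch above. The channel signals, labels, sampling rate FS, and the RandomForest classifier are all illustrative assumptions rather than the study's exact setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 128                              # assumed sampling rate, Hz
CHANNELS = ["F8", "P8", "T8", "FC6"]  # right-hemisphere channels from the study

def low_band(sig, fs=FS, cutoff=4.0):
    """Isolate the 0-4 Hz components highlighted in the abstract."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, sig)

def epoch_features(epoch):
    """One FuzzyEn value per channel for a (channels x samples) epoch."""
    return [fuzzy_entropy(low_band(ch)) for ch in epoch]

# hypothetical rs-EEG epochs: 40 subjects, 150 samples (~1.2 s) per channel
rng = np.random.default_rng(0)
X = np.array([epoch_features(rng.standard_normal((len(CHANNELS), 150)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)       # placeholder labels: 0 healthy, 1 PD
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```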


A Bio-Inspired Chaos Sensor Model Based on the Perceptron Neural Network: Machine Learning Concept and Application for Computational Neuro-Science

arXiv.org Artificial Intelligence

The study presents a bio-inspired chaos sensor model based on a perceptron neural network for estimating the entropy of spike trains in neurodynamic systems. After training, the perceptron-based sensor, with 50 neurons in the hidden layer and 1 neuron at the output, approximates the fuzzy entropy of a short time series with high accuracy, with a coefficient of determination of R2 ~ 0.9. The Hindmarsh-Rose spiking model was used to generate time series of spike intervals and to build datasets for training and testing the perceptron. The selection of the hyperparameters of the perceptron model and the estimation of the sensor accuracy were performed using the K-block cross-validation method. Even with a single neuron in the hidden layer, the model approximates the fuzzy entropy reasonably well, with R2 ~ 0.5-0.8. In a simplified model with one neuron and equal weights in the first layer, the approximation principle is a linear transformation of the mean value of the time series into the entropy value. An example of applying the chaos sensor to a spike train of action potential recordings from the L5 dorsal rootlet of a rat is provided. The bio-inspired chaos sensor model based on an ensemble of neurons is able to dynamically track the chaotic behavior of a spike signal and transmit this information to other parts of the neurodynamic model for further processing. The study will be useful for specialists in computational neuroscience, as well as for creating humanoid and animal robots and bio-robots with limited resources.
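
A minimal sketch of the sensor idea, again reusing the fuzzy_entropy function from above: a perceptron with 50 hidden neurons is trained to regress the FuzzyEn of short windows. The logistic map stands in for the Hindmarsh-Rose inter-spike intervals, and all data here are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def logistic_series(r, n, x0=0.3):
    """Chaotic stand-in for Hindmarsh-Rose inter-spike interval series."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

rng = np.random.default_rng(1)
windows = [logistic_series(rng.uniform(3.6, 4.0), 30, rng.uniform(0.1, 0.9))
           for _ in range(300)]
X = np.array(windows)
y = np.array([fuzzy_entropy(w) for w in windows])   # FuzzyEn targets

# perceptron "sensor": 50 hidden neurons, 1 output, as in the abstract
sensor = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
print(cross_val_score(sensor, X, y, cv=5, scoring="r2").mean())
```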


Imagery Tracking of Sun Activity Using 2D Circular Kernel Time Series Transformation, Entropy Measures and Machine Learning Approaches

arXiv.org Artificial Intelligence

The sun is highly complex in nature, and solar observatory imagery is one of the most important sources of information about solar activity and about space and Earth weather conditions. NASA's Solar Dynamics Observatory captures approximately 70,000 images of solar activity per day, and continuous visual inspection of these images is challenging. In this study, we developed a technique for tracking the sun's activity using a 2D circular kernel time series transformation, statistical and entropy measures, and machine learning approaches. The technique transforms a section of the solar observatory image into a one-dimensional time series (1-DTS); statistical and entropy measures (Approach 1) or direct classification (Approach 2) are then used to extract features from the 1-DTS for machine learning classification into 'solar storm' and 'no storm' classes. We found that the potential accuracy of the model in tracking the activity of the sun is approximately 0.981 for Approach 1 and 0.999 for Approach 2. The developed approach is stable under rotational transformation of the solar observatory image. When training on the original dataset, the match index (T90) of the distribution of solar storm areas reaches T90 ~ 0.993 for Approach 1 and T90 ~ 0.951 for Approach 2. Moreover, when the extended training base is used, the match indices increase to T90 ~ 0.994 and T90 ~ 1, respectively. The model consistently classifies areas with the swirling magnetic lines associated with solar storms and is robust to image rotation, glare, and optical artifacts.
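
The 2D circular kernel transformation could look roughly like the sketch below: pixel values are read along concentric rings around a center point, producing a 1-DTS whose features are largely insensitive to image rotation. The sampling density and ring geometry are assumptions; the paper's exact kernel may differ.

```python
import numpy as np

def circular_kernel_series(img, cx, cy, r_max, samples_per_ring=36):
    """Unroll a circular image patch into a 1-D time series by reading
    pixel values along concentric rings of increasing radius."""
    series = []
    for radius in range(1, r_max + 1):
        angles = np.linspace(0.0, 2.0 * np.pi, samples_per_ring, endpoint=False)
        xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int),
                     0, img.shape[0] - 1)
        series.extend(img[ys, xs])
    return np.asarray(series)

# hypothetical 64x64 section of a solar observatory image
img = np.random.default_rng(2).random((64, 64))
ts = circular_kernel_series(img, cx=32, cy=32, r_max=20)
print(ts.shape)   # (720,) samples, ready for entropy/statistical features
```

Because a rotation of the image only shifts samples along each ring, statistics computed over the resulting series change little, which is consistent with the rotational stability reported in the abstract.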


Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic Time Series Classification, Python Package for NNetEn Calculation

arXiv.org Artificial Intelligence

Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, rely on a probability distribution function. However, for the effective separation of time series, new entropy estimation methods are required that characterize the chaotic dynamics of the system. Our concept of Neural Network Entropy (NNetEn) is based on the classification of special datasets in relation to the entropy of the time series recorded in the reservoir of the neural network. NNetEn estimates the chaotic dynamics of a time series in an original way that does not rely on probability distribution functions. We propose two new classification metrics: R2 Efficiency and Pearson Efficiency. The efficiency of NNetEn is verified on the separation of two chaotic time series generated by the sine map, using dispersion analysis. For two close dynamic regimes (r = 1.1918 and r = 1.2243), the F-ratio reaches a value of 124, reflecting the high efficiency of the introduced method in classification problems. The classification of electroencephalography signals from healthy persons and patients with Alzheimer's disease illustrates the practical application of the NNetEn features. Our computations demonstrate a synergistic effect: classification accuracy increases when traditional entropy measures and the NNetEn concept are applied conjointly. An implementation of the algorithms in Python is presented.
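
The dispersion-analysis check can be reproduced in spirit with the sketch below, which generates the two sine-map regimes (r = 1.1918 and r = 1.2243) and compares the groups with a one-way ANOVA F-ratio. FuzzyEn (from the first sketch above) is used as a stand-in feature, since reproducing NNetEn itself is outside the scope of a short example.

```python
import numpy as np
from scipy.stats import f_oneway

def sine_map_series(r, n, x0=0.1):
    """Sine map x_{k+1} = r * sin(pi * x_k), the test system in the abstract."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * np.sin(np.pi * x[k - 1])
    return x

rng = np.random.default_rng(3)
ent_a = [fuzzy_entropy(sine_map_series(1.1918, 100, rng.uniform(0.05, 0.5)))
         for _ in range(30)]
ent_b = [fuzzy_entropy(sine_map_series(1.2243, 100, rng.uniform(0.05, 0.5)))
         for _ in range(30)]

# one-way ANOVA: the F-ratio measures how well the feature separates
# the two regimes (the paper reports F = 124 for the NNetEn feature)
F, p = f_oneway(ent_a, ent_b)
print(F, p)
```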


Novel techniques for improving NNetEn entropy calculation for short and noisy time series

arXiv.org Artificial Intelligence

Entropy is a fundamental concept in information theory. Conventional entropy measures are susceptible to changes in the length and amplitude of a time series. A new entropy metric, neural network entropy (NNetEn), has been developed to overcome these limitations. NNetEn is computed using a modified LogNNet neural network classification model, whose algorithm contains a reservoir matrix of N = 19625 elements that must be filled with the given data. The contribution of this paper is threefold. Firstly, this work investigates different methods of filling the reservoir with the elements of a time series (signal). The reservoir filling method determines the accuracy of the entropy estimation obtained by convolving the studied time series with the LogNNet test data. The present study proposes six methods of filling the reservoir for time series. Two of them (Method 3 and Method 6) employ the novel approach of stretching the time series to create intermediate elements that complement it without changing its dynamics. The most reliable methods for short time series are Method 3 and Method 5. The second part of the study examines the influence of noise and constant bias on the entropy values. We examine three time series data types (chaotic, periodic, and binary) with different dynamic properties, signal-to-noise ratios (SNR), and offsets. The NNetEn calculation errors are less than 10% when the SNR is greater than 30 dB, and the entropy decreases as the bias component increases. The third part of the article analyzes real EEG biosignal data collected in emotion recognition experiments. The NNetEn measures are robust to low-amplitude noise under various filters. Thus, NNetEn measures entropy effectively in real-world environments with ambient noise, white noise, and 1/f noise.
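
Of the six filling methods, the stretching idea (Methods 3 and 6) is the easiest to illustrate: interpolate the short series up to the reservoir length so intermediate points are added without altering the dynamics. The sketch below uses plain linear interpolation; the paper's exact stretching rule may differ.

```python
import numpy as np

RESERVOIR_SIZE = 19625   # reservoir matrix size from the abstract

def fill_by_stretching(series, size=RESERVOIR_SIZE):
    """Stretch a short series to `size` elements by linear interpolation,
    creating intermediate points that complement the series without
    changing its dynamics (the idea behind the stretching-based methods)."""
    series = np.asarray(series, dtype=float)
    old_grid = np.linspace(0.0, 1.0, len(series))
    new_grid = np.linspace(0.0, 1.0, size)
    return np.interp(new_grid, old_grid, series)

short_signal = np.sin(np.linspace(0.0, 6.0 * np.pi, 50))   # short test signal
reservoir_data = fill_by_stretching(short_signal)
print(reservoir_data.shape)                                 # (19625,)
```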


Entropy Approximation by Machine Learning Regression: Application for Irregularity Evaluation of Images in Remote Sensing

arXiv.org Artificial Intelligence

Approximation of entropies of various types using machine learning (ML) regression methods is shown for the first time. The ML models presented in this study estimate the complexity of short time series by approximating dissimilar entropy techniques, such as singular value decomposition entropy (SvdEn), permutation entropy (PermEn), sample entropy (SampEn), and neural network entropy (NNetEn), as well as their 2D analogues. A new method for calculating SvdEn2D, PermEn2D, and SampEn2D for 2D images was tested using the technique of circular kernels. Training and testing datasets based on Sentinel-2 images are presented (two training images and one hundred ninety-eight testing images). The results of the entropy approximation are demonstrated by calculating the 2D entropy of Sentinel-2 images and evaluating the R^2 metric. The applicability of the method is shown for short time series with lengths from N = 5 to N = 113 elements. The R^2 metric tends to decrease as the length of the time series increases. For SvdEn, the regression accuracy is R^2 > 0.99 for N = 5 and R^2 > 0.82 for N = 113. The best metrics were observed for the ML_SvdEn2D and ML_NNetEn2D models. The results of the study can be used for fundamental research on entropy approximations of various types using ML regression, as well as for accelerating entropy calculations in remote sensing. The versatility of the model is shown on synthetic chaotic time series generated with the Planck map and the logistic map.
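
The regression setup can be sketched as follows: a conventional entropy (here SvdEn, computed directly from its definition via time-delay embedding and singular values) supplies the targets, and an ML regressor learns to approximate it from the raw window. The random windows, the choice N = 9, and the RandomForest regressor are illustrative assumptions, not those of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def svd_entropy(x, m=3, tau=1):
    """Singular value decomposition entropy (SvdEn) of a short series."""
    x = np.asarray(x, dtype=float)
    rows = len(x) - (m - 1) * tau
    emb = np.array([x[i:i + m * tau:tau] for i in range(rows)])
    s = np.linalg.svd(emb, compute_uv=False)
    s = s / s.sum()                          # normalize singular values
    return -np.sum(s * np.log2(s + 1e-12))   # Shannon entropy of the spectrum

# hypothetical training set: 2000 random windows of length N = 9
rng = np.random.default_rng(4)
X = rng.random((2000, 9))
y = np.array([svd_entropy(w) for w in X])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print(r2_score(y_te, model.predict(X_te)))   # cf. the R^2 metric in the abstract
```

Once trained, such a regressor replaces the direct entropy computation with a single fast model evaluation, which is the acceleration argument made for remote sensing.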


Detection of Risk Predictors of COVID-19 Mortality with Classifier Machine Learning Models Operated with Routine Laboratory Biomarkers

arXiv.org Artificial Intelligence

Early evaluation of COVID-19 patients who require special care and have a high expected mortality, together with the effective determination of the relevant biomarkers on large sample groups, is important for reducing mortality. This study aimed to reveal the routine blood-value predictors of COVID-19 mortality and to determine the lethal-risk levels of these predictors during the disease process. The dataset of the study consists of 38 routine blood values of 2597 patients who died (n = 233) or recovered (n = 2364) from COVID-19 between August and December 2021. In this study, the histogram-based gradient boosting (HGB) model was the most successful machine-learning classifier in detecting surviving and deceased COVID-19 patients (with squared F1 metric F1^2 = 1). The most efficient binary combinations with procalcitonin were obtained with D-dimer, ESR, D-Bil, and ferritin. The HGB model operated with these feature pairs correctly detected almost all of the patients who survived and who died (precision > 0.98, recall > 0.98, F1^2 > 0.98). Furthermore, when the HGB model was operated with a single feature, the most efficient features were procalcitonin (F1^2 = 0.96) and ferritin (F1^2 = 0.91). In addition, according to the two-threshold approach, ferritin values between 376.2 µg/L and 396.0 µg/L (F1^2 = 0.91) and procalcitonin values between 0.2 µg/L and 5.2 µg/L (F1^2 = 0.95) were found to be fatal risk levels for COVID-19. Considering all the results, we suggest that many features, especially procalcitonin and ferritin, combined and operated with the HGB model can achieve very successful results in classifying patients who survive and patients who die of COVID-19. Moreover, we strongly recommend that clinicians consider the critical levels we have found for procalcitonin and ferritin in order to reduce the lethality of COVID-19.
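
The two-threshold approach amounts to flagging patients whose biomarker value falls inside the fatal-risk band; a minimal sketch with the ferritin band reported above, on invented patient values, is shown below.

```python
import numpy as np
from sklearn.metrics import f1_score

def two_threshold_flag(values, lo, hi):
    """Flag values that fall inside the fatal-risk band [lo, hi]."""
    values = np.asarray(values, dtype=float)
    return ((values >= lo) & (values <= hi)).astype(int)

# hypothetical ferritin values (ug/L) and outcomes (1 = deceased)
ferritin = np.array([120.0, 380.5, 390.2, 50.3, 377.1, 800.0])
outcome = np.array([0, 1, 1, 0, 1, 0])
predicted = two_threshold_flag(ferritin, lo=376.2, hi=396.0)
print(f1_score(outcome, predicted))
```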


Machine Learning Sensors for Diagnosis of COVID-19 Disease Using Routine Blood Values for Internet of Things Application

arXiv.org Artificial Intelligence

Healthcare digitalization requires effective applications of human sensors, whereby various parameters of the human body are instantly monitored in everyday life thanks to the Internet of Things (IoT). In particular, machine learning (ML) sensors for the prompt diagnosis of COVID-19 are an important option for IoT applications in healthcare and ambient assisted living (AAL). Determining COVID-19 infection status with various diagnostic tests and imaging results is costly and time-consuming. This study provides a fast, reliable, and cost-effective alternative tool for the diagnosis of COVID-19 based on the routine blood values (RBVs) measured at admission. The dataset of the study consists of a total of 5296 patients, with equal numbers of negative and positive COVID-19 test results, and 51 routine blood values. In this study, 13 popular classifier machine learning models and the LogNNet neural network model were examined. The most successful classifier model in terms of time and accuracy in detecting the disease was histogram-based gradient boosting (HGB) (accuracy: 100%, time: 6.39 s). The HGB classifier identified the 11 most important features (LDL, cholesterol, HDL-C, MCHC, triglyceride, amylase, UA, LDH, CK-MB, ALP, and MCH) for detecting the disease with 100% accuracy. In addition, the importance of single, double, and triple combinations of these features in the diagnosis of the disease was discussed. We propose to use these 11 features and their binary combinations as important biomarkers for ML sensors in the diagnosis of the disease, supporting edge computing on Arduino and cloud IoT services.
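
A skeleton of the classification experiment might look like the following; the 11 feature names are taken from the abstract, but the patient data here are random placeholders, so the printed accuracy is meaningless except as a demonstration of the pipeline.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# the 11 RBV features the HGB classifier found most important
FEATURES = ["LDL", "cholesterol", "HDL-C", "MCHC", "triglyceride",
            "amylase", "UA", "LDH", "CK-MB", "ALP", "MCH"]

# hypothetical stand-in data: 5296 patients x 11 routine blood values
rng = np.random.default_rng(5)
X = rng.normal(size=(5296, len(FEATURES)))
y = rng.integers(0, 2, size=5296)          # 0 = negative, 1 = positive test

hgb = HistGradientBoostingClassifier(random_state=0)
print(cross_val_score(hgb, X, y, cv=5, scoring="accuracy").mean())
```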


A Method for Medical Data Analysis Using the LogNNet for Clinical Decision Support Systems and Edge Computing in Healthcare

arXiv.org Artificial Intelligence

The study presents a new method for analyzing medical data based on the LogNNet neural network, which uses chaotic mappings to transform input information. The technique calculates risk factors for the presence of a disease in a patient from a set of medical health indicators. The LogNNet architecture allows artificial intelligence to be implemented on medical peripherals of the Internet of Things with low RAM resources, advancing edge computing in healthcare. The efficiency of LogNNet in assessing perinatal risk is illustrated on the cardiotocogram data of 2126 pregnant women obtained from the UC Irvine machine learning repository. The classification accuracy reaches ~91%, with ~3-10 kB of RAM used on an Arduino microcontroller. In addition, examples of diagnosing COVID-19 are provided using LogNNet trained on a publicly available database from the Israeli Ministry of Health. A service concept has been developed that uses the data of an express test for COVID-19 and reaches a classification accuracy of ~95% with ~0.6 kB of RAM used on Arduino microcontrollers. In all examples, the model is tested using the standard classification quality metrics: precision, recall, and F1-measure. The study results can be used in clinical decision support systems.
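
The chaotic-mapping idea behind LogNNet can be sketched as a fixed projection whose weights are filled by a chaotic map, followed by a small trainable classifier. This is a loose structural sketch, not the published LogNNet architecture: the logistic-map filling, the tanh nonlinearity, and the logistic-regression output stage are all stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def chaotic_matrix(rows, cols, r=3.9, x0=0.33):
    """Fill a fixed weight matrix with a logistic-map trajectory,
    mimicking the chaotic-mapping transform at the core of LogNNet."""
    n = rows * cols
    w = np.empty(n)
    w[0] = x0
    for i in range(1, n):
        w[i] = r * w[i - 1] * (1 - w[i - 1])
    return w.reshape(rows, cols)

# hypothetical health-indicator records: 2126 patients x 21 features
rng = np.random.default_rng(6)
X = rng.normal(size=(2126, 21))
y = rng.integers(0, 3, size=2126)          # placeholder risk classes

W = chaotic_matrix(25, X.shape[1])         # fixed, untrained projection
X_res = np.tanh(X @ W.T)                   # chaotic transform of the inputs
clf = LogisticRegression(max_iter=1000)    # small trainable output stage
print(cross_val_score(clf, X_res, y, cv=5).mean())
```

Because the projection weights are generated on the fly from a map seed rather than stored, this style of architecture needs very little RAM, which is the property the abstract exploits on Arduino-class devices.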