Mirjalili, Seyedali
Predicting the Stay Length of Patients in Hospitals using Convolutional Gated Recurrent Deep Learning Model
Neshat, Mehdi, Phipps, Michael, Browne, Chris A., Vargas, Nicole T., Mirjalili, Seyedali
Predicting hospital length of stay (LoS) is a critical factor in shaping public health strategies, and LoS data serve as a cornerstone for governments to discern trends, patterns, and avenues for improving healthcare delivery. In this study, we introduce a robust hybrid deep learning model that combines multi-layer Convolutional Neural Networks (CNNs), Gated Recurrent Units (GRUs), and dense neural networks, and that outperforms 11 conventional and state-of-the-art Machine Learning (ML) and Deep Learning (DL) methods in accurately forecasting inpatient hospital stay duration. Our investigation examines the implementation of this hybrid model, scrutinising variables such as geographic indicators tied to caregiving institutions; demographic markers encompassing patient ethnicity, race, and age; and medical attributes such as the CCS diagnosis code, APR DRG code, illness severity metrics, and hospital stay duration. Statistical evaluations show that the proposed model (CNN-GRU-DNN) achieves the highest LoS accuracy, averaging 89% across a 10-fold cross-validation test and surpassing LSTM, BiLSTM, GRU, and CNN baselines by 19%, 18.2%, 18.6%, and 7%, respectively. Accurate LoS predictions not only empower hospitals to optimise resource allocation and curb expenses associated with prolonged stays but also pave the way for novel strategies in hospital stay management, holding promise for advances in healthcare research and precision-driven healthcare practice.
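The abstract does not include an implementation; a minimal sketch of one plausible CNN-GRU-Dense stack in Keras, assuming each patient record is encoded as a fixed-length numeric feature vector and LoS is binned into classes, might look as follows. Layer sizes, feature count, and class count are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of a CNN-GRU-Dense stack for LoS classification.
# Shapes and hyperparameters are illustrative assumptions, not the paper's configuration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 16      # e.g. encoded demographic + clinical attributes (assumed)
n_classes = 5        # e.g. binned length-of-stay categories (assumed)

model = keras.Sequential([
    keras.Input(shape=(n_features, 1)),           # treat the feature vector as a 1-D sequence
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.GRU(64, return_sequences=False),       # recurrent layer over the convolutional features
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data just to show the expected input/output shapes.
X = np.random.rand(256, n_features, 1)
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```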
Using evolutionary machine learning to characterize and optimize co-pyrolysis of biomass feedstocks and polymeric wastes
Shahbeik, Hossein, Shafizadeh, Alireza, Nadian, Mohammad Hossein, Jeddi, Dorsa, Mirjalili, Seyedali, Yang, Yadong, Lam, Su Shiung, Pan, Junting, Tabatabaei, Meisam, Aghbashlo, Mortaza
Co-pyrolysis of biomass feedstocks with polymeric wastes is a promising strategy for improving the quantity and quality of the resulting liquid fuel. Numerous experimental measurements are typically conducted to find the optimal operating conditions. However, performing co-pyrolysis experiments is highly challenging due to the need for costly and lengthy procedures. Machine learning (ML) provides the capability to cope with such issues by leveraging existing data. This work introduces an evolutionary ML approach to quantify the (by)products of the biomass-polymer co-pyrolysis process. A comprehensive dataset covering various biomass-polymer mixtures under a broad range of process conditions is compiled from the qualified literature and subjected to statistical analysis and mechanistic discussion. The input features are constructed using an innovative approach that reflects the physics of the process, and the constructed features are subjected to principal component analysis to reduce their dimensionality. The obtained scores are introduced into six ML models. A Gaussian process regression model tuned by the particle swarm optimization algorithm delivers better prediction performance (R2 > 0.9, MAE < 0.03, and RMSE < 0.06) than the other developed models. The multi-objective particle swarm optimization algorithm successfully finds the optimal independent parameters.
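A schematic of the modelling pipeline described (PCA on constructed features, a Gaussian process regressor whose kernel length scale is tuned by a small particle swarm) can be sketched as below; the synthetic data, search range, and swarm settings are placeholders rather than the study's values, and the study's six models and multi-objective step are not reproduced.

```python
# Illustrative PCA + Gaussian-process pipeline with a tiny PSO tuning the RBF length scale.
# Synthetic data and swarm settings are assumptions for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 10))                      # placeholder "constructed features"
y = X[:, 0] * 2.0 + np.sin(X[:, 1] * 3.0)      # placeholder target (e.g. a product yield)

X_pc = PCA(n_components=4).fit_transform(X)    # dimensionality-reduction step

def fitness(length_scale):
    """Negative cross-validated R^2 of a GPR with the given RBF length scale (minimised)."""
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale),
                                   alpha=1e-3, optimizer=None)  # keep the kernel fixed so the swarm tunes it
    return -cross_val_score(gpr, X_pc, y, cv=5, scoring="r2").mean()

# Minimal particle swarm over a single hyperparameter.
n_particles, n_iters = 10, 20
pos = rng.uniform(0.1, 5.0, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 5.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print(f"tuned length scale: {gbest:.3f}, CV R^2: {-pbest_f.min():.3f}")
```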
The Fifteen Puzzle: A New Approach through Hybridizing Three Heuristics Methods
Hasan, Dler O., Aladdin, Aso M., Talabani, Hardi Sabah, Rashid, Tarik Ahmed, Mirjalili, Seyedali
The Fifteen Puzzle is one of the most classical problems that have captivated mathematical enthusiasts for centuries, mainly because of the huge size of its state space, with approximately 10^13 states that have to be explored; several algorithms have been applied to solve Fifteen Puzzle instances. In this paper, to deal with this large state space, the Bidirectional A* (BA*) search algorithm with three heuristics, namely Manhattan distance (MD), linear conflict (LC), and walking distance (WD), is used to solve the Fifteen Puzzle. The three heuristics are hybridized in a way that dramatically reduces the number of states generated by the algorithm. Moreover, all of these heuristics require only 25 KB of storage, yet they help the algorithm effectively reduce the number of generated states and expand fewer nodes. Our implementation of BA* search can significantly reduce the space complexity and guarantees either optimal or near-optimal solutions.
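Two of the three heuristics, Manhattan distance and linear conflict, can be sketched for a 4x4 board as follows; the state encoding is an assumption, and walking distance, which relies on a precomputed pattern table, is omitted here.

```python
# Sketch of two of the three heuristics (Manhattan distance and linear conflict)
# for a 15-puzzle state encoded as a tuple of 16 values with 0 as the blank.
from itertools import combinations

N = 4

def manhattan(state):
    """Sum of row + column distances of every tile from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile:
            goal = tile - 1                     # goal layout: 1..15 then the blank
            total += abs(idx // N - goal // N) + abs(idx % N - goal % N)
    return total

def _line_penalty(pairs):
    """pairs: (current position, goal position) within one row/column for tiles whose
    goal lies in that line. Adds 2 for each tile that must leave the line so the
    remaining tiles can slide home without crossing each other."""
    conflicts = [(a, b) for a, b in combinations(range(len(pairs)), 2)
                 if (pairs[a][0] - pairs[b][0]) * (pairs[a][1] - pairs[b][1]) < 0]
    if not conflicts:
        return 0
    for k in range(1, len(pairs) + 1):          # smallest removal set; a line holds <= 4 tiles
        for removed in combinations(range(len(pairs)), k):
            if all(a in removed or b in removed for a, b in conflicts):
                return 2 * k

def linear_conflict(state):
    penalty = 0
    for row in range(N):
        penalty += _line_penalty([(c, (state[row * N + c] - 1) % N) for c in range(N)
                                  if state[row * N + c] and (state[row * N + c] - 1) // N == row])
    for col in range(N):
        penalty += _line_penalty([(r, (state[r * N + col] - 1) // N) for r in range(N)
                                  if state[r * N + col] and (state[r * N + col] - 1) % N == col])
    return penalty

def heuristic(state):
    return manhattan(state) + linear_conflict(state)   # admissible lower bound on moves

goal = tuple(range(1, 16)) + (0,)
print(heuristic(goal))                # 0
print(heuristic((2, 1) + goal[2:]))   # Manhattan 2 + conflict 2 = 4
```

Adding the linear-conflict penalty keeps the estimate admissible while pruning far more nodes than Manhattan distance alone; the paper's walking-distance table and the exact hybridisation scheme would sit on top of such components.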
Formal context reduction in deriving concept hierarchies from corpora using adaptive evolutionary clustering algorithm star
Hassan, Bryar A., Rashid, Tarik A., Mirjalili, Seyedali
It is beneficial to automate the derivation of concept hierarchies from corpora, since manual construction of concept hierarchies is typically a time-consuming and resource-intensive process. The overall process of learning concept hierarchies from corpora encompasses a set of steps: parsing the text into sentences, splitting the sentences, and then tokenising them. After the lemmatisation step, the pairs are extracted using formal concept analysis (FCA). However, the resulting formal context may contain uninteresting and erroneous pairs, and generating it can be time consuming, so formal context size reduction is required to remove such pairs and thereby shorten the time needed to extract the concept lattice and the concept hierarchies. On this premise, this study proposes two frameworks: (i) a framework to review the current process of deriving concept hierarchies from corpora using formal concept analysis (FCA); (ii) a framework to reduce the formal context's ambiguity in the first framework using an adaptive version of the evolutionary clustering algorithm star (ECA*). Experiments are conducted by applying 385 sample corpora from Wikipedia to the two frameworks to examine the reduction in formal context size and the concept lattices and concept hierarchies it yields. The lattice of the reduced formal context is evaluated against the standard one using concept lattice invariants. Accordingly, the homomorphism between the two lattices preserves the quality of the resulting concept hierarchies by 89% relative to the basic ones, and the reduced concept lattice inherits the structural relations of the standard one. The adaptive ECA* is also examined against four counterpart baseline algorithms (Fuzzy K-means, the JBOS approach, the AddIntent algorithm, and FastAddExtent) to measure execution time on random datasets with different densities (fill ratios). The results show that adaptive ECA* builds the concept lattice faster than the other competitive techniques across the different fill ratios.
Keywords: concept hierarchies, formal context reduction, concept lattice reduction, adaptive ECA*, FCA, WordNet.
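As background on the FCA step, a toy formal context and a naive enumeration of its formal concepts (the nodes of the concept lattice) can be sketched as follows; the objects, attributes, and incidence relation are illustrative assumptions, and the corpus processing and adaptive ECA* reduction themselves are not reproduced.

```python
# Toy formal context (objects x attributes) and naive formal-concept enumeration.
# The terms and incidence relation are made up for illustration only.
from itertools import combinations

objects = ["lion", "eagle", "salmon"]
attributes = ["predator", "flies", "swims"]
incidence = {
    ("lion", "predator"), ("eagle", "predator"), ("eagle", "flies"),
    ("salmon", "swims"),
}

def common_attributes(objs):
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# A pair (A, B) is a formal concept when A is exactly the set of objects sharing B
# and B is exactly the set of attributes shared by A.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attributes(set(objs))
        extent = common_objects(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

Ordering these concepts by inclusion of their extents gives the concept lattice from which a concept hierarchy is read off; reducing the formal context shrinks this enumeration.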
Performance evaluation results of evolutionary clustering algorithm star for clustering heterogeneous datasets
Hassan, Bryar A., Rashid, Tarik A., Mirjalili, Seyedali
This article presents the data used to evaluate the performance of the evolutionary clustering algorithm star (ECA*) compared with five traditional and modern clustering algorithms. Two experimental methods are employed to examine the performance of ECA* against the genetic algorithm for clustering++ (GENCLUST++), learning vector quantisation (LVQ), expectation maximisation (EM), K-means++ (KM++), and K-means (KM). These algorithms are applied to 32 heterogeneous, multi-featured datasets to determine which one performs well on the three tests. First, the paper examines the efficiency of ECA* against its corresponding algorithms using clustering evaluation measures, namely objective-function and cluster-quality measures. Second, it suggests a performance rating framework to measure the sensitivity of these algorithms to various dataset features (cluster dimensionality, number of clusters, cluster overlap, cluster shape, and cluster structure). The contributions of these experiments are twofold: (i) ECA* exceeds its counterpart algorithms in its ability to find the right number of clusters; (ii) ECA* is less sensitive to dataset features than its competitive techniques. Nonetheless, the experiments also reveal some limitations of ECA*: (i) ECA* is not yet fully applied under the premise that no prior knowledge exists; (ii) adapting and applying ECA* to several real applications has not been achieved yet.
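For orientation, the kind of comparison described can be sketched with off-the-shelf components; ECA*, LVQ, and GENCLUST++ have no standard scikit-learn implementations, so only K-means, K-means++, and EM stand-ins are shown, on a synthetic dataset with assumed quality measures rather than the article's 32 benchmark datasets.

```python
# Minimal sketch of comparing clustering algorithms with cluster-quality measures.
# The dataset, the candidate set, and the metrics are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, y_true = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=7)

candidates = {
    "KM (random init)": KMeans(n_clusters=4, init="random", n_init=10, random_state=7),
    "KM++": KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=7),
    "EM (Gaussian mixture)": GaussianMixture(n_components=4, random_state=7),
}

for name, model in candidates.items():
    labels = model.fit_predict(X)
    print(f"{name:<22} ARI={adjusted_rand_score(y_true, labels):.3f} "
          f"silhouette={silhouette_score(X, labels):.3f}")
```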
Swarm Intelligence for Next-Generation Wireless Networks: Recent Advances and Applications
Pham, Quoc-Viet, Nguyen, Dinh C., Mirjalili, Seyedali, Hoang, Dinh Thai, Nguyen, Diep N., Pathirana, Pubudu N., Hwang, Won-Joo
Due to the proliferation of smart devices and emerging applications, much attention has been paid to next-generation technologies for the development of wireless networks. Even though commercial 5G has only recently been widely deployed in some countries, there have been initial efforts from academia and industry towards 6G systems. In such a network, a very large number of devices and applications emerge, along with heterogeneous technologies, architectures, mobile data, etc., and optimizing such a network is of utmost importance. Besides convex optimization and game theory, swarm intelligence (SI) has recently appeared as a promising optimization tool for wireless networks. As a subdivision of artificial intelligence, SI is inspired by the collective behaviors of societies of biological species. In SI, simple agents with limited capabilities achieve intelligent strategies for high-dimensional and challenging problems, so it has recently found many applications in next-generation wireless networks (NGN). However, researchers may not be completely aware of the full potential of SI techniques. In this work, our primary focus is the integration of these two domains: NGN and SI. Firstly, we provide an overview of SI techniques, from fundamental concepts to well-known optimizers. Secondly, we review the applications of SI to emerging issues in NGN, including spectrum management and resource allocation, wireless caching and edge computing, network security, and several other miscellaneous issues. Finally, we highlight open challenges and issues in the literature, and introduce some interesting directions for future research.
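To make the "simple agents, collective behavior" idea concrete in an NGN setting, a minimal sketch of one SI optimizer, the Grey Wolf Optimizer, applied to a toy power-allocation task follows; the channel gains, power budget, sum-rate objective, and choice of optimizer are illustrative assumptions rather than material from the survey.

```python
# Toy illustration of a swarm optimizer on a wireless-style resource-allocation task:
# split a power budget across channels to maximise the sum rate sum_i log2(1 + p_i * g_i).
# Channel gains, budget, and the use of the Grey Wolf Optimizer are assumptions.
import numpy as np

rng = np.random.default_rng(1)
gains = rng.uniform(0.2, 2.0, size=8)     # placeholder channel gains (noise normalised to 1)
budget = 10.0                             # total transmit-power budget

def project(p):
    """Keep allocations non-negative and within the power budget."""
    p = np.clip(p, 0.0, None)
    total = p.sum()
    return p * (budget / total) if total > budget else p

def sum_rate(p):
    return np.log2(1.0 + p * gains).sum()

# Grey Wolf Optimizer: the pack is guided by the three best solutions found so far.
n_wolves, n_iters, dim = 20, 200, gains.size
wolves = rng.uniform(0.0, budget / dim, size=(n_wolves, dim))
leaders = wolves[:3].copy()               # alpha, beta, delta (re-ranked each iteration)

for it in range(n_iters):
    pool = np.vstack([wolves, leaders])
    fitness = np.array([sum_rate(project(w)) for w in pool])
    leaders = pool[np.argsort(fitness)[::-1][:3]]
    a = 2.0 * (1.0 - it / n_iters)        # exploration factor decays from 2 to 0
    new_wolves = []
    for w in wolves:
        guided = [leader - (2 * a * rng.random(dim) - a) * np.abs(2 * rng.random(dim) * leader - w)
                  for leader in leaders]
        new_wolves.append(project(np.mean(guided, axis=0)))
    wolves = np.array(new_wolves)

best = project(leaders[0])
print(f"best sum rate: {sum_rate(best):.3f} bit/s/Hz with power {best.sum():.2f}")
```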