A Review of 315 Benchmark and Test Functions for Machine Learning Optimization Algorithms and Metaheuristics with Mathematical and Visual Descriptions
Naser, M. Z., al-Bashiti, Mohammad Khaled, Tapeh, Arash Teymori Gharah, Eslamlou, Armin Dadras, Naser, Ahmed, Kodur, Venkatesh, Hawileh, Rami, Abdalla, Jamal, Khodadadi, Nima, Gandomi, Amir H.
In the rapidly evolving optimization and metaheuristics domains, the efficacy of algorithms is crucially determined by the benchmark (test) functions used to evaluate them. While numerous functions have been developed over the past decades, little information is available on the mathematical and visual description, range of suitability, and applications of many such functions. To bridge this knowledge gap, this review provides an exhaustive survey of more than 300 benchmark functions used in the evaluation of optimization and metaheuristics algorithms. This review first catalogs benchmark and test functions based on their characteristics, complexity, properties, visuals, and domain implications to offer a wide view that aids in selecting appropriate benchmarks for various algorithmic challenges. This review also lists the 25 most commonly used functions in the open literature and proposes two new high-dimensional, dynamic, and challenging functions for testing new algorithms. Finally, this review identifies gaps in current benchmarking practices and suggests directions for future research.
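As a flavor of the kind of functions catalogued in this review, the sketch below implements the classic Rastrigin function, one of the most widely used multimodal benchmarks. The function itself is standard; the code is an illustration rather than material from the paper.

```python
import numpy as np

def rastrigin(x, A=10.0):
    """Rastrigin function: highly multimodal, with global minimum f(0) = 0.
    A classic example of the benchmark functions catalogued in the review."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# Evaluate at the global optimum and at a random point in the usual
# search domain [-5.12, 5.12]^10.
print(rastrigin(np.zeros(10)))                        # 0.0
print(rastrigin(np.random.uniform(-5.12, 5.12, 10)))  # some positive value
```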
Large Language Models in Fire Engineering: An Examination of Technical Questions Against Domain Knowledge
Hostetter, Haley, Naser, M. Z., Huang, Xinyan, Gales, John
This communication presents preliminary findings from comparing two recent chatbots, OpenAI's ChatGPT and Google's Bard, in the context of fire engineering by evaluating their responses to fire safety-related queries. A diverse range of fire engineering questions and scenarios was created and examined, including structural fire design, fire prevention strategies, evacuation, building code compliance, and fire suppression systems (some of which resemble those commonly present in the Fire Protection Engineering (FPE) exam). The results reveal some key differences in the performance of the chatbots, with ChatGPT demonstrating relatively superior performance. This communication then highlights the potential for chatbot technology to revolutionize fire engineering practices by providing instant access to critical information, while outlining areas for further improvement and research. Evidently, once it matures, this technology will likely become elemental to our engineers' practice and education.
SPINEX: Similarity-based Predictions and Explainable Neighbors Exploration for Regression and Classification Tasks in Machine Learning
Naser, M. Z., al-Bashiti, M. K., Naser, A. Z.
The field of machine learning (ML) has witnessed significant advancements in recent years. However, many existing algorithms lack interpretability and struggle with high-dimensional and imbalanced data. This paper proposes SPINEX, a novel similarity-based, interpretable neighbor exploration algorithm designed to address these limitations. The algorithm combines ensemble learning and feature interaction analysis to achieve accurate predictions and meaningful insights: it quantifies each feature's contribution to predictions and identifies interactions between features, thereby enhancing interpretability. To evaluate the performance of SPINEX, extensive experiments on 59 synthetic and real datasets were conducted for both regression and classification tasks. The results demonstrate that SPINEX achieves comparable performance and, in some scenarios, may outperform commonly adopted ML algorithms. These findings demonstrate the effectiveness and competitiveness of SPINEX, making it a promising approach for various real-world applications.
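The abstract does not include SPINEX's implementation. The following is a minimal, hypothetical sketch of the general similarity-based idea only (a distance-weighted nearest-neighbor regressor that returns the neighbor indices it used, so a prediction can be traced back to concrete training examples), not the authors' actual algorithm.

```python
import numpy as np

def similarity_predict(X_train, y_train, x_query, k=5):
    """Toy similarity-based regressor: predict from the k most similar
    training points, weighted by inverse distance. Returning the neighbor
    indices makes each prediction traceable to specific training examples,
    the kind of explainability-by-similarity SPINEX is built around."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbors
    w = 1.0 / (d[idx] + 1e-12)       # inverse-distance weights
    w /= w.sum()
    return float(w @ y_train[idx]), idx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
pred, neighbors = similarity_predict(X, y, X[0])
print(pred, neighbors)  # prediction plus the training rows that drove it
```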
Can AI Chatbots Pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) Structural Exams?
Naser, M. Z., Ross, Brandon, Ogle, Jennifer, Kodur, Venkatesh, Hawileh, Rami, Abdalla, Jamal, Thai, Huu-Tai
The engineering community has recently witnessed the emergence of chatbot technology with the release of OpenAI ChatGPT-4 and Google Bard. While these chatbots have been reported to perform well and even pass various standardized tests, including medical and law exams, this forum paper explores whether they can also pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) exams. A diverse range of civil and environmental engineering questions and scenarios, like those commonly found in the FE and PE exams, was used to evaluate the chatbots' performance. The chatbots' responses were analyzed based on their relevance, accuracy, and clarity and then compared against the recommendations of the National Council of Examiners for Engineering and Surveying (NCEES). Our report shows that ChatGPT-4 and Bard scored 70.9% and 39.2%, respectively, on the FE exam and 46.2% and 41% on the PE exam. It is evident that the current version of ChatGPT-4 could potentially pass the FE exam. While future editions are much more likely to pass both exams, this study also highlights the potential of using chatbots as teaching assistants and for guiding engineers.
Simplifying Causality: A Brief Review of Philosophical Views and Definitions with Examples from Economics, Education, Medicine, Policy, Physics and Engineering
Naser, M. Z.
This short paper compiles the big ideas behind some philosophical views, definitions, and examples of causality. This collection spans the four commonly adopted approaches to causality: Hume's regularity, counterfactual, manipulation, and mechanisms. This short review presents simplified views and definitions and then supplements them with examples from various fields, including economics, education, medicine, policy, physics, and engineering. It is hoped that this short review comes in handy for new and interested readers with little knowledge of causality and causal inference.

Introduction

Causality is the science of cause and effect [1]. As identifying causal mechanisms is often regarded as a fundamental pursuit in most sciences, causality becomes elemental in advancing our knowledge. While causality is more profound in some research areas, the concept of causality is often vague or forgotten in others [2]. With the advent of data science and the ready accessibility of big data, there is rising potential to leverage such data in pursuit of unlocking previously unknown, hidden mechanisms or perhaps confirming ongoing hypotheses and empirical knowledge [3]. Traditionally, researchers would collect data pertaining to a phenomenon and then analyze this data to describe it, creating a model that could be used to predict the phenomenon and/or causally infer (or explain/understand) interesting questions about it (see Table 1) [4]. When the primary goal is to describe the data on hand, the researcher simply aims to visualize the data to tell its story.
Causality, Causal Discovery, and Causal Inference in Structural Engineering
Naser, M. Z.
Many of our experiments are designed to uncover the cause(s) and effect(s) behind a data-generating mechanism (i.e., a phenomenon) we happen to be interested in. Uncovering such relationships allows us to identify the true workings of a phenomenon and, most importantly, articulate a model that may enable us to further explore the phenomenon at hand and/or predict it accurately. Fundamentally, such models are likely to be derived via a causal approach (as opposed to observational or empirical means). In this approach, causal discovery is required to create a causal model, which can then be applied to infer the influence of interventions and to answer any hypothetical (what-if) questions we might have. This paper builds a case for causal discovery and causal inference and contrasts them against traditional machine learning approaches, all from a civil and structural engineering perspective. More specifically, this paper outlines the key principles of causality and the most commonly used algorithms and packages for causal discovery and causal inference. Finally, this paper presents a series of examples and case studies of how causal concepts can be adopted in our domain.
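To make the contrast between observational association and causal inference concrete, here is a minimal, self-contained sketch (not taken from the paper) using an entirely hypothetical structural scenario: a confounder biases the naive regression estimate of a treatment effect, while backdoor adjustment recovers the true effect.

```python
import numpy as np

# Hypothetical example: a confounder Z (say, design load) drives both the
# "treatment" T (say, added reinforcement) and the outcome Y (say, deflection).
rng = np.random.default_rng(1)
n = 10_000
Z = rng.normal(size=n)                       # confounder
T = 0.8 * Z + rng.normal(size=n)             # treatment depends on Z
Y = 1.5 * T + 2.0 * Z + rng.normal(size=n)   # true causal effect of T is 1.5

# Naive (purely observational) estimate: regress Y on T alone -> biased,
# because Z opens a backdoor path between T and Y.
naive = np.polyfit(T, Y, 1)[0]

# Backdoor adjustment: regress Y on T *and* Z, read off T's coefficient.
Xmat = np.column_stack([T, Z, np.ones(n)])
adjusted = np.linalg.lstsq(Xmat, Y, rcond=None)[0][0]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f} (true effect: 1.5)")
```

The naive slope comes out near 2.5, while the adjusted estimate sits near the true 1.5, which is the essence of why causal models, rather than purely observational fits, are needed to reason about interventions.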
Demystifying Ten Big Ideas and Rules Every Fire Scientist & Engineer Should Know About Blackbox, Whitebox & Causal Artificial Intelligence
Naser, M. Z.
Artificial intelligence (AI) is paving the way towards the fourth industrial revolution within the fire domain (Fire 4.0). In fact, the next few years will be elemental to how this technology shapes our academia, practice, and entrepreneurship. Despite growing interest among fire research groups, AI remains absent from our curricula, and we continue to lack a methodical framework to adopt, apply, and create AI solutions suitable for our problems. The above is also true for parallel engineering domains (e.g., civil and mechanical engineering). To avoid repeating history (consider the continued debate over modernizing standardized fire testing), this letter to the Editor aims to demystify some of the big ideas behind AI and to jump-start prolific and strategic discussions on the front of AI & Fire. In addition, this letter intends to explain some of the most fundamental concepts and clear up common misconceptions specific to the adoption of AI in fire engineering. This short letter is a companion to the Smart Systems in Fire Engineering special issue sponsored by Fire Technology. An in-depth review of AI algorithms [1] and success stories on their proper implementation can be found in the aforementioned special issue and collection of papers. This letter comprises two sections. The first section outlines big ideas pertaining to AI and answers some of the burning questions regarding the merit of adopting AI in our domain. The second section presents a set of rules, or technical recommendations, that an AI user may find helpful whenever AI is used as an investigation methodology. These rules are complementary to the big ideas.
Insights into Performance Fitness and Error Metrics for Machine Learning
Naser, M. Z., Alavi, Amir
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis. Since ML is a data-driven approach, it seamlessly fits into our daily lives and operations as well as complex and interdisciplinary fields. With the rise of commercial, open-source, and user-catered ML tools, a key question often arises whenever ML is applied to explore a phenomenon or a scenario: what constitutes a good ML model? Keeping in mind that a proper answer to this question depends on a variety of factors, this work presumes that a good ML model is one that optimally performs and best describes the phenomenon on hand. From this perspective, identifying proper assessment metrics to evaluate the performance of ML models is not only necessary but also warranted. As such, this paper examines a number of the most commonly used performance fitness and error metrics for regression and classification algorithms, with emphasis on engineering applications.
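As a concrete companion to this abstract, the sketch below computes a handful of the standard metrics of the kind surveyed (MAE, RMSE, and R² for regression; accuracy, precision, recall, and F1 for binary classification). The formulas are textbook-standard, though the metric set discussed in the paper is considerably broader.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """A few of the most common regression fitness/error metrics."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    rmse = np.sqrt(np.mean(err**2))                  # root mean squared error
    r2 = 1.0 - np.sum(err**2) / np.sum((y_true - y_true.mean())**2)
    return {"MAE": mae, "RMSE": rmse, "R2": r2}

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary classifier."""
    y_true, y_pred = np.asarray(y_true, int), np.asarray(y_pred, int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = np.mean(y_pred == y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "F1": f1}

print(regression_metrics([1, 2, 3], [1.1, 1.9, 3.2]))
print(classification_metrics([1, 0, 1, 1], [1, 0, 0, 1]))
```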