performance parameter
Domain Consistent Industrial Decarbonisation of Global Coal Power Plants
Ashraf, Waqar Muhammad, Dua, Vivek, Debnath, Ramit
Machine learning and optimisation techniques (MLOPT) hold significant potential to accelerate the decarbonisation of industrial systems by enabling data-driven operational improvements. However, the practical application of MLOPT in industrial settings is often hindered by a lack of domain compliance and system-specific consistency, resulting in suboptimal solutions with limited real-world applicability. To address this challenge, we propose a novel human-in-the-loop (HITL) constraint-based optimisation framework that integrates domain expertise with data-driven methods, ensuring solutions are both technically sound and operationally feasible. We demonstrate the efficacy of this framework through a case study focused on enhancing the thermal efficiency and reducing the turbine heat rate of a 660 MW supercritical coal-fired power plant. By embedding domain knowledge as constraints within the optimisation process, our approach yields solutions that align with the plant's operational patterns and are seamlessly integrated into its control systems. Empirical validation confirms a mean improvement in thermal efficiency of 0.64% and a mean reduction in turbine heat rate of 93 kJ/kWh. Scaling our analysis to 59 global coal power plants with comparable capacity and fuel type, we estimate a cumulative lifetime reduction of 156.4 million tons of carbon emissions. These results underscore the transformative potential of our HITL-MLOPT framework in delivering domain-compliant, implementable solutions for industrial decarbonisation, offering a scalable pathway to mitigate the environmental impact of coal-based power generation worldwide.
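As a rough illustration of the constraint-embedding step, the sketch below optimises a stand-in efficiency surrogate under operator-approved bounds and a consistency band; the surrogate, variables, and limits are all hypothetical, not the plant's actual data or constraints.

```python
# Hypothetical sketch: maximise a learned efficiency surrogate while
# encoding domain knowledge as bounds and a consistency constraint.
import numpy as np
from scipy.optimize import minimize

def neg_surrogate_efficiency(x):
    """Stand-in for a trained ML surrogate mapping operating variables
    (e.g., steam pressure, feed-water flow) to thermal efficiency;
    negated because SciPy minimises."""
    return -(0.38 + 0.01 * x[0] - 0.002 * (x[0] - 3.0) ** 2 + 0.005 * x[1])

# Domain compliance: operator-approved ranges for each variable.
bounds = [(2.0, 4.0), (0.5, 1.5)]

# System consistency: the set-points must track each other within a
# band observed in historical plant operation (illustrative rule).
cons = [
    {"type": "ineq", "fun": lambda x: 0.5 - (x[0] - 2.5 * x[1])},
    {"type": "ineq", "fun": lambda x: (x[0] - 2.5 * x[1]) + 0.5},
]

res = minimize(neg_surrogate_efficiency, x0=np.array([3.0, 1.0]),
               bounds=bounds, constraints=cons, method="SLSQP")
print("recommended set-points:", res.x)
```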
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > China (0.05)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (9 more...)
- Materials > Metals & Mining > Coal (1.00)
- Energy > Power Industry > Utilities (1.00)
- Energy > Coal (1.00)
Confidence Intervals for Evaluation of Data Mining
In data mining, when binary prediction rules are used to predict a binary outcome, many performance measures are used in a vast array of literature for the purposes of evaluation and comparison. Some examples include classification accuracy, precision, recall, F measures, and the Jaccard index. Typically, these performance measures are only approximately estimated from a finite dataset, which may lead to findings that are not statistically significant. In order to properly quantify such statistical uncertainty, it is important to provide confidence intervals associated with these estimated performance measures. We consider statistical inference about general performance measures used in data mining, with both individual and joint confidence intervals. These confidence intervals are based on asymptotic normal approximations and can be computed quickly, without the need for bootstrap resampling. We study the finite-sample coverage probabilities of these confidence intervals and also propose a 'blurring correction' on the variance to improve finite-sample performance. This 'blurring correction' generalizes the plus-four method from the binomial proportion to general performance measures used in data mining. Our framework allows multiple performance measures of multiple classification rules to be inferred simultaneously for comparison.
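For the binomial base case the paper builds on, the sketch below computes both the plain normal-approximation (Wald) interval and the plus-four interval for classification accuracy; the 'blurring correction' itself generalises this idea to other measures and is not reproduced here.

```python
# Wald and plus-four confidence intervals for a binomial performance
# measure (e.g., classification accuracy).
from math import sqrt
from scipy.stats import norm

def wald_ci(correct, n, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    p = correct / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def plus_four_ci(correct, n, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    p = (correct + 2) / (n + 4)        # add two successes, two failures
    half = z * sqrt(p * (1 - p) / (n + 4))
    return p - half, p + half

print(wald_ci(46, 50))        # accuracy 0.92 on 50 test cases
print(plus_four_ci(46, 50))   # slightly widened; better small-n coverage
```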
- North America > United States > California > Orange County > Irvine (0.14)
- North America > United States > New York (0.04)
- Oceania > Australia > Tasmania (0.04)
- (2 more...)
- Research Report > New Finding (0.70)
- Research Report > Experimental Study (0.50)
Optimal Kernel Tuning Parameter Prediction using Deep Sequence Models
Mahmood, Khawir, Khan, Jehandad, Afzal, Hammad
GPU kernels have come to the forefront of computing due to their utility in varied fields, from high-performance computing to machine learning. A typical GPU compute kernel is invoked millions, if not billions, of times in a typical application, which makes kernel performance highly critical. Due to the unknown nature of the optimization surface, an exhaustive search is required to discover the global optimum, which is infeasible due to the exponential number of possible parameter combinations. In this work, we propose a methodology that uses deep sequence-to-sequence models to predict the optimal tuning parameters governing compute kernels. This work treats the prediction of kernel parameters as a sequence-to-sequence translation problem, borrowing models from the Natural Language Processing (NLP) domain. Parameters describing the input, output, and weight tensors are considered the input language to the model, which emits the corresponding kernel parameters. In essence, the model translates the problem-parameter language into the kernel-parameter language. The core contributions of this work are: a) proposing that a sequence-to-sequence model can accurately learn the performance dynamics of a GPU compute kernel; b) a novel network architecture which predicts the kernel tuning parameters for GPU kernels; and c) a constrained beam search which incorporates the physical limits of the GPU hardware as well as other expert knowledge, reducing the search space. The proposed algorithm can achieve more than 90% accuracy on various convolutional kernels in MIOpen, the AMD machine learning primitives library. As a result, the proposed technique can reduce the development time and compute resources required to tune unseen input configurations, resulting in shorter development cycles, reduced development costs, and better user experience.
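A minimal sketch of the constrained-beam-search idea follows: candidate parameter sequences are expanded token by token and pruned against a hardware limit before entering the beam. The scoring function, vocabulary, and feasibility rule are illustrative stand-ins, not the paper's network or MIOpen's actual constraints.

```python
# Constrained beam search over tuning-parameter sequences, pruning
# hardware-infeasible candidates before they enter the beam.
import heapq

VOCAB = [16, 32, 64, 128, 256]   # candidate values per tuning parameter
NUM_PARAMS = 3                   # e.g., tile sizes / unroll factors
MAX_THREADS = 1024               # example physical limit of the GPU

def log_prob(prefix, value):
    """Stand-in for the decoder's next-token log-probability."""
    return -abs(value - 64) / 64.0

def feasible(prefix):
    # Expert-knowledge rule (illustrative): the product of the chosen
    # sizes must not exceed the thread limit.
    prod = 1
    for v in prefix:
        prod *= v
    return prod <= MAX_THREADS

def constrained_beam_search(beam_width=3):
    beam = [(0.0, [])]
    for _ in range(NUM_PARAMS):
        candidates = []
        for score, prefix in beam:
            for v in VOCAB:
                new = prefix + [v]
                if feasible(new):            # prune infeasible early
                    candidates.append((score + log_prob(prefix, v), new))
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beam

print(constrained_beam_search())
```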
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Pakistan > Islamabad Capital Territory > Islamabad (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Machine Translation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Identifying Best Practice Melting Patterns in Induction Furnaces: A Data-Driven Approach Using Time Series KMeans Clustering and Multi-Criteria Decision Making
Howard, Daniel Anthony, Jørgensen, Bo Nørregaard, Ma, Zheng
Improving energy efficiency in industrial production processes is crucial for competitiveness and for compliance with climate policies. This paper introduces a data-driven approach to identify optimal melting patterns in induction furnaces. Through time-series K-means clustering, the melting patterns were classified into distinct clusters based on their temperature profiles. Using the elbow method, 12 clusters were identified, representing the range of melting patterns. Performance parameters such as melting time, energy-specific performance, and carbon cost were established for each cluster, indicating furnace efficiency and environmental impact. Multiple-criteria decision-making methods including Simple Additive Weighting, Multiplicative Exponential Weighting, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), modified TOPSIS, and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) were utilized to determine the best-practice cluster. The study successfully identified the cluster with the best performance. Implementing the best-practice operation resulted in an 8.6% reduction in electricity costs, highlighting the potential energy savings in the foundry.
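A minimal sketch of the pipeline, assuming temperature profiles resampled to a common length: cluster with tslearn's TimeSeriesKMeans, pick the cluster count via the elbow on inertia, then rank clusters with a toy Simple Additive Weighting step. All data, weights, and criteria values are illustrative, not the foundry's.

```python
# Time-series K-means with an elbow scan, followed by SAW ranking.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

rng = np.random.default_rng(0)
profiles = rng.normal(size=(200, 120, 1))   # 200 melts, 120 time steps

inertias = []
for k in range(2, 16):
    km = TimeSeriesKMeans(n_clusters=k, metric="euclidean", random_state=0)
    km.fit(profiles)
    inertias.append((k, km.inertia_))
# Choose k at the elbow (the paper found k = 12), then score clusters.

# Simple Additive Weighting over illustrative per-cluster criteria
# (rows: clusters; columns: melt time, kWh/ton, carbon cost; lower is better).
criteria = np.array([[55, 520, 12.0], [48, 495, 11.1], [60, 540, 12.8]])
norm = criteria.min(axis=0) / criteria      # cost-type normalisation
weights = np.array([0.4, 0.4, 0.2])
print("best-practice cluster:", np.argmax(norm @ weights))
```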
- Europe > Denmark > Southern Denmark (0.05)
- South America > Brazil > São Paulo > Campinas (0.04)
- Europe > Northern Europe (0.04)
- (3 more...)
- Energy > Power Industry (1.00)
- Law > Environmental Law (0.67)
- Materials > Metals & Mining > Iron (0.46)
Transformers for Trajectory Optimization with Application to Spacecraft Rendezvous
Guffanti, Tommaso, Gammelli, Daniele, D'Amico, Simone, Pavone, Marco
Reliable and efficient trajectory optimization methods are a fundamental need for autonomous dynamical systems, effectively enabling applications including rocket landing, hypersonic reentry, spacecraft rendezvous, and docking. Within such safety-critical application areas, the complexity of the emerging trajectory optimization problems has motivated the application of AI-based techniques to enhance the performance of traditional approaches. However, current AI-based methods either attempt to fully replace traditional control algorithms, thus lacking constraint satisfaction guarantees and incurring expensive simulation, or aim to solely imitate the behavior of traditional methods via supervised learning. To address these limitations, this paper proposes the Autonomous Rendezvous Transformer (ART) and assesses the capability of modern generative models to solve complex trajectory optimization problems, both from a forecasting and control standpoint. Specifically, this work assesses the capabilities of Transformers to (i) learn near-optimal policies from previously collected data, and (ii) warm-start a sequential optimizer for the solution of non-convex optimal control problems, thus guaranteeing hard constraint satisfaction. From a forecasting perspective, results highlight how ART outperforms other learning-based architectures at predicting known fuel-optimal trajectories. From a control perspective, empirical analyses show how policies learned through Transformers are able to generate near-optimal warm-starts, achieving trajectories that are (i) more fuel-efficient, (ii) obtained in fewer sequential optimizer iterations, and (iii) computed with an overall runtime comparable to benchmarks based on convex optimization.
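The warm-starting idea can be illustrated with a toy problem (not the ART model or its dynamics): a learned policy's predicted control sequence seeds a sequential optimiser, which then enforces the hard terminal constraint.

```python
# Toy warm-start comparison: the same constrained problem solved from
# a zero initial guess and from a "learned" guess.
import numpy as np
from scipy.optimize import minimize

T, dim = 10, 2                           # horizon and control dimension
target = np.array([1.0, -0.5])           # required terminal displacement

def fuel_cost(u_flat):
    return float(np.sum(u_flat ** 2))    # quadratic fuel proxy

def terminal_residual(u_flat):
    # Toy single-integrator: final displacement is the sum of controls.
    return u_flat.reshape(T, dim).sum(axis=0) - target

def learned_warm_start():
    """Stand-in for a Transformer policy's predicted control sequence."""
    return np.tile(target / T, T)

constraint = {"type": "eq", "fun": terminal_residual}
cold = minimize(fuel_cost, np.zeros(T * dim), constraints=constraint,
                method="SLSQP")
warm = minimize(fuel_cost, learned_warm_start(), constraints=constraint,
                method="SLSQP")
print("iterations cold vs warm:", cold.nit, warm.nit)
```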
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- (7 more...)
- Transportation (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
- Energy (0.93)
- (3 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Constraint-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Parallel and Sequential Resources Networks
Benatti, Alexandre, Costa, Luciano da F.
A large number of real and abstract systems involve the transformation of some basic resource into respective products under the action of multiple processing agents, and can be understood as multiple-agent production systems (MAP). At each discrete time instant, each agent is assumed to keep a fraction of its resources, forward a fraction to other agents, or convert the remainder into work with some efficiency. The present work describes a systematic study of nine basic MAP architectures, subdivided into two main groups, namely parallel and sequential distribution of resources from a single source. Several types of interconnections among the involved processing agents are also considered. The resulting MAP architectures are studied in terms of the total amount of work, the dispersion of the resources (states) among the agents, and the transition times from the start of operation until the respective steady state. Several interesting results are obtained and discussed, including the observation that some of the parallel designs were able to yield maximum work and minimum state dispersion, achieved at the expense of longer transition times and the use of several interconnections between the source and the agents. The results obtained for the sequential designs indicate that relatively high performance can be obtained in some specific cases.
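The update rule described above can be sketched in a few lines; the chain architecture, rates, and source schedule below are illustrative choices, not the nine architectures studied in the paper.

```python
# Discrete-time MAP dynamics: each agent keeps a fraction of its
# resources, forwards a fraction along its out-links, and converts
# the rest into work with some efficiency.
import numpy as np

n = 4
keep, forward, eff = 0.2, 0.5, 0.8      # fractions; the rest is converted
A = np.zeros((n, n))                    # A[i, j]: share forwarded i -> j
for i in range(n - 1):                  # a simple sequential chain
    A[i, i + 1] = 1.0

state = np.zeros(n)
work = 0.0
for t in range(200):
    state[0] += 1.0                     # source feeds the first agent
    converted = (1 - keep - forward) * state
    work += eff * converted.sum()
    state = keep * state + forward * (A.T @ state)

print("total work:", work, "state dispersion:", state.std())
```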
- South America > Brazil > São Paulo (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
Performance Tuning for GPU-Embedded Systems: Machine-Learning-based and Analytical Model-driven Tuning Methodologies
Dieguez, Adrian Perez, Lopez, Margarita Amor
GPU-embedded systems have gained popularity across various domains due to their efficient power consumption. However, to meet the demands of real-time or time-consuming applications running on these systems, it is crucial that they be tuned for high performance. This paper addresses the issue by developing and comparing two tuning methodologies on GPU-embedded systems, and also provides performance insights for developers and researchers seeking to optimize applications running on these architectures. We focus on parallel prefix operations, such as FFT, scan primitives, and tridiagonal system solvers, which are performance-critical components in many applications. The study introduces an analytical model-driven tuning methodology and a Machine Learning (ML)-based tuning methodology. We evaluate the performance of the two tuning methodologies for different parallel prefix implementations of the BPLG library on an NVIDIA Jetson system, and compare their performance to that achieved through an exhaustive search. The findings shed light on the best strategies for handling the open challenge of performance portability for major computational patterns among server and embedded devices, providing practical guidance for offline and online tuning. We also address the existing gap in performance studies for parallel computational patterns in GPU-embedded systems by comparing the BPLG performance against other state-of-the-art libraries, including CUSPARSE, CUB, and CUFFT.
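The ML-based tuning methodology can be caricatured as follows: time a small sample of configurations, fit a runtime model, and rank the remaining candidates by predicted runtime instead of exhaustively measuring them. Features and the cost surface below are synthetic stand-ins, not BPLG measurements.

```python
# ML-based autotuning sketch: learn a runtime model from a timed
# sample and use it to pick a configuration without a full search.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
configs = rng.integers(1, 9, size=(500, 3))      # e.g., block/tile/unroll
runtime = (configs[:, 0] * configs[:, 1]) / 8.0 + 0.3 * configs[:, 2] \
          + rng.normal(scale=0.1, size=500)      # synthetic cost surface

measured = rng.choice(500, size=100, replace=False)   # small timed sample
model = RandomForestRegressor(random_state=0).fit(configs[measured],
                                                  runtime[measured])
best = configs[np.argmin(model.predict(configs))]
print("predicted-best configuration:", best)
```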
- North America > United States > California > Alameda County > Berkeley (0.14)
- Europe > Spain > Galicia > A Coruña Province > A Coruña (0.04)
- Asia > Russia > Siberian Federal District > Novosibirsk Oblast > Novosibirsk (0.04)
Demonstration of a Response Time Based Remaining Useful Life (RUL) Prediction for Software Systems
Prognostic and Health Management (PHM) has been widely applied to hardware systems in the electronics and non-electronics domains but has not been explored for software. While software does not decay over time, it can degrade over release cycles. Software health management is confined to diagnostic assessments that identify problems, whereas prognostic assessment potentially indicates when in the future a problem will become detrimental. Relevant research areas exist, such as software defect prediction, software reliability prediction, predictive maintenance of software, software degradation, and software performance prediction, but all of these represent diagnostic models built upon historical data, and none can predict an RUL for software. This paper addresses the application of PHM concepts to software systems for fault prediction and RUL estimation. Specifically, this paper addresses how PHM can be used to make decisions for software systems such as version updates and upgrades, module changes, system reengineering, rejuvenation, maintenance scheduling, budgeting, and total abandonment. This paper presents a method to prognostically and continuously predict the RUL of a software system based on usage parameters (e.g., the numbers and categories of releases) and performance parameters (e.g., response time). The model developed has been validated by comparing actual data with the results generated by the predictive models. Statistical validation (regression validation and k-fold cross-validation) has also been carried out. A case study, based on publicly available data for the Bugzilla application, is presented. This case study demonstrates that PHM concepts can be applied to software systems and that an RUL can be calculated to make system management decisions.
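The core mechanism, predicting an RUL from a response-time degradation trend, can be sketched as a trend fit extrapolated to a service threshold; the data and threshold below are made up, not the Bugzilla case-study values.

```python
# Fit a degradation trend over release cycles and extrapolate to the
# threshold crossing; the distance to the crossing is the RUL.
import numpy as np

releases = np.arange(1, 11)
response_ms = np.array([210, 215, 224, 228, 240, 243, 255, 261, 270, 282])
threshold_ms = 350.0

slope, intercept = np.polyfit(releases, response_ms, 1)
crossing = (threshold_ms - intercept) / slope
rul_releases = crossing - releases[-1]
print(f"predicted RUL: {rul_releases:.1f} more releases")
```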
- North America > United States > New York (0.04)
- North America > United States > New Jersey (0.04)
- North America > United States > Minnesota (0.04)
- (11 more...)
Towards Practical Application of Deep Learning in Diagnosis of Alzheimer's Disease
Accurate diagnosis of Alzheimer's disease (AD) is both challenging and time-consuming. With a systematic approach for early detection and diagnosis of AD, steps can be taken towards the treatment and prevention of the disease. This study explores the practical application of deep learning models for the diagnosis of AD. Due to computational complexity, long training times, and the limited availability of labelled datasets, a 3D full-brain CNN (convolutional neural network) is not commonly used, and researchers often prefer 2D CNN variants. In this study, full-brain 3D versions of well-known 2D CNNs were designed, trained, and tested for the diagnosis of various stages of AD. The deep learning approach shows good performance in differentiating various stages of AD for more than 1500 full-brain volumes. Along with classification, the deep learning models are capable of extracting features which are key in differentiating the various categories. The extracted features align with meaningful anatomical landmarks that are currently considered important in the identification of AD by experts. An ensemble of all the models was also tested, and the performance of the ensemble was superior to that of any individual model, further improving diagnostic ability. The 3D versions of the trained CNNs and their ensemble have the potential to be incorporated into software packages that can be used by physicians/radiologists to assist them in better diagnosis of AD.
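A minimal PyTorch sketch of a full-brain 3D CNN classifier is shown below; it is a toy stand-in for the 3D variants of well-known 2D architectures, with illustrative input size and class count.

```python
# Tiny 3D CNN: Conv3d feature extractor plus a linear classifier head.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes=3):       # e.g., CN / MCI / AD stages
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # pooled features for the head
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):                     # x: (batch, 1, D, H, W)
        f = self.features(x).flatten(1)
        return self.classifier(f)

logits = Tiny3DCNN()(torch.randn(2, 1, 64, 64, 64))
print(logits.shape)                           # torch.Size([2, 3])
```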
- North America > United States > Texas (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
Complementing the Linear-Programming Learning Experience with the Design and Use of Computerized Games: The Formula 1 Championship Game
This document focuses on modeling a complex situation to achieve an advantage within a competitive context. Our goal is to devise the characteristics of games that teach and exercise tasks which are not easily quantifiable yet are crucial to the math-modeling process. A computerized game to exercise the math-modeling process and optimization problem formulation is introduced. The game is named The Formula 1 Championship, and models of the game were developed in the computerized simulation platform MoNet. It resembles situations in which team managers must make crucial decisions to enhance their racing cars up to the feasible, most advantageous conditions. This paper describes the game's rules, limitations, and the five Formula 1 circuit simulators used for the championship development. We present several formulations of this situation in the form of optimization problems. One approach is to administer the budget to reach the best car adjustment for a set of circuits in order to win the respective races. Another is to focus first on the best distribution of the budget across each Grand Prix and then decide how to use the assigned money to improve the car. In general, there may be a degree of conflict among these approaches because they are different aspects of the same multi-scale optimization problem. Therefore, we evaluate the impact of assigning the highest priority to one element or another when formulating the optimization problem. Studying the effectiveness of solving such optimization problems turns out to be an exciting way of evaluating the advantages of focusing on one scale or another. Another thread of this research addresses the role of the game in the teaching-learning process. We believe applying the Formula 1 Game is an effective way to discover opportunities in a complex-system situation, formulate them, and finally extract and concretize the related benefit in the described context.
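One of the budget-allocation formulations can be sketched as a small linear program (a hypothetical instance, not the game's actual MoNet model): spend a fixed budget across improvement areas to maximise a linear performance gain under per-area caps.

```python
# Toy budget-allocation LP for the car-improvement decision.
import numpy as np
from scipy.optimize import linprog

gain = np.array([3.0, 2.0, 4.0])        # gain per unit spent: engine/aero/tyres
budget = 100.0
caps = [(0, 50), (0, 60), (0, 40)]      # per-area spending limits

res = linprog(c=-gain,                  # linprog minimises, so negate the gain
              A_ub=np.ones((1, 3)), b_ub=[budget],
              bounds=caps, method="highs")
print("spend per area:", res.x, "total gain:", -res.fun)
```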
- Asia > Malaysia (0.05)
- South America > Venezuela (0.04)
- Europe > United Kingdom > England (0.04)
- (4 more...)
- Research Report (0.64)
- Instructional Material > Course Syllabus & Notes (0.46)