Multi-agent deep reinforcement learning with centralized training and decentralized execution for transportation infrastructure management
Saifullah, M., Papakonstantinou, K. G., Andriotis, C. P., Stoffels, S. M.
Optimal management of cross-asset infrastructure is a complex problem that requires adept inspection and maintenance policies addressing stochastic degradation impacts. According to the 2021 ASCE infrastructure report card [1], the US infrastructure is in fair to poor condition, earning a cumulative grade of C-, with components nearing the end of their useful lives and at high risk of failure. Pavements and bridges are indicative examples of inadequate infrastructure. One in every five miles of pavements is in poor condition, and 7.5% of bridges are structurally deficient. Economic analyses indicate that the US Department of Transportation fell 50% short of the funds required to sustain the national transportation system [1], which is also reflected in the available resources at individual State transportation agencies. The Virginia Department of Transportation, for example, reported that 50% of the State's bridges have exceeded their useful lives, and the required funds to replace them are five times greater than the estimated available funds over the next fifty years [2]. Inspection and Maintenance (I&M) policies are therefore indispensable for efficiently distributing available economic and environmental resources for transportation systems. Making optimal decisions in complex and uncertain environments presents a variety of difficulties, including heterogeneity of asset classes, a high number of components resulting in vast state and action spaces, unreliable observations, limited availability of resources, and several related risks. Optimal solutions that define inspection and maintenance policies should thus incorporate concepts such as (i) online and offline data learning, (ii) imperfect information support, (iii) consideration of stochastic action outcomes, and (iv) optimization of long-term goals under multiple constraints (e.g., safety targets or resource constraints).
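The centralized-training, decentralized-execution (CTDE) pattern named in the title can be sketched minimally as follows; the component counts, action labels, and linear actor/critic forms are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPONENTS = 3  # e.g., pavement sections or bridge elements (illustrative)
N_STATES = 4      # discrete condition states per component
N_ACTIONS = 3     # e.g., do-nothing / repair / replace (illustrative)

# Decentralized execution: each component's agent maps only its own
# belief over local condition states to an action.
actor_weights = [rng.normal(size=(N_STATES, N_ACTIONS)) for _ in range(N_COMPONENTS)]

def act(beliefs):
    """Sample one action per agent from a softmax over its local logits."""
    actions = []
    for w, b in zip(actor_weights, beliefs):
        logits = b @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()
        actions.append(int(rng.choice(N_ACTIONS, p=p)))
    return actions

# Centralized training: a single critic scores the JOINT belief state,
# providing a common learning signal even though execution is local.
critic_w = rng.normal(size=N_COMPONENTS * N_STATES)

def critic_value(beliefs):
    return float(np.concatenate(beliefs) @ critic_w)

beliefs = [np.full(N_STATES, 1.0 / N_STATES) for _ in range(N_COMPONENTS)]
joint_action = act(beliefs)        # each agent acts on local information
baseline = critic_value(beliefs)   # critic sees the full system state
```

At deployment only the per-agent actors are needed, which is what keeps execution tractable as the number of components grows.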
POMDP inference and robust solution via deep reinforcement learning: An application to railway optimal maintenance
Arcieri, Giacomo, Hoelzl, Cyprien, Schwery, Oliver, Straub, Daniel, Papakonstantinou, Konstantinos G., Chatzi, Eleni
Partially Observable Markov Decision Processes (POMDPs) can model complex sequential decision-making problems under stochastic and uncertain environments. A main reason hindering their broad adoption in real-world applications is the lack of a suitable POMDP model or a simulator thereof. Available solution algorithms, such as Reinforcement Learning (RL), require knowledge of the transition dynamics and the observation generating process, which are often unknown and non-trivial to infer. In this work, we propose a combined framework for inference and robust solution of POMDPs via deep RL. First, all transition and observation model parameters are jointly inferred via Markov Chain Monte Carlo sampling of a hidden Markov model, which is conditioned on actions, in order to recover full posterior distributions from the available data. The POMDP with uncertain parameters is then solved via deep RL techniques with the parameter distributions incorporated into the solution via domain randomization, in order to develop solutions that are robust to model uncertainty. As a further contribution, we compare the use of transformers and long short-term memory networks, which constitute model-free RL solutions, with a model-based/model-free hybrid approach. We apply these methods to the real-world problem of optimal maintenance planning for railway assets.
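The action-conditioned hidden Markov model likelihood that such an MCMC sampler must evaluate at every draw can be sketched with the standard forward algorithm; the state, observation, and action counts, and the randomly generated matrices, are illustrative assumptions:

```python
import numpy as np

N_STATES, N_OBS, N_ACTIONS = 3, 3, 2  # sizes are illustrative
rng = np.random.default_rng(1)

def random_stochastic(shape):
    """Random matrix whose rows sum to one."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

# One transition matrix per action: T[a][s, s'] = p(s' | s, a);
# in the inference setting these are the parameters being sampled.
T = random_stochastic((N_ACTIONS, N_STATES, N_STATES))
O = random_stochastic((N_STATES, N_OBS))  # O[s, o] = p(o | s)
prior = np.full(N_STATES, 1.0 / N_STATES)

def log_likelihood(obs, actions):
    """Forward algorithm for an HMM whose transitions depend on the
    action taken between consecutive observations."""
    alpha = prior * O[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()  # normalize to avoid numerical underflow
    for o, a in zip(obs[1:], actions):
        alpha = (alpha @ T[a]) * O[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return float(logp)

ll = log_likelihood(obs=[0, 1, 2, 1], actions=[0, 1, 0])
```

An MCMC routine would propose new `T` and `O` matrices and accept or reject them according to this likelihood times the prior, yielding the posterior distributions the paper feeds into the RL solution.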
Bridging POMDPs and Bayesian decision making for robust maintenance planning under model uncertainty: An application to railway systems
Arcieri, Giacomo, Hoelzl, Cyprien, Schwery, Oliver, Straub, Daniel, Papakonstantinou, Konstantinos G., Chatzi, Eleni
Structural Health Monitoring (SHM) describes a process for inferring quantifiable metrics of structural condition, which can serve as input to support decisions on the operation and maintenance of infrastructure assets. Given the long lifespan of critical structures, this problem can be cast as a sequential decision making problem over prescribed horizons. Partially Observable Markov Decision Processes (POMDPs) offer a formal framework to solve the underlying optimal planning task. However, two issues can undermine the POMDP solutions. Firstly, the need for a model that can adequately describe the evolution of the structural condition under deterioration or corrective actions and, secondly, the non-trivial task of recovering the observation process parameters from available monitoring data. Despite these potential challenges, the adopted POMDP models do not typically account for uncertainty in model parameters, leading to solutions which can be unrealistically confident. In this work, we address both key issues. We present a framework to estimate POMDP transition and observation model parameters directly from available data, via Markov Chain Monte Carlo (MCMC) sampling of a Hidden Markov Model (HMM) conditioned on actions. The MCMC inference estimates distributions of the involved model parameters. We then form and solve the POMDP problem by exploiting the inferred distributions, to derive solutions that are robust to model uncertainty. We successfully apply our approach on maintenance planning for railway track assets on the basis of a "fractal value" indicator, which is computed from actual railway monitoring data.
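One simple way to exploit the inferred parameter distributions when solving the planning problem is to redraw model parameters for every training episode, in the spirit of domain randomization. The toy rollout below assumes a hypothetical scalar degradation-rate parameter with synthetic posterior draws; none of it reflects the paper's actual railway model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for MCMC output: posterior draws of a scalar
# degradation-rate parameter (values are purely synthetic).
posterior_draws = rng.normal(loc=0.1, scale=0.02, size=500)

def run_episode(rate, horizon=10):
    """Toy deteriorating-asset rollout: condition decays at `rate`,
    and worse condition accrues higher cost."""
    condition, cost = 1.0, 0.0
    for _ in range(horizon):
        condition = max(condition - rate, 0.0)
        cost += 1.0 - condition
    return cost

# Each episode uses a fresh parameter draw, so a policy trained on
# these rollouts is implicitly averaged over model uncertainty
# rather than tuned to a single point estimate.
costs = [run_episode(rng.choice(posterior_draws)) for _ in range(100)]
mean_cost = float(np.mean(costs))
```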
Value of structural health monitoring quantification in partially observable stochastic environments
Andriotis, C. P., Papakonstantinou, K. G., Chatzi, E. N.
Sequential decision-making under uncertainty for optimal life-cycle control of deteriorating engineering systems and infrastructure entails two fundamental classes of decisions. The first class pertains to the various structural interventions, which can directly modify the existing properties of the system, while the second class refers to prescribing appropriate inspection and monitoring schemes, which are essential for updating our existing knowledge about the system states. The latter have to rely on quantifiable measures of efficiency, determined on the basis of objective criteria that, among others, consider the Value of Information (VoI) of different observational strategies, and the Value of Structural Health Monitoring (VoSHM) over the entire system life-cycle. In this work, we present general solutions for quantifying the VoI and VoSHM in partially observable stochastic domains, and although our definitions and methodology are general, we particularly emphasize the role of Partially Observable Markov Decision Processes (POMDPs) in solving this problem, due to their advantageous theoretical and practical attributes in approximating globally optimal policies arbitrarily well. POMDP formulations are articulated for different structural environments having shared intervention actions but diversified inspection and monitoring options, thus enabling VoI and VoSHM estimation through their differentiated stochastic optimal control policies. POMDP solutions are derived using point-based solvers, which can efficiently approximate the POMDP value functions through Bellman backups at selected reachable points of the belief space. The suggested methodology is applied to stationary and non-stationary deteriorating environments, with both infinite and finite planning horizons, featuring single- or multi-component engineering systems.
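In its simplest one-step form, the VoI quantification described above reduces to comparing the best expected cost attainable before and after a Bayesian belief update. The two-state, two-action numbers below, and the perfect-inspection observation model, are purely illustrative:

```python
import numpy as np

belief = np.array([0.7, 0.3])          # p(good), p(bad)
cost = np.array([[0.0, 10.0],          # do-nothing: costly only if bad
                 [2.0, 2.0]])          # repair: flat cost either way
O = np.eye(2)                          # perfect inspection: o == s

def best_expected_cost(b):
    """Expected cost of the best action under belief b."""
    return min(float(cost[a] @ b) for a in range(2))

# Without inspecting: act directly on the prior belief.
v_prior = best_expected_cost(belief)

# With inspecting: observe, update the belief via Bayes, then act.
v_obs = 0.0
for o in range(2):
    p_o = float(O[:, o] @ belief)
    if p_o > 0:
        posterior = O[:, o] * belief / p_o
        v_obs += p_o * best_expected_cost(posterior)

voi = v_prior - v_obs  # here 2.0 - 0.6 = 1.4 expected cost saved
```

The paper's multi-step setting replaces these one-step costs with POMDP value functions computed over the whole life-cycle, but the logic is the same: VoI is the difference between the optimal expected costs of the environments with and without the observation option.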