Collaborating Authors

 Vago, Nicolò Oreste Pinciroli


Time Series Analysis in Compressor-Based Machines: A Survey

arXiv.org Artificial Intelligence

In both industrial and residential contexts, compressor-based machines, such as refrigerators, HVAC systems, heat pumps and chillers, are essential to fulfil production and consumers' needs. The diffusion of sensors and IoT connectivity supports the development of monitoring systems that can detect and predict faults, identify behavioural shifts and forecast the operational status of machines and their components. This paper surveys recent research on Fault Detection (FD), Fault Prediction (FP), Forecasting and Change Point Detection (CPD) applied to multivariate time series characterizing the operations of compressor-based machines. These tasks play a critical role in improving the efficiency and longevity of machines by minimizing downtime and maintenance costs and improving energy efficiency. Specifically, FD detects and diagnoses faults, FP predicts such occurrences, Forecasting anticipates the future values of characteristic variables of the machines and CPD identifies significant variations in the behaviour of the appliances, such as a change in the working regime. We identify and classify the approaches to these tasks, compare the algorithms employed, highlight the gaps in the current state of the art and discuss the most promising future research directions in the field.
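To make the CPD task concrete, the sketch below detects a single mean shift (e.g., a change of working regime) in a synthetic telemetry signal by exhaustively scoring candidate split points. This is an illustrative toy, not one of the surveyed methods; the function name and the synthetic signal are invented for the example.

```python
import numpy as np

def detect_change_point(x):
    """Return the index that best splits x into two segments with
    different means, by maximizing a normalized mean difference.
    Illustrative only: real CPD methods are more general and robust."""
    n = len(x)
    best_idx, best_score = None, -np.inf
    for t in range(2, n - 2):
        left, right = x[:t], x[t:]
        # Normalized gap between the means of the two candidate segments.
        score = abs(left.mean() - right.mean()) * np.sqrt(t * (n - t) / n)
        if score > best_score:
            best_idx, best_score = t, score
    return best_idx

rng = np.random.default_rng(0)
# Synthetic telemetry: a working-regime change (mean shift) at t = 100.
signal = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
cp = detect_change_point(signal)
```

With a shift of three standard deviations, the detected index lands within a few samples of the true change point.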


Predicting machine failures from multivariate time series: an industrial case study

arXiv.org Artificial Intelligence

Non-neural Machine Learning (ML) and Deep Learning (DL) models are often used to predict system failures in the context of industrial maintenance. However, few studies jointly assess the effect of varying the amount of past data used to make a prediction and how far into the future the forecast extends. This study evaluates the impact of the size of the reading window and of the prediction window on the performance of models trained to forecast failures in three data sets concerning the operation of (1) an industrial wrapping machine working in discrete sessions, (2) an industrial blood refrigerator working continuously, and (3) a nitrogen generator working continuously. The problem is formulated as a binary classification task that assigns the positive label to the prediction window based on the probability of a failure occurring in such an interval. Six algorithms (logistic regression, random forest, support vector machine, LSTM, ConvLSTM, and Transformers) are compared using multivariate telemetry time series. The results indicate that, in the considered scenarios, the size of the prediction window plays a crucial role and highlight the effectiveness of DL approaches at classifying data with diverse time-dependent patterns preceding a failure and the effectiveness of ML approaches at classifying similar and repetitive patterns preceding a failure.
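The reading-window/prediction-window formulation can be sketched as a labeling routine: each training example is a reading window of past telemetry, labeled positive if a failure falls in the prediction window immediately after it. This is a minimal sketch of the general setup, with hypothetical function and variable names, not the paper's actual preprocessing code.

```python
import numpy as np

def make_windows(series, failures, reading_w, prediction_w):
    """Slice a multivariate series into (reading window, label) pairs.
    The label is 1 if any failure timestamp falls inside the prediction
    window that immediately follows the reading window."""
    X, y = [], []
    T = len(series)
    for start in range(T - reading_w - prediction_w + 1):
        r_end = start + reading_w              # end of the reading window
        p_end = r_end + prediction_w           # end of the prediction window
        X.append(series[start:r_end])
        y.append(int(any(r_end <= f < p_end for f in failures)))
    return np.array(X), np.array(y)

# Hypothetical telemetry: 50 timesteps, 3 sensor channels, a failure at t = 30.
telemetry = np.zeros((50, 3))
X, y = make_windows(telemetry, failures=[30], reading_w=10, prediction_w=5)
```

Varying `reading_w` and `prediction_w` here corresponds to the two dimensions the study sweeps; the resulting `(X, y)` pairs can feed any of the six compared classifiers.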


DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data

arXiv.org Artificial Intelligence

In astrophysics, a gravitational lens is a matter distribution (e.g., a black hole) able to bend the trajectory of transiting light, similarly to an optical lens. The apparent distortion is caused by the curvature of the geometry of space-time around the massive body acting as a lens, a phenomenon that forces the light to travel along geodesics (i.e., the shortest paths in the curved space-time). Strong and weak gravitational lensing focus on the effects produced by particularly massive bodies (e.g., galaxies and black holes), while microlensing addresses the consequences produced by lighter entities (e.g., stars). This research proposes an approach to automatically classify strong gravitational lenses with respect to the lensed objects and to their evolution through time. Automatically finding and classifying gravitational lenses is a major challenge in astrophysics. As [103, 91, 39, 44] show, gravitational lensing systems can be complex, ubiquitous and hard to detect without computer-aided data processing. The volumes of data gathered by contemporary instruments make manual inspection infeasible. As an example, the Vera C. Rubin Observatory is expected to collect petabytes of data [108]. Moreover, strong lensing is involved in major astrophysical problems: studying massive bodies that are too faint to be analyzed with current instrumentation; characterizing the geometry, content and kinematics of the universe; and investigating mass distribution in the galaxy formation process [103].
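Since the abstract describes classifying lenses both by the lensed objects and by their evolution through time, a multi-modal classifier combining an image input with a time-series input is a natural shape for the problem. The sketch below shows generic late fusion with stand-in linear encoders; it is not the DeepGraviLens architecture, and every function and weight name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(img, W):
    # Stand-in for a convolutional image encoder: flatten + linear projection.
    return np.tanh(img.reshape(img.shape[0], -1) @ W)

def timeseries_branch(ts, W):
    # Stand-in for a sequence encoder: mean-pool over time + linear projection.
    return np.tanh(ts.mean(axis=1) @ W)

def fuse_and_classify(img_feat, ts_feat, W_out):
    # Late fusion: concatenate modality embeddings, then a linear classifier.
    fused = np.concatenate([img_feat, ts_feat], axis=1)
    logits = fused @ W_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax class probabilities

batch, classes = 4, 3
images = rng.normal(size=(batch, 8, 8))    # e.g., lens cutout images
series = rng.normal(size=(batch, 20, 2))   # e.g., brightness measurements over time
W_img = rng.normal(size=(64, 16))
W_ts = rng.normal(size=(2, 16))
W_out = rng.normal(size=(32, classes))
probs = fuse_and_classify(image_branch(images, W_img),
                          timeseries_branch(series, W_ts), W_out)
```

Each row of `probs` is a distribution over the candidate lens classes; in a real system the stand-in encoders would be trained convolutional and recurrent networks.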


Using Convolutional Neural Networks for the Helicity Classification of Magnetic Fields

arXiv.org Artificial Intelligence

Magnetic fields are known to play a prominent role in the dynamics and the energy budget of astrophysical systems on galactic and smaller scales, but their role on larger scales is still elusive [1]. In galaxies and galaxy clusters, the observed magnetic fields are assumed to result from the amplification of much weaker seed fields. Such seeds could be created in the early universe, e.g. during phase transitions or inflation, and then amplified by plasma processes. If the generation mechanism of such primordial fields (e.g. by sphaleron processes) breaks CP, then the field will have a non-zero helicity. Since helical fields decay more slowly than non-helical ones, a small non-zero initial helicity increases with time, making the intergalactic magnetic field (IGMF) either completely left- or right-helical today. A clean signature for a primordial origin of the IGMF is therefore its non-zero helicity. In a series of works, Vachaspati and collaborators worked out possible observational consequences of a helical IGMF, introducing a statistical estimator for the presence of helicity in the IGMF [2].
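The link between handedness and non-zero helicity can be checked on the simplest helical configuration, a circularly polarized (Beltrami) mode. The calculation below is purely illustrative of the signature discussed above, not the paper's CNN classifier; the field parameters are chosen for the example.

```python
import numpy as np

# A circularly polarized mode with handedness s = +1 or -1:
#   B(z) = B0 * (cos kz, s*sin kz, 0)
# with vector potential
#   A(z) = -(B0/k) * (s*cos kz, sin kz, 0),  so that curl A = B.
# The helicity density A.B = -s*B0**2/k is constant and non-zero,
# and its sign is fixed by the handedness of the field.

def mean_helicity_density(s, B0=1.0, k=2 * np.pi, n=1000):
    """Average A.B over one period of a helical mode with handedness s."""
    z = np.linspace(0.0, 1.0, n, endpoint=False)
    B = np.stack([B0 * np.cos(k * z), s * B0 * np.sin(k * z), np.zeros(n)])
    A = np.stack([-(s * B0 / k) * np.cos(k * z),
                  -(B0 / k) * np.sin(k * z), np.zeros(n)])
    return float(np.mean(np.sum(A * B, axis=0)))
```

Flipping the handedness `s` flips the sign of the mean helicity density, which is the discriminating quantity a helicity classifier must recover from data.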