Collaborating Authors

 Fraternali, Piero


Illegal Waste Detection in Remote Sensing Images: A Case Study

arXiv.org Artificial Intelligence

Environmental crime currently represents the third largest criminal activity worldwide while threatening ecosystems as well as human health. Among the crimes related to this activity, improper waste management can nowadays be countered more easily thanks to the increasing availability and decreasing cost of Very-High-Resolution Remote Sensing images, which enable semi-automatic territory scanning in search of illegal landfills. This paper proposes a pipeline, developed in collaboration with professionals from a local environmental agency, for detecting candidate illegal dumping sites leveraging a classifier of Remote Sensing images. To identify the best configuration for such a classifier, an extensive set of experiments was conducted and the impact of diverse image characteristics and training settings was thoroughly analyzed. The local environmental agency was then involved in an experimental exercise in which the outputs of the developed classifier were integrated into the experts' everyday work, resulting in time savings compared with manual photo-interpretation. The classifier was eventually run, with valuable results, on a location outside the training area, highlighting the potential for cross-border applicability of the proposed pipeline.


Time Series Analysis in Compressor-Based Machines: A Survey

arXiv.org Artificial Intelligence

In both industrial and residential contexts, compressor-based machines, such as refrigerators, HVAC systems, heat pumps and chillers, are essential to fulfil production and consumers' needs. The diffusion of sensors and IoT connectivity supports the development of monitoring systems that can detect and predict faults, identify behavioural shifts and forecast the operational status of machines and their components. The focus of this paper is to survey the recent research on tasks such as Fault Detection (FD), Fault Prediction (FP), Forecasting and Change Point Detection (CPD) applied to multivariate time series characterizing the operations of compressor-based machines. These tasks play a critical role in improving the efficiency and longevity of machines by minimizing downtime and maintenance costs and improving energy efficiency. Specifically, FD detects and diagnoses faults, FP predicts such occurrences, Forecasting anticipates the future values of characteristic variables of machines and CPD identifies significant variations in the behaviour of the appliances, such as a change in the working regime. We identify and classify the approaches to the tasks mentioned above, compare the algorithms employed, highlight the gaps in the current state of the art and discuss the most promising future research directions in the field.
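Of the four tasks, Change Point Detection is perhaps the simplest to illustrate. A minimal sketch, not drawn from any specific surveyed algorithm, slides two adjacent windows over a univariate series and flags the index where their means diverge most:

```python
def detect_change_point(series, window):
    """Naive mean-shift change point detection: return the index where the
    gap between the means of two adjacent windows is largest."""
    best_i, best_gap = None, 0.0
    for i in range(window, len(series) - window + 1):
        left = sum(series[i - window:i]) / window    # mean of the past window
        right = sum(series[i:i + window]) / window   # mean of the next window
        gap = abs(right - left)
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i
```

On a signal that steps from one working regime to another (e.g., a compressor duty cycle jumping to a new level), this returns the index of the regime change.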


Predicting machine failures from multivariate time series: an industrial case study

arXiv.org Artificial Intelligence

Non-neural Machine Learning (ML) and Deep Learning (DL) models are often used to predict system failures in the context of industrial maintenance. However, only a few studies jointly assess the effect of varying the amount of past data used to make a prediction and of how far into the future the forecast extends. This study evaluates the impact of the size of the reading window and of the prediction window on the performance of models trained to forecast failures in three data sets concerning the operation of (1) an industrial wrapping machine working in discrete sessions, (2) an industrial blood refrigerator working continuously, and (3) a nitrogen generator working continuously. The problem is formulated as a binary classification task that assigns the positive label to the prediction window based on the probability of a failure occurring in that interval. Six algorithms (logistic regression, random forest, support vector machine, LSTM, ConvLSTM, and Transformers) are compared using multivariate telemetry time series. The results indicate that, in the considered scenarios, the size of the prediction window plays a crucial role and highlight the effectiveness of DL approaches at classifying data with diverse time-dependent patterns preceding a failure and of ML approaches at classifying similar and repetitive patterns preceding a failure.
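The reading-window/prediction-window formulation can be sketched as follows (a hypothetical `make_windows` helper, not the paper's code): each training sample pairs the last `rw` telemetry readings with a label that is positive iff a failure occurs within the next `pw` steps.

```python
import numpy as np

def make_windows(telemetry, failures, rw, pw):
    """Slice a multivariate series into (reading window, label) pairs.

    telemetry: (T, F) array of sensor readings
    failures:  (T,) boolean array, True where a failure occurred
    rw, pw:    reading-window and prediction-window lengths, in samples
    """
    X, y = [], []
    for t in range(rw, len(telemetry) - pw + 1):
        X.append(telemetry[t - rw:t])           # past rw samples
        y.append(failures[t:t + pw].any())      # positive if any failure in pw
    return np.stack(X), np.array(y, dtype=int)
```

Enlarging `pw` makes positives more frequent but the prediction less actionable, which is one way to read the paper's finding that the prediction-window size is the crucial parameter.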


Solid Waste Detection in Remote Sensing Images: A Survey

arXiv.org Artificial Intelligence

The detection and characterization of illegal solid waste disposal sites are essential for environmental protection, particularly for mitigating pollution and health hazards. Improperly managed landfills contaminate soil and groundwater via rainwater infiltration, posing threats to both animals and humans. Traditional landfill identification approaches, such as on-site inspections, are time-consuming and expensive. Remote sensing is a cost-effective solution for the identification and monitoring of solid waste disposal sites that enables broad coverage and repeated acquisitions over time. Earth Observation (EO) satellites, equipped with an array of sensors and imaging capabilities, have been providing high-resolution data for several decades. Researchers have proposed specialized techniques that leverage remote sensing imagery to perform a range of tasks such as waste site detection, dumping site monitoring, and assessment of suitable locations for new landfills. This review aims to provide a detailed illustration of the most relevant proposals for the detection and monitoring of solid waste sites by describing and comparing the approaches, the implemented techniques, and the employed data. Furthermore, since the data sources are of the utmost importance for developing an effective solid waste detection model, a comprehensive overview of the satellites and publicly available data sets is presented. Finally, this paper identifies the open issues in the state of the art and discusses the relevant research directions for reducing the costs and improving the effectiveness of novel solid waste detection methods.


DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data

arXiv.org Artificial Intelligence

In astrophysics, a gravitational lens is a matter distribution (e.g., a black hole) able to bend the trajectory of transiting light, similar to an optical lens. Such apparent distortion is caused by the curvature of the geometry of space-time around the massive body acting as a lens, a phenomenon that forces the light to travel along the geodesics (i.e., the shortest paths in the curved space-time). Strong and weak gravitational lensing focus on the effects produced by particularly massive bodies (e.g., galaxies and black holes), while microlensing addresses the consequences produced by lighter entities (e.g., stars). This research proposes an approach to automatically classify strong gravitational lenses with respect to the lensed objects and to their evolution through time. Automatically finding and classifying gravitational lenses is a major challenge in astrophysics. As [103, 91, 39, 44] show, gravitational lensing systems can be complex, ubiquitous and hard to detect without computer-aided data processing. The volumes of data gathered by contemporary instruments make manual inspection unfeasible. As an example, the Vera C. Rubin Observatory is expected to collect petabytes of data [108]. Moreover, strong lensing is involved in major astrophysical problems: studying massive bodies that are too faint to be analyzed with current instrumentation; characterizing the geometry, content and kinematics of the universe; and investigating mass distribution in the galaxy formation process [103].


Black-box error diagnosis in deep neural networks: a survey of tools

arXiv.org Artificial Intelligence

The application of Deep Neural Networks (DNNs) to a broad variety of tasks demands methods for coping with the complex and opaque nature of these architectures. The analysis of performance can be pursued in two ways. On the one hand, model interpretation techniques aim at "opening the box" to assess the relationship between the input, the inner layers, and the output. For example, saliency and attention models exploit knowledge of the architecture to capture the essential regions of the input that have the most impact on the inference process and output. On the other hand, models can be analyzed as "black boxes", e.g., by associating the input samples with extra annotations that do not contribute to model training but can be exploited for characterizing the model response. Such performance-driven meta-annotations enable the detailed characterization of performance metrics and errors and help scientists identify the features of the input responsible for prediction failures and focus their model improvement efforts. This paper presents a structured survey of the tools that support the "black box" analysis of DNNs and discusses the gaps in the current proposals and the relevant future directions in this research field.
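As a minimal illustration of performance-driven meta-annotation (a sketch of the general idea, not any specific surveyed tool), one can group predictions by an extra annotation that played no role in training, say the lighting condition of each input image, and report a metric per slice to expose where the model fails:

```python
from collections import defaultdict

def per_slice_accuracy(preds, labels, meta):
    """Group predictions by a meta-annotation value and report accuracy
    per slice; slices with low accuracy point at failure-inducing inputs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for p, y, m in zip(preds, labels, meta):
        totals[m] += 1
        hits[m] += int(p == y)
    return {m: hits[m] / totals[m] for m in totals}
```

A slice whose accuracy falls well below the global average (e.g., "night" images) is a concrete, actionable target for data collection or model refinement.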


Taxonomy-Based Discovery and Annotation of Functional Areas in the City

AAAI Conferences

Mapping the functional use of city areas (e.g., mapping clusters of hotels or of electronic shops) enables a variety of applications (e.g., innovative way-finding tools). To do that mapping, researchers have recently processed geo-referenced data with spatial clustering algorithms. These algorithms usually perform two consecutive steps: they cluster nearby points on the map, and then assign labels (e.g., 'electronics') to the resulting clusters. When applied in the city context, these algorithms fall short, not least because they treat the two steps of clustering and labeling as separate. Since there is no reason to keep those two steps separate, we propose a framework that clusters points based not only on their density but also on their semantic relatedness. We evaluate this framework on Foursquare data in the cities of Barcelona, Milan, and London. We find that it is more effective than the baseline method of DBSCAN in discovering functional areas. We complement that quantitative evaluation with a user study involving 111 participants in the three cities. Finally, to illustrate the generalizability of our framework, we process temporal data with it and successfully discover seasonal uses of the city.
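A minimal sketch of a distance that blends spatial proximity with semantic relatedness, of the kind that could be dropped into a DBSCAN-style algorithm (the `alpha` weighting and the Jaccard choice are illustrative assumptions, not the paper's formulation):

```python
import math

def jaccard(tags_a, tags_b):
    """Semantic relatedness of two venues via category-tag overlap."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_distance(coord_a, tags_a, coord_b, tags_b, alpha=0.5):
    """Blend geographic distance with semantic dissimilarity, so that two
    nearby venues of unrelated kinds need not fall in the same cluster."""
    spatial = math.dist(coord_a, coord_b)       # Euclidean distance on coords
    semantic = 1.0 - jaccard(tags_a, tags_b)    # 0 if tags match, 1 if disjoint
    return alpha * spatial + (1 - alpha) * semantic
```

With `alpha = 1` this degenerates to plain spatial DBSCAN, which is the baseline the paper compares against; lowering `alpha` lets semantics split dense but mixed-use areas.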