
Collaborating Authors

 Di Cataldo, Santa


Renal Cell Carcinoma subtyping: learning from multi-resolution localization

arXiv.org Artificial Intelligence

The mortality rate of Renal Cell Carcinoma (RCC) is high with respect to its incidence rate, as this tumor is typically asymptomatic at the early stages for many patients [1, 2]. This leads to late diagnosis, when the likelihood of curability is lower. RCC can be categorized into multiple histological subtypes, mainly: Clear Cell Renal Cell Carcinoma (ccRCC), forming 75% of RCCs; Papillary Renal Cell Carcinoma (pRCC), accounting for 10%; and Chromophobe Renal Cell Carcinoma (chRCC), accounting for 5%. Other subtypes include Collecting Duct Renal Cell Carcinoma (cdRCC), Tubulocystic Renal Cell Carcinoma (tRCC), and unclassified tumors [1]. Approximately 10% of renal tumors are benign neoplasms, with Oncocytoma (ONCO) being the most frequent subtype, at an incidence of 3-7% among all RCCs [3, 2]. These subtypes show different cytological signatures as well as histological features [2], which results in significantly different prognoses. Correct categorization of the tumor subtype is therefore of major importance, as prognosis and treatment approaches depend on it and on the disease stage. For instance, the overall 5-year survival rate differs significantly among the histological subtypes: 55-60% for ccRCC, 80-90% for pRCC, and 90% for chRCC.


VARADE: a Variational-based AutoRegressive model for Anomaly Detection on the Edge

arXiv.org Artificial Intelligence

In an industrial CPS scenario, the most crucial resource is the availability of data reflecting the different aspects of production. Such data consist of multiple interdependent variables rapidly evolving over time, thus falling under the typical definition of Multivariate Time Series (MTS) [14]. After collection, the time series, originated by heterogeneous sensors and data sources, are integrated through Industrial Internet of Things (IIoT) technologies and made available for anomaly detection, visualization, and analysis [27].

Detecting complex anomalies on massive amounts of data is a crucial task in Industry 4.0, best addressed by deep learning. However, available solutions are computationally demanding, requiring cloud architectures prone to latency and bandwidth issues. This work presents VARADE, a novel solution implementing a light autoregressive framework based on variational inference, which is best...
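The excerpt does not detail VARADE's architecture; as a hedged illustration of the general idea (an autoregressive predictor trained with a probabilistic, variational-style objective, scoring anomalies by how unlikely the observed next step is), the minimal sketch below uses a small network that outputs a Gaussian over the next time step of a multivariate series. Window length, feature count, and the network itself are placeholder assumptions, not the paper's model.

```python
# Minimal sketch (NOT the VARADE architecture): a probabilistic autoregressive
# predictor for multivariate time series, with anomaly score = negative
# Gaussian log-likelihood of the observed next step. Sizes are illustrative.
import torch
import torch.nn as nn

class ProbAutoregressor(nn.Module):
    """Predicts mean and log-variance of the next time step from the last W steps."""
    def __init__(self, n_features: int, window: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                  # (B, W, F) -> (B, W*F)
            nn.Linear(window * n_features, hidden),
            nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, n_features)            # predicted mean
        self.log_var = nn.Linear(hidden, n_features)        # predicted log-variance

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_var(h)

def anomaly_score(model, window, target):
    """Higher score = observed step is less likely under the predictive distribution."""
    mu, log_var = model(window)
    nll = 0.5 * (log_var + (target - mu) ** 2 / log_var.exp())
    return nll.sum(dim=-1)

# Toy usage: 8 sensors, windows of 16 past steps.
model = ProbAutoregressor(n_features=8, window=16)
x = torch.randn(4, 16, 8)        # batch of past windows
y = torch.randn(4, 8)            # observed next step
print(anomaly_score(model, x, y))
```

A lightweight predictor of this kind runs in a single forward pass, which is the property that makes the autoregressive formulation attractive for edge deployment.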


Neuro-symbolic Empowered Denoising Diffusion Probabilistic Models for Real-time Anomaly Detection in Industry 4.0

arXiv.org Artificial Intelligence

Industry 4.0 involves the integration of digital technologies, such as IoT, Big Data, and AI, into manufacturing and industrial processes to increase efficiency and productivity. As these technologies become more interconnected and interdependent, Industry 4.0 systems become more complex, which brings the difficulty of identifying and stopping anomalies that may cause disturbances in the manufacturing process. This paper aims to propose a diffusion-based model for real-time anomaly prediction in Industry 4.0 processes. Using a neuro-symbolic approach, we integrate industrial ontologies in the model, thereby adding formal knowledge on smart manufacturing. Finally, we propose a simple yet effective way of distilling diffusion models through Random Fourier Features for deployment on an embedded system for direct integration into the manufacturing process. To the best of our knowledge, this approach has never been explored before.
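The abstract mentions distilling the diffusion model through Random Fourier Features (RFF) for embedded deployment, without giving the procedure. The sketch below shows the standard RFF approximation of an RBF kernel used to fit a cheap student regressor on scores produced by a heavier teacher; `teacher_scores` and all dimensions are hypothetical stand-ins, not the paper's setup.

```python
# Hedged sketch: Random Fourier Features (Rahimi & Recht) as a lightweight
# student fitted on anomaly scores from a heavier teacher model.
# `teacher_scores` is a placeholder for the diffusion model's outputs.
import numpy as np

def rff_map(X, W, b):
    """Map inputs to a random feature space approximating an RBF kernel."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
n, d, D, gamma = 512, 16, 256, 0.5            # samples, input dim, RFF dim, kernel width

X = rng.normal(size=(n, d))                    # toy sensor readings
teacher_scores = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)   # stand-in teacher output

# Random projection defining the feature map for k(x, y) = exp(-gamma * ||x - y||^2).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

# Student = ridge regression in RFF space (closed form; inference is one matvec + cosine).
Z = rff_map(X, W, b)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ teacher_scores)

x_new = rng.normal(size=(1, d))
student_score = rff_map(x_new, W, b) @ w
print(float(student_score))
```

The appeal of such a student for an embedded target is that inference reduces to a fixed-size linear map plus a cosine, with no iterative sampling.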


W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality

arXiv.org Artificial Intelligence

Since the milestone study by Alex Krizhevsky and colleagues in 2012 [1], Deep Learning (DL), with particular emphasis on Convolutional Neural Networks (CNNs), has been the state-of-the-art method for image classification in many different applications. Besides classification performance, the reason for the success of CNNs is twofold: i) the recent boost of graphical processing units (GPUs) and parallel processing, which allows very large models to be trained; ii) the ever-growing availability of massive annotated task-specific datasets. Nonetheless, in many realistic applications concerns may be raised about the reliability of such datasets, both in terms of image and labelling quality, and consequently about the robustness of the CNN models trained and tested on them. As regards image quality, standard CNNs are supposed to be fed with high quality samples. Nevertheless, in practical scenarios different kinds of image degradation can heavily affect the performance of a CNN both in the training and in the inference phase. Problems concerning image acquisition devices, poor image sensors, lighting conditions, focus, stabilization, exposure time, or partial occlusion of the cameras may produce low quality samples, which have been shown to be one of the chief reasons for troublesome learning procedures of CNN models in many applications [2, 3, 4]. On the other hand, even when a CNN has been proficiently trained and validated on high quality data, noisy inputs can heavily affect the inference phase and cause classification errors at run-time. A typical example is self-driving cars, where a partial occlusion of the image acquisition device may lead to misinterpreting a road sign, with catastrophic consequences. In such settings, the well-known limitation of standard CNNs, namely their inability to convey how much a given input resembles the ones the model was trained on, and hence whether the associated prediction should (or should not) be trusted, also plays a major role.
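W2WNet's own probabilistic, two-module mechanism is not described in this excerpt. As a generic point of reference for the last sentence above, the sketch below shows one common way to attach a confidence signal to a standard CNN, Monte Carlo dropout with predictive entropy, so that degraded or out-of-distribution inputs can be flagged instead of silently misclassified. The network, sample count, and threshold are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch (NOT the W2WNet architecture): Monte Carlo dropout as a
# confidence signal for a plain CNN classifier. High predictive entropy suggests
# the input does not resemble the training data, so the prediction may be untrusted.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.25),                 # kept active at test time for MC sampling
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def mc_dropout_predict(model, x, n_samples: int = 20):
    """Average softmax over stochastic forward passes and compute predictive entropy."""
    model.train()                               # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return probs, entropy

model = SmallCNN()
x = torch.randn(2, 3, 32, 32)                   # toy inputs
probs, entropy = mc_dropout_predict(model, x)
trusted = entropy < 1.5                         # illustrative rejection threshold
print(probs.argmax(-1), entropy, trusted)
```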