Credal Two-Sample Tests of Epistemic Ignorance

Chau, Siu Lun, Schrab, Antonin, Gretton, Arthur, Sejdinovic, Dino, Muandet, Krikamol

arXiv.org Machine Learning

Science is inherently inductive and thus involves uncertainties. They are commonly categorized as aleatoric uncertainty (AU), which refers to inherent variability, and epistemic uncertainty (EU), arising from limited information such as finite data or model assumptions (Hora, 1996). These uncertainties often overlap, as scientists may be epistemically uncertain about the aleatoric variation in their inquiry. Distinguishing and acknowledging them is crucial for the safe and trustworthy deployment of intelligent systems (Kendall and Gal, 2017; Hüllermeier and Waegeman, 2021), as they lead to different downstream decisions. For example, experimental design aims to reduce EU (Nguyen et al., 2019; Chau et al., 2021b; Adachi et al., 2024), while risk management uses hedging strategies to address AU (Mashrur et al., 2020). While AU is often modelled using probability distributions, modelling EU--particularly in states of epistemic ignorance, also known as partial ignorance or incomplete knowledge (Dubois et al., 1996)--poses greater challenges. For instance, a scientist analysing insulin levels in Germany may have data from multiple hospitals, each representing aleatoric variation as a probability distribution. However, these distributions are merely proxies for the population-level insulin distribution, which is difficult to infer due to data collection limitations. A Bayesian approach could aggregate the data based on a prior if the representativeness of each source were known, but in many cases scientists operate under partial ignorance, lacking such prior information (Bromberger, 1971). Assigning a uniform prior by following the principle of indifference (Keynes, 1921) and the maximum entropy principle (Jaynes, 1957), or applying the Jeffreys prior by following the principle of transformation groups (Jaynes, 1968), only reflects indifference, not epistemic ignorance.


A-BDD: Leveraging Data Augmentations for Safe Autonomous Driving in Adverse Weather and Lighting

Assion, Felix, Gressner, Florens, Augustine, Nitin, Klemenc, Jona, Hammam, Ahmed, Krattinger, Alexandre, Trittenbach, Holger, Riemer, Sascha

arXiv.org Artificial Intelligence

High-autonomy vehicle functions rely on machine learning (ML) algorithms to understand the environment. Despite displaying remarkable performance in fair-weather scenarios, perception algorithms are heavily affected by adverse weather and lighting conditions. To overcome these difficulties, ML engineers mainly rely on comprehensive real-world datasets. However, the difficulties of real-world data collection for critical areas of the operational design domain (ODD) often mean that synthetic data is required for perception training and safety validation. Thus, we present A-BDD, a large set of over 60,000 synthetically augmented images based on BDD100K that are equipped with semantic segmentation and bounding box annotations (inherited from the BDD100K dataset). The dataset contains augmented data for rain, fog, overcast and sunglare/shadow with varying intensity levels. We further introduce novel strategies utilizing feature-based image quality metrics like FID and CMMD, which help identify useful augmented and real-world data for ML training and testing. By conducting experiments on A-BDD, we provide evidence that data augmentations can play a pivotal role in closing performance gaps in adverse weather and lighting conditions.
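The FID metric mentioned above compares two feature distributions through their Gaussian moment fits (CMMD, by contrast, is typically computed as an MMD over CLIP embeddings). A minimal sketch of the Fréchet distance on synthetic feature vectors; in practice the features would come from a pretrained embedding network, not random draws:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets
    (rows = samples, columns = feature dimensions), as used in FID."""
    mu1, mu2 = feats_a.mean(axis=0), feats_b.mean(axis=0)
    s1 = np.cov(feats_a, rowvar=False)
    s2 = np.cov(feats_b, rowvar=False)
    # Tr((S1 S2)^{1/2}) via eigenvalues of S1 @ S2; they are real and
    # non-negative (up to numerical noise) when both matrices are PSD.
    eig = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))     # stand-in "real" features
shifted = rng.normal(0.5, 1.0, size=(500, 8))  # stand-in "augmented" features
print(frechet_distance(real, real[:250]))  # small: same distribution
print(frechet_distance(real, shifted))     # larger: detectable shift
```

A low distance between augmented and real adverse-weather features is the kind of signal the paper uses to judge whether an augmentation is a useful proxy for real ODD data.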


Domain Adaptation for Industrial Time-series Forecasting via Counterfactual Inference

Min, Chao, Wen, Guoquan, Yuan, Jiangru, Yi, Jun, Guo, Xing

arXiv.org Artificial Intelligence

Industrial time-series, as structured data reflecting production process information, can be utilized for data-driven decision-making and effective monitoring of industrial production processes. However, time-series forecasting in industry faces challenges such as few-shot prediction caused by data shortage and decision confusion caused by unknown treatment policies. To cope with these problems, we propose a novel causal domain adaptation framework, the Causal Domain Adaptation (CDA) forecaster, to improve performance on the domain of interest with limited data (the target). Firstly, we analyze the causality existing along with treatments, and thus ensure the causality shared over time. Subsequently, we propose an answer-based attention mechanism to achieve domain-invariant representations via the shared causality in both domains. Then, a novel domain-adaptation model is built to model treatments and outcomes jointly, trained on the source and target domains. The main insights are that our answer-based attention mechanism allows the target domain to leverage the causality present in source time-series even under different treatments, and that our forecaster can predict counterfactual outcomes of industrial time-series, providing guidance for the production process. Compared with common baselines, our method demonstrates effectiveness in cross-domain prediction and practicality in guiding the production process on real-world and synthetic oilfield datasets.
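The answer-based attention mechanism is specific to the paper; as a generic stand-in, the basic shape of attention-based borrowing from a source domain can be sketched with plain scaled dot-product attention (all names and shapes here are assumptions for illustration):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each target query forms a convex
    # combination of source values, weighted by query-key similarity.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 16))  # source-domain time-step representations
tgt = rng.normal(size=(5, 16))   # few target-domain queries (data-scarce)
out, w = attention(tgt, src, src)
print(out.shape)  # target steps re-expressed over source patterns
```

The idea is that a data-scarce target domain retrieves relevant source-domain structure through the attention weights; the paper's variant conditions this retrieval on the shared causal structure of treatments.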


Constructing Synthetic Treatment Groups without the Mean Exchangeability Assumption

Zhang, Yuhang, Liu, Yue, Zhang, Zhihua

arXiv.org Machine Learning

The purpose of this work is to transport information from multiple randomized controlled trials to a target population where we only have control group data. Previous works rely critically on the mean exchangeability assumption. However, as pointed out by many recent studies, the mean exchangeability assumption might be violated. Motivated by the synthetic control method, we construct a synthetic treatment group for the target population as a weighted mixture of the treatment groups of the source populations. We estimate the weights by minimizing the conditional maximum mean discrepancy between the weighted control groups of the source populations and the target population. We establish the asymptotic normality of the synthetic treatment group estimator based on sieve semiparametric theory. Our method can serve as a novel complementary approach when the mean exchangeability assumption is violated. Experiments are conducted on synthetic and real-world datasets to demonstrate the effectiveness of our method.
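The weight-estimation step can be sketched as follows: with a Gaussian kernel, the squared MMD between a weighted mixture of source samples and the target is quadratic in the weights, which can then be minimized over the probability simplex, e.g. by exponentiated gradient. This is a simplified illustration using the marginal MMD, not the paper's conditional MMD or its sieve-based estimator:

```python
import numpy as np

def gauss_gram(X, Y, bw=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def mixture_weights(sources, target, steps=500, lr=0.5):
    """Simplex weights minimizing the Gaussian-kernel MMD^2 between the
    weighted mixture of source samples and the target sample.
    MMD^2(w) = w^T A w - 2 b^T w + const, minimized by exponentiated
    gradient over the probability simplex."""
    A = np.array([[gauss_gram(si, sj).mean() for sj in sources] for si in sources])
    b = np.array([gauss_gram(si, target).mean() for si in sources])
    w = np.full(len(sources), 1.0 / len(sources))
    for _ in range(steps):
        grad = 2.0 * (A @ w - b)
        w = w * np.exp(-lr * grad)  # multiplicative update keeps w > 0
        w /= w.sum()                # renormalize onto the simplex
    return w

rng = np.random.default_rng(1)
s_match = rng.normal(0.0, 1.0, (200, 2))  # source resembling the target
s_far = rng.normal(3.0, 1.0, (200, 2))    # source far from the target
target = rng.normal(0.0, 1.0, (300, 2))
w = mixture_weights([s_match, s_far], target)
print(w)  # weight concentrates on the matching source
```

The same weights, estimated on control groups, are then reused to mix the source treatment groups into the synthetic treatment group.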


CMMD: Cross-Metric Multi-Dimensional Root Cause Analysis

Yan, Shifu, Shan, Caihua, Yang, Wenyi, Xu, Bixiong, Li, Dongsheng, Qiu, Lili, Tong, Jie, Zhang, Qi

arXiv.org Artificial Intelligence

In large-scale online services, crucial metrics, a.k.a. key performance indicators (KPIs), are monitored periodically to check their running status. Generally, KPIs are aggregated along multiple dimensions and derived through complex calculations over fundamental metrics from the raw data. Once abnormal KPI values are observed, root cause analysis (RCA) can be applied to identify the reasons for the anomalies so that we can troubleshoot quickly. Recently, several automatic RCA techniques were proposed to localize the related dimensions (or a combination of dimensions) that explain the anomalies. However, their analyses are limited to the data on the abnormal metric and ignore the data of other metrics which may also be related to the anomalies, leading to imprecise or even incorrect root causes. To this end, we propose a cross-metric multi-dimensional root cause analysis method, named CMMD, which consists of two key components: 1) relationship modeling, which utilizes a graph neural network (GNN) to model the unknown complex calculations among metrics and aggregation functions among dimensions from historical data; 2) root cause localization, which adopts a genetic algorithm to efficiently and effectively dive into the raw data and localize the abnormal dimension(s) once KPI anomalies are detected. Experiments on synthetic datasets, public datasets and an online production environment demonstrate the superiority of our proposed CMMD method compared with baselines. Currently, CMMD is running as an online service in Microsoft Azure.
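To illustrate the genetic-algorithm side of root cause localization, here is a toy GA that searches for the subset of a dimension's attribute values best explaining an anomaly. The fitness function is an assumed stand-in (explain most of the anomaly with few values), not CMMD's GNN-based score:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: one dimension with 12 attribute values; the anomaly delta
# (observed minus forecast) is concentrated on values {2, 7}.
delta = rng.normal(0.0, 0.1, 12)
delta[[2, 7]] = [5.0, 4.0]

def fitness(mask):
    # Assumed stand-in score: reward masks that explain most of the
    # anomaly magnitude while selecting few attribute values.
    explained = np.abs(delta[mask.astype(bool)]).sum() / np.abs(delta).sum()
    return explained - 0.05 * mask.sum()

def ga(pop_size=40, gens=60, p_mut=0.08):
    pop = rng.integers(0, 2, (pop_size, delta.size))  # binary masks
    for _ in range(gens):
        fit = np.array([fitness(m) for m in pop])
        elite = pop[fit.argmax()].copy()
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Single-point crossover on consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            c = rng.integers(1, delta.size)
            children[i, c:] = parents[i + 1, c:]
            children[i + 1, c:] = parents[i, c:]
        # Bit-flip mutation, then elitism to preserve the best mask.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
        pop[0] = elite
    fit = np.array([fitness(m) for m in pop])
    return pop[fit.argmax()]

best = ga()
print(np.flatnonzero(best))  # attribute values flagged as the root cause
```

In CMMD the search runs over combinations across multiple dimensions and metrics, but the select/cross/mutate loop has the same structure.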


Multi-Representation Adaptation Network for Cross-domain Image Classification

Zhu, Yongchun, Zhuang, Fuzhen, Wang, Jindong, Chen, Jingwu, Shi, Zhiping, Wu, Wenjuan, He, Qing

arXiv.org Artificial Intelligence

In image classification, it is often expensive and time-consuming to acquire sufficient labels. To solve this problem, domain adaptation often provides an attractive option, given a large amount of labeled data from a different but related domain. Existing approaches mainly align the distributions of representations extracted by a single structure, and these representations may only contain partial information, e.g., only part of the saturation, brightness, and hue information. Along this line, we propose Multi-Representation Adaptation, which can dramatically improve classification accuracy for cross-domain image classification and specifically aims to align the distributions of multiple representations extracted by a hybrid structure named the Inception Adaptation Module (IAM). Based on this, we present the Multi-Representation Adaptation Network (MRAN) to accomplish cross-domain image classification via multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with IAM, and the network can be trained efficiently via back-propagation. Experiments conducted on three benchmark image datasets demonstrate the effectiveness of MRAN. The code is available at https://github.com/easezyc/deep-transfer-learning.
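The MMD at the core of the adaptation loss can be sketched as follows: a minimal Gaussian-kernel estimate on synthetic features (MRAN's variant extends this across multiple representations, which is not shown here):

```python
import numpy as np

def mmd2(X, Y, bw=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy
    between samples X and Y under a Gaussian kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (300, 4))       # source-domain features
tgt_near = rng.normal(0.0, 1.0, (300, 4))  # well-aligned target
tgt_far = rng.normal(1.5, 1.0, (300, 4))   # shifted target
print(mmd2(src, tgt_near))  # near zero
print(mmd2(src, tgt_far))   # clearly positive
```

Minimizing this quantity between source and target features during training pulls the two domains' representation distributions together, which is what the adaptation loss enforces for each representation produced by IAM.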


Discriminative Multimodal Learning via Conditional Priors in Generative Models

Mancisidor, Rogelio A., Kampffmeyer, Michael, Aas, Kjersti, Jenssen, Robert

arXiv.org Machine Learning

Deep generative models with latent variables have lately been used to learn joint representations and generative processes from multi-modal data. These two learning mechanisms can, however, conflict with each other, and representations can fail to embed information about the data modalities. This research studies the realistic scenario in which all modalities and class labels are available for model training, but where some modalities and labels required for downstream tasks are missing. We show that, in this scenario, the variational lower bound limits the mutual information between joint representations and missing modalities. To counteract this problem, we introduce a novel conditional multi-modal discriminative model that uses an informative prior distribution and optimizes a likelihood-free objective maximizing the mutual information between joint representations and missing modalities. Extensive experimentation shows the benefits of the proposed model; the empirical results show that it achieves state-of-the-art results in representative problems such as downstream classification, acoustic inversion and annotation generation.