




ac01e21bb14609416760f790dd8966ae-Supplemental-Datasets_and_Benchmarks.pdf

Neural Information Processing Systems

In the hospital, patients may be in the ICU with ECG/PPG sensors to monitor their already-poor health condition. ML methods must rely on learning to impute missing signals based on the signal that is present, rather than learning to create a general-purpose imputation template that mimics standard healthy behavior. Likewise, participant movement in both contexts can result in artifacts. In a broader context, we want to match the high quality level of other datasets such as PTB-XL, in which 77.01% of the signal data are of the highest assessed quality [18]. See below for examples of ECG signals with their associated periodogram.
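The periodogram-based quality check mentioned above can be sketched with NumPy alone. The sampling rate and the synthetic trace below are illustrative assumptions, not values taken from any dataset:

```python
import numpy as np

fs = 250.0                    # assumed sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples

# Synthetic stand-in for a physiological trace: a dominant rhythm plus noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)

# Periodogram via the real FFT: power carried by each frequency bin
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / t.size
peak_hz = freqs[np.argmax(power)]  # a clean signal shows a sharp peak
```

A sharp, dominant peak (here near 1.2 Hz) suggests a clean quasi-periodic signal, whereas a flat or diffuse spectrum suggests noise or motion artifacts.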


NAOMI: Non-Autoregressive Multiresolution Sequence Imputation

Liu, Yukai, Yu, Rose, Zheng, Stephan, Zhan, Eric, Yue, Yisong

Neural Information Processing Systems

Missing value imputation is a fundamental problem in spatiotemporal modeling, from motion tracking to the dynamics of physical systems. Deep autoregressive models suffer from error propagation, which becomes catastrophic for imputing long-range sequences. In this paper, we take a non-autoregressive approach and propose a novel deep generative model: Non-AutOregressive Multiresolution Imputation (NAOMI) to impute long-range sequences given arbitrary missing patterns. NAOMI exploits the multiresolution structure of spatiotemporal data and decodes recursively from coarse to fine-grained resolutions using a divide-and-conquer strategy. We further enhance our model with adversarial training. When evaluated extensively on benchmark datasets from systems of both deterministic and stochastic dynamics, NAOMI demonstrates significant improvement in imputation accuracy (reducing average prediction error by 60% compared to autoregressive counterparts) and generalization for long-range sequences.
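The coarse-to-fine, divide-and-conquer decoding described in the abstract can be illustrated with a toy stand-in: fill the midpoint of each missing gap first, then recurse on the two halves. The function below is hypothetical and uses a midpoint average where NAOMI uses a learned decoder with recurrent hidden states:

```python
import numpy as np

def multires_impute(seq, known):
    """Toy coarse-to-fine imputation: recursively fill the midpoint
    between two anchor points, then recurse on each half.
    A linear-interpolation stand-in for a learned decoder; assumes
    the first and last entries of `seq` are observed."""
    seq = seq.astype(float).copy()

    def fill(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        if not known[mid]:
            seq[mid] = 0.5 * (seq[lo] + seq[hi])  # coarse resolution first
            known[mid] = True
        fill(lo, mid)  # then refine each half at finer resolutions
        fill(mid, hi)

    fill(0, len(seq) - 1)
    return seq

observed = np.array([0.0, 0.0, 0.0, 0.0, 8.0])
mask = [True, False, False, False, True]  # only the endpoints are observed
filled = multires_impute(observed, mask)
```

The key property shared with the real model is the order of decoding: the center of the gap is imputed before its neighbors, so no step conditions on a long chain of its own predictions, which is what limits error propagation relative to left-to-right autoregressive imputation.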




Neural Information Processing Systems

We thank the reviewers for their insightful comments. We first clarify our approach and then address specific concerns. Note that the encoder and decoder share weights. We encourage the reviewers to check the supplementary material, which includes code and visualizations of our decoding strategy. Evaluating generative models is an open problem; for example, log-likelihood does not correlate well with sample quality. In our case, neither the L2 loss nor log-likelihood can fully capture how "realistic" an imputation is, but note that on the L2 loss for the basketball dataset, NAOMI (0.013) still outperforms SingleRes (0.040).



Annotating and Detecting Fine-grained Factual Errors for Dialogue Summarization

Zhu, Rongxin, Qi, Jianzhong, Lau, Jey Han

arXiv.org Artificial Intelligence

A series of datasets and models have been proposed for summaries of well-formatted documents such as news articles. Dialogue summaries, however, have been underexplored. In this paper, we present DIASUMFACT, the first dialogue summarization dataset with fine-grained factual error annotations. We define fine-grained factual error detection as a sentence-level multi-label classification problem, and we evaluate two state-of-the-art (SOTA) models on our dataset. Both models yield sub-optimal results, with a macro-averaged F1 score of around 0.25 over 6 error classes. We further propose ENDERANKER, an unsupervised model based on candidate ranking using pretrained encoder-decoder models. Our model performs on par with the SOTA models while requiring fewer resources. These observations confirm the challenges of detecting factual errors in dialogue summaries and call for further study, for which our dataset and results offer a solid foundation.
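The evaluation metric used above, macro-averaged F1 over several error classes in a sentence-level multi-label setting, can be computed without any ML library. The label sets below are invented for illustration; each set holds the error-class indices assigned to one summary sentence:

```python
def f1(tp, fp, fn):
    """Per-class F1 from true-positive, false-positive, false-negative counts."""
    if tp == 0:
        return 0.0
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred, n_classes):
    """y_true / y_pred: one set of class indices per sentence (multi-label).
    Macro averaging scores each class independently, then takes the mean,
    so rare error classes count as much as common ones."""
    scores = []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if c in t and c in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if c not in t and c in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if c in t and c not in p)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / n_classes

truth = [{0}, {1}, {0, 1}]  # hypothetical gold labels for three sentences
preds = [{0}, {0}, {1}]     # hypothetical model predictions
score = macro_f1(truth, preds, n_classes=2)
```

Because macro averaging weights every class equally, a low score like the reported 0.25 indicates that the models fail on several error classes, not merely on infrequent ones.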



NAOMI: Non-Autoregressive Multiresolution Sequence Imputation

Liu, Yukai, Yu, Rose, Zheng, Stephan, Zhan, Eric, Yue, Yisong

arXiv.org Machine Learning

Missing value imputation is a fundamental problem in modeling spatiotemporal sequences, from motion tracking to the dynamics of physical systems. In this paper, we take a non-autoregressive approach and propose a novel deep generative model: Non-AutOregressive Multiresolution Imputation (NAOMI) for imputing long-range spatiotemporal sequences given arbitrary missing patterns. In particular, NAOMI exploits the multiresolution structure of spatiotemporal data to interpolate recursively from coarse to fine-grained resolutions. We further enhance our model with adversarial training using an imitation learning objective. When trained on billiards and basketball trajectories, NAOMI demonstrates significant improvement in imputation accuracy (reducing average prediction error by 60% compared to autoregressive counterparts) and generalization capability for long-range trajectories in systems of both deterministic and stochastic dynamics.