


STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

Neural Information Processing Systems

Recurrent Neural Networks (RNNs) have achieved great success on sequential prediction tasks. In practice, the target sequence often follows certain model properties or patterns (e.g., reasonable ranges, consecutive changes, resource constraints, temporal correlations between multiple variables, existence, unusual cases, etc.). However, RNNs cannot guarantee that their learned distributions satisfy these model properties. This is even more challenging when predicting large-scale and complex Cyber-Physical Systems, where failure to produce outcomes that meet these model properties results in inaccurate and even meaningless predictions. In this paper, we develop a new temporal-logic-based learning framework, STLnet, which guides the RNN learning process with auxiliary knowledge of model properties and produces a more robust model for improved future predictions. Our framework can be applied to general sequential deep learning models and trained end-to-end with back-propagation. We evaluate the performance of STLnet using large-scale real-world city data. The experimental results show that STLnet not only improves the accuracy of predictions but, importantly, also guarantees the satisfaction of model properties and increases the robustness of RNNs.
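To make the idea of enforcing a model property concrete, the following is a minimal sketch (not the authors' actual STLnet implementation) of STL quantitative semantics for one simple property, G (lo <= x_t <= hi), i.e., "the predicted signal always stays within a reasonable range," together with a hinge-style penalty of the kind that can be added to a sequence model's training loss. The bounds and traces are hypothetical.

```python
# Illustrative sketch, NOT the paper's exact loss: quantitative (robustness)
# semantics for the STL property "always lo <= x_t <= hi" over a finite trace,
# plus a hinge penalty that is zero when the property holds.

def robustness_always_in_range(trace, lo, hi):
    """Robustness of 'always lo <= x <= hi' over a finite trace.

    Positive value  -> property satisfied (margin to violation),
    Negative value  -> property violated (depth of violation).
    """
    # Per-step margin: signed distance to the nearest bound.
    margins = [min(x - lo, hi - x) for x in trace]
    # The 'always' operator takes the worst (minimum) margin over the horizon.
    return min(margins)

def stl_hinge_penalty(trace, lo, hi):
    """Zero when the property holds; grows with the depth of violation."""
    return max(0.0, -robustness_always_in_range(trace, lo, hi))

if __name__ == "__main__":
    ok_trace = [0.2, 0.5, 0.9]    # stays inside the (hypothetical) range [0, 1]
    bad_trace = [0.2, 1.4, 0.9]   # leaves the range at t = 1
    print(robustness_always_in_range(ok_trace, 0.0, 1.0))   # 0.1
    print(stl_hinge_penalty(bad_trace, 0.0, 1.0))           # 0.4
```

In a framework like the one the abstract describes, a differentiable penalty of this shape (computed with tensor min/max instead of Python built-ins) can be added to the usual prediction loss so that back-propagation pushes predicted sequences toward satisfying the stated properties.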



Review for NeurIPS paper: STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

Neural Information Processing Systems

Weaknesses: * Temporal logic as such is often useful when considering infinite traces, while Signal Temporal Logic is most useful for finite-time traces of continuous-time systems. Neither is under consideration here, and I think this is the biggest drawback. Many of the ideas in the paper have been introduced before. I will list some here (which haven't been discussed in the paper): 1. Writing STL specifications in DNF / using logical operators has been done previously in works such as: a) https://arxiv.org/abs/1703.09563


Review for NeurIPS paper: STLnet: Signal Temporal Logic Enforced Multivariate Recurrent Neural Networks

Neural Information Processing Systems

This paper initially received three reviews. The reviewers appreciated the integration of temporal logic with deep learning presented in the paper. The main concerns centered on the relation of the proposed method to the existing literature and to first-order logic specifications. After reading the authors' rebuttal, the reviewers engaged in a detailed debate about the merits of the paper. A fourth expert reviewer's opinion was sought to help reach a decision.

