Model-Based Reasoning


New earthquake probability model may help better predict the next big one

Daily Mail - Science & tech

A new model claims to predict when and where the next major earthquake may strike - just days after a 7.8 magnitude quake rocked Turkey and Syria, killing at least 19,000 people. Developed by a team of seismologists and statisticians at Northwestern University, the model takes into account previous earthquakes' specific order and timing rather than just relying on the average time between past earthquakes. This method also explains why earthquakes tend to come in clusters. The team found that faults have 'long-term memory': an earthquake does not release all the strain that has built up on the fault over time, so some remains after a big earthquake and can cause another. Seismologists have traditionally assumed that big earthquakes on faults are relatively regular and that the next quake will occur after roughly the same interval as separated the previous two.
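As a rough illustration of the "long-term memory" idea described above, the toy simulation below lets strain accumulate on a fault and releases only part of it at each rupture, so leftover strain shortens the wait before the next event and produces clustered intervals. All parameters, rates, and probabilities here are hypothetical; this is not the Northwestern model.

```python
# Illustrative sketch only: a toy strain-accumulation simulation of fault
# "long-term memory" (partial strain release), not the actual published model.
import random

def simulate_fault(years=10000, loading_rate=1.0, threshold=100.0,
                   release_fraction=0.8, seed=0):
    """Strain builds at a constant rate; each quake releases only part of it,
    so leftover strain ("memory") makes follow-on quakes arrive sooner."""
    random.seed(seed)
    strain, quake_years = 0.0, []
    for year in range(years):
        strain += loading_rate
        # Rupture probability grows with accumulated strain (hypothetical form).
        if random.random() < min(1.0, (strain / threshold) ** 4) * 0.05:
            quake_years.append(year)
            strain *= (1.0 - release_fraction)   # partial release -> memory
    return quake_years

quakes = simulate_fault()
intervals = [b - a for a, b in zip(quakes, quakes[1:])]
print(f"{len(quakes)} quakes; interval range {min(intervals)}-{max(intervals)} yr")
```

The irregular, sometimes very short intervals in the output mirror the clustering behavior the article describes, in contrast to a memoryless model with a fixed average recurrence time.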


[2301.13868] PADL: Language-Directed Physics-Based Character Control

#artificialintelligence

Developing systems that can synthesize natural and life-like motions for simulated characters has long been a focus for computer animation. But in order for these systems to be useful for downstream applications, they must not only produce high-quality motions but also provide an accessible and versatile interface through which users can direct a character's behaviors. Natural language provides a simple-to-use and expressive medium for specifying a user's intent. Recent breakthroughs in natural language processing (NLP) have demonstrated effective use of language-based interfaces for applications such as image generation and program synthesis. In this work, we present PADL, which leverages recent innovations in NLP in order to take steps towards developing language-directed controllers for physics-based character animation. PADL allows users to issue natural language commands for specifying both high-level tasks and low-level skills that a character should perform. We present an adversarial imitation learning approach for training policies to map high-level language commands to low-level controls that enable a character to perform the desired task and skill specified by a user's commands. Furthermore, we propose a multi-task aggregation method that leverages a language-based multiple-choice question-answering approach to determine high-level task objectives from language commands. We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills.
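The task-selection step described above (choosing a high-level task objective from a language command) can be pictured with the toy sketch below. PADL uses a language-model-based multiple-choice question-answering approach; this stand-in uses simple bag-of-words cosine similarity, and the task names and descriptions are invented for illustration.

```python
# Minimal sketch of mapping a language command to one of several high-level
# task objectives. PADL does this with a multiple-choice QA model; this toy
# uses bag-of-words cosine similarity as a crude stand-in.
from collections import Counter
import math

TASKS = {                      # hypothetical task descriptions
    "target_location": "walk or run to reach a target location",
    "target_heading": "move in a specified heading direction",
    "strike": "strike or hit the target object with a sword",
}

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_task(command):
    cmd = bow(command)
    return max(TASKS, key=lambda name: cosine(cmd, bow(TASKS[name])))

print(select_task("run over to the red target"))      # -> target_location
```

The selected task objective would then condition the low-level policy trained via adversarial imitation learning, as the abstract describes.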


A Data-Driven Modeling and Control Framework for Physics-Based Building Emulators

arXiv.org Artificial Intelligence

We present a data-driven modeling and control framework for physics-based building emulators. Our approach comprises: (a) Offline training of differentiable surrogate models that speed up model evaluations, provide cheap gradients, and have good predictive accuracy for the receding horizon in Model Predictive Control (MPC) and (b) Formulating and solving nonlinear building HVAC MPC problems. We extensively verify the modeling and control performance using multiple surrogate models and optimization frameworks for different available test cases in the Building Optimization Testing Framework (BOPTEST). The framework is compatible with other modeling techniques and customizable with different control formulations. The modularity makes the approach future-proof for test cases currently in development for physics-based building emulators and provides a path toward prototyping predictive controllers in large buildings.
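The general pattern the abstract describes, rolling a differentiable surrogate forward over the MPC horizon and optimizing the control sequence by gradient descent, might look roughly like the sketch below. The surrogate architecture, cost terms, and parameter values are assumptions for illustration, not the paper's implementation or the BOPTEST interfaces.

```python
# Hedged sketch of gradient-based MPC over a differentiable surrogate model.
import torch
import torch.nn as nn

# Assume a surrogate pre-trained offline: (zone temp, outdoor temp, HVAC input) -> next zone temp.
surrogate = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))

def mpc_step(temp0, outdoor_forecast, setpoint=21.0, horizon=12, iters=200):
    u = torch.zeros(horizon, requires_grad=True)            # control sequence to optimize
    opt = torch.optim.Adam([u], lr=0.05)
    for _ in range(iters):
        opt.zero_grad()
        temp = torch.tensor([temp0])
        cost = torch.zeros(())
        for k in range(horizon):
            x = torch.cat([temp, outdoor_forecast[k:k + 1], u[k:k + 1]])
            temp = surrogate(x)                              # cheap, differentiable rollout
            cost = cost + (temp - setpoint).pow(2).sum() + 0.1 * u[k] ** 2
        cost.backward()                                      # gradients come through the surrogate
        opt.step()
    return u.detach()[0].item()                              # apply the first input, then re-plan

print(mpc_step(19.0, torch.full((12,), 5.0)))
```

The cheap gradients of the surrogate are what make repeatedly solving this nonlinear MPC problem tractable within a receding-horizon loop.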


Probabilistic Variational Causal Effect as A new Theory for Causal Reasoning

arXiv.org Artificial Intelligence

In this paper, we introduce a new causal framework capable of dealing with probabilistic and non-probabilistic problems. In particular, we provide a direct causal effect formula called the Probabilistic vAriational Causal Effect (PACE), together with variations of it that satisfy certain ideas and postulates. Our causal effect formula uses the idea of the total variation of a function, integrated with probability theory. The probabilistic part captures the natural availability of changing the exposure values given some variables; these variables interfere with the effect of the exposure on a given outcome. PACE has a parameter $d$ that determines the degree to which the natural availability of changing the exposure values is taken into account. Lower values of $d$ correspond to scenarios in which rare cases are important, while with higher values of $d$ the framework deals with problems that are probabilistic in nature. Hence, instead of a single value for the causal effect, we provide a causal effect vector by discretizing $d$. Further, we introduce positive and negative PACE to measure the positive and negative causal changes in the outcome while changing the exposure values. Furthermore, we provide an identifiability criterion for PACE to deal with observational studies. We also address the problem of computing counterfactuals in causal reasoning. We compare our framework to the Pearl, mutual information, conditional mutual information, and Janzing et al. frameworks by investigating several examples.
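The abstract does not give the PACE formula, so the sketch below only conveys its general shape as described: total-variation terms over consecutive exposure values, weighted by the probability of those changes raised to the power $d$. The function name, toy data, and exact weighting are assumptions; the precise definition is in the paper.

```python
# Illustrative sketch only: a total-variation-style causal effect weighted by
# the probability of exposure changes, with degree parameter d. Not the exact
# PACE formula from the paper.
def pace_like(outcome, exposure_levels, prob_change, d=1.0):
    """outcome: dict mapping exposure level -> expected outcome.
    prob_change: dict mapping (x_i, x_{i+1}) -> probability of that change occurring naturally.
    d: degree of weighting by the natural availability of exposure changes."""
    total = 0.0
    for x0, x1 in zip(exposure_levels, exposure_levels[1:]):
        variation = abs(outcome[x1] - outcome[x0])            # total-variation term
        weight = prob_change[(x0, x1)] ** d                    # d -> 0 treats rare changes as important
        total += weight * variation
    return total

levels = [0, 1, 2]
y = {0: 1.0, 1: 1.5, 2: 4.0}
p = {(0, 1): 0.6, (1, 2): 0.05}                                # the 1 -> 2 change is rare
print(pace_like(y, levels, p, d=0), pace_like(y, levels, p, d=2))
```

With $d=0$ the rare 1-to-2 jump contributes fully to the effect, while with larger $d$ it is heavily discounted, which mirrors the role of $d$ described in the abstract and motivates reporting a vector of effects over discretized $d$.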


Machine Learning AI Has Beat Chess, but Now It's Close to Beating Physics-Based Sports Games as Well

#artificialintelligence

Artificial intelligence has already beaten chess. Hell, the most sophisticated AI systems have a very good chance against top players in the incredibly complicated game of Go. But, in the uber-complicated car-based soccer game of Rocket League, can an AI do a boosted 360 aerial bicycle kick power shot from the midline? Can it pinch a ball off the side ramp so precisely it sails into the goal at 90 MPH? No, at least not yet, but AI can apparently dribble like a madman. For more than a week, players have been driven up the wall (sometimes literally, in game) by machine learning-based AI that's been hacked into games of Rocket League.


Physics-Informed Kernel Embeddings: Integrating Prior System Knowledge with Data-Driven Control

arXiv.org Artificial Intelligence

Data-driven control algorithms use observations of system dynamics to construct an implicit model for the purpose of control. However, in practice, data-driven techniques often require excessive sample sizes, which may be infeasible in real-world scenarios where only limited observations of the system are available. Furthermore, purely data-driven methods often neglect useful a priori knowledge, such as approximate models of the system dynamics. We present a method to incorporate such prior knowledge into data-driven control algorithms using kernel embeddings, a nonparametric machine learning technique based on the theory of reproducing kernel Hilbert spaces. Our proposed approach incorporates prior knowledge of the system dynamics as a bias term in the kernel learning problem. We formulate the biased learning problem as a least-squares problem with a dynamics-informed regularization term, which admits an efficiently computable, closed-form solution. Through numerical experiments, we empirically demonstrate the improved sample efficiency and out-of-sample generalization of our approach over a purely data-driven baseline. We demonstrate an application of our method to control through a target tracking problem with nonholonomic dynamics, and on spring-mass-damper and F-16 aircraft state prediction tasks.
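A minimal sketch of the general idea, not necessarily the paper's exact formulation: treat the approximate physics model as a bias term and learn only the residual with kernel ridge regression, which admits the kind of closed-form solution the abstract mentions. The RBF kernel, regularization value, and toy dynamics below are assumptions.

```python
# Sketch: bias a kernel least-squares fit with an approximate prior model f0,
# so the kernel part only has to correct the model error.
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_biased(X, y, f0, lam=1e-3, gamma=1.0):
    """f0 is the approximate physics model; the kernel term learns the residual."""
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y - f0(X))   # closed-form solution
    return lambda Xq: f0(Xq) + rbf(Xq, X, gamma) @ alpha

# Toy example: true map = crude prior + unmodeled term.
f0 = lambda X: 0.9 * X[:, 0]                       # crude prior model
X = np.random.default_rng(0).uniform(-2, 2, (30, 1))
y = X[:, 0] + 0.3 * np.sin(3 * X[:, 0])            # true map, noiseless for clarity
model = fit_biased(X, y, f0)
Xq = np.linspace(-2, 2, 5).reshape(-1, 1)
print(model(Xq))
```

Because the kernel term only models the residual, a reasonable prior sharply reduces how many samples are needed, which is the sample-efficiency benefit reported in the abstract.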


Causal Reasoning Meets Visual Representation Learning: A Prospective Study - Machine Intelligence Research

#artificialintelligence

Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. The majority of existing methods tend to fit the original data/variable distributions and ignore the essential causal relations behind the multi-modal knowledge, offering little unified guidance or analysis of why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed.


Mechanism Design With Predictions for Obnoxious Facility Location

arXiv.org Artificial Intelligence

The theory of algorithms with predictions [1, 2, 3] is, without a doubt, one of the most exciting recent research directions in algorithmics: when supplemented by a (correct) predictor, often based on machine learning, the newly developed algorithms are capable of outcompeting their worst-case classical counterparts. A desirable feature of such algorithms is, of course, to perform comparably to the (worst-case) algorithms when the predictors are really bad. This requirement often results [2] in tradeoffs between two measures of algorithm performance: robustness and consistency. A significant amount of subsequent research has followed, summarized by the algorithms with predictions webpage [3]. Recently, the idea of augmenting algorithms with predictions has been adapted to the game-theoretic setting of mechanism design [4, 5, 6, 7]: indeed, strategyproof mechanisms often yield solutions that are only approximately optimal [8]. On the other hand, if the designer has access to a predictor for the desired outcome, they could conceivably take advantage of this information by creating mechanisms that lead to an improved approximation ratio compared to their existing (worst-case) counterparts. Tradeoffs between robustness and consistency similar to the ones from [2] apply to this setting as well.
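The robustness/consistency tradeoff mentioned above can be seen in a toy 1-D obnoxious facility location instance, sketched below. This is not the paper's mechanism; it simply contrasts the approximation ratio of a naive "follow the prediction" rule when the predicted optimal endpoint is right versus wrong.

```python
# Toy illustration of consistency vs. robustness for obnoxious facility
# location on [0, 1]: agents want the facility as far away as possible, and
# the optimum always sits at an endpoint. Not the paper's mechanism.
def welfare(facility, agents):
    return sum(abs(a - facility) for a in agents)

def approx_ratio(facility, agents):
    opt = max(welfare(0.0, agents), welfare(1.0, agents))    # optimal welfare
    return opt / welfare(facility, agents)

agents = [0.1, 0.15, 0.2, 0.9]
opt_endpoint = max((0.0, 1.0), key=lambda f: welfare(f, agents))

# Consistency: ratio when the predicted endpoint is correct.
print("consistency:", approx_ratio(opt_endpoint, agents))            # 1.0
# When the prediction points to the wrong endpoint the ratio degrades;
# robustness is the worst case of this over all instances.
print("wrong prediction:", approx_ratio(1.0 - opt_endpoint, agents))  # > 1
```

A mechanism designer would typically blend such a prediction-following rule with a strategyproof fallback, trading a bit of consistency for a bounded robustness guarantee.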


Predicting Energy Consumption of Ground Robots On Uneven Terrains

arXiv.org Artificial Intelligence

Optimizing energy consumption for robot navigation in fields requires energy-cost maps. However, obtaining such a map is still challenging, especially for large, uneven terrains. Physics-based energy models work for uniform, flat surfaces but do not generalize well to these terrains. Furthermore, slopes make the energy consumption at every location directional and add to the complexity of data collection and energy prediction. In this paper, we address these challenges in a data-driven manner. We consider a function that takes terrain geometry and robot motion direction as input and outputs the expected energy consumption. The function is represented as a ResNet-based neural network whose parameters are learned from field-collected data. The prediction accuracy of our method is within 12% of the ground truth in test environments that are unseen during training. We compare our method to a baseline from the literature, a basic physics-based model, and demonstrate that ours significantly outperforms it, reducing prediction error by more than 10%. More importantly, our method generalizes better when applied to test data from new environments with various slope angles and navigation directions.
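A hedged sketch of the kind of model the abstract describes: a small residual (ResNet-style) network mapping terrain geometry features plus a travel direction to a scalar energy estimate. The feature dimensions, layer sizes, and direction encoding below are assumptions, not the authors' exact architecture.

```python
# Sketch of a residual MLP for directional energy prediction on uneven terrain.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))        # skip connection, as in ResNets

class EnergyNet(nn.Module):
    def __init__(self, terrain_dim=16, hidden=64, blocks=4):
        super().__init__()
        # input = terrain geometry features + (cos, sin) of the travel direction
        self.inp = nn.Linear(terrain_dim + 2, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, 1)
    def forward(self, terrain, direction):
        h = torch.relu(self.inp(torch.cat([terrain, direction], dim=-1)))
        return self.out(self.blocks(h)).squeeze(-1)   # expected energy for that cell and heading

net = EnergyNet()
terrain = torch.randn(8, 16)                       # e.g., local elevation-patch features
heading = torch.stack([torch.cos(torch.rand(8)), torch.sin(torch.rand(8))], dim=-1)
print(net(terrain, heading).shape)                 # torch.Size([8])
```

Feeding the heading as an explicit input is what lets a single network produce the direction-dependent costs needed to build an energy-cost map over the terrain.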


Neural modal ordinary differential equations: Integrating physics-based modeling with neural ordinary differential equations for modeling high-dimensional monitored structures

arXiv.org Artificial Intelligence

The order/dimension of models derived on the basis of data is commonly restricted by the number of observations, or in the context of monitored systems, sensing nodes. This is particularly true for structural systems (e.g., civil or mechanical structures), which are typically high-dimensional in nature. In the scope of physics-informed machine learning, this paper proposes a framework -- termed Neural Modal ODEs -- to integrate physics-based modeling with deep learning for modeling the dynamics of monitored and high-dimensional engineered systems. Neural Ordinary Differential Equations (Neural ODEs) are exploited as the deep learning operator. In this initial exploration, we restrict ourselves to linear or mildly nonlinear systems. We propose an architecture that couples a dynamic version of variational autoencoders with physics-informed Neural ODEs (Pi-Neural ODEs). An encoder, as part of the autoencoder, learns the abstract mappings from the first few items of observational data to the initial values of the latent variables, which drive the learning of embedded dynamics via physics-informed Neural ODEs, imposing a modal model structure on that latent space. The decoder of the proposed model adopts the eigenmodes derived from an eigen-analysis applied to the linearized portion of a physics-based model: a process implicitly carrying the spatial relationship between degrees-of-freedom (DOFs). The framework is validated on a numerical example and an experimental dataset of a scaled cable-stayed bridge, where the learned hybrid model is shown to outperform a purely physics-based approach to modeling. We further show the functionality of the proposed scheme within the context of virtual sensing, i.e., the recovery of generalized response quantities in unmeasured DOFs from spatially sparse data.
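A compressed sketch of the overall architecture described above: an encoder maps the first few observations to latent modal initial conditions, a modal ODE (with a learned correction) evolves them, and a fixed mode-shape matrix decodes modal coordinates back to full-field DOFs. All dimensions, layer sizes, and the explicit Euler integrator are simplifying assumptions rather than the authors' implementation.

```python
# Sketch of the encoder -> modal Neural ODE -> eigenmode decoder pipeline.
import torch
import torch.nn as nn

n_modes, n_dofs, n_obs_steps, obs_dim = 4, 100, 5, 10
Phi = torch.randn(n_dofs, n_modes)                 # eigenmodes from the linearized physics model (placeholder values)

encoder = nn.Sequential(nn.Linear(n_obs_steps * obs_dim, 64), nn.Tanh(),
                        nn.Linear(64, 2 * n_modes))            # -> modal displacements + velocities

class ModalODE(nn.Module):
    """dq/dt = v, dv/dt = -omega^2 q - 2 zeta omega v + learned correction."""
    def __init__(self):
        super().__init__()
        self.omega = nn.Parameter(torch.ones(n_modes))
        self.zeta = nn.Parameter(0.02 * torch.ones(n_modes))
        self.correction = nn.Sequential(nn.Linear(2 * n_modes, 32), nn.Tanh(), nn.Linear(32, n_modes))
    def forward(self, state):
        q, v = state.chunk(2, dim=-1)
        a = -self.omega ** 2 * q - 2 * self.zeta * self.omega * v + self.correction(state)
        return torch.cat([v, a], dim=-1)

ode = ModalODE()

def rollout(z0, dt=0.01, steps=200):
    z, traj = z0, []
    for _ in range(steps):                         # simple explicit Euler for illustration
        z = z + dt * ode(z)
        traj.append(z)
    return torch.stack(traj)

obs = torch.randn(1, n_obs_steps * obs_dim)        # first few sensor readings
z0 = encoder(obs)                                  # latent modal initial conditions
latent = rollout(z0)                               # (steps, 1, 2 * n_modes)
full_field = latent[..., :n_modes] @ Phi.T         # decode modal coordinates to all DOFs
print(full_field.shape)                            # torch.Size([200, 1, 100])
```

Decoding through the fixed eigenmode matrix is what allows responses at unmeasured DOFs to be reconstructed from sparse sensing, i.e., the virtual-sensing use case mentioned at the end of the abstract.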