Model-Based Reasoning


DeepMind Research Introduces Algorithms for Causal Reasoning in Probability Trees

#artificialintelligence

For cutting-edge AI researchers looking for models with clean semantics that can represent the context-specific causal dependencies essential for causal induction, this DeepMind work suggests taking a look at good old-fashioned probability trees. A probability tree diagram represents a probability space: the diagram illustrates a sequence of events together with their conditional probabilities. Each node in the tree represents an event and its probability, and the root node represents the starting event, whose probability equals one.
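The structure described above can be captured in a few lines. The following is a minimal illustrative sketch of a discrete probability tree, not DeepMind's implementation: each node holds an event label and the probability of reaching it from its parent, the root has probability one, and multiplying conditional probabilities along each root-to-leaf path yields the distribution over outcomes.

```python
class Node:
    def __init__(self, event, prob, children=None):
        self.event = event            # event label at this node
        self.prob = prob              # P(node | parent); 1.0 at the root
        self.children = children or []

def leaf_probabilities(node, acc=1.0, path=()):
    """Multiply conditional probabilities along each root-to-leaf path."""
    acc *= node.prob
    path = path + (node.event,)
    if not node.children:
        return {path: acc}
    out = {}
    for child in node.children:
        out.update(leaf_probabilities(child, acc, path))
    return out

# Two fair coin flips: four leaves, each with probability 0.25.
tree = Node("start", 1.0, [
    Node("H", 0.5, [Node("HH", 0.5), Node("HT", 0.5)]),
    Node("T", 0.5, [Node("TH", 0.5), Node("TT", 0.5)]),
])
probs = leaf_probabilities(tree)
```

Since the leaves partition the probability space, their probabilities always sum to one, which is a useful sanity check on any tree built this way.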


DeepMind Introduces Algorithms for Causal Reasoning in Probability Trees

#artificialintelligence

Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees. Probability trees may have been around for decades, but they have received little attention from the AI and ML community. "Probability trees are one of the simplest models of causal generative processes," explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees. Humans naturally learn to reason in large part by inducing causal relationships from our observations, and we do this remarkably well, cognitive scientists say. Even when the data we perceive is sparse and limited, humans can quickly learn causal structures, such as interactions between physical objects, from observations of the co-occurrence frequencies between causes and effects. Causal induction is also a classic problem in statistics and machine learning.


A programming language for scientific machine learning and differentiable programming

#artificialintelligence

In this episode of the Data Exchange I speak with Viral Shah, co-founder and CEO of Julia Computing. Along with his Julia language co-creators, Viral was awarded the 2019 Wilkinson Prize for outstanding contributions in the field of numerical software. I first tweeted about Julia at the beginning of March 2012, after seeing Jeff Bezanson give a talk at Stanford. I've dabbled with it here and there, but have never used it for a major project. Over the past few years, Julia has continued to add packages at a steady pace, and the package manager is really quite impressive and solid.



A Feedback Scheme to Reorder a Multi-Agent Execution Schedule by Persistently Optimizing a Switchable Action Dependency Graph

arXiv.org Artificial Intelligence

In this paper we consider multiple Automated Guided Vehicles (AGVs) navigating a common workspace to fulfill various intralogistics tasks, typically formulated as the Multi-Agent Path Finding (MAPF) problem. To keep plan execution deadlock-free, one approach is to construct an Action Dependency Graph (ADG), which encodes the ordering of AGVs as they proceed along their routes. Using this method, delayed AGVs occasionally require others to wait for them at intersections, thereby reducing plan execution efficiency. If the workspace is shared with dynamic obstacles such as humans or third-party robots, AGVs can experience large delays. A common mitigation approach is to re-solve the MAPF using the current, delayed AGV positions. However, solving the MAPF is time-consuming, making this approach inefficient, especially for large AGV teams. In this work, we present an online method that repeatedly modifies a given acyclic ADG to minimize the route completion time of each AGV. Our approach persistently maintains an acyclic ADG, which is necessary for deadlock-free plan execution. We evaluate the approach in simulations with random disturbances during execution and show faster route completion times compared to the baseline ADG-based execution management approach.
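The acyclicity requirement at the heart of this abstract is easy to make concrete. Below is a hedged sketch, not the paper's implementation: ADG nodes are AGV actions, directed edges encode "must complete before" orderings, and a reordering of actions is only safe if the resulting graph remains acyclic, which can be checked with a standard topological sort (Kahn's algorithm). All node names are illustrative.

```python
from collections import defaultdict, deque

def is_acyclic(edges, nodes):
    """Kahn's algorithm: True iff the dependency graph has no cycle."""
    indeg = {n: 0 for n in nodes}
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)  # all nodes processed => no cycle

# agv1 must clear the intersection before agv2 enters.
nodes = ["agv1_enter", "agv1_exit", "agv2_enter"]
edges = [("agv1_enter", "agv1_exit"), ("agv1_exit", "agv2_enter")]
```

Swapping the order of two AGVs amounts to reversing an edge; rejecting any swap that introduces a cycle is what keeps execution deadlock-free.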


COMET: An Application of Model-Based Reasoning to Accounting Systems

AI Magazine

An important problem faced by auditors is gauging how much reliance can be placed on the accounting systems that process millions of transactions to produce the numbers summarized in a company's financial statements. Accounting systems contain internal controls, procedures designed to detect and correct errors and irregularities that can occur in the processing of transactions. In a complex accounting system, it can be an extremely difficult task for the auditor to anticipate the possible errors that can occur and evaluate the effectiveness of the controls at detecting them. An accurate analysis must take into account the unique features of each company's business processes. To cope with this complexity and variability, the COMET system applies a model-based reasoning approach to the analysis of accounting systems and their controls.


Using Mechanism Design to Prevent False-Name Manipulations

AI Magazine

The basic notion of false-name-proofness allows for useful mechanisms under certain circumstances, but in general there are impossibility results that show that false-name-proof mechanisms have severe limitations. One may react to these impossibility results by saying that, since false-name-proof mechanisms are unsatisfactory, we should not run any important mechanisms in highly anonymous settings--unless, perhaps, we can find some methodology that directly prevents false-name manipulation even in such settings, so that we are back in a more typical mechanism design context. However, it seems unlikely that the phenomenon of false-name manipulation will disappear anytime soon. Because the Internet is so attractive as a platform for running certain types of mechanisms, it seems unlikely that the organizations running these mechanisms will take them offline. Moreover, because a goal of these organizations is often to get as many users to participate as possible, they will be reluctant to use high-overhead solutions that discourage users from participating. As a result, perhaps the most promising approaches at this point are those that combine techniques from mechanism design with other techniques discussed in this article.


The Scheduling Job-Set Optimization Problem: A Model-Based Diagnosis Approach

arXiv.org Artificial Intelligence

A common issue for companies is that the volume of product orders may at times exceed the production capacity. We formally introduce two novel problems dealing with the question of which orders to discard or postpone in order to meet certain (timeliness) goals, and try to approach them by means of model-based diagnosis. In thorough analyses, we identify many similarities of the introduced problems to diagnosis problems, but also reveal crucial idiosyncrasies and outline ways to handle or leverage them. Finally, a proof-of-concept evaluation on industrial-scale problem instances from a well-known scheduling benchmark suite demonstrates that one of the two formalized problems can be handled well by out-of-the-box model-based diagnosis tools.


A round-up of topology-based papers at ICML 2020

AIHub

With this year's International Conference on Machine Learning (ICML) being over, it is time to have another instalment of this series. Similar to last year's post, I shall cover several papers that caught my attention because of their use of topological concepts--however, unlike last year, I shall not restrict the selection to papers using topological data analysis (TDA). Caveat lector: I might have missed some promising papers. Any suggestions for additions are more than welcome! Please reach out to me via Twitter or e-mail.


Learning Compact Physics-Aware Delayed Photocurrent Models Using Dynamic Mode Decomposition

arXiv.org Machine Learning

Radiation-induced photocurrent in semiconductor devices can be simulated using complex physics-based models, which are accurate but computationally expensive. This presents a challenge for implementing device characteristics in high-level circuit simulations, where it is computationally infeasible to evaluate detailed models for multiple individual circuit elements. In this work we demonstrate a procedure for learning compact delayed photocurrent models that are efficient enough to implement in large-scale circuit simulations, yet remain faithful to the underlying physics. Our approach utilizes Dynamic Mode Decomposition (DMD), a system identification technique for learning reduced-order discrete-time dynamical systems from time series data, based on the singular value decomposition. To obtain physics-aware device models, we simulate the excess carrier density induced by radiation pulses by numerically solving the Ambipolar Diffusion Equation, then use the simulated internal state as training data for the DMD algorithm. Our results show that the significantly reduced-order delayed photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density -- which can be used to calculate the induced current at the device boundaries -- while remaining compact enough to incorporate into larger circuit simulations.
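The DMD step described above (fit a reduced-order linear operator to snapshot data via the SVD) can be sketched in a few lines of NumPy. This is a generic, minimal DMD on a toy linear system, under the assumption that snapshot matrices X and Y hold consecutive states; it does not reproduce the paper's physics-aware pipeline or its Ambipolar Diffusion Equation simulations.

```python
import numpy as np

def dmd(X, Y, r):
    """Fit a rank-r linear operator with Y ~ A X via truncated SVD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project the operator onto the leading r singular directions.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W  # DMD modes
    return A_tilde, eigvals, modes

# Toy discrete-time linear system x_{k+1} = A x_k as training data.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
x = rng.standard_normal(2)
snaps = [x]
for _ in range(20):
    x = A_true @ x
    snaps.append(x)
Z = np.array(snaps).T          # columns are state snapshots
X, Y = Z[:, :-1], Z[:, 1:]     # consecutive snapshot pairs
A_tilde, eigvals, modes = dmd(X, Y, r=2)
```

For exactly linear data like this, the eigenvalues of the reduced operator recover those of the true system (0.9 and 0.8); for the photocurrent models, the truncation rank r is what makes the learned dynamics compact enough for circuit-level simulation.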