
No AI Without PI! Object-Centric Process Mining as the Enabler for Generative, Predictive, and Prescriptive Artificial Intelligence

van der Aalst, Wil M. P.

arXiv.org Artificial Intelligence

The uptake of Artificial Intelligence (AI) impacts the way we work, interact, do business, and conduct research. However, organizations struggle to apply AI successfully in industrial settings where the focus is on end-to-end operational processes. Here, we consider generative, predictive, and prescriptive AI and elaborate on the challenges of diagnosing and improving such processes. We show that AI needs to be grounded using Object-Centric Process Mining (OCPM). Process-related data are structured and organization-specific and, unlike text, processes are often highly dynamic. OCPM is the missing link connecting data and processes and enables different forms of AI. We use the term Process Intelligence (PI) to refer to the amalgamation of process-centric data-driven techniques able to deal with a variety of object and event types, enabling AI in an organizational context. This paper explains why AI requires PI to improve operational processes and highlights opportunities for successfully combining OCPM and generative, predictive, and prescriptive AI.


Introduction to Predictive Coding Networks for Machine Learning

Stenlund, Mikko

arXiv.org Artificial Intelligence

Predictive coding networks (PCNs) constitute a biologically inspired framework for understanding hierarchical computation in the brain, and offer an alternative to traditional feedforward neural networks in ML. This note serves as a quick, onboarding introduction to PCNs for machine learning practitioners. We cover the foundational network architecture, inference and learning update rules, and algorithmic implementation. A concrete image-classification task (CIFAR-10) is provided as a hands-on application, together with an accompanying Python notebook containing the PyTorch implementation.
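The inference and learning update rules mentioned in the abstract can be sketched in miniature. Below is an illustrative toy construction (not the note's PyTorch implementation; all names and the scalar two-layer setup are assumptions): inference relaxes a latent unit by gradient descent on a prediction-error energy, and learning then applies a local, Hebbian-like weight update on the residual error.

```python
def pcn_relax(x0, w, n_infer=50, lr_x=0.1):
    """Inference: relax the latent x1 to reduce the prediction-error energy
    E = 0.5*e0**2 + 0.5*e1**2, where e0 = x0 - w*x1 is the bottom-up
    prediction error and e1 = x1 - mu1 the latent-layer error (prior mu1 = 0)."""
    x1 = 0.0
    for _ in range(n_infer):
        e0 = x0 - w * x1            # error at the data layer
        e1 = x1                     # error at the latent layer (mu1 = 0)
        x1 += lr_x * (w * e0 - e1)  # gradient descent on E with respect to x1
    return x1, x0 - w * x1

def pcn_train(x0, w, lr_w=0.05, n_epochs=100):
    """Learning: after each relaxation, apply the local update
    dw = lr_w * e0 * x1, i.e. gradient descent on E with respect to w."""
    for _ in range(n_epochs):
        x1, e0 = pcn_relax(x0, w)
        w += lr_w * e0 * x1
    return w

w = pcn_train(2.0, 0.5)      # the weight grows so that w*x1 predicts x0 better
_, e0 = pcn_relax(2.0, w)    # residual prediction error after training
```

The two-phase structure (relax the activities, then update the weights with purely local error signals) is the essential contrast with backpropagation in a feedforward network.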


Review for NeurIPS paper: Predictive coding in balanced neural networks with noise, chaos and delays

Neural Information Processing Systems

This paper theoretically investigates and experimentally verifies properties of predictive coding networks. Although limited to simple representations, the mathematical analysis has impressed the reviewers. The AC thus recommends acceptance of this work.


Review for NeurIPS paper: Predictive coding in balanced neural networks with noise, chaos and delays

Neural Information Processing Systems

Additional Feedback: Minor comments:
- l. 87: "were" should be "where".
- l. 128: the relation to E-I balanced networks could be made more explicit. In some versions of those networks, there are also two independent effective parameters that separately scale the negative feedback and the variance of the connectivity (see e.g. Mastrogiuseppe and Ostojic 2017).
- l. 223: "the full solution for the chaotic system is highly involved": the solution for adiabatic inputs seems to be available from Ref. 23, but perhaps the situation here is different? My understanding is that we are here in the adiabatic limit, not in the case of Ref. 38. In the adiabatic case, why does the (finite) correlation timescale of the noise matter for coding?


Predictive policing has prejudice built in | Letters

The Guardian

Re your article ('Dystopian' tool aims to predict murder, 9 April), the collection and automation of data have repeatedly led to the targeting of racialised and low-income communities, and must come to an end. This has been found both by Amnesty International in our Automated Racism report and by Statewatch in its findings on the "murder prediction" tool. For many years, successive governments have invested in data-driven systems, stating they will increase public safety, yet individual police forces and Home Office evaluations have found no compelling evidence that these systems have had any impact on reducing crime. Training these systems on historically discriminatory data creates feedback loops, which lead to the same areas being targeted again. These systems are neither revelatory nor objective.


On the Interplay Between Sparsity and Training in Deep Reinforcement Learning

Davelouis, Fatima, Martin, John D., Bowling, Michael

arXiv.org Artificial Intelligence

We study the benefits of different sparse architectures for deep reinforcement learning. In particular, we focus on image-based domains where spatially-biased and fully-connected architectures are common. Using these and several other architectures of equal capacity, we show that sparse structure has a significant effect on learning performance. We also observe that choosing the best sparse architecture for a given domain depends on whether the hidden layer weights are fixed or learned.
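The notion of a fixed sparse architecture of a given capacity can be illustrated with a toy masked layer (a hypothetical sketch in plain Python, not the authors' code; all names are assumptions): a dense weight matrix is combined with a fixed binary mask, so that only unmasked weights carry signal, and capacity can be equated across architectures by matching the number of unmasked weights.

```python
import random

def make_mask(n_out, n_in, density, seed=0):
    """Fixed binary mask: each connection is kept with probability `density`."""
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(n_in)]
            for _ in range(n_out)]

def sparse_forward(x, w, mask):
    """y_i = sum_j mask[i][j] * w[i][j] * x[j]; masked-out weights are inert."""
    return [sum(m * wij * xj for m, wij, xj in zip(mrow, wrow, x))
            for mrow, wrow in zip(mask, w)]

def capacity(mask):
    """Number of unmasked (effective) weights: the quantity held equal
    when comparing sparse architectures."""
    return sum(sum(row) for row in mask)

mask = make_mask(4, 8, density=0.25)   # a sparse 8-to-4 layer
w = [[0.1] * 8 for _ in range(4)]
y = sparse_forward([1.0] * 8, w, mask)
```

Whether the unmasked weights `w` are then trained or kept at fixed random values corresponds to the learned-versus-fixed distinction the abstract highlights.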