 rewrite rule



ActPC-Chem: Discrete Active Predictive Coding for Goal-Guided Algorithmic Chemistry as a Potential Cognitive Kernel for Hyperon & PRIMUS-Based AGI

Goertzel, Ben

arXiv.org Artificial Intelligence

We explore a novel paradigm (labeled ActPC-Chem) for biologically inspired, goal-guided artificial intelligence (AI) centered on a form of Discrete Active Predictive Coding (ActPC) operating within an algorithmic chemistry of rewrite rules. ActPC-Chem is envisioned as a foundational "cognitive kernel" for advanced cognitive architectures, such as the OpenCog Hyperon system, incorporating essential elements of the PRIMUS cognitive architecture. The central thesis is that general-intelligence-capable cognitive structures and dynamics can emerge in a system where both data and models are represented as evolving patterns of metagraph rewrite rules, and where prediction errors, intrinsic and extrinsic rewards, and semantic constraints guide the continual reorganization and refinement of these rules. Using a virtual "robot bug" thought experiment, we illustrate how such a system might self-organize to handle challenging tasks involving delayed and context-dependent rewards, integrating causal rule inference (AIRIS) and probabilistic logical abstraction (PLN) to discover and exploit conceptual patterns and causal constraints. Next, we describe how continuous predictive coding neural networks, which excel at handling noisy sensory data and motor control signals, can be coherently merged with the discrete ActPC substrate. Finally, we outline how these ideas might be extended to create a transformer-like architecture that foregoes traditional backpropagation in favor of rule-based transformations guided by ActPC. This layered architecture, supplemented with AIRIS and PLN, promises structured, multi-modal, and logically consistent next-token predictions and narrative sequences.
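The abstract's central loop, prediction errors driving the rewriting of the very rules that make the predictions, can be caricatured in a few lines. This is a deliberately tiny sketch, not the paper's metagraph machinery: rules here are a flat dictionary, and `actpc_step` is an invented name.

```python
# Toy sketch: a discrete "predictive coding" loop over rewrite rules.
# A rule maps an observed symbol to a predicted successor; rules whose
# predictions fail are themselves rewritten, so the rule set
# self-organizes to model the environment. All names are illustrative,
# not taken from the ActPC-Chem paper.

def actpc_step(rules, state, nxt):
    """Predict the successor of `state`; on error, rewrite the rule."""
    predicted = rules.get(state)
    error = predicted != nxt
    if error:
        rules[state] = nxt  # minimal 'rewrite' driven by prediction error
    return error

def run(sequence, rules=None):
    rules = {} if rules is None else rules
    errors = 0
    for state, nxt in zip(sequence, sequence[1:]):
        errors += actpc_step(rules, state, nxt)
    return rules, errors

# A repeating environment: after one pass the rules predict perfectly.
env = list("abcabcabc")
rules, first_pass_errors = run(env)
_, second_pass_errors = run(env, rules)
```

After one pass the learned rules reproduce the environment's cycle, so a second pass is error-free; the paper's contribution is what replaces this trivial overwrite, namely reward- and constraint-guided reorganization of metagraph rewrite rules.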


Automating Reformulation of Essence Specifications via Graph Rewriting

Miguel, Ian, Salamon, András Z., Stone, Christopher

arXiv.org Artificial Intelligence

Formulating an effective constraint model of a parameterised problem class is crucial to the efficiency with which instances of the class can subsequently be solved. It is difficult to know beforehand which of a set of candidate models will perform best in practice. This paper presents a system that employs graph rewriting to automatically reformulate an input model for improved performance. By situating our work in the Essence abstract constraint specification language, we can use the structure in its high level variable types to trigger rewrites directly. We implement our system via rewrite rules expressed in the Graph Programs 2 language, applied to the abstract syntax tree of an input specification. We show how to automatically translate the solution of the reformulated problem into a solution of the original problem for verification and presentation. We demonstrate the efficacy of our system with a detailed case study.
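The core operation described here, a rewrite rule applied to the abstract syntax tree of a specification, can be sketched minimally. This is a Python analogue, not GP 2, and the rule (double-negation elimination) is only a stand-in for the kind of model reformulation the system performs on Essence specifications.

```python
# Minimal analogue of applying a rewrite rule to an abstract syntax
# tree, represented here as nested tuples ("op", child, ...). The rule
# not(not(e)) -> e is applied everywhere, bottom-up.

def rewrite(tree):
    if not isinstance(tree, tuple):
        return tree                       # leaf: a variable name
    op, *args = tree
    node = (op, *[rewrite(a) for a in args])
    if node[0] == "not" and isinstance(node[1], tuple) and node[1][0] == "not":
        return node[1][1]                 # not(not(e)) -> e
    return node

spec = ("and", ("not", ("not", "x")), ("not", "y"))
```

Bottom-up application ensures inner subtrees are simplified before a pattern is checked at the parent, mirroring how rules over an AST are typically scheduled.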


A Distribution Semantics for Probabilistic Term Rewriting

Vidal, Germán

arXiv.org Artificial Intelligence

Probabilistic programming is becoming increasingly popular thanks to its ability to specify problems with a certain degree of uncertainty. In this work, we focus on term rewriting, a well-known computational formalism. In particular, we consider systems that combine traditional rewriting rules with probabilities. Then, we define a distribution semantics for such systems that can be used to model the probability of reducing a term to some value. We also show how to compute a set of "explanations" for a given reduction, which can be used to compute its probability. Finally, we illustrate our approach with several examples and outline a couple of extensions that may prove useful to improve the expressive power of probabilistic rewrite systems.
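The idea of a distribution over the values a term can reduce to is easy to make concrete. The toy system below is invented for illustration and is far simpler than the paper's formalism: each term has at most one set of probabilistic alternatives, and the distribution over normal forms is computed by summing over reduction paths.

```python
# A probabilistic rewrite system: each left-hand side rewrites to one
# of several alternatives with a given probability. Terms with no rule
# are normal forms ("values").

RULES = {
    "coin": [(0.5, "heads"), (0.5, "tails")],
    "heads": [(0.9, "win"), (0.1, "lose")],
    "tails": [(1.0, "lose")],
}

def value_distribution(term):
    """Probability of reducing `term` to each reachable normal form."""
    if term not in RULES:            # normal form
        return {term: 1.0}
    dist = {}
    for p, nxt in RULES[term]:       # sum over all reduction paths
        for value, q in value_distribution(nxt).items():
            dist[value] = dist.get(value, 0.0) + p * q
    return dist
```

Here "coin" reduces to "win" with probability 0.5 * 0.9 and to "lose" via two paths; enumerating those paths is a crude stand-in for the paper's "explanations" of a reduction.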


Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search

Hartmann, Jakob, He, Guoliang, Yoneki, Eiko

arXiv.org Artificial Intelligence

The real-world effectiveness of deep neural networks often depends on their latency, thereby necessitating optimization techniques that can reduce a model's inference time while preserving its performance. One popular approach is to sequentially rewrite the input computation graph into an equivalent but faster one by replacing individual subgraphs. This approach gives rise to the so-called phase-ordering problem, in which the application of one rewrite rule can eliminate the possibility of applying an even better one later on. Recent work has shown that equality saturation, a technique from compiler optimization, can mitigate this issue by first building an intermediate representation (IR) that efficiently stores multiple optimized versions of the input program before extracting the best solution in a second step. In practice, however, memory constraints prevent the IR from capturing all optimized versions and thus reintroduce the phase-ordering problem in the construction phase. In this paper, we present a tensor graph rewriting approach that uses Monte Carlo tree search to build superior IRs by identifying the most promising rewrite rules. We also introduce a novel extraction algorithm that can provide fast and accurate runtime estimates of tensor programs represented in an IR. Our approach improves the inference speedup of neural networks by up to 11% compared to existing methods.
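The phase-ordering problem the abstract targets can be shown in miniature. In this invented example, "graphs" are just labels with costs: a greedy sequential rewriter takes the locally best rule and gets stuck, while searching over orderings (what equality saturation sidesteps, and what MCTS guides when memory is bounded) finds the true optimum.

```python
# Phase ordering in miniature: from "a", the greedy rewriter prefers
# "b" (bigger immediate gain) and dead-ends there, blocking the
# better sequence a -> c -> d.

COST = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 1.0}
REWRITES = {"a": ["b", "c"], "c": ["d"]}   # "b" admits no further rewrite

def greedy(term):
    """Repeatedly apply the single cheapest improving rewrite."""
    while True:
        options = REWRITES.get(term, [])
        best = min(options, key=COST.get, default=None)
        if best is None or COST[best] >= COST[term]:
            return term
        term = best

def optimal(term):
    """Explore every rewrite sequence, then pick the cheapest result."""
    reachable, frontier = {term}, [term]
    while frontier:
        t = frontier.pop()
        for nxt in REWRITES.get(t, []):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return min(reachable, key=COST.get)
```

Exhaustive exploration is exactly what becomes infeasible for real tensor graphs, which is why the paper uses MCTS to decide which rewrites are worth adding to the IR.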


Equality Saturation for Tensor Graph Superoptimization

Yang, Yichen, Phothilimthana, Phitchaya Mangpo, Wang, Yisu Remy, Willsey, Max, Roy, Sudip, Pienaar, Jacques

arXiv.org Artificial Intelligence

One of the major optimizations employed in deep learning frameworks is graph rewriting. Production frameworks rely on heuristics to decide if rewrite rules should be applied and in which order. Prior research has shown that one can discover more optimal tensor computation graphs if we search for a better sequence of substitutions instead of relying on heuristics. However, we observe that existing approaches for tensor graph superoptimization both in production and research frameworks apply substitutions in a sequential manner. Such sequential search methods are sensitive to the order in which the substitutions are applied and often only explore a small fragment of the exponential space of equivalent graphs. This paper presents a novel technique for tensor graph superoptimization that employs equality saturation to apply all possible substitutions at once. We show that our approach can find optimized graphs with up to 16% speedup over state-of-the-art, while spending on average 48x less time optimizing.
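The contrast with sequential substitution is that equality saturation keeps every equivalent form of a term rather than destructively rewriting it, then extracts the cheapest member at the end. The sketch below is a toy: real e-graphs share subterms between equivalence classes, whereas this version just grows a set of string forms until no rule adds anything new (saturation), using invented string-level rules.

```python
# Grow one equivalence class of expression forms to saturation, then
# extract the cheapest member. String rewriting stands in for real
# subgraph substitution, purely for illustration.

RULES = [
    lambda t: t.replace("+0", ""),        # additive identity
    lambda t: t.replace("x*2", "x<<1"),   # strength reduction
]
COST = len                                # shorter form = cheaper, for the toy

def saturate(term):
    eclass = {term}
    changed = True
    while changed:
        changed = False
        for t in list(eclass):
            for rule in RULES:
                new = rule(t)
                if new not in eclass:
                    eclass.add(new)
                    changed = True
    return eclass

def extract(eclass):
    return min(eclass, key=COST)

forms = saturate("x*2+0")
```

Because all four equivalent forms survive in the class, extraction can compare them under any cost model after the fact; applying the rules in a fixed sequence would have committed to one path through that space.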


LA county officials want to rewrite rules to remove a sheriff

FOX News

The Los Angeles County Board of Supervisors on Tuesday voted on a motion to explore options for removing Sheriff Alex Villanueva, including an amendment to the state's Constitution. The 3-2 vote directs county officials to examine ways in which they could impeach Villanueva or, at least, scale back his responsibilities. One of the proposed options would amend California's Constitution to make county sheriffs appointed rather than elected.


Situation Calculus by Term Rewriting

Plaisted, David A.

arXiv.org Artificial Intelligence

A version of the situation calculus in which situations are represented as first-order terms is presented. Fluents can be computed from the term structure, and actions on the situations correspond to rewrite rules on the terms. Actions that only depend on or influence a subset of the fluents can, in some cases, be described as rewrite rules that operate on subterms. If actions are bidirectional, then efficient completion methods can be used to solve planning problems. This representation for situations and actions is most similar to the fluent calculus of Thielscher (1998), except that this representation is more flexible and makes more use of the subterm structure. Some examples are given, and a few general methods for constructing such sets of rewrite rules are presented. This paper was submitted to FSCD 2020 on December 23, 2019.
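The representational idea, situations as terms whose structure encodes the fluents, with actions as rewrites from one term to another, can be rendered in a few lines. Here a situation is flattened to a sorted tuple of fluent symbols rather than a genuine first-order term, and the blocks-world fluents are invented for illustration.

```python
# A situation is a term (here: a sorted tuple of fluent symbols);
# fluents are read off the term structure, and an action is a rewrite
# rule that deletes some fluents and inserts others.

def holds(fluent, situation):
    return fluent in situation

def apply_action(situation, removes, adds):
    """Rewrite the situation term: delete `removes`, insert `adds`."""
    fluents = (set(situation) - set(removes)) | set(adds)
    return tuple(sorted(fluents))

s0 = ("clear(b)", "on(b,table)")
s1 = apply_action(s0, removes={"on(b,table)"}, adds={"on(b,a)"})
```

The paper's stronger claim is that with genuine term structure, an action touching only some fluents can be a rewrite on a subterm, leaving the rest of the situation untouched, which this flat encoding cannot express.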


Online Non-Additive Path Learning under Full and Partial Information

Cortes, Corinna, Kuznetsov, Vitaly, Mohri, Mehryar, Rahmanian, Holakou, Warmuth, Manfred K.

arXiv.org Machine Learning

We consider the online path learning problem in a graph with non-additive gains/losses. Various settings of full information, semi-bandit, and full bandit are explored. We give an efficient implementation of EXP3 algorithm for the full bandit setting with any (non-additive) gain. Then, focusing on the large family of non-additive count-based gains, we construct an intermediate graph which has equivalent gains that are additive. By operating on this intermediate graph, we are able to use algorithms like Component Hedge and ComBand for the first time for non-additive gains. Finally, we apply our methods to the important application of ensemble structured prediction. Keywords: online learning, experts, non-additive losses or gains, structured prediction, bandit.
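For orientation, the EXP3 algorithm the abstract builds on (Auer et al.'s adversarial-bandit algorithm) looks as follows over a small set of arms; in the paper's setting each "arm" would be a path in the graph and the gain need not be additive over edges. Parameter values and the toy gain function are illustrative.

```python
import math
import random

def exp3(gain, n_arms, rounds, gamma=0.1, rng=None):
    """EXP3: exponential weights with importance-weighted bandit gains."""
    rng = rng or random.Random(0)
    weights = [1.0] * n_arms
    for _ in range(rounds):
        total = sum(weights)
        # Mix the weight distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        g = gain(arm)                    # bandit feedback: chosen arm only
        # Importance-weighted update: unseen arms keep their weight.
        weights[arm] *= math.exp(gamma * g / (probs[arm] * n_arms))
    return weights

# Arm 2 always pays 1, the others pay 0: its weight should dominate.
w = exp3(lambda arm: 1.0 if arm == 2 else 0.0, n_arms=4, rounds=200)
```

The paper's difficulty is that the number of paths (arms) is exponential in the graph size, so this flat implementation must be replaced by one that exploits graph structure, and the non-additive gains are handled via the intermediate graph construction.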


Artificial intelligence set to rewrite rules for legal profession

#artificialintelligence

If ever there was an industry ripe for disruption, it is surely the legal profession. Unlike many other sectors, however, it has tended to be a little reticent about embracing technology to innovate. After all, the traditional way of doing business for legal firms has been extremely profitable. The model typically involves a bunch of low-paid minions doing grunt work while a few partners earn eye-wateringly high sums. Moreover, many legal professionals look upon technology with fear, and who could blame them? A forecast from Deloitte published last year predicted that more than 100,000 jobs in the sector could be automated within the next 20 years.