
FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning

Chen, Lisha

Neural Information Processing Systems

Finding specific preference-guided Pareto solutions that represent different tradeoffs among multiple objectives is critical yet challenging in multi-objective problems. Existing methods are restrictive in preference definitions and/or their theoretical guarantees.


A Appendix

Neural Information Processing Systems

A.1 TensorFlow Primitives Vocabulary. In the vocabulary table, "Name" is the name of the operation in our search, "TF Function" is the TensorFlow function that the name is mapped to when a DNA instruction is executed, and "Argument Mapping" describes how the values in a DNA's argument set (Input 1, Input 2, Constant, Dim Size) are mapped to the corresponding TensorFlow function arguments; for example, ADD maps to tf.math.add. TensorFlow graphs are built from DNA programs as described in Section 2 of the main text. The vocabulary for these relative dimensions is [1, 2, 4, 8, 12, 16, 24, 32, 48, 64]. This vocabulary was not tuned.
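A minimal sketch of what such a name-to-TensorFlow-function vocabulary could look like in code. Only the ADD → tf.math.add mapping and the relative-dimension list appear in the source text; the argument-slot names are assumptions for illustration.

```python
# Hypothetical vocabulary mapping DNA instruction names to TensorFlow function
# names and the argument slots they consume. Only ADD -> tf.math.add is taken
# from the text; the slot names "input1"/"input2" are illustrative assumptions.
VOCABULARY = {
    "ADD": ("tf.math.add", ("input1", "input2")),
}

# Relative-dimension vocabulary quoted in the text (stated as not tuned).
RELATIVE_DIMS = [1, 2, 4, 8, 12, 16, 24, 32, 48, 64]

def resolve(instruction_name):
    """Map a DNA instruction name to its TF function name and argument slots."""
    fn_name, arg_slots = VOCABULARY[instruction_name]
    return fn_name, arg_slots

fn, slots = resolve("ADD")
print(fn)     # tf.math.add
print(slots)  # ('input1', 'input2')
```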



GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis

Neural Information Processing Systems

Figure 2: The illustration of knowledge reused from DoorKey to BoxKey. As shown in Figure 1b, BoxKey differs from DoorKey in that the agent has to open the box to get the key. The learned program is color-agnostic (i.e., the agent's policy remains robust regardless of object colors). The valuation vector representations are fed to all the methods as input. The reward from the MiniGrid environment is sparse (i.e., a positive reward is only given after the task is completed). We use a batch size of 256. The code is available at: https://github.com/caoysh/GALOIS
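A toy sketch of the kind of color-agnostic symbolic policy the passage describes: the rule conditions only on object types and predicates (box, key, door), never on colors, so it transfers across color variations. The predicate names and state encoding below are assumptions for illustration, not the paper's actual synthesized program.

```python
# Hypothetical color-agnostic policy for a BoxKey-style task: since no
# predicate mentions color, the same program works for any color assignment.
def next_action(state):
    """Pick a high-level action from symbolic predicates over the state."""
    if state.get("has_key"):
        return "open_door"   # key in hand -> go open the door
    if state.get("box_open"):
        return "pick_key"    # box already open -> grab the key
    return "open_box"        # otherwise, the box must be opened first

# The same policy applies whatever the box/key/door colors are:
print(next_action({"has_key": False, "box_open": False}))  # open_box
print(next_action({"has_key": False, "box_open": True}))   # pick_key
print(next_action({"has_key": True, "box_open": True}))    # open_door
```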


Transductively Informed Inductive Program Synthesis

Zenkner, Janis, Sesterhenn, Tobias, Bartelt, Christian

arXiv.org Artificial Intelligence

Abstraction and reasoning in program synthesis has seen significant progress through both inductive and transductive paradigms. Inductive approaches generate a program or latent function from input-output examples, which can then be applied to new inputs. Transductive approaches directly predict output values for given inputs, effectively serving as the function themselves. Current approaches combine inductive and transductive models via isolated ensembling, but they do not explicitly model the interaction between both paradigms. In this work, we introduce TIIPS, a novel framework that unifies transductive and inductive strategies by explicitly modeling their interactions through a cooperative mechanism: an inductive model generates programs, while a transductive model constrains, guides, and refines the search to improve synthesis accuracy and generalization. We evaluate TIIPS on two widely studied program synthesis domains: string and list manipulation. Our results show that TIIPS solves more tasks and yields functions that more closely match optimal solutions in syntax and semantics, particularly in out-of-distribution settings, yielding state-of-the-art performance. We believe that explicitly modeling the synergy between inductive and transductive reasoning opens promising avenues for general-purpose program synthesis and broader applications.
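A hedged sketch of the cooperative mechanism the abstract describes: an inductive model proposes candidate programs, while a transductive model predicts outputs directly and its prediction on a probe input is used to filter the candidates. Both model stubs below are toy stand-ins, not the paper's actual models.

```python
# Toy cooperative inductive/transductive synthesis on string manipulation.
def inductive_propose(examples):
    # Stand-in inductive model: a fixed candidate space of string programs.
    return [str.upper, str.lower, lambda s: s[::-1]]

def transductive_predict(examples, new_input):
    # Stand-in transductive model: directly predicts the output for new_input
    # (here hard-coded to uppercasing, matching the toy examples below).
    return new_input.upper()

def synthesize(examples, probe_input):
    """Keep only programs consistent with the examples AND with the
    transductive model's prediction on a held-out probe input."""
    target = transductive_predict(examples, probe_input)
    for prog in inductive_propose(examples):
        if all(prog(x) == y for x, y in examples) and prog(probe_input) == target:
            return prog
    return None

prog = synthesize([("ab", "AB")], "cd")
print(prog("xy"))  # XY
```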


Shedding Light in Task Decomposition in Program Synthesis: The Driving Force of the Synthesizer Model

Zenkner, Janis, Sesterhenn, Tobias, Bartelt, Christian

arXiv.org Artificial Intelligence

Task decomposition is a fundamental mechanism in program synthesis, enabling complex problems to be broken down into manageable subtasks. ExeDec, a state-of-the-art program synthesis framework, employs this approach by combining a Subgoal Model for decomposition and a Synthesizer Model for program generation to facilitate compositional generalization. In this work, we develop REGISM, an adaptation of ExeDec that removes decomposition guidance and relies solely on iterative execution-driven synthesis. By comparing these two exemplary approaches, ExeDec, which leverages task decomposition, and REGISM, which does not, we investigate the interplay between task decomposition and program generation. Our findings indicate that ExeDec exhibits significant advantages in length generalization and concept composition tasks, likely due to its explicit decomposition strategies. At the same time, REGISM frequently matches or surpasses ExeDec's performance across various scenarios, with its solutions often aligning more closely with ground truth decompositions. These observations highlight the importance of repeated execution-guided synthesis in driving task-solving performance, even within frameworks that incorporate explicit decomposition strategies. Our analysis suggests that task decomposition approaches like ExeDec hold significant potential for advancing program synthesis, though further work is needed to clarify when and why these strategies are most effective.
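A minimal sketch of what execution-driven synthesis means in this context: each partial program is actually executed, and the search continues from the concrete intermediate state rather than from the original input. The three-operation DSL and the breadth-first search below are toy assumptions, not REGISM's actual search procedure.

```python
from collections import deque

# Toy DSL over integers; each op is executable, so partial programs can be run.
OPS = {
    "double": lambda x: x * 2,
    "inc": lambda x: x + 1,
    "square": lambda x: x * x,
}

def synthesize(inp, target, max_len=4):
    """Breadth-first execution-guided search: expand programs by one op at a
    time, always branching from the executed intermediate state."""
    queue = deque([(inp, [])])
    seen = {inp}
    while queue:
        state, program = queue.popleft()
        if state == target:
            return program
        if len(program) < max_len:
            for name, op in OPS.items():
                nxt = op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, program + [name]))
    return None

print(synthesize(3, 49))  # ['double', 'inc', 'square']  (3 -> 6 -> 7 -> 49)
```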


Learning Semantics-aware Search Operators for Genetic Programming

Wyrwiński, Piotr, Krawiec, Krzysztof

arXiv.org Artificial Intelligence

Fitness landscapes in test-based program synthesis are known to be extremely rugged, with even minimal modifications of programs often leading to fundamental changes in their behavior and, consequently, fitness values. Relying on fitness as the only guidance in iterative search algorithms like genetic programming is thus unnecessarily limiting, especially when combined with purely syntactic search operators that are agnostic about their impact on program behavior. In this study, we propose a semantics-aware search operator that steers the search towards candidate programs that are valuable not only actually (high fitness) but also potentially, i.e. are likely to be turned into high-quality solutions even if their current fitness is low. The key component of the method is a graph neural network that learns to model the interactions between program instructions and processed data, and produces a saliency map over graph nodes that represents possible search decisions. When applied to a suite of symbolic regression benchmarks, the proposed method outperforms conventional tree-based genetic programming and the ablated variant of the method.
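A toy illustration of the "actual vs. potential" value distinction, with a hand-written heuristic standing in for the paper's graph neural network: candidates are scored not only by fitness (error on test cases) but also by how closely the shape of their semantics vector matches the target's. All programs and weights below are illustrative assumptions.

```python
# Symbolic-regression-style toy: target semantics is f(x) = 2x + 1 on x = 0..4.
TESTS = [(x, 2 * x + 1) for x in range(5)]

def semantics(prog):
    """A program's semantics: its output vector over the test inputs."""
    return [prog(x) for x, _ in TESTS]

def score(prog, alpha=0.5):
    target = [y for _, y in TESTS]
    out = semantics(prog)
    error = sum(abs(o - t) for o, t in zip(out, target))  # actual fitness
    # "Potential" term: how close the output vector's overall slope is to the
    # target's, rewarding programs with the right behavior even if offset.
    slope_gap = abs((out[-1] - out[0]) - (target[-1] - target[0]))
    return error + alpha * slope_gap  # lower is better

candidates = [lambda x: 2 * x, lambda x: x + 1, lambda x: 2 * x + 1]
best = min(candidates, key=score)
print(best(10))  # 21
```

Note how `lambda x: 2 * x` scores better than `lambda x: x + 1` despite neither matching the target exactly: its semantics has the right slope, so it is one mutation away from a perfect solution.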


FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning

Chen, Lisha, Saif, AFM, Shen, Yanning, Chen, Tianyi

arXiv.org Artificial Intelligence

Finding specific preference-guided Pareto solutions that represent different trade-offs among multiple objectives is critical yet challenging in multi-objective problems. Existing methods are restrictive in preference definitions and/or their theoretical guarantees. In this work, we introduce a Flexible framEwork for pREfeRence-guided multi-Objective learning (FERERO) by casting it as a constrained vector optimization problem. Specifically, two types of preferences are incorporated into this formulation -- the relative preference defined by the partial ordering induced by a polyhedral cone, and the absolute preference defined by constraints that are linear functions of the objectives. To solve this problem, convergent algorithms are developed with both single-loop and stochastic variants. Notably, this is the first single-loop primal algorithm for constrained vector optimization to our knowledge. The proposed algorithms adaptively adjust to both constraint and objective values, eliminating the need to solve different subproblems at different stages of constraint satisfaction. Experiments on multiple benchmarks demonstrate the proposed method is very competitive in finding preference-guided optimal solutions. Code is available at https://github.com/lisha-chen/FERERO/.
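The constrained vector optimization formulation described in the abstract can be sketched as follows; the symbols are standard-notation assumptions, not necessarily the paper's exact ones:

```latex
% Relative preference: minimize f(x) with respect to the partial ordering
% induced by a polyhedral cone K, where u \preceq_K v  iff  v - u \in K.
% Absolute preference: linear constraints on the objective values.
\min_{x \in \mathbb{R}^d} \; f(x) = \bigl(f_1(x), \dots, f_m(x)\bigr)
\quad \text{w.r.t. } \preceq_K, \; K \text{ a polyhedral cone},
\qquad \text{s.t. } A\, f(x) \le b .
```

Setting $K$ to the nonnegative orthant $\mathbb{R}^m_+$ and dropping the constraints recovers standard Pareto optimization, which is one way to see why this formulation is a flexible generalization.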