
Factor Graph Grammars

Neural Information Processing Systems

We propose the use of hyperedge replacement graph grammars for factor graphs, or factor graph grammars (FGGs) for short. FGGs generate sets of factor graphs and can describe a more general class of models than plate notation, dynamic graphical models, case-factor diagrams, and sum-product networks can. Moreover, inference can be done on FGGs without enumerating all the generated factor graphs. For finite variable domains (but possibly infinite sets of graphs), a generalization of variable elimination to FGGs allows exact and tractable inference in many situations. For finite sets of graphs (but possibly infinite variable domains), an FGG can be converted to a single factor graph amenable to standard inference techniques.
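As background for the abstract above, the following is an illustrative sketch of ordinary variable elimination on a small factor graph with finite domains, the procedure the paper generalizes from single factor graphs to FGGs. The factor representation and function names here are illustrative, not the paper's actual algorithm.

```python
# Sketch (assumed representation, not the paper's FGG algorithm): variable
# elimination on a factor graph. Each factor is a (scope, table) pair, where
# scope is a tuple of variable names and table maps value-tuples to weights.
import itertools

def eliminate(factors, order, domains):
    """Sum out the variables in `order`; returns the partition function Z."""
    factors = list(factors)
    for var in order:
        touching = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        # New scope: all variables of the touching factors except `var`.
        scope = tuple(sorted({v for s, _ in touching for v in s} - {var}))
        table = {}
        for vals in itertools.product(*(domains[v] for v in scope)):
            assign = dict(zip(scope, vals))
            total = 0.0
            for x in domains[var]:
                assign[var] = x
                w = 1.0
                for s, t in touching:
                    w *= t[tuple(assign[v] for v in s)]
                total += w
            table[vals] = total
        factors = rest + [(scope, table)]
    # All variables eliminated: multiply the remaining scalar factors.
    z = 1.0
    for _, t in factors:
        z *= t[()]
    return z

# Tiny HMM-like chain p(x1) * p(x2 | x1) over binary variables.
domains = {"x1": [0, 1], "x2": [0, 1]}
f1 = (("x1",), {(0,): 0.6, (1,): 0.4})
f2 = (("x1", "x2"), {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8})
print(eliminate([f1, f2], ["x2", "x1"], domains))  # normalized, so Z = 1.0
```

Because both factors here are properly normalized distributions, summing out everything yields Z = 1; with unnormalized factors the same routine computes the partition function.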




Author Response for NeurIPS paper: Factor Graph Grammars

Neural Information Processing Systems

We thank all four reviewers for their thoughtful reviews, and are happy that they value the contribution of a new … The suggestion to draw the external nodes on the left-hand side differs from … But this wouldn't be as flexible as we'd like; for example, we'd like to query an HMM for the … But conjunction is an operation on FGGs, not factor graphs, so at the time of conjunction, no renaming has taken place. We agree that the notation should be improved and will think about how to do so. Lemma 15 does not change the generated graphs and cannot change their treewidth. We mean that an FGG can't generate the …


FracAug: Fractional Augmentation Boosts Graph-level Anomaly Detection under Limited Supervision

Dong, Xiangyu, Zhang, Xingyi, Wang, Sibo

arXiv.org Artificial Intelligence

Graph-level anomaly detection (GAD) is critical in diverse domains such as drug discovery, yet high labeling costs and dataset imbalance hamper the performance of Graph Neural Networks (GNNs). To address these issues, we propose FracAug, an innovative plug-in augmentation framework that enhances GNNs by generating semantically consistent graph variants and pseudo-labeling with mutual verification. Unlike previous heuristic methods, FracAug learns semantics within given graphs and synthesizes fractional variants, guided by a novel weighted distance-aware margin loss. This captures multi-scale topology to generate diverse, semantic-preserving graphs unaffected by data imbalance. Then, FracAug utilizes predictions from both original and augmented graphs to pseudo-label unlabeled data, iteratively expanding the training set. As a model-agnostic module compatible with various GNNs, FracAug demonstrates remarkable universality and efficacy: experiments across 14 GNNs on 12 real-world datasets show consistent gains, boosting average AUROC, AUPRC, and F1-score by up to 5.72%, 7.23%, and 4.18%, respectively.
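The mutual-verification pseudo-labeling step described in the abstract above can be sketched as follows: an unlabeled graph receives a pseudo-label only when the model's predictions on the original graph and on its augmented variant agree confidently. The function name and threshold below are hypothetical illustrations, not FracAug's actual implementation.

```python
# Hypothetical sketch of mutual-verification pseudo-labeling: both the
# original graph and its fractional variant must confidently agree on a
# label before the graph is added to the training set.
def pseudo_label(pred_orig, pred_aug, threshold=0.9):
    """Return the agreed label (1 = anomalous, 0 = normal) or None.

    pred_orig / pred_aug: model's probability of the anomalous class for
    the original graph and its augmented variant, respectively.
    """
    for label, lo, hi in ((1, threshold, 1.0), (0, 0.0, 1.0 - threshold)):
        if lo <= pred_orig <= hi and lo <= pred_aug <= hi:
            return label  # both views agree with high confidence
    return None  # disagreement or low confidence: leave unlabeled

print(pseudo_label(0.95, 0.97))  # 1  (both views confidently anomalous)
print(pseudo_label(0.95, 0.40))  # None  (views disagree)
```

Graphs left unlabeled in one round remain in the unlabeled pool, so the training set can expand iteratively as the model improves.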


Review for NeurIPS paper: Factor Graph Grammars

Neural Information Processing Systems

Clarity: The paper is fairly dense because of the unfortunate 8-page limit, but well and carefully written. I think the most confusing part for readers is likely to be the conjunction operation -- if there's an extra page in the camera-ready, the presentation here should be slowed down with some qualitative discussion. You should probably clarify early on that you're talking about undirected hypergraphs. Notation in section 2.1: I regard 52-53 as a commutation property, basically vertices(\bar{e}) = \bar{vertices(e)}, where \bar lifts from variables or variable-tuples to their labels. I don't understand where the name "att" comes from ("attachment"?) or why you use the name "type" in the way you do.



Structural Optimization Ambiguity and Simplicity Bias in Unsupervised Neural Grammar Induction

Park, Jinwook, Kim, Kangil

arXiv.org Artificial Intelligence

Neural parameterization has significantly advanced unsupervised grammar induction. However, training these models with a traditional likelihood loss for all possible parses exacerbates two issues: 1) $\textit{structural optimization ambiguity}$ that arbitrarily selects one among structurally ambiguous optimal grammars despite the specific preference of gold parses, and 2) $\textit{structural simplicity bias}$ that leads a model to underutilize rules to compose parse trees. These challenges subject unsupervised neural grammar induction (UNGI) to inevitable prediction errors, high variance, and the necessity for extensive grammars to achieve accurate predictions. This paper tackles these issues, offering a comprehensive analysis of their origins. As a solution, we introduce $\textit{sentence-wise parse-focusing}$ to reduce the parse pool per sentence for loss evaluation, using the structural bias from pre-trained parsers on the same dataset. In unsupervised parsing benchmark tests, our method significantly improves performance while effectively reducing variance and bias toward overly simplistic parses. Our research promotes learning more compact, accurate, and consistent explicit grammars, facilitating better interpretability.
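The sentence-wise parse-focusing idea in the abstract above can be sketched numerically: instead of marginalizing the likelihood over all possible parses of a sentence, the loss is evaluated only over a small parse pool proposed by pre-trained parsers. The function name and scores below are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch of sentence-wise parse-focusing: the negative
# log-likelihood is computed over a restricted parse pool rather than
# over the exponentially many possible parses.
import math

def focused_nll(parse_pool_scores):
    """NLL over a focused parse pool.

    parse_pool_scores: unnormalized log-scores of the focused parses
    under the grammar (e.g. parses proposed by pre-trained parsers).
    """
    # Numerically stable log-sum-exp over the pool replaces the full
    # sum over all parses of the sentence.
    m = max(parse_pool_scores)
    return -(m + math.log(sum(math.exp(s - m) for s in parse_pool_scores)))

# Two focused parses stand in for the full (exponential) parse forest.
print(round(focused_nll([-1.2, -2.0]), 4))
```

Shrinking the pool per sentence injects the structural bias of the pre-trained parsers and, per the abstract, reduces both the ambiguity among optimal grammars and the variance of training.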