On the Internal Representations of Graph Metanetworks
arXiv.org Artificial Intelligence
Weight space learning is an emerging paradigm in the deep learning community. Its primary goal is to extract informative features from a set of neural network parameters using specially designed networks, often referred to as metanetworks. However, it remains unclear how these metanetworks learn from parameters alone. To address this, we take the first step toward understanding the representations of metanetworks, specifically graph metanetworks (GMNs), which achieve state-of-the-art results in this field, using centered kernel alignment (CKA). Across a range of experiments, we reveal that GMNs and conventional neural networks (e.g., multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs)) differ in their representation spaces.
Mar-12-2025
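The abstract names centered kernel alignment (CKA) as the tool for comparing representation spaces but does not specify the variant used. Below is a minimal sketch of the standard linear CKA (Kornblith et al., 2019) in NumPy; the activation matrices are hypothetical stand-ins for features extracted from a GMN layer and an MLP layer on the same inputs.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n, d1) activations of n inputs from one network/layer.
    Y: (n, d2) activations of the same n inputs from another.
    Returns a similarity score in [0, 1]; higher means more similar.
    """
    # Center each feature column so the Gram matrices correspond
    # to centered linear kernels.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # HSIC-based formulation for linear kernels:
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, ord='fro')
    norm_y = np.linalg.norm(Y.T @ Y, ord='fro')
    return cross / (norm_x * norm_y)

# Hypothetical example: compare two layers' features on the same 100 inputs.
rng = np.random.default_rng(0)
acts_gmn = rng.normal(size=(100, 64))  # stand-in for a GMN layer's features
acts_mlp = rng.normal(size=(100, 32))  # stand-in for an MLP layer's features
print(linear_cka(acts_gmn, acts_mlp))
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of the representations, which is why it is a common choice for comparing layers of different widths, as in the GMN-versus-MLP/CNN comparisons described above.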