Similarly, we derive the lower bound as below:
$$
E(X^{(k)}) = E(PX^{(k-1)}W^{(k)}) = \mathrm{tr}\!\left((PX^{(k-1)}W^{(k)})^{\top}(PX^{(k-1)}W^{(k)})\right) \geq \mathrm{tr}\!\left((PX^{(k-1)})^{\top}(PX^{(k-1)})\right)\sigma_{\min}\!\left(W^{(k)}(W^{(k)})^{\top}\right) = \mathrm{tr}\!\left((PX^{(k-1)})^{\top}(PX^{(k-1)})\right)s^{(k)},
$$
where $s^{(k)} = \sigma_{\min}\!\left(W^{(k)}(W^{(k)})^{\top}\right)$.
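The lower bound above can be checked numerically. The sketch below uses $\mathrm{tr}(BC) \geq \lambda_{\min}(C)\,\mathrm{tr}(B)$ for positive semi-definite $B = (PX)^{\top}(PX)$ and $C = WW^{\top}$; the matrix shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Numerical check of the lower bound:
#   tr((P X W)^T (P X W)) >= tr((P X)^T (P X)) * sigma_min(W W^T)
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))   # propagation matrix (placeholder)
X = rng.standard_normal((5, 4))   # node embeddings X^{(k-1)}
W = rng.standard_normal((4, 4))   # layer weights W^{(k)}

A = P @ X
lhs = np.trace((A @ W).T @ (A @ W))
# sigma_min(W W^T) equals the smallest eigenvalue of the PSD matrix W W^T;
# eigvalsh returns eigenvalues in ascending order
s_min = np.linalg.eigvalsh(W @ W.T)[0]
rhs = np.trace(A.T @ A) * s_min
assert lhs >= rhs - 1e-9
```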
–Neural Information Processing Systems
They are widely used to study the over-smoothing issue and to test the performance of deep GNNs. We use the public train/validation/test splits for Cora and Pubmed, and randomly split Coauthor-Physics following previous practice. Their data statistics are summarized in Table 3. Approximate personalized propagation of neural predictions (APPNP) [50]. We then re-express the graph convolution at layer $k$, which is simplified to depend only on the node embedding $X^{(k-1)}$.
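As a concrete illustration of APPNP-style propagation, the sketch below follows the standard APPNP update $Z^{(k)} = (1-\alpha)\,P Z^{(k-1)} + \alpha H$, where $H$ is the initial prediction and $P$ the symmetrically normalized adjacency; the graph, `alpha`, and `K` here are placeholder assumptions, not values from the paper:

```python
import numpy as np

def appnp_propagate(P, H, alpha=0.1, K=10):
    """Iterate the APPNP update: Z <- (1 - alpha) * P @ Z + alpha * H."""
    Z = H.copy()
    for _ in range(K):
        Z = (1.0 - alpha) * P @ Z + alpha * H
    return Z

rng = np.random.default_rng(0)
n, c = 6, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)            # symmetrize the random adjacency
np.fill_diagonal(A, 1.0)          # add self-loops
d = A.sum(axis=1)
P = A / np.sqrt(np.outer(d, d))   # D^{-1/2} A D^{-1/2} normalization
H = rng.standard_normal((n, c))   # initial predictions (e.g., MLP output)
Z = appnp_propagate(P, H)
```

The teleport term $\alpha H$ keeps every node's output anchored to its own prediction, which is why this propagation resists over-smoothing even for large $K$.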