

Dirichlet Energy Constrained Learning for Deep Graph Neural Networks

Neural Information Processing Systems

However, the performance of existing GNNs would decrease significantly when they stack many layers, because of the oversmoothing issue. Node embeddings tend to converge to similar vectors when GNNs keep recursively aggregating the representations of neighbors.
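A toy sketch of the oversmoothing effect described above (an illustration under generic assumptions, not the paper's method; the graph, seed, and `spread` helper are hypothetical):

```python
import numpy as np

# Small connected graph: 4 nodes in a path 0-1-2-3 (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                       # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # degree normalization
P = D_inv @ A_hat                           # mean-aggregation operator

X = np.random.default_rng(0).normal(size=(4, 3))  # random node embeddings


def spread(X):
    # max pairwise distance between node embeddings
    return max(np.linalg.norm(X[i] - X[j]) for i in range(4) for j in range(4))


s0 = spread(X)
for _ in range(50):      # 50 rounds of neighbor aggregation (no nonlinearity)
    X = P @ X
s50 = spread(X)
print(s0, s50)
```

On a connected graph, repeated averaging drives all node embeddings toward the same vector, so the spread collapses toward zero, which is exactly the oversmoothing phenomenon the snippet describes.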



Neural Information Processing Systems

Leveraging the model-based nature of DisCo, we can also readily compute an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-controllable states with minimum cost c_min.
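DisCo's own procedure is not given in this snippet; as a minimal sketch of the underlying computation, plain Bellman value iteration solves a deterministic cost-sensitive shortest-path instance (the graph, state names, and costs below are hypothetical):

```python
import math

# Hypothetical small graph: state -> [(next_state, edge_cost)], costs >= c_min.
edges = {
    "s": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("g", 5.0)],
    "b": [("g", 1.0)],
    "g": [],
}
goal = "g"

# Minimum cost-to-goal value function via Bellman backups.
V = {s: (0.0 if s == goal else math.inf) for s in edges}
for _ in range(len(edges)):          # enough sweeps to reach the fixed point
    for s, succ in edges.items():
        for t, c in succ:
            V[s] = min(V[s], c + V[t])

# Greedy policy with respect to V is optimal for this deterministic problem.
policy = {s: min(succ, key=lambda tc: tc[1] + V[tc[0]])[0]
          for s, succ in edges.items() if succ}
print(V, policy)
```

With a learned model of the transition costs, the same backup computes a policy whose suboptimality scales with the model error relative to the minimum cost, which is the role c_min plays in the ε/c_min guarantee quoted above.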



Neural Information Processing Systems

In Appendix A, we review some classical statistical results for sparse linear regression. Let the design matrix be X = (x_1, ..., x_n)^T ∈ R^{n×d}. Second, we derive a regret lower bound for the alternative bandit θ̃.
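The sparse linear regression setting with design matrix X ∈ R^{n×d} can be sketched as follows (an assumption-laden illustration using generic lasso via ISTA, not the paper's estimator; all dimensions, the noise level, and the penalty λ are hypothetical):

```python
import numpy as np

# Hypothetical sparse regression instance: y = X beta + noise, beta k-sparse.
rng = np.random.default_rng(1)
n, d, k = 200, 20, 3
X = rng.normal(size=(n, d))                  # design matrix, rows x_1, ..., x_n
beta = np.zeros(d)
beta[:k] = [2.0, -1.5, 1.0]                  # k-sparse ground truth
y = X @ beta + 0.1 * rng.normal(size=n)

# ISTA for (1/2n)||y - Xb||^2 + lam * ||b||_1: gradient step + soft-threshold.
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2         # 1/L for the smooth part
b = np.zeros(d)
for _ in range(500):
    g = X.T @ (X @ b - y) / n                # gradient of the least-squares term
    b = b - step * g
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)

print(np.round(b, 2))
```

The soft-thresholding step zeroes out coordinates with small correlation to the residual, so the estimate recovers the support of the sparse truth (up to the usual λ-induced shrinkage on the active coordinates).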