A Appendix

A.1 Proof of Theorem

Neural Information Processing Systems 

"friendship" and "message"), the result is extended trivially (through with more (l 1) (l 1) ( l 1) Algorithms 3 and 4 show how to extend KTN . Using the minimum length of meta-paths is enough for KTN. We also present the results with error bars on OAG-computer networks and OAG-machine learning in Tables 6 and 7, respectively. KTN consistently outperforms all baselines. These reversed results are a consequence of HGNN's unique feature extractors On the other hand, DAN and JAN define a loss in terms of higher-order MMD between source and target features.