Transfer Learning for Latent Variable Network Models
Akhil Jalan, Arya Mazumdar, Soumendu Sundar Mukherjee, Purnamrita Sarkar
arXiv.org Artificial Intelligence
We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.
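To make the setting concrete, here is a toy simulation sketch, assuming shared one-dimensional latent variables and illustrative (hypothetical) link functions `f` and `g`; the nearest-neighbor transfer baseline below is a simple stand-in for the paper's distance-based ordering, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Shared latent variables for source and target (assumption: 1-D uniform).
x = rng.uniform(size=n)

# Hypothetical smooth link functions; the paper is nonparametric,
# so these particular forms are illustrative choices only.
def f(a, b):
    """Source edge probabilities P_ij = f(x_i, x_j)."""
    return 0.8 * np.minimum.outer(a, b) + 0.1

def g(a, b):
    """Target edge probabilities Q_ij = g(x_i, x_j)."""
    return 0.45 * np.exp(-np.abs(np.subtract.outer(a, b))) + 0.1

P = f(x, x)
Q = g(x, x)

A_P = rng.binomial(1, P)                 # edge data from all of P
m = 40                                   # observed o(n) target nodes
obs = rng.choice(n, size=m, replace=False)
A_Q_sub = rng.binomial(1, Q[np.ix_(obs, obs)])   # induced target subgraph

# Naive transfer baseline: match every node to its nearest observed node
# by row distance in the source adjacency, then copy the subgraph's
# sampled edges across matched pairs.
d = ((A_P[:, None, :] - A_P[None, obs, :]) ** 2).sum(axis=2)   # (n, m)
match = d.argmin(axis=1)                 # nearest observed node per node
Q_hat = A_Q_sub[np.ix_(match, match)].astype(float)

err = np.abs(Q_hat - Q).mean()
print(f"mean absolute estimation error: {err:.3f}")
```

With shared latent variables, nodes that look alike in the source tend to look alike in the target, which is why even this crude matching beats ignoring the source entirely; the paper's algorithm refines this idea with a suitably defined graph distance and achieves vanishing error.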
Jun-6-2024