Revealing Distribution Discrepancy by Sampling Transfer in Unlabeled Data

Neural Information Processing Systems

The assumption that data are independently and identically distributed (IID) is a staple of statistical machine learning. It implies that a hypothesis selected by an algorithm after observing several training samples should perform well on test samples drawn from the same unknown distribution.
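The IID assumption above can be made concrete with a toy sketch (everything here is illustrative and not from the paper): a hypothesis fit on training samples keeps low risk on test samples from the same distribution, while a distribution shift in the test data reveals a discrepancy.

```python
import random

random.seed(0)

def draw(mean, n):
    """Draw n samples from a toy Gaussian distribution centred at `mean`."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Hypothesis "selected" from training data: the sample mean.
train = draw(0.0, 5000)
hypothesis = sum(train) / len(train)

def risk(samples, h):
    """Mean squared error of predicting every point as h."""
    return sum((x - h) ** 2 for x in samples) / len(samples)

iid_test = draw(0.0, 5000)      # same distribution -> risk near the noise floor (~1.0)
shifted_test = draw(2.0, 5000)  # shifted distribution -> the discrepancy shows up (~5.0)

print(risk(iid_test, hypothesis))
print(risk(shifted_test, hypothesis))
```

On IID test data the risk stays near the irreducible variance of 1.0; under the shift it grows by roughly the squared shift, which is exactly the kind of discrepancy the IID assumption rules out.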







The proposed LP filter is fundamentally different from previous weighted

Neural Information Processing Systems

Due to space constraints we only address the major concerns; all suggestions will be incorporated in the final version. Experimentally, we have observed this when using previous weighted approaches. We will compare with and cite related work (gTop-k) in the final draft. In Sec. 3 we assume mini-batch SGD has a small critical batch size, so that it approximates a full gradient-descent iteration regardless of dataset size. Appendix F shows ScaleCom's scalability in system performance. Analogously, we perform filtering on the residual gradients (see Eq. (5)); the connection will be discussed in the revised version.
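The "filtering on the residual gradients" mentioned above can be illustrated with a generic error-feedback top-k sparsification sketch. This is a standard technique in gradient-compression literature, not ScaleCom's actual LP filter; the function name and variables are illustrative.

```python
def topk_with_residual(grad, residual, k):
    """Generic error-feedback top-k gradient sparsification sketch.

    Add the carried-over residual to the fresh gradient, keep the k
    largest-magnitude entries for communication, and store everything
    filtered out back into the residual for the next step.
    """
    corrected = [g + r for g, r in zip(grad, residual)]
    # Indices of the k largest-magnitude corrected entries.
    keep = set(sorted(range(len(corrected)), key=lambda i: -abs(corrected[i]))[:k])
    sent = [corrected[i] if i in keep else 0.0 for i in range(len(corrected))]
    new_residual = [c - s for c, s in zip(corrected, sent)]
    return sent, new_residual

grad = [0.1, -2.0, 0.05, 1.5]
residual = [0.0, 0.0, 0.0, 0.0]
sent, residual = topk_with_residual(grad, residual, k=2)
print(sent)      # [0.0, -2.0, 0.0, 1.5]
print(residual)  # [0.1, 0.0, 0.05, 0.0]
```

The filtered-out entries accumulate in the residual rather than being lost, so small gradients are eventually communicated once their accumulated magnitude grows large enough.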


Dual Knowledge Graph (Supplementary Materials)

Neural Information Processing Systems

Sec. 2 provides more experimental details on few-shot learning for our GraphAdapter. Sec. 3 describes the datasets and implementation in more detail. Sec. 4 visualizes the textual graph nodes used for classification before and after applying our method. Notably, TaskRes* exploits the enhanced base classifier. We present the numerical results of Figure 3 in the main text as Table 2.