Revealing Distribution Discrepancy by Sampling Transfer in Unlabeled Data

Neural Information Processing Systems

The assumption that data are independently and identically distributed (IID) is a staple of statistical machine learning. It implies that a hypothesis selected by an algorithm, after observing several training samples, should perform effectively on test samples drawn from the same unknown distribution.
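The abstract does not reproduce the paper's sampling-transfer procedure, but the question it raises — whether unlabeled test samples really follow the training distribution — can be illustrated with a standard two-sample statistic. The sketch below is our illustration rather than the paper's method: it estimates maximum mean discrepancy (MMD) with an RBF kernel between a training set and a shifted test set, and the function name mmd_rbf, the bandwidth gamma, and the Gaussian toy data are all assumptions.

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Simple (biased) MMD^2 estimate with an RBF kernel.

    Large values suggest x and y come from different distributions;
    values near zero are consistent with the IID assumption.
    """
    def k(a, b):
        # Pairwise squared Euclidean distances -> RBF kernel matrix.
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 5))  # "training" samples
test = rng.normal(0.5, 1.0, size=(200, 5))   # mean-shifted "test" samples
print(mmd_rbf(train, test))  # clearly larger than for same-distribution data
```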

The proposed LP filter is fundamentally different from previous weighted

Neural Information Processing Systems

Due to space constraints we address only the major concerns; all suggestions will be incorporated in the final version. Experimentally, we have observed that when using previous weighted […]. We will compare and cite related work (gTop-k) in the final draft. In Sec. 3 we assume that mini-batch SGD has a small critical batch size, so that an iteration approximates a full gradient-descent iteration regardless of dataset size. Appendix F shows ScaleCom's scalability in system performance; more […]. Analogously, we perform filtering on the residual gradients (see Eq. (5)). The connection will be discussed in the revised version.
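The response refers to Eq. (5) and to low-pass (LP) filtering of residual gradients, but neither the equation nor the filter coefficients appear in this excerpt. As a rough sketch only, the loop below applies a low-pass filter to the accumulated residual before top-k sparsification; the coefficient beta, the function names, and the exact update form are our assumptions and are not necessarily ScaleCom's Eq. (5).

```python
import numpy as np

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def lp_filtered_step(grad, residual, k, beta=0.9):
    """One communication round with a low-pass-filtered residual.

    The residual accumulates whatever was not transmitted; beta < 1
    damps old error so it cannot grow without bound.
    """
    residual = beta * residual + grad      # low-pass accumulation
    sparse = topk_sparsify(residual, k)    # the part actually communicated
    residual = residual - sparse           # keep the untransmitted remainder
    return sparse, residual

rng = np.random.default_rng(0)
residual = np.zeros(1000)
for _ in range(5):
    grad = rng.normal(size=1000)
    sparse, residual = lp_filtered_step(grad, residual, k=50)
```

Damping the residual with beta < 1 keeps stale error from dominating later rounds, which appears to be the intuition behind contrasting the LP filter with earlier weighted schemes, though the excerpt does not spell this out.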