Table 1: Comparison of GRL applied to the fundament network (FADA) and HADA. Column groups: UDA (A→C, A→P, A→R, Avg), MSDA (Clipart, Infograph, Painting, Avg), SSDA (1…

Neural Information Processing Systems 

We apply a gradient reversal layer (GRL) to the single fundament network (FADA). As Table 1 shows, FADA performs worse than HADA by a large margin: for example, on the A→C task of Office-Home its accuracy is 30.1, compared with 56.8 for HADA. The experiment settings and hyper-parameters follow [30][33]; note in particular that early stopping is not used.
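The GRL used here is the standard adversarial-training trick: it acts as the identity in the forward pass and negates (and optionally scales by a factor λ) the gradient in the backward pass, so the feature extractor is pushed to confuse the domain classifier. A minimal pure-Python sketch of this behavior is below; the function names are illustrative, not the paper's implementation, and in practice this is usually written as a `torch.autograd.Function`.

```python
# Hypothetical sketch of gradient reversal layer (GRL) semantics.
# Forward: identity. Backward: negate (and scale) the incoming gradient.

def grl_forward(x):
    # Identity in the forward pass: features flow through unchanged.
    return x

def grl_backward(upstream_grad, lambd=1.0):
    # Negate and scale the gradient on the way back, so the feature
    # extractor is updated adversarially against the domain classifier.
    return [-lambd * g for g in upstream_grad]

features = [0.5, -1.2, 3.0]
out = grl_forward(features)            # unchanged: [0.5, -1.2, 3.0]
grads = grl_backward([1.0, 1.0, 1.0])  # reversed: [-1.0, -1.0, -1.0]
print(out, grads)
```

In a real framework the reversal is attached to the autograd graph (e.g. via a custom backward), so the domain-classifier loss minimized by the classifier is simultaneously maximized by the features feeding it.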
