Factorizable joint shift revisited
Such failure can be caused by distribution shift (also known as dataset shift) between the training and test datasets. For this reason, distribution shift and domain adaptation (a notion comprising techniques for tackling distribution shift) have been major research topics in machine learning for some time. This paper takes the perspective of Kouw and Loog (2021) and studies the case where feature observations from the test dataset are available for analysis but observations of labels are missing. Under these circumstances, without any assumptions on the nature of the distribution shift between the training and test datasets, meaningful prediction of the labels in the test dataset, or of their distribution, is not feasible. See Kouw and Loog (2021) for a survey of approaches to domain adaptation and their related assumptions. Arguably, covariate shift (also known as population drift) and label shift (also known as prior probability shift or target shift) are the most popular specific distribution shift assumptions, both for their intuitiveness and their computational manageability. However, exclusive covariate and label shift assumptions have been criticised as insufficient for common domain adaptation tasks (e.g.
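To make the label shift assumption mentioned above concrete, the following sketch simulates a hypothetical two-class problem (illustrative Gaussian class-conditionals, not taken from the paper): only the label prior P(Y) changes between the source and target datasets, while the class-conditional feature distributions P(X|Y) stay fixed. Covariate shift would be the complementary assumption, with P(X) changing and P(Y|X) fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(prior_pos, n):
    """Draw (x, y) with label prior P(Y=1) = prior_pos and fixed
    class-conditionals X|Y=0 ~ N(0, 1), X|Y=1 ~ N(2, 1)."""
    y = (rng.random(n) < prior_pos).astype(int)
    x = rng.normal(loc=np.where(y == 1, 2.0, 0.0), scale=1.0)
    return x, y

# Source (training) data with P(Y=1) = 0.3; target (test) data
# with the prior shifted to P(Y=1) = 0.7 -- pure label shift.
x_src, y_src = sample(0.3, 100_000)
x_tgt, y_tgt = sample(0.7, 100_000)

# Under label shift, P(X|Y) agrees across domains even though the
# marginals P(X) and P(Y) do not: the positive-class feature mean
# matches, while the overall feature mean moves with the prior.
mean_pos_src = x_src[y_src == 1].mean()
mean_pos_tgt = x_tgt[y_tgt == 1].mean()
```

Here `mean_pos_src` and `mean_pos_tgt` agree up to sampling noise, whereas `x_src.mean()` and `x_tgt.mean()` differ markedly, which is exactly what makes the shifted label prior identifiable from unlabelled target features.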
Jan-30-2026