Performative Federated Learning: A Solution to Model-Dependent and Heterogeneous Distribution Shifts

Jin, Kun, Yin, Tongxin, Chen, Zhongzhu, Sun, Zeyu, Zhang, Xueru, Liu, Yang, Liu, Mingyan

arXiv.org Artificial Intelligence 

Traditional learning problems typically assume data distributions to be static. For applications such as face recognition this is largely true, and designing algorithms under this assumption generally does not impact learning efficacy. In many other domains, however, it does not hold. In some cases there is a natural evolution and shift in the distribution, e.g., in weather and climate data, in which case new data must be acquired periodically and the algorithm retrained to remain up to date. In other cases, the distribution shift is the result of the learning outcome itself, as individuals respond to the algorithmic decisions they are subjected to. For instance, when users with certain accents perceive larger-than-acceptable errors from speech recognition software and therefore stop using it, this directly impacts the type of speech samples the software collects for training the next generation of the product. Another example is "gaming the algorithm", where users, through honest or dishonest means, attempt to improve critical features so as to obtain a favorable decision from the algorithm (e.g., in loan approvals or job applications). This again directly changes the distribution of features and labels on which the algorithm relies for decision making.
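The speech-recognition example above describes a feedback loop: the deployed model determines who keeps contributing data, which in turn determines what the next model is trained on. The following toy simulation (not from the paper; the group names, means, and tolerance are illustrative assumptions) sketches that loop with two user groups and a "model" that is just the weighted mean of the collected data — the minority group experiences larger error, drops out, and the training distribution collapses onto the majority group.

```python
# Toy simulation of a model-dependent distribution shift (illustrative only).
# Two user groups with different feature means (e.g., two accents).
GROUP_MEANS = {"A": 0.0, "B": 4.0}
weights = {"A": 0.7, "B": 0.3}   # share of each group in the collected data
TOLERANCE = 2.5                  # a group leaves if its error exceeds this

def train(weights):
    # The "model" is simply the weighted mean of the collected data.
    return sum(w * GROUP_MEANS[g] for g, w in weights.items())

history = []
for step in range(5):
    model = train(weights)
    history.append(model)
    # Groups whose error exceeds TOLERANCE stop using the product, so their
    # samples vanish from the next round's training data.
    surviving = {g: w for g, w in weights.items()
                 if abs(GROUP_MEANS[g] - model) <= TOLERANCE}
    total = sum(surviving.values())
    weights = {g: w / total for g, w in surviving.items()}

print(history)   # the model drifts toward the majority group's mean
print(weights)   # the minority group has left the training distribution
```

In the first round the model sits at 1.2, giving group B an error of 2.8; B then drops out, and every subsequent model is fit to group A alone. Performative federated learning, as studied in the paper, addresses precisely such settings where the deployed model reshapes the data distribution it will next be trained on.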
