On Measuring Fairness in Generative Models Supplementary Material

Neural Information Processing Systems 

These appendices were not included in the main paper due to space limitations.

Fairness in generative models is defined as equal representation: the generator is supposed to generate an equal number of samples for each class of a given sensitive attribute. In Sec. 3 of the main paper, we discussed that there could be considerable error in this fairness measurement; our extended experiments examine this error further.

In Sec. 4.1 of the main paper, we proposed a statistical model for the sensitive attribute classifier. Here, we provide more information on the necessary assumptions and the expanded forms of the equations. One assumption is that generators are not completely biased: given that a generator is trained on a reliable dataset in which all classes of a given sensitive attribute are available, coupled with advances in generator architectures, it is a fair assumption that the generator learns some representation of each class of the sensitive attribute and is not completely biased toward any one class.

In A.1, we equate the sample mean to the expanded theoretical model and, given the classifier's accuracy, derive the corrected estimate. In A.2, we similarly provide more information on the MLE value of the population mean.
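The correction described above can be sketched as follows. This is a hedged illustration, not the paper's exact derivation (the equations were truncated in this copy): it assumes a binary sensitive attribute and a classifier with a single per-class accuracy `alpha`, under which the observed class-1 fraction `mu` mixes true and false positives and can be inverted to estimate the true fraction `p`.

```python
# Hedged sketch (assumed model, not the paper's exact equations):
# for a binary sensitive attribute and a classifier with per-class
# accuracy alpha, the observed fraction mu of class-1 labels is
#     mu = p * alpha + (1 - p) * (1 - alpha)
# Solving for the true class-1 fraction p (requires alpha != 0.5):
#     p_hat = (mu + alpha - 1) / (2 * alpha - 1)

def corrected_fraction(mu: float, alpha: float) -> float:
    """Accuracy-corrected estimate of the true class-1 fraction."""
    if abs(2 * alpha - 1) < 1e-12:
        # A coin-flip classifier carries no information about p.
        raise ValueError("alpha = 0.5 carries no information about p")
    return (mu + alpha - 1) / (2 * alpha - 1)

# Example: a generator with true fraction p = 0.7, measured by a
# 90%-accurate classifier, appears attenuated toward 0.5:
#     mu = 0.7 * 0.9 + 0.3 * 0.1 = 0.66
# The correction recovers p = 0.7 from the observed 0.66.
print(corrected_fraction(0.66, alpha=0.9))
```

Note that a perfectly fair generator (p = 0.5) still yields mu = 0.5 under this model regardless of alpha, which is why naive measurement can look fair while attenuating real bias.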
