On Measuring Fairness in Generative Models: Supplementary Material
These details were not included in the main paper due to space limitations. In Sec. 4.1 of the main paper, we proposed a statistical model for the sensitive attribute classifier; here, we provide more information on the necessary assumptions and the expanded forms of the equations. In Sec. A.1, given the classifier's accuracy, we can equate the sample mean to the expanded form of this theoretical model, and in Sec. A.2 we similarly provide more information on the MLE value of the population mean.

Generators are not completely biased. Given that a generator is trained on a reliable dataset in which all classes of a given sensitive attribute are available, coupled with advances in generator architectures, it is a fair assumption that the generator learns some representation of each class of the sensitive attribute and is not completely biased toward any single class.

Fairness in generative models is defined as Equal Representation, meaning that the generator is expected to generate an equal number of samples for each class of an attribute, e.g., an equal number of samples for each Gender. In Sec. 3 of the main paper, we discussed that there could be considerable error in this fairness measurement even when an accurate classifier is used, which our extended experiments examine further.
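As a minimal sketch of the relation referred to above (with notation assumed here rather than taken from the paper: $p^{*}$ is the true probability that the generator produces class 1, $\alpha_{0}, \alpha_{1}$ are the classifier's per-class accuracies, and $\mu$ is the expected fraction of generated samples labeled as class 1), the measured mean can be written as

$$\mu = p^{*}\,\alpha_{1} + (1 - p^{*})\,(1 - \alpha_{0}),$$

which, provided $\alpha_{0} + \alpha_{1} \neq 1$, can be inverted into a classifier-error-aware point estimate

$$\hat{p}^{*} = \frac{\mu + \alpha_{0} - 1}{\alpha_{0} + \alpha_{1} - 1}.$$

This is only an illustrative sketch of how classifier error can be accounted for; the exact model and estimator used in Sec. 4.1 and Sec. A.1 may differ.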
On Measuring Fairness in Generative Models
Teo, Christopher T. H., Abdollahzadeh, Milad, Cheung, Ngai-Man
Recently, there has been increased interest in fair generative models. In this work, we conduct, for the first time, an in-depth study on fairness measurement, a critical component in gauging progress on fair generative models. We make three contributions. First, we conduct a study that reveals that the existing fairness measurement framework has considerable measurement errors, even when highly accurate sensitive attribute (SA) classifiers are used. These findings cast doubts on previously reported fairness improvements. Second, to address this issue, we propose CLassifier Error-Aware Measurement (CLEAM), a new framework which uses a statistical model to account for inaccuracies in SA classifiers. Our proposed CLEAM reduces measurement errors significantly, e.g., 4.98% $\rightarrow$ 0.62% for StyleGAN2 w.r.t. Gender. Additionally, CLEAM achieves this with minimal additional overhead. Third, we utilize CLEAM to measure fairness in important text-to-image generators and GANs, revealing considerable biases in these models that raise concerns about their applications. Code and more resources: https://sutd-visual-computing-group.github.io/CLEAM/.
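To make the classifier-error-aware idea from the abstract concrete, below is a small, self-contained Python sketch. The function names (naive_proportion, error_aware_proportion) and the assumption that per-class classifier accuracies are known from a held-out labeled set are hypothetical choices for this illustration, not the released CLEAM implementation.

```python
import numpy as np

def naive_proportion(labels: np.ndarray) -> float:
    """Baseline measurement: fraction of generated samples the SA classifier labels as class 1."""
    return float(np.mean(labels))

def error_aware_proportion(labels: np.ndarray, acc0: float, acc1: float) -> float:
    """Hypothetical classifier-error-aware estimate of the true class-1 proportion.

    Assumes a binary sensitive attribute and per-class classifier accuracies
    (acc0 on class 0, acc1 on class 1) estimated on a held-out labeled set.
    Inverts mu = p*acc1 + (1 - p)*(1 - acc0) for p.
    """
    mu = float(np.mean(labels))
    denom = acc0 + acc1 - 1.0
    if abs(denom) < 1e-8:
        raise ValueError("Classifier is uninformative (acc0 + acc1 ~= 1).")
    p_hat = (mu + acc0 - 1.0) / denom
    return float(np.clip(p_hat, 0.0, 1.0))  # clip to a valid probability

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_true = 0.7                      # true (unknown) proportion of class 1
    acc0, acc1 = 0.97, 0.95           # assumed per-class classifier accuracies
    # Simulate noisy classifier outputs on 10,000 generated samples.
    true_cls = rng.random(10_000) < p_true
    correct = rng.random(10_000) < np.where(true_cls, acc1, acc0)
    labels = np.where(correct, true_cls, ~true_cls).astype(int)
    print("naive      :", naive_proportion(labels))       # biased by classifier error
    print("error-aware:", error_aware_proportion(labels, acc0, acc1))
```

In this toy simulation the naive measurement drifts away from the true proportion because of classifier error, while the error-aware estimate recovers it; this only illustrates the principle, not the paper's exact estimator or its interval estimates.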