Algorithmic fairness has become a major concern in recent years as the influence of machine learning algorithms grows more widespread. In this paper, we investigate the issue of algorithmic fairness from a network-centric perspective. Specifically, we introduce a novel yet intuitive function known as network-centric fairness perception and provide an axiomatic approach to analyzing its properties. Using a peer-review network as a case study, we also examine its utility in assessing the perception of fairness in paper acceptance decisions. We show how the function can be extended to a group fairness metric known as fairness visibility and demonstrate its relationship to demographic parity. We also illustrate a potential pitfall of the fairness visibility measure: it can be exploited to mislead individuals into perceiving that the algorithmic decisions are fair. We demonstrate how this problem can be alleviated by increasing the local neighborhood size of the fairness perception function.
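To make the abstract's ingredients concrete, here is a minimal sketch of a local, network-centric notion of perceived fairness and the group-level fairness visibility that aggregates it. This is an illustrative reconstruction, not the paper's exact definitions: it assumes each node perceives decisions as fair when acceptance rates among the groups visible in its k-hop neighborhood are approximately equal (a local analogue of demographic parity), and the names `perceives_fair` and `fairness_visibility` are ours.

```python
def neighborhood(adj, node, k=1):
    # Nodes reachable from `node` within k hops (BFS), excluding the node itself.
    seen = {node}
    frontier = {node}
    for _ in range(k):
        frontier = {nb for n in frontier for nb in adj[n]} - seen
        seen |= frontier
    return seen - {node}

def perceives_fair(adj, group, accepted, node, k=1, tol=0.1):
    # A node perceives the decisions as fair if acceptance rates among the
    # groups it observes in its k-hop neighborhood differ by at most `tol`.
    rates = {}
    for g in set(group.values()):
        members = [n for n in neighborhood(adj, node, k) if group[n] == g]
        if members:
            rates[g] = sum(accepted[n] for n in members) / len(members)
    if len(rates) < 2:
        return True  # only one group visible locally: no disparity to observe
    return max(rates.values()) - min(rates.values()) <= tol

def fairness_visibility(adj, group, accepted, k=1, tol=0.1):
    # Group-level metric: fraction of nodes that perceive the decisions as fair.
    nodes = list(adj)
    return sum(perceives_fair(adj, group, accepted, n, k, tol)
               for n in nodes) / len(nodes)

# Toy example: 4 nodes, two groups (0 and 1), binary accept/reject decisions.
adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
group = {'a': 0, 'b': 1, 'c': 0, 'd': 1}
accepted = {'a': 1, 'b': 1, 'c': 0, 'd': 0}
print(fairness_visibility(adj, group, accepted))  # → 0.5
```

Under this sketch, increasing `k` widens each node's view of the network, which is one way to read the abstract's remedy for misleadingly high fairness visibility: with a larger neighborhood, a node's local estimate of group acceptance rates converges toward the global ones.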
As it is, the world is unfair. The question now is: do we want automated technology to be unfair too? As we build ever more AI-dependent smart digital infrastructure in our cities and beyond, we have largely overlooked the emerging character of artificial intelligence, which will have a profound bearing on our nature and our future. Are we happy with algorithms making decisions for us? Naturally, one would expect those algorithms to exercise their discretion fairly.
What happens when injustices are propagated not by individuals or organizations but by a collection of machines? Lately, there has been increased attention on the downsides of artificial intelligence and the harms it may produce in our society, from inequitable access to opportunities to the escalation of polarization in our communities. Not surprisingly, there has been a corresponding rise in discussion around how to regulate AI. Do we need new laws and rules from governmental authorities to police companies and their conduct when designing and deploying AI in the world? Part of the conversation arises from the fact that the public questions -- and rightly so -- the ethical restraints that organizations voluntarily choose to comply with.
Artificial intelligence (AI) is rapidly becoming integral to how organizations are run. This should not be a surprise; when analyzing sales calls and market trends, for example, the judgments of computational algorithms can surpass those of humans. As a result, AI techniques are increasingly used to make decisions. Organizations are employing algorithms to allocate valuable resources, design work schedules, analyze employee performance, and even decide whether employees can stay on the job. This creates a new set of problems even as it solves old ones.