Reducing the Filtering Effect in Public School Admissions: A Bias-aware Analysis for Targeted Interventions
Faenza, Yuri, Gupta, Swati, Vuorinen, Aapeli, Zhang, Xuan
Problem definition: Traditionally, New York City's top 8 public schools have selected candidates solely based on their scores on the Specialized High School Admissions Test (SHSAT). These scores are known to be impacted by students' socioeconomic status and by the test preparation they receive in middle school, leading to a massive filtering effect in the education pipeline. The classical mechanisms for assigning students to schools do not naturally address problems like school segregation and class diversity, which have worsened over the years. The scientific community and policymakers have reacted by incorporating group-specific quotas and proportionality constraints, with mixed results. The problem of finding effective and fair methods for broadening access to top-notch education remains unsolved. Methodology/results: We take an operations approach to the problem that differs from most of the established literature, with the goal of increasing opportunities for students with high economic needs. Using data from the Department of Education (DOE) in New York City, we show that there is a shift in the distribution of scores obtained by students that the DOE classifies as "disadvantaged" (following criteria mostly based on economic factors). We model this shift as a "bias" that results from an underestimation of the true potential of disadvantaged students. We analyze the impact this bias has on an assortative matching market. We show that centrally planned interventions, through scholarships or training, can significantly reduce the impact of bias when they target the segment of disadvantaged students with average performance.
- North America > United States > Texas (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Michigan (0.04)
- (4 more...)
- Education > Educational Setting > K-12 Education (1.00)
- Education > Operations > Student Enrollment (0.72)
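A minimal simulation sketch of the setup the abstract above describes: latent potential is identical across groups, observed scores of "disadvantaged" students are shifted down by an additive bias, and seats go to the top scorers. All parameter values here (bias size, seat count, the 40th-80th percentile target band) are illustrative assumptions, not the authors' model or calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (all hypothetical, not from the paper).
N, SHARE_DISADV, BIAS, SEATS = 10_000, 0.5, 0.5, 1_000

disadvantaged = rng.random(N) < SHARE_DISADV
potential = rng.normal(0.0, 1.0, N)            # latent ability, group-blind
observed = potential - BIAS * disadvantaged    # biased test score

def admitted_share(scores):
    """Assortative admission: the top-SEATS scores get seats; return the
    share of admitted students who are disadvantaged."""
    admitted = np.argsort(scores)[-SEATS:]
    return disadvantaged[admitted].mean()

# Intervention: undo the bias (e.g. via training or scholarships) only for
# disadvantaged students in a middle band of observed performance.
lo, hi = np.quantile(observed[disadvantaged], [0.4, 0.8])
targeted = disadvantaged & (observed >= lo) & (observed <= hi)
treated = observed + BIAS * targeted

print(f"population share disadvantaged:        {disadvantaged.mean():.3f}")
print(f"admitted share, no intervention:       {admitted_share(observed):.3f}")
print(f"admitted share, mid-band intervention: {admitted_share(treated):.3f}")
```

Targeting the middle band is the interesting design choice: top-band disadvantaged students are often admitted anyway, and bottom-band students remain below the cutoff even after debiasing, so the mid band is where an intervention moves the most students across the admission threshold.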
Toward a Fairness-Aware Scoring System for Algorithmic Decision-Making
Yang, Yi, Wu, Ying, Li, Mei, Chang, Xiangyu, Tan, Yong
Scoring systems, as a type of predictive model, have significant advantages in interpretability and transparency and facilitate quick decision-making. As such, scoring systems have been extensively used in a wide variety of industries such as healthcare and criminal justice. However, the fairness issues in these models have long been criticized, and the use of big data and machine learning algorithms in the construction of scoring systems heightens this concern. In this paper, we propose a general framework to create fairness-aware, data-driven scoring systems. First, we develop a social welfare function that incorporates both efficiency and group fairness. Then, we transform the social welfare maximization problem into the risk minimization task in machine learning, and derive a fairness-aware scoring system with the help of mixed integer programming. Lastly, we derive several theoretical bounds to guide parameter selection. Our proposed framework provides a suitable solution to address group fairness concerns in the development of scoring systems. It enables policymakers to set and customize their desired fairness requirements as well as other application-specific constraints. We test the proposed algorithm on several empirical datasets. Experimental evidence supports the effectiveness of the proposed scoring system in achieving the optimal welfare of stakeholders and in balancing the needs for interpretability, fairness, and efficiency.
- South America > Uruguay > Artigas > Artigas (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Asia > Middle East > Jordan (0.04)
- (10 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
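The abstract above casts welfare maximization as a risk minimization problem solved by mixed-integer programming. Below is a hedged sketch of that idea, assuming a scoring system with small integer point values, a big-M encoding of misclassification indicators, and a linear cap on the gap between group error rates; the synthetic data, the bounds, and the PuLP/CBC solver choice are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np
import pulp

rng = np.random.default_rng(0)

# Hypothetical toy data: 60 applicants, 4 binary features, labels in {-1, 1},
# and a binary group attribute. Everything here is illustrative.
n, d = 60, 4
X = rng.integers(0, 2, size=(n, d))
group = rng.integers(0, 2, size=n)
y = np.where(X.sum(axis=1) + 0.5 * group + rng.normal(0, 0.7, n) > 2.5, 1, -1)

W = 5                  # bound on integer point values
M = W * (d + 1) + 1    # big-M large enough to relax any margin constraint
EPS = 0.10             # allowed gap between group error rates

prob = pulp.LpProblem("fair_scoring_system", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", -W, W, cat="Integer") for j in range(d)]
w0 = pulp.LpVariable("w0", -W, W, cat="Integer")
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]  # 1 = error

prob += pulp.lpSum(z)  # efficiency: minimize training errors

for i in range(n):     # big-M margin constraints define the error indicators
    score_i = pulp.lpSum(int(X[i, j]) * w[j] for j in range(d)) + w0
    prob += int(y[i]) * score_i >= 1 - M * z[i]

# Group fairness: error rates of the two groups may differ by at most EPS
# (cross-multiplied by group sizes to keep the constraint purely linear).
g0 = [i for i in range(n) if group[i] == 0]
g1 = [i for i in range(n) if group[i] == 1]
err0 = pulp.lpSum(z[i] for i in g0)
err1 = pulp.lpSum(z[i] for i in g1)
prob += len(g1) * err0 - len(g0) * err1 <= EPS * len(g0) * len(g1)
prob += len(g0) * err1 - len(g1) * err0 <= EPS * len(g0) * len(g1)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("point values:", [int(v.value()) for v in w], "intercept:", int(w0.value()))
print("training errors:", int(pulp.value(prob.objective)))
```

Tightening EPS trades efficiency (more training errors) for a smaller gap between group error rates, which is the welfare trade-off the paper's framework lets policymakers tune.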
Google engineers leave the company over controversial exit of top AI ethicist
Google has lost two more employees over its treatment, and the eventual departure, of its former top AI ethics researcher, Dr. Timnit Gebru. According to Reuters, engineering director David Baker left the tech giant last month after 16 years with the company. In a letter seen by the news organization, Baker said Gebru's exit "extinguished [his] desire to continue as a Googler." He added: "We cannot say we believe in diversity, and then ignore the conspicuous absence of many voices from within our walls." Software engineer Vinesh Kannan, who built infrastructure and features for organic shopping on the website, has also left the company.
Trends Transforming The Banking Industry
The industry is beginning to integrate the features and processes that were once the domain of fintech startups. With development moving faster, banks and credit unions are getting better at exploiting data and analytics more extensively and at digitalizing processes, instead of simply turning paper into PDFs. Here are several key trends emerging within the industry. The continued transition to predictive banking will be one of the most exciting innovation trends. The industry will consolidate all internal and external data for the first time, building predictive customer and member profiles in real time.
- Banking & Finance (1.00)
- Information Technology > Security & Privacy (0.34)
The Morals Of How We Treat Robots - Disruption Hub
It slips, but regains its balance. Stumbling over the uneven terrain is a central part of its training. As is being violently attacked by an engineer with a hockey stick… The field of advanced robotics has come on in leaps and bounds in recent years. Humanoid, bipedal robots are capable of walking on their own and are performing ever more complex tasks.
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
Zafar, Muhammad Bilal, Valera, Isabel, Rodriguez, Manuel Gomez, Gummadi, Krishna P.
Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
- North America > United States > New York (0.04)
- Oceania > Australia > Western Australia > Perth (0.04)
- North America > United States > Florida > Broward County (0.04)
- North America > United States > California (0.04)
- Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.68)
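As a rough illustration of the abstract above: the sketch below penalizes the covariance between the sensitive attribute and the wrong-side margin min(0, y·θᵀx), the flavor of boundary-based proxy the paper constrains. It is implemented here as a subgradient penalty in plain NumPy rather than the authors' convex-concave programming formulation, and the data and hyperparameters are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative): group s = 1 sits closer to the boundary,
# so an unconstrained classifier misclassifies the groups at different rates.
n, d = 2000, 2
s = rng.integers(0, 2, n).astype(float)
X = rng.normal(0, 1, (n, d)) + np.c_[0.8 * s, -0.4 * s]
y = np.where(X @ np.array([1.0, 1.0]) + rng.normal(0, 0.8, n) > 0, 1, -1)
Xb = np.c_[X, np.ones(n)]                         # append intercept column

def fit(lam, steps=3000, lr=0.5):
    """Logistic regression by subgradient descent with a penalty lam on
    |cov(s, min(0, y * theta^T x))| -- a penalty-based stand-in for the
    paper's disparate-mistreatment proxy constraint."""
    theta = np.zeros(d + 1)
    sc = s - s.mean()
    for _ in range(steps):
        m = y * (Xb @ theta)                      # signed margins
        grad = -(y / (1 + np.exp(np.clip(m, -30, 30)))) @ Xb / n
        wrong = m < 0                             # wrong side of the boundary
        cov = np.mean(sc * np.minimum(0, m))      # covariance proxy
        gcov = (sc[wrong] * y[wrong]) @ Xb[wrong] / n
        theta -= lr * (grad + lam * np.sign(cov) * gcov)
    return theta

def group_error_rates(theta):
    pred = np.where(Xb @ theta > 0, 1, -1)
    rates = []
    for g in (0.0, 1.0):
        msk = s == g
        fpr = np.mean(pred[msk & (y == -1)] == 1)  # false positive rate
        fnr = np.mean(pred[msk & (y == 1)] == -1)  # false negative rate
        rates.append((round(fpr, 3), round(fnr, 3)))
    return rates

print("unconstrained (fpr, fnr) per group:", group_error_rates(fit(lam=0.0)))
print("penalized     (fpr, fnr) per group:", group_error_rates(fit(lam=4.0)))
```

Increasing lam shrinks the gap between the groups' error rates at some cost in overall accuracy, mirroring the accuracy-fairness trade-off the abstract reports.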