Perceived Fairness of the Machine Learning Development Process: Concept Scale Development
Mishra, Anoop, Khazanchi, Deepak
In machine learning (ML) applications, unfairness arises from bias in the data, the data curation process, erroneous assumptions, and implicit bias introduced during the development process. Researchers also widely accept that fairness in ML application development is highly subjective, with little clarity about what it means from an ML development and implementation perspective. Thus, in this research, we investigate and formalize the notion of the perceived fairness of ML development through a sociotechnical lens. Our goal is to understand the characteristics of perceived fairness in ML applications. We address this goal using a three-pronged strategy: 1) conducting virtual focus groups with ML developers, 2) reviewing the existing literature on fairness in ML, and 3) incorporating aspects of justice theory relating to procedural and distributive justice. Based on our theoretical exposition, we propose transparency, accountability, and representativeness as the operational attributes of perceived fairness, each described in terms of the multiple concepts that comprise that dimension. We use this operationalization to empirically validate the notion of perceived fairness of ML applications from both the ML practitioners' and users' perspectives. The multidimensional framework offers a comprehensive understanding of perceived fairness and can guide the creation of fair ML systems with positive implications for society and business.
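A minimal sketch of how such an operationalization might be scored in practice, assuming hypothetical Likert-scale items grouped under the three proposed dimensions; the item wording, grouping, and simple mean aggregation are illustrative assumptions, not the paper's validated instrument.

from statistics import mean

# Hypothetical 5-point Likert responses from one respondent, grouped by the
# three proposed dimensions of perceived fairness (items are illustrative).
responses = {
    "transparency":       [4, 5, 4],  # e.g., "The model's decisions are explainable"
    "accountability":     [3, 4, 4],  # e.g., "Responsibility for model errors is clear"
    "representativeness": [5, 4, 3],  # e.g., "The training data reflects affected users"
}

# Score each dimension as the mean of its items, then average the dimension
# scores to obtain an overall perceived-fairness score.
dimension_scores = {dim: mean(items) for dim, items in responses.items()}
overall = mean(dimension_scores.values())

print(dimension_scores)
print(f"Overall perceived fairness: {overall:.2f}")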
Integrating Edge-AI in the Structural Health Monitoring Domain
Mishra, Anoop, Gangisetti, Gopinath, Khazanchi, Deepak
Structural health monitoring (SHM) tasks such as damage detection are crucial for decision-making about maintenance and deterioration. For example, crack detection is crucial for bridge maintenance because crack progression can lead to structural instability. However, most AI/ML models in the literature suffer from high latency and slow inference times when run in real-time environments. This study explores the integration of edge-AI into the SHM domain for real-time bridge inspections. The edge-AI literature suggests that its capabilities would be a valuable addition to a real-time decision support system for SHM tasks, enabling inference to be performed on physical sites in real time. This study will utilize commercial edge-AI platforms, such as the Google Coral Dev Board or the Kneron KL520, to develop and analyze the effectiveness of edge-AI devices. Thus, this study proposes an edge-AI framework for the structural health monitoring domain. An edge-AI-compatible deep learning model is developed to validate the framework by performing real-time crack classification. The model's effectiveness will be evaluated based on its accuracy, its confusion matrix, and its inference time observed in a real-time setting.
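As a rough illustration of the kind of on-device evaluation loop the framework implies, the sketch below assumes a TFLite-converted binary crack classifier; the file name crack_classifier.tflite, the 224x224 float input, and the stand-in test set are all hypothetical. It records a confusion matrix, accuracy, and per-image inference latency.

import time
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Load the (hypothetical) edge-compatible crack classifier.
interpreter = Interpreter(model_path="crack_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(image):
    """Run one inference; return (predicted class, latency in milliseconds)."""
    interpreter.set_tensor(inp["index"], image[np.newaxis].astype(np.float32))
    start = time.perf_counter()
    interpreter.invoke()
    latency_ms = (time.perf_counter() - start) * 1000.0
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores)), latency_ms

# Stand-in labeled test set; replace with real preprocessed bridge imagery.
test_set = [(np.random.rand(224, 224, 3).astype(np.float32), 0) for _ in range(8)]

# Accumulate a 2x2 confusion matrix (rows: true label, cols: prediction)
# and the mean latency across the test set.
confusion = np.zeros((2, 2), dtype=int)
latencies = []
for image, label in test_set:
    pred, ms = classify(image)
    confusion[label, pred] += 1
    latencies.append(ms)

accuracy = np.trace(confusion) / confusion.sum()
print(confusion)
print(f"accuracy={accuracy:.3f}, mean latency={np.mean(latencies):.1f} ms")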
Assessing Perceived Fairness from Machine Learning Developer's Perspective
Mishra, Anoop, Khazanchi, Deepak
Fairness in machine learning (ML) applications is an important practice for developers in research and industry. In ML applications, unfairness arises from bias in the data, the curation process, erroneous assumptions, and implicit bias introduced within the algorithmic development process. As ML applications come into broader use, developing fair ML applications is critical. The literature offers multiple views on how fairness in ML is described from the perspective of users and of students as future developers; ML developers themselves, however, have not been the focus of research on perceived fairness. This paper reports on a pilot investigation of ML developers' perception of fairness. To describe this perception, the paper presents an exploratory pilot study that assesses the attributes of the construct using a systematic focus group of developers. In the focus group, we asked participants to discuss three questions: 1) What are the characteristics of fairness in ML? 2) What factors influence developers' beliefs about the fairness of ML? and 3) What practices and tools are utilized for fairness in ML development? The findings of this exploratory work show that, to assess fairness, developers generally focus on the overall ML application design and development, i.e., business-specific requirements, data collection, pre-processing, in-processing, and post-processing. Thus, we conclude that the procedural aspects of organizational justice theory can explain developers' perception of fairness. These findings can assist development teams in integrating fairness into the ML application development lifecycle and motivate ML developers and organizations to develop best practices for assessing the fairness of ML-based applications.
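As a concrete example of the kind of quantitative check developers can apply at the post-processing stage, the sketch below computes demographic parity difference, i.e., the gap in favorable-outcome rates across a sensitive attribute; the metric choice and the toy data are illustrative assumptions, not findings from the focus group.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates between the groups in `group`."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical predictions (1 = favorable outcome) and binary group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity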