A Computational Method for Measuring "Open Codes" in Qualitative Analysis
Chen, John, Lotsos, Alexandros, Zhao, Lexie, Wang, Caiyi, Hullman, Jessica, Sherin, Bruce, Wilensky, Uri, Horn, Michael
arXiv.org Artificial Intelligence
Qualitative analysis is critical to understanding human datasets in many social science disciplines. Open coding is an inductive qualitative process that identifies and interprets "open codes" from datasets. Yet meeting methodological expectations (such as being "as exhaustive as possible") can be challenging. While many machine learning (ML) and generative AI (GAI) studies have attempted to support open coding, few have systematically measured or evaluated GAI outcomes, raising the risk of unnoticed bias. Building on Grounded Theory and Thematic Analysis, we present a computational method to systematically measure "open codes" and identify potential biases in them. Instead of operationalizing human expert results as the "ground truth," our method is built on a team-based approach between human and machine coders. We experiment with two HCI datasets to establish the method's reliability by 1) comparing it with human analysis and 2) analyzing the stability of its output. We present evidence-based suggestions and example workflows for ML/GAI support of open coding.
Nov-25-2024
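The abstract describes comparing machine-generated open codes against human analysis without treating either as ground truth. As a minimal sketch of what such a comparison might involve, the hypothetical snippet below computes a simple "coverage" score: the fraction of one coder's codes that are matched, above a lexical-similarity threshold, by at least one code from another coder. The helper names (`code_similarity`, `coverage`), the threshold, and the sample codes are illustrative assumptions, not the paper's actual method, which the abstract does not specify.

```python
from difflib import SequenceMatcher

def code_similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two open codes, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def coverage(reference: list[str], candidate: list[str], threshold: float = 0.6) -> float:
    """Fraction of reference codes matched by at least one candidate code.

    Illustrative only: a real pipeline would likely use semantic
    (embedding-based) similarity rather than string matching.
    """
    if not reference:
        return 0.0
    matched = sum(
        1 for r in reference
        if any(code_similarity(r, c) >= threshold for c in candidate)
    )
    return matched / len(reference)

# Hypothetical codes from a human coder and a machine coder.
human = ["distrust of automation", "privacy concern", "tool learning curve"]
machine = ["privacy concerns", "learning curve of the tool", "excitement about AI"]

print(round(coverage(human, machine), 2))  # → 0.67
```

Because the comparison is symmetric in spirit, running `coverage(machine, human)` as well would show codes each coder surfaces that the other misses, which is closer to the team-based framing the abstract describes.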