Who is a Better Matchmaker? Human vs. Algorithmic Judge Assignment in a High-Stakes Startup Competition

Xi, Sarina, Pi, Orelia, Zhang, Miaomiao, Xiong, Becca, Lane, Jacqueline Ng, Shah, Nihar B.

arXiv.org Artificial Intelligence

There is growing interest in applying artificial intelligence (AI) to automate and support complex decision-making tasks. However, it remains unclear how algorithms compare to human judgment in contexts requiring semantic understanding and domain expertise. We examine this in the context of the judge assignment problem: matching submissions to suitably qualified judges. Specifically, we tackled this problem at the Harvard President's Innovation Challenge, the university's premier venture competition, which awards over $500,000 to student and alumni startups. This represents a real-world environment where high-quality judge assignment is essential. We developed an AI-based judge-assignment algorithm, Hybrid Lexical-Semantic Similarity Ensemble (HLSE), and deployed it at the competition. We then evaluated its performance against human expert assignments using blinded match-quality scores from judges on 309 judge-venture pairs. Using a test based on the Mann-Whitney U statistic, we found no statistically significant difference in assignment quality between the two approaches (AUC = 0.48, p = 0.40); on average, algorithmic matches are rated 3.90 and manual matches 3.94 on a 5-point scale, where 5 indicates an excellent match. Furthermore, manual assignments that previously required a full week were completed in several hours by the algorithm during deployment. These results demonstrate that HLSE achieves human-expert-level matching quality while offering greater scalability and efficiency, underscoring the potential of AI-driven solutions to support and enhance human decision-making for judge assignment in high-stakes settings.
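The comparison above rests on the standard equivalence between the Mann-Whitney U statistic and the area under the ROC curve, AUC = U / (n_a * n_b). A minimal pure-Python sketch of this computation, using invented rating lists rather than the paper's actual data:

```python
def mann_whitney_auc(scores_a, scores_b):
    """Compute the Mann-Whitney U statistic for group A versus group B
    and the equivalent AUC = U / (n_a * n_b). Ties contribute 0.5."""
    u = 0.0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u, u / (len(scores_a) * len(scores_b))

# Hypothetical 5-point match-quality ratings (illustrative only).
algo_scores = [4, 4, 3, 5, 4]
manual_scores = [4, 5, 3, 4, 4]
u_stat, auc = mann_whitney_auc(algo_scores, manual_scores)
# An AUC near 0.5 means neither group's ratings systematically exceed the other's.
```

An AUC of 0.48, as reported, is interpreted the same way: algorithmic and manual matches receive essentially interchangeable ratings.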


Assortment Optimization for Patient-Provider Matching

Raman, Naveen, Wiberg, Holly

arXiv.org Artificial Intelligence

Primary care providers are essential to the healthcare ecosystem because they are the first point of contact for many patients (Pearson and Raeke, 2000; Wu et al., 2022). Patients rely on primary care providers for routine checkups and referrals to specialists. Moreover, care continuity can instill trust and improve medication uptake rates and patient health (Wu et al., 2022). Unfortunately, high provider turnover rates frequently leave patients without an assigned provider, disrupting their care and leading to worse outcomes (Reddy et al., 2015). In principle, healthcare administrators reassign unmatched patients to other providers; in practice, the process takes months due to provider scarcity and the logistical burden of rematching and coordinating patient matches (Hedden et al., 2021). While many patients find their new provider quickly, others wait years due to large patient volumes, high turnover rates, and provider scarcity (Hedden et al., 2021; Shanafelt et al., 2012). Algorithms that automatically match patients and providers can reduce this logistical burden but must balance patient autonomy against system-wide utility. For example, while automatically assigning each patient to a provider would decrease wait times, it also reduces patient autonomy because patients cannot select their provider (Entwistle et al., 2010; Gaynor et al., 2016).


Review for NeurIPS paper: Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

Neural Information Processing Systems

Summary and Contributions: This paper aims to improve the reviewer-paper matching algorithms that many computer science conferences use to assign reviewers to submitted papers. Most conferences currently employ a deterministic algorithm with a linear program at its core that maximizes the total match quality (sum of similarity scores) subject to load constraints ensuring that no reviewer is assigned too many papers and every paper is assigned enough reviewers. One problem with a deterministic algorithm is that unethical reviewers can manipulate their similarity scores (through bids or submitted features) to get assigned a particular paper in order to boost or sink it. Another problem is that the algorithm cannot be shared with the public without allowing the public to reverse-engineer the match and reveal the reviewers assigned to a paper. The authors show that both problems can be alleviated by using a randomized algorithm.
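The deterministic baseline the review describes can be illustrated on a toy instance: exhaustively search for the assignment that maximizes total similarity, subject to each paper getting one reviewer and each reviewer staying under a fixed load. The similarity values and instance size here are invented for illustration; real conference matchers solve this as a linear program at much larger scale.

```python
from itertools import product

# Invented similarity matrix: sim[paper][reviewer].
sim = [
    [0.9, 0.4, 0.1],
    [0.3, 0.8, 0.5],
    [0.2, 0.6, 0.7],
]
max_load = 1  # each reviewer takes at most one paper in this toy example

def best_assignment(sim, max_load):
    """Brute-force the paper -> reviewer map that maximizes the sum of
    similarity scores while respecting per-reviewer load limits."""
    n_papers, n_reviewers = len(sim), len(sim[0])
    best, best_score = None, float("-inf")
    for assign in product(range(n_reviewers), repeat=n_papers):
        loads = [assign.count(r) for r in range(n_reviewers)]
        if any(load > max_load for load in loads):
            continue  # violates a reviewer's load constraint
        score = sum(sim[p][assign[p]] for p in range(n_papers))
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

assignment, total = best_assignment(sim, max_load)
```

Because this optimum is a deterministic function of the similarity scores, a reviewer who can inflate one entry of `sim` can steer a specific paper to themselves; the randomized approach the paper proposes caps the probability of any single reviewer-paper pair to limit exactly this kind of manipulation.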