
Transparent and Fair Profiling in Employment Services: Evidence from Switzerland

Räz, Tim

arXiv.org Artificial Intelligence

Long-term unemployment (LTU) is a challenge for both jobseekers and public employment services. Statistical profiling tools are increasingly used to predict LTU risk. Some profiling tools are opaque, black-box machine learning models, which raise issues of transparency and fairness. This paper investigates whether interpretable models could serve as an alternative, using administrative data from Switzerland. Traditional statistical, interpretable, and black-box models are compared in terms of predictive performance, interpretability, and fairness. It is shown that explainable boosting machines, a recent interpretable model, perform nearly as well as the best black-box models. It is also shown how model sparsity, feature smoothing, and fairness mitigation can enhance transparency and fairness with only minor losses in performance. These findings suggest that interpretable profiling provides an accountable and trustworthy alternative to black-box models without compromising performance.
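The interpretable-versus-black-box comparison described in this abstract can be sketched in a few lines. The following is a minimal illustration only, using synthetic data and scikit-learn stand-ins (a logistic regression as the interpretable baseline, gradient boosting as a black-box proxy); the paper's Swiss administrative data, explainable boosting machines, and fairness mitigation steps are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for administrative jobseeker records (illustrative only).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic (interpretable baseline)": LogisticRegression(max_iter=1000),
    "gradient boosting (black-box proxy)": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Compare models on ranking quality (AUC), as profiling studies typically do.
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

In practice the paper's point is that a glass-box learner can close most of the AUC gap to the black-box model, which this kind of side-by-side evaluation makes directly measurable.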


Beyond Predictive Algorithms in Child Welfare

Moon, Erina Seh-Young, Saxena, Devansh, Maharaj, Tegan, Guha, Shion

arXiv.org Artificial Intelligence

Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities, and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives, and applied computational text analysis to highlight topics uncovered in the casenotes. Our study finds that common risk metrics used to assess families and build CW predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information, such as narratives, to study public sociotechnical systems.
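The casenote experiment amounts to training a text classifier on caseworker narratives and checking whether it predicts discharge outcomes. A hedged sketch of that setup, on a tiny invented corpus (the example casenotes, labels, and model choices are illustrative assumptions, not the paper's data or pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus standing in for caseworker-written casenotes;
# labels stand in for a binary discharge outcome.
casenotes = [
    "family attended all scheduled visits and completed parenting classes",
    "missed visits, unstable housing reported by caseworker",
    "strong kinship support, child doing well in current placement",
    "repeated safety concerns noted during home visit",
] * 10  # repeated so the model has something to fit
outcomes = [1, 0, 1, 0] * 10

# Bag-of-words features plus a linear classifier: a common first baseline
# for testing whether narrative text carries predictive signal.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(casenotes, outcomes)
train_acc = clf.score(casenotes, outcomes)
```

The paper's finding is that on real data this kind of classifier fails to predict discharge outcomes for non-reunified children, even though topic analysis of the same notes surfaces meaningful contextual signals.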


Beyond Eviction Prediction: Leveraging Local Spatiotemporal Public Records to Inform Action

Mashiat, Tasfia, DiChristofano, Alex, Fowler, Patrick J., Das, Sanmay

arXiv.org Artificial Intelligence

There has been considerable recent interest in scoring properties on the basis of eviction risk. The success of methods for eviction prediction is typically evaluated using different measures of predictive accuracy. However, the underlying goal of such prediction is to direct appropriate assistance to households that may be at greater risk so they remain stably housed. Thus, we must ask the question of how useful such predictions are in targeting outreach efforts - informing action. In this paper, we investigate this question using a novel dataset that matches information on properties, evictions, and owners. We perform an eviction prediction task to produce risk scores and then use these risk scores to plan targeted outreach policies. We show that the risk scores are, in fact, useful, enabling a theoretical team of caseworkers to reach more eviction-prone properties in the same amount of time, compared to outreach policies that are either neighborhood-based or focus on buildings with a recent history of evictions. We also discuss the importance of neighborhood and ownership features in both risk prediction and targeted outreach.
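The core evaluation idea, using risk scores to plan outreach under a fixed caseworker time budget and comparing against a non-score-based policy, can be sketched with simulated data. Everything below (score construction, eviction rate, budget) is an illustrative assumption, not the paper's dataset or policies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical properties: ~10% face eviction; risk scores are informative
# but noisy, mimicking the output of a trained eviction-prediction model.
true_eviction = rng.random(n) < 0.10
risk_score = true_eviction * 0.3 + rng.random(n) * 0.7

budget = 100  # properties a caseworker team can reach in the time available

# Score-based policy: visit the highest-risk properties first.
risk_policy = np.argsort(-risk_score)[:budget]
# Baseline policy: visit properties in arbitrary (e.g., address) order.
baseline_policy = rng.choice(n, size=budget, replace=False)

reached_risk = int(true_eviction[risk_policy].sum())
reached_baseline = int(true_eviction[baseline_policy].sum())
```

The paper's comparison is richer (neighborhood-based and eviction-history baselines, real matched records), but the accounting is the same: count how many eviction-prone properties each policy reaches for the same outreach effort.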


Causal Machine Learning for Moderation Effects

Bearth, Nora, Lechner, Michael

arXiv.org Machine Learning

It is valuable for any decision maker to know the impact of decisions (treatments) on average and for subgroups. The causal machine learning literature has recently provided tools for estimating group average treatment effects (GATE) to understand treatment heterogeneity better. This paper addresses the challenge of interpreting such differences in treatment effects between groups while accounting for variations in other covariates. We propose a new parameter, the balanced group average treatment effect (BGATE), which measures a GATE with a specific distribution of a priori-determined covariates. By taking the difference of two BGATEs, we can analyse heterogeneity more meaningfully than by comparing two GATEs. The estimation strategy for this parameter is based on double/debiased machine learning for discrete treatments in an unconfoundedness setting, and the estimator is shown to be $\sqrt{N}$-consistent and asymptotically normal under standard conditions. Adding additional identifying assumptions allows specific balanced differences in treatment effects between groups to be interpreted causally, leading to the causal balanced group average treatment effect. We explore the finite sample properties in a small-scale simulation study and demonstrate the usefulness of these parameters in an empirical example.
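The double/debiased machine learning machinery the paper builds on can be illustrated with a plain GATE estimate: cross-fitted nuisance models feed doubly robust (AIPW) scores, which are then averaged within groups. This is a sketch of the DML recipe on synthetic data, not the paper's BGATE estimator, and the data-generating process below is an assumption chosen so the true group effects are 1 and 2:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))
g = (x[:, 0] > 0).astype(int)               # group indicator for the GATE
p = 1 / (1 + np.exp(-x[:, 1]))              # treatment propensity
d = rng.binomial(1, p)                      # binary treatment
y = d * (1 + g) + x[:, 2] + rng.normal(size=n)  # true effect: 1 if g=0, 2 if g=1

# Doubly robust (AIPW) scores with 2-fold cross-fitting.
psi = np.zeros(n)
for tr, te in KFold(n_splits=2, shuffle=True, random_state=0).split(x):
    ps = GradientBoostingClassifier(random_state=0).fit(x[tr], d[tr])
    e = np.clip(ps.predict_proba(x[te])[:, 1], 0.01, 0.99)
    m1 = GradientBoostingRegressor(random_state=0).fit(x[tr][d[tr] == 1], y[tr][d[tr] == 1])
    m0 = GradientBoostingRegressor(random_state=0).fit(x[tr][d[tr] == 0], y[tr][d[tr] == 0])
    mu1, mu0 = m1.predict(x[te]), m0.predict(x[te])
    psi[te] = mu1 - mu0 + d[te] * (y[te] - mu1) / e - (1 - d[te]) * (y[te] - mu0) / (1 - e)

# Group averages of the scores estimate the GATEs.
gate = {grp: psi[g == grp].mean() for grp in (0, 1)}
```

The BGATE goes one step further: before differencing the two group effects, it balances the distribution of chosen covariates across groups, so the comparison is not confounded by covariates that differ between them.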


Discretionary Trees: Understanding Street-Level Bureaucracy via Machine Learning

Pokharel, Gaurab, Das, Sanmay, Fowler, Patrick J.

arXiv.org Artificial Intelligence

Street-level bureaucrats interact directly with people on behalf of government agencies to perform a wide range of functions, including, for example, administering social services and policing. A key feature of street-level bureaucracy is that the civil servants, while tasked with implementing agency policy, are also granted significant discretion in how they choose to apply that policy in individual cases. Using that discretion can be beneficial, as it allows for exceptions to policies based on human interactions and evaluations, but it can also allow biases and inequities to seep into important domains of societal resource allocation. In this paper, we use machine learning techniques to understand street-level bureaucrats' behavior. We leverage a rich dataset that combines demographic and other information on households with information on which homelessness interventions they were assigned during a period when assignments were not formulaic. We find that caseworker decisions in this period are highly predictable overall, and that some, but not all, of this predictability can be captured by simple decision rules. We theorize that the decisions not captured by the simple decision rules can be considered applications of caseworker discretion. These discretionary decisions are far from random, both in the characteristics of the households involved and in the outcomes of the decisions. Caseworkers typically apply discretion only to households that would be considered less vulnerable. When they apply discretion to assign households to more intensive interventions, the marginal benefits to those households are significantly higher than would be expected if the households were chosen at random; there is no corresponding reduction in marginal benefit for households discretionarily allocated less intensive interventions, suggesting that caseworkers use their knowledge to improve outcomes.
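The paper's contrast between overall predictability and what simple rules capture can be sketched as two models of the same assignment data: a flexible learner for overall predictability and a shallow tree for "simple decision rules," with the residual disagreement flagged as candidate discretion. The data and model choices below are illustrative assumptions, not the paper's dataset or specification:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for household features and caseworker intervention choices.
X, assignment = make_classification(
    n_samples=3000, n_features=8, n_informative=5, random_state=1
)
X_tr, X_te, a_tr, a_te = train_test_split(X, assignment, test_size=0.3, random_state=1)

# A flexible model captures how predictable decisions are overall...
flexible = RandomForestClassifier(random_state=1).fit(X_tr, a_tr)
# ...while a depth-limited tree stands in for simple, legible decision rules.
simple = DecisionTreeClassifier(max_depth=2, random_state=1).fit(X_tr, a_tr)

flexible_acc = flexible.score(X_te, a_te)
simple_acc = simple.score(X_te, a_te)

# Decisions the simple rules fail to reproduce are candidate "discretionary" cases.
discretionary = simple.predict(X_te) != a_te
```

The gap between `flexible_acc` and `simple_acc` mirrors the paper's finding that decisions are highly predictable overall yet only partly rule-like; the analysis then asks who falls into the `discretionary` set and how their outcomes differ.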


A Conceptual Framework for Using Machine Learning to Support Child Welfare Decisions

Chor, Ka Ho Brian, Rodolfa, Kit T., Ghani, Rayid

arXiv.org Artificial Intelligence

Human services systems make key decisions that impact individuals in society. The U.S. child welfare system makes such decisions, from screening in hotline reports of suspected abuse or neglect for child protective investigation, to placing children in foster care, to returning children to permanent home settings. These complex and impactful decisions about children's lives rely on the judgment of child welfare decisionmakers. Child welfare agencies have been exploring ways to support these decisions with empirical, data-informed methods that include machine learning (ML). This paper describes a conceptual framework for ML to support child welfare decisions. The ML framework guides how child welfare agencies might conceptualize a target problem that ML can solve; vet available administrative data for building ML; formulate and develop ML specifications that mirror the relevant populations and interventions the agencies are undertaking; and deploy, evaluate, and monitor ML as child welfare context, policy, and practice change over time. Ethical considerations, stakeholder engagement, and avoidance of common pitfalls underpin the framework's impact and success. Moving from abstract to concrete, we describe one application of this framework to support a child welfare decision. This ML framework, though child welfare-focused, is generalizable to other public policy problems.


A computer model predicts who will become homeless. Then these workers step in

Los Angeles Times

When her phone rang in February, Mashawn Cross was skeptical of the gentle voice offering help at the end of the line. "You said you do what? And you're with who?" the 52-year-old recalled saying. Cross, who wasn't working because of her ailing back and knees, was scraping by on roughly $200 a month in aid plus whatever she could make from recycling bottles and cans. Her gas and electric bills were chewing up her checks.


The future of work in health and human services

#artificialintelligence

Health and human services (HHS) agencies often struggle to serve some of society's most needy populations. At many HHS agencies today, tight budgets limit the size of the workforce, even as the volume of caseloads continues to grow. That imbalance makes it hard to provide efficient and effective solutions to address the critical needs of individuals and families, and can leave employees feeling stressed and overworked. Those same employees may also see few opportunities for career development or advancement. High rates of turnover can put a steady stream of inexperienced staff into critical jobs with little training to prepare them.


What Robots Can--and Can't--Do for the Old and Lonely

The New Yorker

It felt good to love again, in that big empty house. Virginia Kellner got the cat last November, around her ninety-second birthday, and now it's always nearby. It keeps her company as she moves, bent over her walker, from the couch to the bathroom and back again. The walker has a pair of orange scissors hanging from the handlebar, for opening mail. Virginia likes the pet's green eyes.


Artificial intelligence examines best ways to keep parolees from recommitting crimes

#artificialintelligence

Starting a new life is difficult for criminals transitioning from prison back to regular society. To help those individuals, Purdue University Polytechnic Institute researchers are using artificial intelligence to uncover risky behaviors and identify when early intervention could be beneficial. Results of a U.S. Department of Justice study indicated that more than 80 percent of people in state prisons were arrested at least once in the nine years following their release; almost half of those arrests came in the first year. Marcus Rogers and Umit Karabiyik of Purdue Polytechnic's Department of Computer and Information Technology are leading an ongoing project focused on using AI-enabled tools and technology to reduce recidivism rates for convicted criminals who have been released.