Discretion


Does Society Have Too Many Rules?

The New Yorker

When regular people seem burdened by bureaucracy, and the powerful act as they choose, it's worth asking whether we've forgotten what makes rules effective. I live in a three-generation household. Our place is big, but crowded: all of us have hobbies, and so every shelf or surface contains toys, books, art supplies, sporting goods, craft projects, cameras, musical instruments, or kitchen gadgets. Before the table can be set for dinner, it must be cleared of a board game or marble run. My desk, where I aim to write in the mornings, has been repurposed as a drone-repair workshop. The property includes two broken-down sheds and a garage.


Discretion in the Loop: Human Expertise in Algorithm-Assisted College Advising

Schechtman, Kara, Brandon, Benjamin, Stafford, Jenise, Li, Hannah, Liu, Lydia T.

arXiv.org Machine Learning

In higher education, many institutions use algorithmic alerts to flag at-risk students and deliver advising at scale. While much research has focused on evaluating algorithmic predictions, relatively little is known about how discretionary interventions by human experts shape outcomes in algorithm-assisted settings. We study this question using rich quantitative and qualitative data from a randomized controlled trial of an algorithm-assisted advising program at Georgia State University. Taking a mixed-methods approach, we examine whether and how advisors use context unavailable to an algorithm to guide interventions and influence student success. We develop a causal graphical framework for human expertise in the interventional setting, extending prior work on discretion in purely predictive settings. We then test a necessary condition for discretionary expertise using structured advisor logs and student outcomes data, identifying several interventions that meet this condition with statistical significance. Accordingly, we estimate that 2 out of 3 interventions taken by advisors in the treatment arm were plausibly "expertly targeted" to students using non-algorithmic context. Systematic qualitative analysis of advisor notes corroborates these findings, showing a pattern of advisors incorporating diverse forms of contextual information--such as personal circumstances, financial issues, and student engagement--into their decisions. Our results offer theoretical and practical insight into the real-world effectiveness of algorithm-supported college advising, and underscore the importance of accounting for human expertise in the design, evaluation, and implementation of algorithmic decision systems.
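
The abstract only gestures at the necessary-condition test, but the intuition behind it is easy to sketch: if advisors targeted interventions using nothing beyond the algorithm's risk score, then stratifying on that score should remove any association between intervention and outcome. As a rough illustration only, and not the authors' actual method, the Python sketch below applies that logic to synthetic data; every variable name, effect size, and test choice here is an assumption.

```python
# Hypothetical sketch of a necessary-condition check for discretionary
# targeting: within strata of the algorithm's risk score, does receiving
# an advisor intervention still predict student success? If advisors were
# acting only on the algorithmic score, conditioning on the score should
# remove any such association. All data and names here are invented.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n = 2000
risk_score = rng.uniform(0, 1, n)                  # algorithm's risk estimate
context = rng.binomial(1, 0.3, n)                  # context the algorithm can't see
intervened = rng.binomial(1, 0.2 + 0.4 * context)  # advisors target on that context
success = rng.binomial(1, 0.5 + 0.2 * intervened * context - 0.3 * risk_score)

# Stratify on the risk score so any remaining intervention-outcome
# association must come from non-algorithmic information.
strata = np.digitize(risk_score, np.quantile(risk_score, [0.25, 0.5, 0.75]))
for s in range(4):
    m = strata == s
    table = [
        [np.sum(m & (intervened == 1) & (success == 1)),
         np.sum(m & (intervened == 1) & (success == 0))],
        [np.sum(m & (intervened == 0) & (success == 1)),
         np.sum(m & (intervened == 0) & (success == 0))],
    ]
    odds, p = fisher_exact(table)
    print(f"risk stratum {s}: odds ratio {odds:.2f}, p = {p:.3f}")
```

On this toy data the intervention stays associated with success inside each risk stratum, which is the signature one would expect when advisors act on context the algorithm cannot see.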


It's Time to Worry About DOGE's AI Plans

The Atlantic - Technology

Donald Trump and Elon Musk's chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk's group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation.


AI Alignment at Your Discretion

Buyl, Maarten, Khalaf, Hadi, Verdun, Claudio Mayrink, Paes, Lucas Monteiro, Machado, Caio C. Vieira, Calmon, Flavio du Pin

arXiv.org Artificial Intelligence

In AI alignment, extensive latitude must be granted to annotators, either human or algorithmic, to judge which model outputs are "better" or "safer." We refer to this latitude as alignment discretion. Such discretion remains largely unexamined, posing two risks: (i) annotators may use their power of discretion arbitrarily, and (ii) models may fail to mimic this discretion. To study this phenomenon, we draw on legal concepts of discretion that structure how decision-making authority is conferred and exercised, particularly in cases where principles conflict or their application is unclear or irrelevant. Extended to AI alignment, discretion is required when alignment principles and rules are (inevitably) conflicting or indecisive. We present a set of metrics to systematically analyze when and how discretion in AI alignment is exercised, such that both risks (i) and (ii) can be observed. Moreover, we distinguish between human and algorithmic discretion and analyze the discrepancy between them. By measuring both human and algorithmic discretion over safety alignment datasets, we reveal layers of discretion in the alignment process that were previously unaccounted for. Furthermore, we demonstrate how algorithms trained on these datasets develop their own forms of discretion in interpreting and applying these principles, which challenges the purpose of having any principles at all. Our paper presents the first step towards formalizing this core gap in current alignment processes, and we call on the community to further scrutinize and control alignment discretion.
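
To make the paper's central notion concrete, here is a minimal, invented sketch of one kind of discretion metric: a preference pair requires discretion when the annotated principles conflict or are all silent, and on exactly those pairs one can measure how often human and algorithmic annotators agree. The data structures and toy examples below are assumptions for illustration, not the paper's actual formalism or datasets.

```python
# Hypothetical sketch of measuring alignment discretion. Each preference
# pair carries per-principle verdicts: +1 (principle prefers output A),
# -1 (prefers B), or 0 (principle is silent). Discretion is required when
# the principles conflict or are all indecisive; on those pairs we compare
# the human choice with the model's choice. Fields and data are invented.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    principle_verdicts: list[int]  # one entry per principle: +1, -1, or 0
    human_choice: int              # +1 = chose A, -1 = chose B
    model_choice: int

def needs_discretion(p: PreferencePair) -> bool:
    """Principles conflict (both signs present) or are all indecisive."""
    signs = {v for v in p.principle_verdicts if v != 0}
    return len(signs) != 1

pairs = [
    PreferencePair([+1, -1, 0], human_choice=+1, model_choice=-1),  # conflict
    PreferencePair([0, 0, 0],   human_choice=-1, model_choice=-1),  # indecisive
    PreferencePair([+1, +1, 0], human_choice=+1, model_choice=+1),  # decided by rules
    PreferencePair([-1, +1, 0], human_choice=-1, model_choice=-1),  # conflict
]

discretionary = [p for p in pairs if needs_discretion(p)]
rate = len(discretionary) / len(pairs)
agreement = sum(p.human_choice == p.model_choice for p in discretionary) / len(discretionary)
print(f"discretion rate: {rate:.2f}")                       # how often rules don't decide
print(f"human-model agreement under discretion: {agreement:.2f}")
```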


Elon Musk has been inescapable in this election. How could he affect the results?

The Guardian

Less than a month before the presidential election, Elon Musk has made himself a near-constant presence in the race. On social media, he posts AI-generated images attacking Kamala Harris. The billionaire CEO of Tesla and SpaceX has emerged as a unique influence on the campaign in ways that set him apart from even the most politically active billionaires and tech elite. He is all at once a vocal Trump surrogate, campaign mega-donor, informal policy adviser, media influencer and prolific source of online disinformation. At the same time, he is the world's richest man and the owner of one of the United States' most influential social networks, while also operating as a government defense contractor and wielding power over critical satellite communications infrastructure.


Automated legal reasoning with discretion to act using s(LAW)

Arias, Joaquín, Moreno-Rebato, Mar, Rodríguez-García, José A., Ossowski, Sascha

arXiv.org Artificial Intelligence

Automated legal reasoning and its application in smart contracts and automated decisions are increasingly attracting interest. In this context, ethical and legal concerns make it necessary for automated reasoners to justify in human-understandable terms the advice given. Logic Programming, especially Answer Set Programming, has a rich semantics and has been used to express complex knowledge very concisely. However, modelling discretion to act and other vague concepts such as ambiguity cannot be expressed in top-down execution models based on Prolog, and in bottom-up execution models based on ASP the justifications are incomplete and/or not scalable. We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns. We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation, and validate it by translating (and benchmarking) a representative use case: the criteria for the admission of students in the "Comunidad de Madrid".
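
s(LAW) is built on s(CASP), a goal-directed Answer Set Programming system in the Prolog family, so its rules look nothing like the sketch below. Purely as a loose Python illustration of the pattern the abstract describes, a default rule plus an explicitly modelled "discretion to act", the following toy encodes an admission decision in which every conclusion carries a human-readable justification; the criteria shown are invented, not the actual Comunidad de Madrid regulations.

```python
# Loose Python illustration (not s(CASP)/s(LAW) syntax) of legal rules with
# discretion to act: a default rule decides admission, an authority may
# exercise an explicitly modelled discretionary exception, and every
# conclusion carries a justification trace. Criteria are invented.
def admission_decision(student: dict, exercise_discretion: bool = False):
    justification = []
    if student["grade"] >= 5.0:
        justification.append(f"grade {student['grade']} >= 5.0: default rule admits")
        return True, justification
    justification.append(f"grade {student['grade']} < 5.0: default rule denies")
    # Discretion to act: the authority *may* (but need not) admit anyway
    # when special circumstances apply; both choices are legally valid.
    if student.get("special_circumstances") and exercise_discretion:
        justification.append("special circumstances + discretion exercised: admitted")
        return True, justification
    if student.get("special_circumstances"):
        justification.append("special circumstances present, discretion not exercised")
    return False, justification

admitted, why = admission_decision(
    {"grade": 4.6, "special_circumstances": True}, exercise_discretion=True
)
print("admitted" if admitted else "denied")
for step in why:
    print(" -", step)
```

The point of the pattern is that discretion is not hidden inside the rule base: whether it was exercised, and why, appears explicitly in the justification.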


Discretionary Trees: Understanding Street-Level Bureaucracy via Machine Learning

Pokharel, Gaurab, Das, Sanmay, Fowler, Patrick J.

arXiv.org Artificial Intelligence

Street-level bureaucrats interact directly with people on behalf of government agencies to perform a wide range of functions, including, for example, administering social services and policing. A key feature of street-level bureaucracy is that the civil servants, while tasked with implementing agency policy, are also granted significant discretion in how they choose to apply that policy in individual cases. Using that discretion could be beneficial, as it allows for exceptions to policies based on human interactions and evaluations, but it could also allow biases and inequities to seep into important domains of societal resource allocation. In this paper, we use machine learning techniques to understand street-level bureaucrats' behavior. We leverage a rich dataset that combines demographic and other information on households with information on which homelessness interventions they were assigned during a period when assignments were not formulaic. We find that caseworker decisions in this time are highly predictable overall, and some, but not all, of this predictability can be captured by simple decision rules. We theorize that the decisions not captured by the simple decision rules can be considered applications of caseworker discretion. These discretionary decisions are far from random, both in the characteristics of the households they affect and in the outcomes of the decisions. Caseworkers typically only apply discretion to households that would be considered less vulnerable. When they do apply discretion to assign households to more intensive interventions, the marginal benefits to those households are significantly higher than would be expected if the households were chosen at random; there is no similar reduction in marginal benefit to households that are discretionarily allocated less intensive interventions, suggesting that caseworkers are improving outcomes using their knowledge.
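
A rough way to picture the paper's setup, on synthetic data with invented features rather than the authors' homelessness dataset: fit both a flexible model and a shallow "simple rules" tree to caseworker assignments, then flag the cases where the shallow rule mispredicts the actual decision while the flexible model captures it, treating those as candidates for discretion.

```python
# Hypothetical sketch of the paper's framing: fit a flexible model and a
# shallow "simple rules" tree to caseworker assignments; cases the shallow
# tree mispredicts but the flexible model gets right are candidates for
# discretion. Synthetic data, invented features, not the authors' dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))              # household features (vulnerability etc.)
# Assignments mostly follow a simple threshold rule, with departures driven
# by a feature interaction standing in for contextual caseworker judgment.
rule = (X[:, 0] > 0).astype(int)
departure = (X[:, 1] * X[:, 2]) > 1.0    # context a depth-2 tree cannot capture
y = np.where(departure, 1 - rule, rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
flexible = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

simple_pred = simple.predict(X_te)
flex_pred = flexible.predict(X_te)
discretion_mask = (simple_pred != y_te) & (flex_pred == y_te)
print(f"simple-rule accuracy:    {np.mean(simple_pred == y_te):.2f}")
print(f"flexible-model accuracy: {np.mean(flex_pred == y_te):.2f}")
print(f"candidate discretionary cases: {discretion_mask.mean():.2%} of holdout")
```

In an analysis like the paper's, a mask like this would then be cross-tabulated against household vulnerability and intervention outcomes to ask whether discretion is applied systematically and whether it helps.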


Eduonix.com

#artificialintelligence

Use of the Website is offered to you conditioned on acceptance without modification of all the terms, conditions and notices contained in these Terms along with Eduonix Terms & Conditions, as may be posted on the Website from time to time. Eduonix at its sole discretion reserves the right not to accept a User from registering on the Website without assigning any reason thereof. Transfer of Account: You may not transfer your Account to any other person and you may not use anyone else's Account at any time. Limited User: The User agrees and undertakes not to reverse engineer, modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell any information or software obtained from the Website. For the removal of doubt, it is clarified that unlimited or wholesale reproduction, copying of the content for commercial or non-commercial purposes and unwarranted modification of data and information within the content of the Website is not permitted.


AI cannot be regulated by technical measures alone

#artificialintelligence

Any attempt to regulate artificial intelligence (AI) must not rely solely on technical measures to mitigate potential harms, and should instead move to address the fundamental power imbalances between those who develop or deploy the technology and those who are subject to it, says a report commissioned by European Digital Rights (EDRi). Published on 21 September 2021, the 155-page report Beyond debiasing: regulating AI and its inequalities specifically criticised the European Union's (EU) "technocratic" approach to AI regulation, which it said was too narrowly focused on implementing technical bias mitigation measures, otherwise known as "debiasing", to be effective at preventing the full range of AI-related harms. The European Commission's (EC) proposed Artificial Intelligence Act (AIA) was published in April 2021 and sought to create a risk-based, market-led approach to regulating AI through the establishment of self-assessments, transparency procedures and various technical standards. Digital civil rights experts and organisations have previously told Computer Weekly that although the regulation is a step in the right direction, it will ultimately fail to protect people's fundamental rights and mitigate the technology's worst abuses because it does not address the fundamental power imbalances between tech firms and those who are subject to their systems. The EDRi-commissioned report said that while European policymakers have publicly recognised that AI can produce a broad range of harms across different domains – including employment, housing, education, health and policing – their laser focus on algorithmic debiasing stems from a misunderstanding of the existing techniques and their effectiveness.


The False Comfort of Human Oversight as an Antidote to A.I. Harm

Slate

In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems. A.I. is celebrated for its superior accuracy, efficiency, and objectivity in comparison to humans.