entitlement


AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use

Shaayesteh, Mayssam Tarighi, Esfahani, Sara Memarian, Mohit, Hossein

arXiv.org Artificial Intelligence

Emerging artificial intelligence (AI) technology has a pronounced impact on higher education, addressing existing challenges in educational settings such as larger school sizes and the scarcity of elite instructors. In all these areas, it has been noted that AI has led to massive changes: some estimates suggest that at least 80 percent of workers will have the quantity and quality of at least some of their tasks influenced (for the better) by AI (Canagasuriam & Lukacik, 2024). In educational contexts, psychological empowerment has been shown to mitigate the combined effects of emotional exhaustion and depression, demonstrating that social relationships and leadership can bolster mental health in institutions (Schermuly & Meyer, 2016). However, this is not to say that AI is without dangers; cybercriminals have also turned to AI to bolster their attacks, for example in the form of spear phishing or malware installation, showcasing how AI can be abused as a tool to harm enterprises (Mirsky et al., 2023). Psychological empowerment -- comprising meaning, competence, self-determination, and impact -- has strong effects on person-environment interactions, which ultimately influence how individuals feel about and perform their jobs (Gregory et al., 2010).


ALKAFI-LLAMA3: Fine-Tuning LLMs for Precise Legal Understanding in Palestine

Qasem, Rabee, Hendi, Mohannad, Tantour, Banan

arXiv.org Artificial Intelligence

Large language models (LLMs) have gained significant attention over the past few years, particularly following the emergence of ChatGPT, both from researchers Movva et al. [2024] and in the private sector. This surge of interest has helped redefine how various domains utilize AI, such as medicine Alghamdi and Mostafa [2024], Yuan et al. [2024], finance Xie et al. [2024], Malaysha et al. [2024], and even agriculture Gupta et al. [2024]. These advancements demonstrate the profound potential of AI to transform industries, driving innovation and efficiency in ways previously unimaginable. However, one domain that still has room for growth and the potential to bring about significant change is the legal domain Martin et al. [2024], Maree et al. [2024]. Although sectors such as healthcare and finance have rapidly adopted AI to address their unique challenges, the legal industry has been relatively slow to embrace these technologies Legg and Bell [2020]. The complex nature of legal language, coupled with jurisdictional variations, the high stakes involved, and the lack of AI regulations de Almeida et al. [2021], Nadjia [2024], has presented significant obstacles to developing effective AI-powered legal


The inversion paradox, and classification of fairness notions

Feige, Uriel

arXiv.org Artificial Intelligence

Several different fairness notions have been introduced in the context of fair allocation of goods. In this manuscript, we compare several fairness notions that are used in settings in which agents have arbitrary (perhaps unequal) entitlements to the goods. These include the proportional share, the anyprice share, the weighted maximin share, weighted envy freeness, maximum weight Nash social welfare, and competitive equilibrium. We perform this comparison in two settings: that of a divisible homogeneous good and arbitrary valuations, and that of indivisible goods and additive valuations. Different fairness notions are not always compatible with each other, and might dictate selecting different allocations. The purpose of our work is to clarify various properties of fairness notions, so as to allow, when needed, an educated choice among them. Also, such a study may motivate introducing new fairness notions, or modifications to existing fairness notions. Among other properties, we introduce definitions for monotonicity that postulate that having higher entitlement should be better for the agent than having lower entitlement. Some monotonicity notions, such as population monotonicity and weight monotonicity, appeared in previous work, but we prefer to consider other monotonicity properties that we refer to as global monotonicity and individual monotonicity. We find that some of the fairness notions (but not all) violate our monotonicity properties in a strong sense that we refer to as the inversion paradox. Under this paradox, a fairness notion enforces that the value received by an agent decreases when the entitlement of the agent increases.
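The simplest of the notions mentioned in this abstract, the proportional share for a divisible homogeneous good, can be sketched in a few lines: each agent is owed a fraction of the good equal to its entitlement divided by the total entitlement. This is an illustrative sketch, not code from the paper; the function name is hypothetical.

```python
def proportional_shares(entitlements, good_size=1.0):
    """Proportional share of a divisible homogeneous good:
    each agent gets good_size * (own entitlement / total entitlement)."""
    total = sum(entitlements)
    return [good_size * e / total for e in entitlements]

# Three agents; the second has double the entitlement of the others.
shares = proportional_shares([1, 2, 1])
print(shares)  # [0.25, 0.5, 0.25]
```

Note that the proportional share is monotone by construction: raising one agent's entitlement (holding the others fixed) raises its fraction, so it cannot exhibit the inversion paradox. The paper's point is that this is not true of every fairness notion in the list.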


Agent-Specific Deontic Modality Detection in Legal Language

Sancheti, Abhilasha, Garimella, Aparna, Srinivasan, Balaji Vasan, Rudinger, Rachel

arXiv.org Artificial Intelligence

Legal documents are typically long and written in legalese, which makes it particularly difficult for laypeople to understand their rights and duties. While natural language understanding technologies can be valuable in supporting such understanding in the legal domain, the limited availability of datasets annotated for deontic modalities in the legal domain, due to the cost of hiring experts and privacy issues, is a bottleneck. To this end, we introduce LEXDEMOD, a corpus of English contracts annotated with deontic modality expressed with respect to a contracting party or agent, along with the modal triggers. We benchmark this dataset on two tasks: (i) agent-specific multi-label deontic modality classification, and (ii) agent-specific deontic modality and trigger span detection using Transformer-based (Vaswani et al., 2017) language models. Transfer learning experiments show that the linguistic diversity of modal expressions in LEXDEMOD generalizes reasonably from lease to employment and rental agreements. A small case study indicates that a model trained on LEXDEMOD can detect red flags with high recall. We believe our work offers a new research direction for deontic modality detection in the legal domain.


Scientists identify key traits of A**HOLES including manipulation, aggression, and entitlement

Daily Mail - Science & tech

Whether it's a horrible manager at work or a particularly unlikeable ex-partner, everyone knows at least one person they'd describe as an a**hole. Now, scientists from the Franklin College of Arts and Sciences have revealed the key characteristics of a**holes – and say middle-aged men are most likely to have them. The core traits include manipulation, aggression, and entitlement, as well as irresponsibility and anger. The 'Big Five' personality traits are: Openness - People who are generally open have a higher degree of intellectual curiosity and creativity. They are also more unpredictable and likely to be involved in risky behaviour such as drug taking.


Towards Accountability in the Use of Artificial Intelligence for Public Administrations

Loi, Michele, Spielkamp, Matthias

arXiv.org Artificial Intelligence

We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally-driven systems. Imperfect delegation challenges human accountability. We hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be both instrumentally ethically valuable and required as a matter of deontology from the principle of democratic self-government. We analyze the regulatory content of 16 guideline documents about the use of AI in the public sector, by mapping their requirements to those of our philosophical account of accountability, and conclude that while some guidelines refer to processes that amount to auditing, it seems that the debate would benefit from more clarity about the nature of the entitlement of auditors and the goals of auditing, also in order to develop ethically meaningful standards with respect to which different forms of auditing can be evaluated and compared.


Can You Pet The Dog? In many games, and in this article, you can.

Washington Post - Technology News

With every new generation of consoles and components, video games grow closer and closer to replicating reality. From the glistening sweat on star athletes' faces in sports franchises like "Madden" and "NBA 2K," to the soft swaying of grass in samurai thriller "Ghost of Tsushima," game-makers are always leveraging the latest in granular detail to sell the immersive power of the medium. Tristan Cooper, who owns the Twitter account "Can You Pet the Dog?," never set out to create a social media juggernaut. Rather, he was just trying to point out what he felt was a common quirk of many high-profile games: While many featured dogs, wolves and other furry creatures as hostile foes of the protagonist, those that did feature cuddly animal friends rarely let you pet them. Cooper says the account was particularly inspired by his early experience with online shooter "The Division 2." "'The Division 2's' apocalyptic streets were rife with frightened dogs that you could not console or help in any way," he wrote in an email to The Washington Post.


Online Learning Demands in Max-min Fairness

Kandasamy, Kirthevasan, Sela, Gur-Eyal, Gonzalez, Joseph E, Jordan, Michael I, Stoica, Ion

arXiv.org Machine Learning

We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof, but when users do not know their resource requirements. The mechanism is repeated for multiple rounds and a user's requirements can change on each round. At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time. Such situations are common in the shared usage of a compute cluster among many users in an organisation, where all teams may not precisely know the amount of resources needed to execute their jobs. By understating their requirements, users will receive less than they need and consequently not achieve their goals. By overstating them, they may siphon away precious resources that could be useful to others in the organisation. We formalise this task of online learning in fair division via notions of efficiency, fairness, and strategy-proofness applicable to this setting, and study this problem under three types of feedback: when the users' observations are deterministic, when they are stochastic and follow a parametric model, and when they are stochastic and nonparametric. We derive mechanisms inspired by the classical max-min fairness procedure that achieve these requisites, and quantify the extent to which they are achieved via asymptotic rates. We corroborate these insights with an experimental evaluation on synthetic problems and a web-serving task.
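The classical max-min fairness procedure that inspires these mechanisms can be sketched via progressive filling: process demands from smallest to largest, fully satisfying small demands and splitting the remaining capacity evenly among the rest. This is a sketch of the classical baseline only, not of the paper's online learning mechanisms; the function name is hypothetical.

```python
def max_min_fair(capacity, demands):
    """Classical max-min fair allocation of a divisible resource.

    Progressive filling: agents with small demands are fully satisfied;
    each remaining agent gets an equal split of the leftover capacity.
    """
    alloc = {}
    remaining = capacity
    # Visit agents in order of increasing demand.
    for i, d in sorted(enumerate(demands), key=lambda x: x[1]):
        n_left = len(demands) - len(alloc)
        fair_split = remaining / n_left
        alloc[i] = min(d, fair_split)
        remaining -= alloc[i]
    return [alloc[i] for i in range(len(demands))]

# Capacity 10: the small demand (2) is met; the rest is split 4/4.
print(max_min_fair(10, [2, 8, 8]))  # [2, 4.0, 4.0]
```

One reason this procedure underlies strategy-proof mechanisms: overstating a demand beyond one's fair split changes nothing (the allocation is capped at the split anyway), while understating it simply yields less, echoing the incentives described in the abstract.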


A Defeasible Calculus for Zetetic Agents

Millson, Jared

arXiv.org Artificial Intelligence

The study of defeasible reasoning unites epistemologists with those working in AI, in part, because both are interested in epistemic rationality. While it is traditionally thought to govern the formation and (with)holding of beliefs, epistemic rationality may also apply to the interrogative attitudes associated with our core epistemic practice of inquiry, such as wondering, investigating, and curiosity. Since generally intelligent systems should be capable of rational inquiry, AI researchers have a natural interest in the norms that govern interrogative attitudes. Following its recent coinage, we use the term "zetetic" to refer to the properties and norms associated with the capacity to inquire. In this paper, we argue that zetetic norms can be modeled via defeasible inferences to and from questions---a.k.a. erotetic inferences---in a manner similar to the way norms of epistemic rationality are represented by defeasible inference rules. We offer a sequent calculus that accommodates the unique features of "erotetic defeat" and that exhibits the computational properties needed to inform the design of zetetic agents. The calculus presented here is an improved version of the one presented in Millson (2019), extended to cover a new class of defeasible erotetic inferences.
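The core idea of a defeasible inference rule, which the paper extends to questions, can be illustrated with a toy forward-chaining engine: a rule fires when its premises hold and none of its defeaters do. This is a generic illustration of defeat, not the paper's sequent calculus; all names are hypothetical.

```python
def defeasibly_infer(facts, rules):
    """Apply defeasible rules to a fact set until a fixed point.

    Each rule is (premises, defeaters, conclusion): it fires only if
    all premises are derived and no defeater is derived.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, defeaters, conclusion in rules:
            if (premises <= derived and not (defeaters & derived)
                    and conclusion not in derived):
                derived.add(conclusion)
                changed = True
    return derived

# The textbook default: "birds fly, unless they are penguins."
rules = [({"bird"}, {"penguin"}, "flies")]
print(sorted(defeasibly_infer({"bird"}, rules)))             # ['bird', 'flies']
print(sorted(defeasibly_infer({"bird", "penguin"}, rules)))  # ['bird', 'penguin']
```

The sketch handles only static defeat; retracting a conclusion when a defeater arrives later, and the analogous "erotetic defeat" of inferences to questions, is exactly what the paper's calculus is designed to treat carefully.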


Fair Allocation of Indivisible Goods to Asymmetric Agents

Farhadi, Alireza, Ghodsi, Mohammad, Hajiaghayi, Mohammad Taghi, Lahaie, Sébastien, Pennock, David, Seddighin, Masoud, Seddighin, Saeed, Yami, Hadi

Journal of Artificial Intelligence Research

We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem is a generalization of the work by Procaccia and Wang (2014) wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show an almost fair (constant approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee. Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value he wishes to receive in a fair allocation. Although this assumption might seem without loss of generality, we show it enables us to find a 1/2 approximation fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
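The fair-share benchmark against which the paper's 1/n result is stated can be made concrete for additive valuations: an agent with a given entitlement aims for that fraction of its total value for all goods. The sketch below computes this weighted proportional target; it is an illustrative benchmark, not the paper's allocation algorithm, and the names and numbers are made up.

```python
def weighted_fair_share(valuation, entitlement, total_entitlement):
    """An agent's proportional fair-share target under additive valuations:
    (own entitlement / total entitlement) * (agent's value for all goods)."""
    return (entitlement / total_entitlement) * sum(valuation)

# Two agents with entitlements 1 and 2, additive valuations over 3 goods.
vals = {"a": [4, 2, 6], "b": [3, 3, 6]}
ents = {"a": 1, "b": 2}
total = sum(ents.values())
targets = {i: weighted_fair_share(vals[i], ents[i], total) for i in vals}
print(targets)  # {'a': 4.0, 'b': 8.0}
```

With indivisible goods no allocation may reach these targets exactly; the paper shows that in the unequal-entitlement case one cannot in general guarantee agents more than a 1/n fraction of such a fair share, in contrast to the constant-factor guarantee known for equal entitlements.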