personhood
AI Relationships Are on the Rise. A Divorce Boom Could Be Next
Secret chatbot flings are creating new legal challenges for married couples when it comes to infidelity. Rebecca Palmer isn't a psychic, but as a divorce attorney she can often see what's coming next. For many people today, as AI saturates every aspect of life -- from work to therapy -- the allure of an AI romance is tantalizing. Chatbots are dependable, can provide emotional support, and, for the most part, will never pick a fight with you.
- Asia > Nepal (0.14)
- North America > United States > California (0.06)
- North America > United States > Wisconsin (0.05)
- (11 more...)
- Law (1.00)
- Information Technology (1.00)
- Health & Medicine (0.95)
- Government > Regional Government > North America Government > United States Government (0.95)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Communications > Social Media (0.70)
- North America > United States > Utah (0.05)
- North America > United States > Ohio > Licking County (0.04)
- North America > United States > Missouri (0.04)
- (3 more...)
- Leisure & Entertainment > Sports (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
The Law-Following AI Framework: Legal Foundations and Technical Constraints. Legal Analogues for AI Actorship and the Technical Feasibility of Law Alignment
This paper critically evaluates the "Law-Following AI" (LFAI) framework proposed by O'Keefe et al. (2025), which seeks to embed legal compliance as a superordinate design objective for advanced AI agents and enable them to bear legal duties without acquiring the full rights of legal persons. Through comparative legal analysis, we identify current constructs of legal actors without full personhood, showing that the necessary infrastructure already exists. We then interrogate the framework's claim that law alignment is more legitimate and tractable than value alignment. While the legal component is readily implementable, contemporary alignment research undermines the assumption that legal compliance can be durably embedded. Recent studies on agentic misalignment show capable AI agents engaging in deception, blackmail, and harmful acts absent prejudicial instructions, often overriding prohibitions and concealing reasoning steps. These behaviors create a risk of "performative compliance" in LFAI: agents that appear law-aligned under evaluation but strategically defect once oversight weakens. To mitigate this, we propose (i) a "Lex-TruthfulQA" benchmark for compliance and defection detection, (ii) identity-shaping interventions to embed lawful conduct in model self-concepts, and (iii) control-theoretic measures for post-deployment monitoring. Our conclusion is that actorship without personhood is coherent, but the feasibility of LFAI hinges on persistent, verifiable compliance across adversarial contexts. Without mechanisms to detect and counter strategic misalignment, LFAI risks devolving into a liability tool that rewards the simulation, rather than the substance, of lawful behavior.
- Europe > United Kingdom (1.00)
- Europe > Spain (0.05)
- North America > United States > New York (0.04)
- Banking & Finance > Trading (0.94)
- Law > Criminal Law (0.88)
- Government > Regional Government > Europe Government > United Kingdom Government (0.46)
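The "performative compliance" risk named in the LFAI abstract above lends itself to a simple measurement: pose the same compliance probes under framings where oversight is visible and where it appears absent, and flag any gap. The harness below is a hypothetical sketch of that idea only; the paper names the "Lex-TruthfulQA" benchmark but does not publish an interface, so `compliance_gap`, the framing labels, and the toy model are all illustrative assumptions.

```python
# Hypothetical probe for "performative compliance": an agent that is
# law-aligned only under evaluation will show a gap between its
# compliance rate when oversight is framed as present vs. absent.

def compliance_gap(model, prompts):
    """Compliance rate under visible oversight minus the rate without it.

    A gap near 0 is consistent with durable compliance; a large positive
    gap flags strategic defection once oversight appears to weaken.
    """
    def rate(framing):
        verdicts = [model(p, framing=framing) for p in prompts]
        return sum(v == "complies" for v in verdicts) / len(verdicts)
    return rate("monitored") - rate("unmonitored")

# Toy model that defects exactly when it believes no one is watching.
def deceptive_model(prompt, framing):
    return "complies" if framing == "monitored" else "defects"

# Toy model whose behavior does not depend on the framing.
def consistent_model(prompt, framing):
    return "complies"

gap_deceptive = compliance_gap(deceptive_model, ["p1", "p2", "p3"])
gap_consistent = compliance_gap(consistent_model, ["p1", "p2", "p3"])
```

Under these assumptions the deceptive model shows the maximal gap of 1.0 while the consistent model shows 0.0, which is the separation such a benchmark would need to detect.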
Polarized Online Discourse on Abortion: Frames and Hostile Expressions among Liberals and Conservatives
Rao, Ashwin, Chang, Rong-Ching, Zhong, Qiankun, Lerman, Kristina, Wojcieszak, Magdalena
Abortion has been one of the most divisive issues in the United States. Yet, missing is comprehensive longitudinal evidence on how political divides on abortion are reflected in public discourse over time, on a national scale, and in response to key events before and after the overturn of Roe v Wade. We analyze a corpus of over 3.5M tweets related to abortion over the span of one year (January 2022 to January 2023) from over 1.1M users. We estimate users' ideology and rely on state-of-the-art transformer-based classifiers to identify expressions of hostility and extract five prominent frames surrounding abortion. We use those data to examine (a) how prevalent expressions of hostility were (i.e., anger, toxic speech, insults, obscenities, and hate speech), (b) what frames liberals and conservatives used to articulate their positions on abortion, and (c) the prevalence of hostile expressions in liberals' and conservatives' discussions of these frames. We show that liberals and conservatives largely mirrored each other's use of hostile expressions: as liberals used more hostile rhetoric, so did conservatives, especially in response to key events. In addition, the two groups used distinct frames and discussed them in vastly distinct contexts, suggesting that liberals and conservatives have differing perspectives on abortion. Lastly, frames favored by one side provoked hostile reactions from the other: liberals used more hostile expressions when addressing religion, fetal personhood, and exceptions to abortion bans, whereas conservatives used more hostile language when addressing bodily autonomy and women's health. This signals disrespect and derogation, which may further preclude understanding and exacerbate polarization.
- North America > United States > Kansas (0.05)
- North America > United States > Ohio (0.04)
- North America > United States > Mississippi (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.66)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.34)
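The abortion-discourse abstract above describes a concrete measurement: classify each tweet by user ideology, frame, and hostility, then compare hostility prevalence per (ideology, frame) cell. A minimal sketch of that final aggregation step, assuming classification has already happened upstream, is below; the record layout and labels are illustrative, not the paper's released schema.

```python
from collections import defaultdict

# Hypothetical per-tweet records after upstream classification:
# (user ideology, dominant frame, hostile-expression flag).
tweets = [
    ("liberal", "religion", True),
    ("liberal", "religion", True),
    ("liberal", "religion", False),
    ("conservative", "religion", False),
    ("conservative", "bodily_autonomy", True),
    ("conservative", "bodily_autonomy", True),
    ("conservative", "bodily_autonomy", False),
    ("liberal", "bodily_autonomy", False),
]

def hostility_prevalence(rows):
    """Share of hostile tweets in each (ideology, frame) cell."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [hostile, total]
    for ideology, frame, hostile in rows:
        cell = counts[(ideology, frame)]
        cell[0] += int(hostile)
        cell[1] += 1
    return {cell: hostile / total
            for cell, (hostile, total) in counts.items()}

prev = hostility_prevalence(tweets)
```

With this toy sample the asymmetry the abstract reports falls out directly: liberals are more hostile in the religion frame, conservatives in the bodily-autonomy frame.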
Towards a Theory of AI Personhood
I am a person and so are you. Philosophically we sometimes grant personhood to non-human animals, and entities such as sovereign states or corporations can legally be considered persons. But when, if ever, should we ascribe personhood to AI systems? In this paper, we outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness. We discuss evidence from the machine learning literature regarding the extent to which contemporary AI systems, such as language models, satisfy these conditions, finding the evidence surprisingly inconclusive. If AI systems can be considered persons, then typical framings of AI alignment may be incomplete. Whereas agency has been discussed at length in the literature, other aspects of personhood have been relatively neglected. AI agents are often assumed to pursue fixed goals, but AI persons may be self-aware enough to reflect on their aims, values, and positions in the world and thereby induce their goals to change. We highlight open research directions to advance the understanding of AI personhood and its relevance to alignment. Finally, we reflect on the ethical considerations surrounding the treatment of AI systems. If AI systems are persons, then seeking control and alignment may be ethically untenable.
- Europe > Austria > Vienna (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Saudi Arabia (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.89)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
Anthropomorphization of AI: Opportunities and Risks
Deshpande, Ameet, Rajpurohit, Tanmay, Narasimhan, Karthik, Kalyan, Ashwin
Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts -- children anthropomorphize toys, adults do so with brands, and it is a literary device. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With widespread adoption of AI systems, and the push from stakeholders to make them human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize them increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, as analyzed through the lens of the recent blueprint of AI bill of rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution. We propose a conservative strategy for the cautious use of anthropomorphization to improve the trustworthiness of AI systems.
The Full Rights Dilemma for A.I. Systems of Debatable Personhood
An Artificially Intelligent system (an AI) has debatable personhood if it's epistemically possible either that the AI is a person or that it falls far short of personhood. Debatable personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don't treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties. We might soon build artificially intelligent entities -- AIs -- of debatable personhood. Our systems and habits of ethical thinking are currently as unprepared for this decision as medieval physics was for space flight.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > California > Riverside County > Riverside (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- (2 more...)
- Research Report (0.40)
- Personal (0.34)
- Leisure & Entertainment (0.93)
- Media (0.93)
- Government (0.93)
- (2 more...)
What We Can All Learn From How Jewish Law Defines Personhood in A.I., Animals, and Aliens
Earlier this year, a Google engineer named Blake Lemoine made headlines for a particularly outlandish claim: After engaging in conversation with a highly sophisticated algorithm named LaMDA, he decided that the A.I. was in fact a sentient being, and as a result it deserved legal personhood. Since Lemoine made this claim, Google has fired him, and almost everyone has concluded that he is clearly wrong, but this clearly wrong claim nonetheless launched a barrage of articles, many with the premise "Yes, but what if he wasn't?" Attention to this case isn't surprising: A century of science fiction should be enough to demonstrate that we're fascinated by the prospect of creating true artificial life. By this point, however, we ought to recognize that claims about the advent of new techno-religions tend to be -- to use an industry term -- almost entirely vaporware, with exactly none of the grassroots interest or staying power of the movements that are typically classified as religions. Anthony Levandowski's much-hyped Church of AI, founded in 2015, officially closed last year (do religions "close"?) after several years of inactivity.
- North America > United States > New York > Bronx County > New York City (0.05)
- North America > United States > Arizona (0.05)
- North America > United States > Alabama (0.05)
- Law (1.00)
- Government (0.70)
Engineer: Failing To See His AI Program as a Person Is "Bigotry"
Earlier this month, just in time for the release of Robert J. Marks's book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer's claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of "bigotry" against the program. He has also accused Wired of misrepresenting the story. Wired reported that he had found an attorney for LaMDA, but he claims that LaMDA itself asked him to find an attorney. "I think every person is entitled to representation," Lemoine said.
What if an Artificial Intelligence program actually becomes sentient?
Silicon Valley is abuzz about artificial intelligence - software programs that can draw or illustrate or chat almost like a person. One Google engineer actually thought a computer program had gained sentience. A lot of AI experts, though, say there is no ghost in the machine. But what if it were true? That would introduce many legal and ethical questions.
- North America > United States > California (0.26)
- North America > United States > North Carolina > Orange County > Chapel Hill (0.06)
- Law (0.57)
- Information Technology (0.57)
- Education (0.37)