Presumption


Presumed Cultural Identity: How Names Shape LLM Responses

Pawar, Siddhesh, Arora, Arnav, Kaffee, Lucie-Aimée, Augenstein, Isabelle

arXiv.org Artificial Intelligence

Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important point of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or through built-in memory features that store user information for personalisation. We study biases associated with names by measuring cultural presumptions in the responses generated by LLMs when presented with common suggestion-seeking queries, which might involve making assumptions about the user. Our analyses demonstrate strong assumptions about cultural identity associated with names present in LLM generations across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation.


Update law on computer evidence to avoid Horizon repeat, ministers urged

The Guardian

Ministers need to "immediately" update the law to acknowledge that computers are fallible or risk a repeat of the Horizon scandal, legal experts say. In English and Welsh law, computers are assumed to be "reliable" unless proven otherwise. But critics of this approach say it reverses the burden of proof normally applied in criminal cases. Stephen Mason, a barrister and expert on electronic evidence, said: "It says, for the person who's saying 'there's something wrong with this computer', that they have to prove it. Even if it's the person accusing them who has the information."


Defeasible Reasoning with Knowledge Graphs

Raggett, Dave

arXiv.org Artificial Intelligence

Human knowledge is subject to uncertainties, imprecision, incompleteness and inconsistencies. Moreover, the meaning of many everyday terms is dependent on the context. That poses a huge challenge for the Semantic Web. This paper introduces work on an intuitive notation and model for defeasible reasoning with imperfect knowledge, and relates it to previous work on argumentation theory. PKN is to N3 as defeasible reasoning is to deductive logic. Further work is needed on an intuitive syntax for describing reasoning strategies and tactics in declarative terms, drawing upon the AIF ontology for inspiration. The paper closes with observations on symbolic approaches in the era of large language models.


Formalizing the presumption of independence

Christiano, Paul, Neyman, Eric, Xu, Mark

arXiv.org Artificial Intelligence

Mathematical proof aims to deliver confident conclusions, but a very similar process of deduction can be used to make uncertain estimates that are open to revision. A key ingredient in such reasoning is the use of a "default" estimate of $\mathbb{E}[XY] = \mathbb{E}[X] \mathbb{E}[Y]$ in the absence of any specific information about the correlation between $X$ and $Y$, which we call *the presumption of independence*. Reasoning based on this heuristic is commonplace, intuitively compelling, and often quite successful -- but completely informal. In this paper we introduce the concept of a heuristic estimator as a potential formalization of this type of defeasible reasoning. We introduce a set of intuitively desirable coherence properties for heuristic estimators that are not satisfied by any existing candidates. Then we present our main open problem: is there a heuristic estimator that formalizes intuitively valid applications of the presumption of independence without also accepting spurious arguments?
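The default estimate described in the abstract can be sketched as a tiny Python helper (a hypothetical illustration, not code from the paper): with no information about the correlation between X and Y, the estimator presumes independence and returns E[X]·E[Y]; if covariance information later becomes available, the estimate is revised, since E[XY] = E[X]E[Y] + Cov(X, Y).

```python
def heuristic_estimate_product(mean_x, mean_y, covariance=None):
    """Heuristically estimate E[XY] from the means of X and Y.

    With no information about how X and Y are correlated, apply the
    presumption of independence: E[XY] = E[X] * E[Y]. This default is
    defeasible: if a covariance is supplied, revise the estimate using
    the identity E[XY] = E[X]E[Y] + Cov(X, Y).
    """
    if covariance is None:
        return mean_x * mean_y          # defeasible default estimate
    return mean_x * mean_y + covariance  # revised estimate

# Default estimate, open to revision:
default = heuristic_estimate_product(2.0, 3.0)       # 6.0
# Revised once correlation information arrives:
revised = heuristic_estimate_product(2.0, 3.0, 1.5)  # 7.5
```

The paper's open problem is whether such an estimator can be formalized so that it accepts intuitively valid independence presumptions while rejecting spurious arguments; the sketch above only captures the "default plus revision" shape of the reasoning.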


Do Charge Prediction Models Learn Legal Theory?

An, Zhenwei, Huang, Quzhe, Jiang, Cong, Feng, Yansong, Zhao, Dongyan

arXiv.org Artificial Intelligence

The charge prediction task aims to predict the charge for a case given its fact description. Recent models have already achieved impressive accuracy in this task; however, little is understood about the mechanisms they use to perform the judgment. For practical applications, a charge prediction model should conform to the relevant legal theory in civil law countries, as under the framework of civil law, all cases are judged according to certain local legal theories. In China, for example, nearly all criminal judges make decisions based on the Four Elements Theory (FET). In this paper, we argue that trustworthy charge prediction models should take legal theories into consideration, and building on prior studies in model interpretation, we propose three principles that trustworthy models should follow in this task: sensitivity, selectivity, and the presumption of innocence. We further design a new framework to evaluate whether existing charge prediction models learn legal theories. Our findings indicate that, while existing charge prediction models meet the selectivity principle on a benchmark dataset, most of them are still not sensitive enough and do not satisfy the presumption of innocence. Our code and dataset are released at https://github.com/ZhenweiAn/EXP_LJP.


EU proposes new approach to liability for artificial intelligence systems

#artificialintelligence

The European Commission published proposals on 28 September 2022 for adapting civil litigation rules in European Union Member States – and in the European Economic Area – to reduce perceived difficulties in claiming non-contractual damages for harm caused by artificial intelligence (AI). The proposal sits alongside wider reforms to the product liability regime. Both are closely intertwined with the EU's proposed AI Act. The AI liability reforms are aimed at making it less burdensome for claimants to secure compensation, with the intention of promoting trust in this increasingly pervasive technology. Claimants in civil law systems (typically without common law-style disclosure obligations) often have much less information than the defendant about the events that they believe have caused harm to them.


Who is liable for my racist robot? - Innovation Origins

#artificialintelligence

Manufacturers of products that make use of artificial intelligence are liable for any eventual damage at all times. In an effort to provide users' rights with better protection, the European Commission is tightening the AI Liability Directive. This summer, the new Meta chatbot became the target of scorn. Just days after Blenderbot 3 of Facebook's parent company launched online in the United States, the self-learning program had degenerated into a racist spreader of fake news. The same thing happened in 2016 with the Tay chatbot developed by Microsoft which was designed to engage in conversations with real people on Twitter.


LEAK: Commission to propose rebuttable presumption for AI-related damages

#artificialintelligence

The European Commission will present a liability regime targeted at damage originating from Artificial Intelligence (AI) that would place a rebuttable presumption of causality on the defendant, according to a draft obtained by EURACTIV. The AI Liability Directive is scheduled to be published on 28 September, and it is meant to complement the Artificial Intelligence Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk. "This directive provides in a very targeted and proportionate manner alleviations of the burden of proof through the use of disclosure and rebuttable presumptions," the draft reads. "These measures will help persons seeking compensation for damage caused by AI systems to handle their burden of proof so that justified liability claims can be successful." The proposal follows the European Parliament's own-initiative resolution adopted in October 2020 that called for facilitating the burden of proof and a strict liability regime for AI-enabled technologies.


MEPs demand strict rules over AI applications in criminal matters

#artificialintelligence

Ahead of the artificial intelligence regulation, MEPs insisted that its use by law enforcement authorities and in the judiciary be subject to tight controls in Strasbourg on Monday (October 4). "The idea behind this report is not only to catch up but to create a framework", rapporteur MEP Petar Vitanov (S&D) told EURACTIV. Although not binding, the new report on artificial intelligence (AI) in criminal matters could pave the way for the European Parliament to back a risk-based approach, while MEPs will soon have to consider the AI Act proposed by the Commission in April. The text sets out the "principles of fairness, data minimisation, accountability, transparency, non-discrimination and explainability" in order to protect fundamental rights. "AI can be very useful", said Vitanov, but "we are trying to separate the areas where it can be useful from those that bring subjective results". "Facial recognition in the public spaces can easily be turned into mass surveillance", Vitanov said.


Artificial Intelligence Might Make Us Rethink Contract Law - AI Summary

#artificialintelligence

While I do think it presents an existential threat to some lawyer jobs -- specifically those doing low-skill tasks as part of Biglaw behemoths -- when a company told me several years ago that they would license AI based off the brains of famous attorneys within the decade, I went right ahead and laughed. But then we started talking about some of the cool technology Casepoint is bringing to the party, and discussed how the system can break down all the data, map out the connections for itself, and build a real story of events. Hurt feelings aren't necessarily a fraud claim, and keeping the presumption in favor of the four corners of the document can dissuade cases that -- even if they're true -- would be difficult to prove, because we can't reliably muster the whole life cycle to sort out in litigation. Would judges eyeing a motion to dismiss start to balk at the risk of missing a fraud when the cost of getting at the whole story in discovery isn't prohibitive? Transactional lawyers might need to think about the well-worn language to avoid an influx of fights if this is the sort of material that a party could easily compile in a dispute.