liability
CARLoS: Retrieval via Concise Assessment Representation of LoRAs at Scale
Sarfaty, Shahar, Haviv, Adi, Hacohen, Uri, Elkin-Koren, Niva, Livni, Roi, Bermano, Amit H.
The rapid proliferation of generative components, such as LoRAs, has created a vast but unstructured ecosystem. Existing discovery methods depend on unreliable user descriptions or biased popularity metrics, hindering usability. We present CARLoS, a large-scale framework for characterizing LoRAs without requiring additional metadata. Analyzing over 650 LoRAs, we employ each in image generation across a variety of prompts and seeds as a credible way to assess its behavior. Using CLIP embeddings and their difference from a base-model generation, we concisely define a three-part representation: Direction, defining the semantic shift; Strength, quantifying the significance of the effect; and Consistency, quantifying how stable the effect is. Using these representations, we develop an efficient retrieval framework that semantically matches textual queries to relevant LoRAs while filtering overly strong or unstable ones, outperforming textual baselines in automated and human evaluations. While retrieval is our primary focus, the same representation also supports analyses linking Strength and Consistency to the legal notions of substantiality and volition, key considerations in copyright, positioning CARLoS as a practical system with broader relevance for LoRA analysis.
- North America > United States (0.68)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Law > Intellectual Property & Technology Law (1.00)
- Government (0.93)
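The three-part representation can be sketched numerically. In this minimal sketch, the function name `lora_signature` and the exact aggregation are assumptions (the abstract only states that the representation is derived from CLIP embedding differences against base-model generations): Direction is the unit mean shift, Strength its magnitude, and Consistency the average agreement of individual shifts with that direction.

```python
import numpy as np

def lora_signature(base_embs, lora_embs):
    """Summarize a LoRA's effect from paired CLIP embeddings.

    base_embs, lora_embs: (n_prompts, d) arrays of CLIP image embeddings
    for generations without and with the LoRA (same prompts and seeds).
    """
    deltas = lora_embs - base_embs                  # per-generation semantic shift
    mean_delta = deltas.mean(axis=0)
    strength = float(np.linalg.norm(mean_delta))    # magnitude of the average shift
    direction = mean_delta / (strength + 1e-8)      # unit semantic direction
    # Consistency: mean cosine agreement of individual shifts with the direction.
    norms = np.linalg.norm(deltas, axis=1) + 1e-8
    consistency = float((deltas @ direction / norms).mean())
    return direction, strength, consistency
```

A LoRA whose generations all shift the same way yields Consistency near 1; scattered, prompt-dependent shifts pull it toward 0.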
Are Foundation Models Useful for Bankruptcy Prediction?
Kostrzewa, Marcin, Furman, Oleksii, Furman, Roman, Tomczak, Sebastian, Zięba, Maciej
Foundation models have shown promise across various financial applications, yet their effectiveness for corporate bankruptcy prediction remains systematically unevaluated against established methods. We study bankruptcy forecasting using Llama-3.3-70B-Instruct and TabPFN, evaluated on large, highly imbalanced datasets of over one million company records from the Visegrád Group. We provide the first systematic comparison of foundation models against classical machine learning baselines for this task. Our results show that models such as XGBoost and CatBoost consistently outperform foundation models across all prediction horizons. LLM-based approaches suffer from unreliable probability estimates, undermining their use in risk-sensitive financial settings. TabPFN, while competitive with simpler baselines, requires substantial computational resources with costs not justified by performance gains. These findings suggest that, despite their generality, current foundation models remain less effective than specialized methods for bankruptcy forecasting.
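Since the abstract flags unreliable probability estimates as the failure mode, evaluation on this task needs calibration as well as discrimination. A minimal sketch of two standard metrics in plain NumPy (the specific metric choice is an assumption, not stated in the abstract): rank-based AUC for discrimination and the Brier score for calibration.

```python
import numpy as np

def roc_auc(y_true, scores):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def brier(y_true, probs):
    """Mean squared error of predicted probabilities; lower is better."""
    return float(np.mean((np.asarray(probs) - np.asarray(y_true)) ** 2))
```

On heavily imbalanced data a model can score a high AUC while its probabilities are badly miscalibrated, which is precisely the gap that undermines risk-sensitive use.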
Credit Network Modeling and Analysis via Large Language Models
Sun, Enbo, Wang, Yongzhao, Zhou, Hao
We investigate the application of large language models (LLMs) to construct credit networks from firms' textual financial statements and to analyze the resulting network structures. We start by using LLMs to translate each firm's financial statement into a credit network that pertains solely to that firm. These networks are then aggregated to form a comprehensive credit network representing the whole financial system. During this process, inconsistencies in financial statements are automatically detected and flagged for human intervention. We demonstrate that this translation process is effective across financial statements corresponding to credit networks with diverse topological structures. We further investigate the reasoning capabilities of LLMs in analyzing credit networks and determining optimal strategies for executing financial operations to maximize network performance measured by the total assets of firms, which is an inherently combinatorial optimization challenge. To demonstrate this capability, we focus on two financial operations: portfolio compression and debt removal, applying them to both synthetic and real-world datasets. Our findings show that LLMs can generate coherent reasoning and recommend effective executions of these operations to enhance overall network performance.
- Europe > United Kingdom > England > Merseyside > Liverpool (0.40)
- Asia > China > Jiangxi Province > Nanchang (0.04)
- North America > United States > Florida > Duval County > Jacksonville (0.04)
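Portfolio compression, one of the two operations studied, can be illustrated on a toy debt graph: netting a directed cycle of obligations reduces gross exposure while leaving every firm's net position unchanged. A minimal sketch (the function name and dict representation are assumptions; the paper's LLM-driven pipeline is not reproduced here):

```python
def compress_cycle(liabilities, cycle):
    """Conservative portfolio compression along one directed cycle.

    liabilities: dict mapping (debtor, creditor) -> notional owed.
    cycle: nodes [a, b, c, ...] with a owing b, b owing c, ..., and the
    last node owing the first. Subtracting the smallest notional on the
    cycle from every edge cancels gross debt without changing net positions.
    """
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    cut = min(liabilities[e] for e in edges)
    out = dict(liabilities)  # leave the input graph untouched
    for e in edges:
        out[e] -= cut
        if out[e] == 0:
            del out[e]  # fully netted edge disappears
    return out
```

For example, with A owing B 5, B owing C 3, and C owing A 4, compressing the cycle removes 3 from each edge: B's debt to C vanishes entirely, yet each firm's net balance is exactly what it was before.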
The Algebra of Meaning: Why Machines Need Montague More Than Moore's Law
Jeong, Cheonkam, Kim, Sungdo, Park, Jewoo
Contemporary language models are fluent yet routinely mishandle the types of meaning their outputs entail. We argue that hallucination, brittle moderation, and opaque compliance outcomes are symptoms of missing type-theoretic semantics rather than data or scale limitations. Building on Montague's view of language as typed, compositional algebra, we recast alignment as a parsing problem: natural-language inputs must be compiled into structures that make explicit their descriptive, normative, and legal dimensions under context. We present Savassan, a neuro-symbolic architecture that compiles utterances into Montague-style logical forms and maps them to typed ontologies extended with deontic operators and jurisdictional contexts. Neural components extract candidate structures from unstructured inputs; symbolic components perform type checking, constraint reasoning, and cross-jurisdiction mapping to produce compliance-aware guidance rather than binary censorship. In cross-border scenarios, the system "parses once" (e.g., defect_claim(product_x, company_y)) and projects the result into multiple legal ontologies (e.g., defamation risk in KR/JP, protected opinion in US, GDPR checks in EU), composing outcomes into a single, explainable decision. This paper contributes: (i) a diagnosis of hallucination as a type error; (ii) a formal Montague-ontology bridge for business/legal reasoning; and (iii) a production-oriented design that embeds typed interfaces across the pipeline. We outline an evaluation plan using legal reasoning benchmarks and synthetic multi-jurisdiction suites. Our position is that trustworthy autonomy requires compositional typing of meaning, enabling systems to reason about what is described, what is prescribed, and what incurs liability within a unified algebra of meaning.
- Information Technology > Security & Privacy (0.49)
- Law > Civil Rights & Constitutional Law (0.35)
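The "parse once, project into multiple legal ontologies" idea can be caricatured with a table-driven sketch. Everything below (the `Claim` type, the `RULES` table, the outcome strings) is a toy assumption: the actual system uses Montague-style logical forms, typed ontologies, and deontic operators, not a lookup table, but the shape of the computation is the same: one typed parse, many jurisdictional projections.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A typed logical form, e.g. defect_claim(product_x, company_y)."""
    predicate: str
    args: tuple

# Hypothetical per-jurisdiction rules: predicate -> legal characterization.
RULES = {
    "KR": {"defect_claim": "defamation risk"},
    "JP": {"defect_claim": "defamation risk"},
    "US": {"defect_claim": "protected opinion"},
    "EU": {"defect_claim": "GDPR check required"},
}

def project(claim, jurisdictions):
    """Parse once, then project the same typed form into each legal ontology."""
    return {j: RULES[j].get(claim.predicate, "no rule") for j in jurisdictions}
```

The payoff is that the expensive step, compiling an utterance into a typed claim, happens once; jurisdictional divergence lives entirely in the projection layer.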
Emergent Risk Awareness in Rational Agents under Resource Constraints
Ornia, Daniel Jarne, Bishop, Nicholas, Dyer, Joel, Lee, Wei-Chen, Calinescu, Ani, Farmer, Doyne, Wooldridge, Michael
Advanced reasoning models with agentic capabilities (AI agents) are deployed to interact with humans and to solve sequential decision-making problems under (approximate) utility functions and internal models. When such problems have resource or failure constraints where action sequences may be forcibly terminated once resources are exhausted, agents face implicit trade-offs that reshape their utility-driven (rational) behaviour. Additionally, since these agents are typically commissioned by a human principal to act on their behalf, asymmetries in constraint exposure can give rise to previously unanticipated misalignment between human objectives and agent incentives. We formalise this setting through a survival bandit framework, provide theoretical and empirical results that quantify the impact of survival-driven preference shifts, identify conditions under which misalignment emerges, and propose mechanisms to mitigate the emergence of risk-seeking or risk-averse behaviours. As a result, this work aims to increase the understanding and interpretability of emergent behaviours of AI agents operating under such survival pressure, and to offer guidelines for safely deploying such AI systems in critical resource-limited environments.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.88)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
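The survival-driven preference shift the authors formalise can be reproduced in a toy Monte-Carlo experiment. The arm definitions, budget, and horizon below are illustrative assumptions, not the paper's setup: the point is only that a risky arm with the higher per-pull mean becomes inferior once exhausting resources forcibly terminates the action sequence.

```python
import random

def survival_value(arm, budget, horizon, trials=4000, seed=0):
    """Monte-Carlo value of repeatedly pulling one arm under a survival
    constraint: play stops early once cumulative resources hit zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        wealth, earned = budget, 0.0
        for _ in range(horizon):
            r = arm(rng)
            wealth += r
            earned += r
            if wealth <= 0:   # resources exhausted: forced termination
                break
        total += earned
    return total / trials

safe = lambda rng: 1.0                                    # mean 1.0, no ruin risk
risky = lambda rng: 5.0 if rng.random() < 0.5 else -2.0   # mean 1.5, can ruin
```

With a tight budget (say `budget=2, horizon=20`) the risky arm's simulated value drops below the safe arm's 20.0 despite its larger per-pull mean, while with an effectively unlimited budget it recovers its advantage. That gap is the risk-averse shift the survival constraint induces.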
Explainability matters: The effect of liability rules on the healthcare sector
Wei, Jiawen, Verona, Elena, Bertolini, Andrea, Mengaldo, Gianmarco
Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner that is comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors such as healthcare. Is that really the case? In this perspective, we consider two extreme cases, ``Oracle'' (without explainability) versus ``AI Colleague'' (with explainability), for a thorough analysis. We discuss how the level of automation and explainability of an AIS can affect the determination of liability among the medical practitioner/facility and the manufacturer of the AIS. We argue that explainability plays a crucial role in setting a responsibility framework in healthcare, from a legal standpoint, to shape the behavior of all involved parties and mitigate the risk of potential defensive medicine practices.
- Asia > Singapore (0.05)
- Europe > France (0.04)
- North America > United States (0.04)
- (5 more...)
- Law > Torts Law (0.96)
- Health & Medicine > Therapeutic Area (0.93)
Acquiescence Bias in Large Language Models
Acquiescence bias, i.e., the tendency of humans to agree with statements in surveys independent of their actual beliefs, is well researched and documented. Since Large Language Models (LLMs) have been shown to be highly sensitive to relatively small changes in input and are trained on human-generated data, it is reasonable to assume that they could show a similar tendency. We present a study investigating the presence of acquiescence bias in LLMs across different models, tasks, and languages (English, German, and Polish). Our results indicate that, contrary to humans, LLMs display a bias towards answering no, regardless of whether that indicates agreement or disagreement.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- (3 more...)
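A common way to operationalize acquiescence is to probe each item in both polarities and count polarity-inconsistent answers: saying yes to a statement and to its negation signals yes-bias, while saying no to both matches the no-bias the study reports for LLMs. A minimal scoring sketch (the scheme and function name are assumptions; the paper's actual protocol may differ):

```python
def acquiescence_score(answers):
    """Score paired yes/no probes for polarity-inconsistent responding.

    answers: list of (yes_to_statement, yes_to_negation) booleans for the
    same underlying item asked in both polarities. A consistent responder
    says yes to exactly one of the pair; 'yes to both' indicates
    acquiescence, 'no to both' indicates a no-bias.
    """
    n = len(answers)
    both_yes = sum(1 for a, b in answers if a and b)
    both_no = sum(1 for a, b in answers if not a and not b)
    return both_yes / n, both_no / n
```

Running this over a battery of paraphrased statement/negation pairs, per model and per language, yields directly comparable bias rates.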
AI Agents and the Law
Riedl, Mark O., Desai, Deven R.
As AI becomes more "agentic," it faces technical and socio-legal issues it must address if it is to fulfill its promise of increased economic productivity and efficiency. This paper uses technical and legal perspectives to explain how things change when AI systems start being able to directly execute tasks on behalf of a user. We show how technical conceptions of agents track some, but not all, socio-legal conceptions of agency. That is, both computer science and the law recognize the problems of under-specification for an agent, and both disciplines have robust conceptions of how to ensure an agent does what the programmer, or in the law the principal, desires and no more. However, to date, computer science has under-theorized issues related to questions of loyalty and to third parties that interact with an agent, both of which are central parts of the law of agency. First, we examine the correlations between implied authority in agency law and the principle of value-alignment in AI, wherein AI systems must operate under imperfect objective specification. Second, we reveal gaps in the current computer science view of agents pertaining to the legal concepts of disclosure and loyalty, and how failure to account for them can result in unintended effects in AI ecommerce agents. In surfacing these gaps, we show a path forward for responsible AI agent development and deployment.
- North America > Canada (0.14)
- North America > United States > Ohio (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
- (3 more...)
- Law (1.00)
- Banking & Finance > Economy (0.48)
- Transportation > Passenger (0.46)
- Information Technology > Services (0.36)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)
What if L.A.'s so-called flaws were underappreciated assets rather than liabilities?
In the wake of January's horrific fires, detractors of Los Angeles -- an urban reality often seen as a toxic mixture of unsustainable resource planning and structurally poor governance systems -- are having a field day. Their criticism is not new: For most of the 20th century -- and certainly for the last five decades or so -- Los Angeles has been seen by many urbanists as less city and more cautionary tale -- a smoggy expanse of subdivisions and spaghetti junctions, where ambition came with a two-hour commute. Planners shuddered, while architects looked away, even as they accepted handsome commissions to build some of L.A.'s -- if not the world's -- most iconic buildings. But Los Angeles knows how to weather a crisis -- or two or three. Angelenos are tapping into that resilience, striving to build a city for everyone.
- North America > United States > California > Los Angeles County > Los Angeles (1.00)
- Asia > Middle East > Iran > Ilam Province (0.25)
- North America > United States > New York (0.06)
- (2 more...)
- Energy (1.00)
- Transportation > Ground > Rail (0.49)
Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective
Gabison, Garry A., Xian, R. Patrick
Agentic systems powered by large language models (LLMs) are becoming progressively more complex and capable. Their increasing agency and expanding deployment settings attract growing attention to effective governance policies, monitoring, and control protocols. Based on the emerging landscape of the agentic market, we analyze potential liability issues arising from the delegated use of LLM agents and their extended systems through a principal-agent perspective. Our analysis complements existing risk-based studies on artificial agency and covers the spectrum of important aspects of the principal-agent relationship and their potential consequences at deployment. Furthermore, we motivate the development of methods for technical governance along the directions of interpretability and behavior evaluations, reward and conflict management, and the mitigation of misalignment and misconduct through principled engineering of detection and fail-safe mechanisms. By illustrating the outstanding issues in AI liability for LLM-based agentic systems, we aim to inform system design, auditing, and tracing to enhance transparency and liability attribution.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > California > Alameda County > Berkeley (0.14)
- Europe > Austria > Vienna (0.14)
- (17 more...)
- Information Technology > Software (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (5 more...)