disclosure
Top UFO experts reveal 'whistleblower activity' will finally bring disclosure in 2026: 'The evidence is aligning'
- North America > Canada > Alberta (0.14)
- North America > United States > New York (0.05)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- (21 more...)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (7 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.68)
AudAgent: Automated Auditing of Privacy Policy Compliance in AI Agents
AI agents can autonomously perform tasks and, often without explicit user consent, collect or disclose users' sensitive local data, which raises serious privacy concerns. Although AI agents' privacy policies describe their intended data practices, there remains limited transparency and accountability about whether runtime behavior matches those policies. To close this gap, we introduce AudAgent, a visual tool that continuously monitors AI agents' data practices in real time and checks compliance with their stated privacy policies. AudAgent consists of four components for automated privacy auditing of AI agents. (i) Policy formalization: a novel cross-LLM voting mechanism ensures confidence in the parsed privacy policy model. (ii) Runtime annotation: a lightweight Presidio-based analyzer detects sensitive data and annotates data practices based on the AI agent's context and the privacy policy model. (iii) Compliance auditing: ontology graphs and automata-based checking connect the privacy policy model with runtime annotations, enabling on-the-fly compliance checking. (iv) User interface: an infrastructure-independent implementation visualizes the real-time execution trace of AI agents along with potential privacy policy violations, providing user-friendly transparency and accountability. We evaluate AudAgent with AI agents built on mainstream frameworks, demonstrating its effectiveness in detecting and visualizing privacy policy violations in real time. Using AudAgent, we also find that most privacy policies omit explicit safeguards for highly sensitive data such as SSNs, whose misuse violates legal requirements, and that many agents do not refuse to handle such data via third-party tools, including agents backed by Claude, Gemini, and DeepSeek. AudAgent proactively blocks operations on such data, overriding the agents' original privacy policy and behavior.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > Utah > Salt Lake County > Salt Lake City (0.04)
- (11 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
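AudAgent's actual implementation is not shown in the abstract; as a rough illustration of component (iii), automata-based compliance checking can be sketched as a transition table over annotated data-practice events. Every name, state, and event below is hypothetical, not AudAgent's API:

```python
# Minimal sketch of automata-based compliance checking: the parsed policy is
# compiled into allowed (state, event) -> state transitions, and the agent's
# runtime trace of annotated data-practice events is replayed through it.
# Any event with no allowed transition is flagged as a potential violation.

ALLOWED = {
    ("start", "collect:email"): "has_email",
    ("has_email", "use:email"): "has_email",
    # deliberately no transition anywhere permits "disclose:ssn"
}

def audit(trace, allowed=ALLOWED, start="start"):
    """Replay a trace of data-practice events; return violations found."""
    state, violations = start, []
    for event in trace:
        nxt = allowed.get((state, event))
        if nxt is None:
            violations.append((state, event))  # flag, keep auditing in place
        else:
            state = nxt
    return violations

trace = ["collect:email", "use:email", "disclose:ssn"]
print(audit(trace))  # the SSN disclosure is the only flagged event
```

On-the-fly checking falls out of the same loop: the auditor holds the current state and flags each incoming event as it arrives, rather than replaying a finished trace.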
Your Data Might Determine How Much You Pay for Eggs
A newly enacted New York law requires retailers to say whether your data influences the price of basic goods like a dozen eggs or toilet paper, but not how. If you're near Rochester, New York, the price for a carton of Target's Good & Gather eggs is listed as $1.99 on its website. It's unclear why the prices differ, but a new notice on Target's website offers a potential hint: "This price was set by an algorithm using your personal data." A recently enacted New York State law requires businesses that algorithmically set prices using customers' personal data to disclose that. According to the law, personal data includes any data that can be "linked or reasonably linked, directly or indirectly, with a specific consumer or device." The law doesn't require businesses to explicitly state what information about a person or device is being used or how each piece of information affects the final price a customer sees.
- North America > United States > New York > Monroe County > Rochester (0.25)
- North America > United States > California (0.05)
- North America > United States > Pennsylvania (0.05)
- (2 more...)
- Retail (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (0.77)
- Government > Regional Government > North America Government > United States Government (0.50)
A Longitudinal Measurement of Privacy Policy Evolution for Large Language Models
Tao, Zhen, Pan, Shidong, Xing, Zhenchang, Black, Emily, Gillis, Talia, Chen, Chunyang
Large language model (LLM) services have been rapidly integrated into people's daily lives as chatbots and agentic systems. They are nourished by rich streams of collected data, raising privacy concerns about excessive collection of sensitive personal information. Privacy policies are the fundamental mechanism for informing users about data practices in the modern information-privacy paradigm. Although traditional web and mobile policies are well studied, the privacy policies of LLM providers, their LLM-specific content, and their evolution over time remain largely underexplored. In this paper, we present the first longitudinal empirical study of privacy policies for mainstream LLM providers worldwide. We curate a chronological dataset of 74 historical privacy policies and 115 supplemental privacy documents from 11 LLM providers across 5 countries up to August 2025, and extract over 3,000 sentence-level edits between consecutive policy versions. We compare LLM privacy policies to those of other software formats, propose a taxonomy tailored to LLM privacy policies, and annotate policy edits, aligning them with a timeline of key LLM ecosystem events. Results show that LLM privacy policies are substantially longer, demand college-level reading ability, and remain highly vague. Our taxonomy analysis reveals patterns in how providers disclose LLM-specific practices and highlights regional disparities in coverage. Policy edits are concentrated in first-party data collection and international/specific-audience sections, with product releases and regulatory actions as the primary drivers, shedding light on the status quo and the evolution of LLM privacy policies.
- Oceania > Australia (0.14)
- South America > Brazil (0.04)
- North America > Canada (0.04)
- (10 more...)
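The paper does not specify its diffing tooling; extracting sentence-level edits between consecutive policy versions can be sketched with the standard-library `difflib.SequenceMatcher` over sentence lists. The sentence splitter and sample policies below are illustrative only:

```python
import difflib
import re

def sentences(text):
    """Naive sentence splitter (illustrative; real pipelines use NLP tools)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def sentence_edits(old_policy, new_policy):
    """Return (tag, old_sentences, new_sentences) tuples for each change
    between two policy versions, skipping unchanged runs."""
    old, new = sentences(old_policy), sentences(new_policy)
    sm = difflib.SequenceMatcher(a=old, b=new)
    return [(tag, old[i1:i2], new[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

v1 = "We collect your email. We never sell data."
v2 = "We collect your email. We may share data with model-training partners."
print(sentence_edits(v1, v2))
```

Each tuple carries a `replace`, `insert`, or `delete` tag, which maps naturally onto the paper's notion of annotated policy edits.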
AI use in American newspapers is widespread, uneven, and rarely disclosed
Russell, Jenna, Karpinska, Marzena, Akinode, Destiny, Thai, Katherine, Emi, Bradley, Spero, Max, Iyyer, Mohit
AI is rapidly transforming journalism, but the extent of its use in published newspaper articles remains unclear. We address this gap by auditing a large-scale dataset of 186K articles from online editions of 1.5K American newspapers published in the summer of 2025. Using Pangram, a state-of-the-art AI detector, we discover that approximately 9% of newly published articles are either partially or fully AI-generated. This AI use is unevenly distributed, appearing more frequently in smaller, local outlets, in specific topics such as weather and technology, and within certain ownership groups. We also analyze 45K opinion pieces from the Washington Post, the New York Times, and the Wall Street Journal, finding that they are 6.4 times more likely to contain AI-generated content than news articles from the same publications, with many AI-flagged op-eds authored by prominent public figures. Despite this prevalence, we find that AI use is rarely disclosed: a manual audit of 100 AI-flagged articles found only five disclosures of AI use. Overall, our audit highlights the immediate need for greater transparency and updated editorial standards regarding the use of AI in journalism to maintain public trust.
- South America > Guyana (0.28)
- Europe > Austria > Vienna (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- (22 more...)
- Research Report > New Finding (1.00)
- Personal (1.00)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Communications > Social Media (0.67)
ESGBench: A Benchmark for Explainable ESG Question Answering in Corporate Sustainability Reports
George, Sherine, Saji, Nithish
We present ESGBench, a benchmark dataset and evaluation framework designed to assess explainable ESG question answering systems using corporate sustainability reports. The benchmark consists of domain-grounded questions across multiple ESG themes, paired with human-curated answers and supporting evidence to enable fine-grained evaluation of model reasoning. We analyze the performance of state-of-the-art LLMs on ESGBench, highlighting key challenges in factual consistency, traceability, and domain alignment. ESGBench aims to accelerate research in transparent and accountable ESG-focused AI systems.
- Research Report (0.41)
- Public Relations > Community Relations (0.35)
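ESGBench's exact metrics are not given in the abstract; a common recipe for the kind of fine-grained, evidence-paired evaluation it describes combines token-overlap F1 on the answer with a traceability check on the cited evidence. The example data, field names, and metric choice below are assumptions, not ESGBench's actual schema:

```python
from collections import Counter

def token_f1(pred, gold):
    """Token-overlap F1, the usual span-QA answer metric."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def evaluate(example, prediction):
    """Score the answer and check the cited evidence against gold support."""
    return {
        "answer_f1": token_f1(prediction["answer"], example["answer"]),
        "evidence_hit": prediction["evidence_id"] in example["evidence_ids"],
    }

example = {"answer": "scope 2 emissions fell 12 percent",
           "evidence_ids": {"p14-tbl3"}}
prediction = {"answer": "emissions fell 12 percent", "evidence_id": "p14-tbl3"}
print(evaluate(example, prediction))  # answer_f1 = 0.8, evidence_hit = True
```

Separating the two scores is what makes "traceability" auditable: a model can be right for the wrong passage, or wrong while citing the right one.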
Pharos-ESG: A Framework for Multimodal Parsing, Contextual Narration, and Hierarchical Labeling of ESG Report
Chen, Yan, Zou, Yu, Zeng, Jialei, You, Haoran, Zhou, Xiaorui, Zhong, Aixi
Environmental, Social, and Governance (ESG) principles are reshaping the foundations of global financial governance, transforming capital allocation architectures, regulatory frameworks, and systemic risk coordination mechanisms. However, as the core medium for assessing corporate ESG performance, ESG reports present significant challenges for large-scale understanding, due to chaotic reading order from slide-like irregular layouts and implicit hierarchies arising from lengthy, weakly structured content. To address these challenges, we propose Pharos-ESG, a unified framework that transforms ESG reports into structured representations through multimodal parsing, contextual narration, and hierarchical labeling. It integrates a reading-order modeling module based on layout flow, hierarchy-aware segmentation guided by table-of-contents anchors, and a multimodal aggregation pipeline that contextually transforms visual elements into coherent natural language. The framework further enriches its outputs with ESG, GRI, and sentiment labels, yielding annotations aligned with the analytical demands of financial research. Extensive experiments on annotated benchmarks demonstrate that Pharos-ESG consistently outperforms both dedicated document parsing systems and general-purpose multimodal models. In addition, we release Aurora-ESG, the first large-scale public dataset of ESG reports, spanning Mainland China, Hong Kong, and U.S. markets, featuring unified structured representations of multimodal content, enriched with fine-grained layout and semantic annotations to better support ESG integration in financial governance and decision-making.
- Asia > China > Hong Kong (0.25)
- North America > Mexico > Gulf of Mexico (0.04)
- Asia > Taiwan (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Grammars & Parsing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
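Pharos-ESG's segmentation module is only described at a high level; the core idea of using table-of-contents entries as anchors can be sketched on plain text lines. The function, the toy report, and the treatment of pre-anchor material are all illustrative assumptions:

```python
def segment_by_toc(lines, toc):
    """Split a flat list of report lines into sections, using
    table-of-contents entries as section anchors. Lines appearing
    before the first anchor are collected into a preamble bucket."""
    sections, current = {"_preamble": []}, "_preamble"
    anchors = set(toc)
    for line in lines:
        if line.strip() in anchors:
            current = line.strip()   # a TOC anchor opens a new section
            sections[current] = []
        else:
            sections[current].append(line)
    return sections

toc = ["Environmental Performance", "Social Responsibility", "Governance"]
report = ["Cover blurb",
          "Environmental Performance", "We cut emissions.",
          "Governance", "Board oversight details."]
print(segment_by_toc(report, toc))
```

Note that "Social Responsibility" never appears in the toy report, so it simply produces no section; a real pipeline would additionally fuzzy-match anchors against OCR'd headings rather than requiring exact string equality.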
From Passive to Persuasive: Steering Emotional Nuance in Human-AI Negotiation
Chebrolu, Niranjan, Yeo, Gerard Christopher, Jaidka, Kokil
Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling them with nuanced, human-like emotional expression remains a significant challenge. Current alignment techniques often address surface-level output or require extensive fine-tuning. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuances. We first employ attribution patching to identify causally influential components, locating a key intervention locus by observing activation patterns during diagnostic conversational tasks. We then derive emotional expression vectors from the difference in activations generated by contrastive text pairs (positive vs. negative examples of target emotions). Applying these vectors to new conversational prompts significantly enhances emotional characteristics: steered responses show increased positive sentiment (e.g., joy, trust) and more frequent first-person pronoun usage, indicative of greater personal engagement. Our findings offer a precise and interpretable framework and new directions for the study of conversational AI.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- Asia > Singapore (0.04)
- Research Report > Experimental Study (0.69)
- Research Report > New Finding (0.66)
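The paper's vectors live in LLaMA 3.1-8B's internal activations; the contrastive-mean recipe itself is simple enough to sketch on hand-made 3-d "activations". All numbers below are invented, and the scaling knob `alpha` is a generic assumption, not the paper's hyperparameter:

```python
def mean_vector(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def steering_vector(pos_acts, neg_acts):
    """Difference of mean activations over contrastive (positive vs.
    negative emotion) prompts -- the standard activation-addition recipe."""
    pos, neg = mean_vector(pos_acts), mean_vector(neg_acts)
    return [p - q for p, q in zip(pos, neg)]

def steer(activation, vec, alpha=1.0):
    """Add the scaled steering vector to a layer activation."""
    return [a + alpha * v for a, v in zip(activation, vec)]

# toy 3-d "activations" captured from joyful vs. neutral prompts
pos = [[1.0, 0.0, 2.0], [3.0, 0.0, 2.0]]
neg = [[0.0, 0.0, 1.0], [2.0, 0.0, 1.0]]
vec = steering_vector(pos, neg)      # [1.0, 0.0, 1.0]
print(steer([0.5, 0.5, 0.5], vec))   # [1.5, 0.5, 1.5]
```

In practice the vector is added at the intervention locus found via attribution patching, during the forward pass on new prompts.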
Breaking Privacy in Federated Clustering: Perfect Input Reconstruction via Temporal Correlations
Yang, Guang, Luo, Lixia, Li, Qiongxiu
Federated clustering allows multiple parties to discover patterns in distributed data without sharing raw samples. To reduce overhead, many protocols disclose intermediate centroids during training. While often treated as harmless for efficiency, whether such disclosure compromises privacy remains an open question. Prior analyses modeled the problem as a so-called Hidden Subset Sum Problem (HSSP) and argued that centroid release may be safe, since classical HSSP attacks fail to recover inputs. We revisit this question and uncover a new leakage mechanism: temporal regularities in $k$-means iterations create exploitable structure that enables perfect input reconstruction. Building on this insight, we propose Trajectory-Aware Reconstruction (TAR), an attack that combines temporal assignment information with algebraic analysis to recover exact original inputs. Our findings provide the first rigorous evidence, supported by a practical attack, that centroid disclosure in federated clustering significantly compromises privacy, exposing a fundamental tension between privacy and efficiency.
- North America > United States (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France (0.04)
- (2 more...)
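The full TAR attack is not reproduced here; the basic leakage mechanism it exploits can be illustrated in a few lines. A centroid times its cluster size is the cluster's coordinate sum, so if temporal assignment information reveals that exactly one point left a cluster between two disclosed iterations, subtracting the sums reconstructs that point exactly. The scenario and numbers below are a hand-made assumption:

```python
def reconstruct_from_centroids(counts_and_centroids):
    """Illustrative leak: given per-cluster (size, centroid) disclosed at two
    consecutive k-means iterations, where exactly one point left the cluster
    and none joined, the difference of coordinate sums IS the departed point."""
    (n1, c1), (n2, c2) = counts_and_centroids
    # cluster sum = size * centroid; the departed point is sum_t - sum_{t+1}
    return [n1 * a - n2 * b for a, b in zip(c1, c2)]

# iteration t:   cluster of 3 points with centroid [3.0, 2.0] (sum [9.0, 6.0])
# iteration t+1: 2 points remain, centroid [2.5, 1.0]         (sum [5.0, 2.0])
leaked = reconstruct_from_centroids([(3, [3.0, 2.0]), (2, [2.5, 1.0])])
print(leaked)  # exact coordinates of the departed input: [4.0, 4.0]
```

TAR generalizes this idea: tracking which points move between clusters across many iterations yields enough independent linear relations to solve for every input exactly, which is why centroid disclosure is not the harmless efficiency shortcut it was assumed to be.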
Chitchat with AI: Understand the supply chain carbon disclosure of companies worldwide through Large Language Model
Hang, Haotian, Shen, Yueyang, Zhu, Vicky, Cruz, Jose, Li, Michelle
In the context of global sustainability mandates, corporate carbon disclosure has emerged as a critical mechanism for aligning business strategy with environmental responsibility. The Carbon Disclosure Project (CDP) hosts the world's largest longitudinal dataset of climate-related survey responses, combining structured indicators with open-ended narratives, but the heterogeneity and free-form nature of these disclosures present significant analytical challenges for benchmarking, compliance monitoring, and investment screening. This paper proposes a novel decision-support framework that leverages large language models (LLMs) to assess corporate climate disclosure quality at scale. It develops a master rubric that harmonizes narrative scoring across 11 years of CDP data (2010-2020), enabling cross-sector and cross-country benchmarking. By integrating rubric-guided scoring with percentile-based normalization, our method identifies temporal trends, strategic alignment patterns, and inconsistencies in disclosure across industries and regions. Results reveal that sectors such as technology and countries like Germany consistently demonstrate higher rubric alignment, while others exhibit volatility or superficial engagement, offering insights that inform key decision-making processes for investors, regulators, and corporate environmental, social, and governance (ESG) strategists. The proposed LLM-based approach transforms unstructured disclosures into quantifiable, interpretable, comparable, and actionable intelligence, advancing the capabilities of AI-enabled decision support systems (DSSs) in the domain of climate governance.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- South America > Brazil (0.05)
- (11 more...)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.47)
- Government (1.00)
- Education (1.00)
- Banking & Finance > Trading (1.00)
- (3 more...)
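The paper's rubric itself is not shown; its percentile-based normalization step can be sketched directly. Mapping raw rubric scores to percentile ranks within a peer group (here, the fraction of peers scoring strictly lower) is one common convention, assumed rather than taken from the paper:

```python
def percentile_ranks(scores):
    """Map raw rubric scores to percentile ranks within a peer group
    (e.g., one sector-year), making scores comparable across groups
    whose raw scales or score distributions differ."""
    srt = sorted(scores)
    n = len(scores)
    # rank = share of the group scoring strictly below, as a percentage
    return [100 * sum(1 for x in srt if x < s) / n for s in scores]

# raw rubric scores for five companies in one sector-year
raw = [2.0, 3.5, 3.5, 1.0, 4.0]
print(percentile_ranks(raw))  # [20.0, 40.0, 40.0, 0.0, 80.0]
```

Normalizing within sector-year groups is what enables the paper's cross-sector and cross-country benchmarking: a technology firm is compared against its 2015 technology peers, not against utilities in 2020.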