Exposing the Illusion of Fairness: Auditing Vulnerabilities to Distributional Manipulation Attacks

Lafargue, Valentin, Monteiro, Adriana Laurindo, Claeys, Emmanuelle, Risser, Laurent, Loubes, Jean-Michel

arXiv.org Artificial Intelligence

Proving the compliance of AI algorithms has become an important challenge with the growing deployment of such algorithms in real-life applications. Inspecting possible biased behaviors is mandatory to satisfy the requirements of the EU Artificial Intelligence Act. Regulation-driven audits increasingly rely on global fairness metrics, with Disparate Impact being the most widely used. Yet such global measures depend heavily on the distribution of the sample on which they are computed. We first investigate how to manipulate data samples to artificially satisfy fairness criteria, creating minimally perturbed datasets that remain statistically indistinguishable from the original distribution while satisfying prescribed fairness constraints. We then study how to detect such manipulation. Our analysis (i) introduces mathematically sound methods for modifying empirical distributions under fairness constraints using entropic or optimal transport projections, (ii) examines how an auditee could potentially circumvent fairness inspections, and (iii) offers recommendations to help auditors detect such data manipulations. These results are validated through experiments on classical tabular datasets in bias detection.
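The Disparate Impact metric at the center of this abstract is simple to state: the ratio of favorable-outcome rates between the unprivileged and privileged groups. A minimal sketch, with an invented toy sample (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates: unprivileged (group == 0)
    over privileged (group == 1). A value near 1 indicates parity;
    the common 'four-fifths rule' flags values below 0.8."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

# Toy audit sample: binary predictions for two groups.
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(disparate_impact(y_pred, group))  # 0.4 / 0.8 = 0.5, below the 0.8 threshold
```

Because the metric is a ratio of two empirical rates, it is sensitive to which samples end up in the audit set, which is exactly the lever the manipulation attacks studied in the paper exploit.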


Legal Requirements Analysis: A Regulatory Compliance Perspective

Abualhaija, Sallam, Ceci, Marcello, Briand, Lionel

arXiv.org Artificial Intelligence

Modern software has become an integral part of everyday activities in many disciplines and application contexts. Introducing intelligent automation by leveraging artificial intelligence (AI) has led to breakthroughs in many fields. The effectiveness of AI can be attributed to several factors, among which is the increasing availability of data. Regulations such as the General Data Protection Regulation (GDPR) in the European Union (EU) have been introduced to ensure the protection of personal data. Software systems that collect, process, or share personal data are subject to compliance with such regulations. Developing compliant software depends heavily on addressing legal requirements stipulated in applicable regulations, a central activity in the requirements engineering (RE) phase of the software development process. RE is concerned with specifying and maintaining the requirements of a system-to-be, including legal requirements. Legal agreements, which describe the policies organizations implement for processing personal data, can provide a source of legal requirements additional to the regulations themselves. In this chapter, we explore a variety of methods for analyzing legal requirements and exemplify them on GDPR. Specifically, we describe possible alternatives for creating machine-analyzable representations from regulations, survey the existing automated means for enabling compliance verification against regulations, and reflect on the current challenges of legal requirements analysis.


UK FCA, BoE, and PRA Publish Discussion Paper on Adopting AI in Financial Services

#artificialintelligence

On October 11, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the UK Financial Conduct Authority (FCA) (together, the Supervisory Authorities) published a discussion paper (DP5/22) on the safe and responsible adoption of artificial intelligence (AI) in financial services (Discussion Paper). The Discussion Paper forms part of the Supervisory Authorities' AI-related programme of work, which includes the AI Public Private Forum, and is being considered in light of the UK government's efforts towards regulating AI. The purpose of the Discussion Paper is to provide a platform for assessing the desirability of regulating AI technology adoption in UK financial services while safeguarding each of the Supervisory Authorities' own objectives. The BoE's objectives are to maintain financial stability and support the UK government's economic policy. The PRA focuses on promoting the safety, soundness, and competition of services provided by PRA-authorized firms and insurance firms, while the FCA's strategic objective is to ensure market integrity, effective competition, and protection of consumers in the UK financial system. The Supervisory Authorities consider it useful to delineate what constitutes AI either by (1) providing a more precise legal definition of what AI is (and what it is not), or (2) viewing AI as part of a wider spectrum of analytical techniques with a range of characteristics for mapping out AI.


UK FCA, PRA, and BoE publish discussion paper (DP5/22) on AI and machine learning

#artificialintelligence

In the discussion paper, the UK financial supervisory authorities have not proposed a new legal framework or set out their intended future approach for regulating the use of AI and machine learning in financial services. However, they have assessed the benefits, risks, and harms related to the use of AI, as well as the current legal framework that applies to AI in financial services. The UK financial services regulators, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA) (together, the Supervisory Authorities) jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper was to facilitate a public debate on the safe and responsible adoption of AI in UK financial services. The Supervisory Authorities have also raised discussion questions for stakeholder input, with the aim of understanding whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI, and how any additional intervention might support the safe and responsible adoption of AI in UK financial services.


Supervisory Authorities publish discussion paper on artificial intelligence

#artificialintelligence

The UK financial services regulators, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA) – together the Supervisory Authorities – jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper was to facilitate a public debate on the safe and responsible adoption of AI in UK financial services. The Supervisory Authorities have also raised discussion questions for stakeholder input, with the aim of understanding whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI, and how any additional intervention might support the safe and responsible adoption of AI in UK financial services. The discussion paper provides a platform for the Supervisory Authorities, experts, and stakeholders to collaborate and jointly assess whether the current legal framework can adequately regulate AI technology by safeguarding each of the Supervisory Authorities' objectives while at the same time promoting innovation in UK financial services. This consultation runs in parallel with the UK government's ongoing work in developing its own cross-sector approach to the regulation of AI technology and will therefore provide a valuable contribution to this broader policy debate.


Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities

Kuiper, Ouren, Berg, Martin van den, van der Burgt, Joost, Leijnen, Stefan

arXiv.org Artificial Intelligence

Explainable artificial intelligence (xAI) is seen as a solution to making AI systems less of a "black box". It is essential for ensuring transparency, fairness, and accountability, which are especially important in the financial sector. The aim of this study was a preliminary investigation of the perspectives of supervisory authorities and regulated entities regarding the application of xAI in the financial sector. Three use cases (consumer credit, credit risk, and anti-money laundering) were examined using semi-structured interviews at three banks and two supervisory authorities in the Netherlands. We found that for the investigated use cases a disparity exists between supervisory authorities and banks regarding the desired scope of explainability of AI systems. We argue that the financial sector could benefit from a clear differentiation between technical AI (model) explainability requirements and explainability requirements of the broader AI system in relation to applicable laws and regulations.


Using Artificial Intelligence to Support Compliance with the General Data Protection Regulation

Kingston, John KC

arXiv.org Artificial Intelligence

The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR, and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists, and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule-based technologies and/or machine learning techniques may be relevant: following compliance checklists and codes of conduct; supporting risk assessments; complying with the new regulations regarding technologies that perform automatic profiling; and complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements for explanation and justification of reasoning stated by GDPR (or by organisations that need to comply with GDPR) imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances.
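The paper's point about explanation requirements favoring rule-based approaches can be made concrete with a minimal sketch: a rule engine records which rules fired, so its conclusion comes with a justification trace. The rule set, field names, and function below are invented for illustration, though the 72-hour notification duty and the duty to inform data subjects in high-risk cases do reflect GDPR Articles 33 and 34:

```python
def assess_breach_reporting(incident):
    """Hypothetical rule-based check for GDPR breach-reporting duties.
    Returns (must_notify_authority, trace), where the trace lists each
    rule that fired and thus justifies the conclusion."""
    trace = []
    if incident["personal_data_affected"]:
        trace.append("R1: personal data affected -> breach in scope of GDPR")
        if incident["risk_to_rights"]:
            trace.append("R2: risk to data subjects -> notify supervisory "
                         "authority within 72 hours (Art. 33)")
        if incident["high_risk_to_rights"]:
            trace.append("R3: high risk -> also notify affected data "
                         "subjects (Art. 34)")
    must_notify = any(r.startswith("R2") for r in trace)
    return must_notify, trace

must_notify, trace = assess_breach_reporting(
    {"personal_data_affected": True,
     "risk_to_rights": True,
     "high_risk_to_rights": False}
)
print(must_notify)  # True: rules R1 and R2 fired
```

Unlike a learned classifier, the fired-rule trace answers "why?" directly, which is what the explanation and justification requirements discussed in the paper call for.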