The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?
Mökander, Jakob, Juneja, Prathm, Watson, David, Floridi, Luciano
On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
- North America > United States (1.00)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.05)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Enhancing Financial Inclusion and Regulatory Challenges: A Critical Analysis of Digital Banks and Alternative Lenders Through Digital Platforms, Machine Learning, and Large Language Models Integration
This paper explores the dual impact of digital banks and alternative lenders on financial inclusion and the regulatory challenges posed by their business models. It discusses the integration of digital platforms, machine learning (ML), and Large Language Models (LLMs) in enhancing financial services accessibility for underserved populations. Through a detailed analysis of operational frameworks and technological infrastructures, this research identifies key mechanisms that facilitate broader financial access and mitigate traditional barriers. Additionally, the paper addresses significant regulatory concerns involving data privacy, algorithmic bias, financial stability, and consumer protection. Employing a mixed-methods approach, which combines quantitative financial data analysis with qualitative insights from industry experts, this paper elucidates the complexities of leveraging digital technology to foster financial inclusivity. The findings underscore the necessity of evolving regulatory frameworks that harmonize innovation with comprehensive risk management. This paper concludes with policy recommendations for regulators, financial institutions, and technology providers, aiming to cultivate a more inclusive and stable financial ecosystem through prudent digital technology integration.
- North America > United States (0.15)
- Asia > Singapore (0.14)
- Africa > Kenya (0.05)
- (3 more...)
- Research Report (1.00)
- Overview (0.94)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Loans (1.00)
- (2 more...)
Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Healthcare Professionals
Choudhury, Avishek, Chaudhry, Zaria
This paper explores the evolving relationship between clinician trust in LLMs, the transformation of data sources from predominantly human-generated to AI-generated content, and the subsequent impact on the precision of LLMs and clinician competence. One of the primary concerns identified is the potential feedback loop that arises as LLMs become more reliant on their own outputs for learning, which may lead to a degradation in output quality and a reduction in clinician skills due to decreased engagement with fundamental diagnostic processes. While theoretical at this stage, this feedback loop poses a significant challenge as the integration of LLMs in healthcare deepens, emphasizing the need for proactive dialogue and strategic measures to ensure the safe and effective use of LLM technology. A key takeaway from our investigation is the critical role of user expertise and the necessity of a discerning approach to trusting and validating LLM outputs. The paper highlights how expert users, particularly clinicians, can leverage LLMs to enhance productivity by offloading routine tasks while maintaining critical oversight to identify and correct potential inaccuracies in AI-generated content. This balance of trust and skepticism is vital for ensuring that LLMs augment rather than undermine the quality of patient care. Moreover, we delve into the potential risks associated with LLMs' self-referential learning loops and the deskilling of healthcare professionals. The risk of LLMs operating within an echo chamber, where AI-generated content feeds back into the learning algorithms, threatens the diversity and quality of the data pool, potentially entrenching biases and reducing the efficacy of LLMs.
- North America > United States > West Virginia > Monongalia County > Morgantown (0.04)
- Asia > Indonesia (0.04)
- Health & Medicine > Diagnostic Medicine (0.68)
- Health & Medicine > Therapeutic Area > Oncology (0.46)
How VADER is your AI? Towards a definition of artificial intelligence systems appropriate for regulation
Bezerra, Leonardo C. T., Brownlee, Alexander E. I., Alvarenga, Luana Ferraz, Moioli, Renan Cipriano, Batista, Thais Vasconcelos
Artificial intelligence (AI) has driven many information and communication technology (ICT) breakthroughs. Nonetheless, the scope of ICT systems has expanded far beyond AI since the Turing test proposal. Critically, recent AI regulation proposals adopt AI definitions that affect ICT techniques, approaches, and systems that are not AI. In some cases, even works from mathematics, statistics, and engineering would be affected. Worryingly, AI misdefinitions are observed from Western societies to the Global South. In this paper, we propose a framework to score how "validated as appropriately-defined for regulation" (VADER) an AI definition is. Our online, publicly available VADER framework scores the coverage of premises that should underlie AI definitions for regulation, which aim to (i) reproduce principles observed in other successful technology regulations, and (ii) include all AI techniques and approaches while excluding non-AI works. Regarding the latter, our score is based on a dataset of representative AI, non-AI ICT, and non-ICT examples. We demonstrate our contribution by reviewing the AI regulation proposals of key players, namely the United States, United Kingdom, European Union, and Brazil. Importantly, none of the proposals assessed achieves the appropriateness score; their shortfalls range from a need for revision to a concrete risk to ICT systems and works from other fields.
- South America > Brazil > Rio Grande do Norte (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > Scotland > Stirling > Stirling (0.04)
- (5 more...)
- Law > Statutes (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)
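The premise-coverage idea in the VADER abstract above can be illustrated with a small, purely hypothetical sketch: score a candidate AI definition by how many known-AI examples it includes and how many known-non-AI examples it excludes. The predicate, the example items, and the equal weighting below are illustrative inventions, not the authors' actual framework, scoring formula, or dataset.

```python
# Hypothetical sketch of a premise-coverage score in the spirit of the
# VADER framework described above. All names and data are illustrative.

def coverage_score(definition_covers, ai_examples, non_ai_examples):
    """Score a candidate AI definition: the mean of the fraction of AI
    examples it includes and the fraction of non-AI examples it excludes."""
    included = sum(definition_covers(x) for x in ai_examples) / len(ai_examples)
    excluded = sum(not definition_covers(x) for x in non_ai_examples) / len(non_ai_examples)
    return (included + excluded) / 2

# Toy definition: "any system that learns from data" -- over-broad,
# since it also sweeps in classical statistics.
covers = lambda item: item["learns_from_data"]

ai = [{"name": "image classifier", "learns_from_data": True},
      {"name": "chess RL agent", "learns_from_data": True}]
non_ai = [{"name": "linear regression (statistics)", "learns_from_data": True},
          {"name": "sorting algorithm", "learns_from_data": False}]

print(coverage_score(covers, ai, non_ai))  # 0.75: penalized for capturing statistics
```

A narrower definition that excluded the regression example would score 1.0 here, which mirrors the abstract's point that a well-scoped definition must exclude non-AI works, not merely include AI ones.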
ChatGPT: New AI system, old bias?
Every time a new application of AI is announced, I feel a short-lived rush of excitement -- followed soon after by a knot in my stomach. This is because I know the technology, more often than not, hasn't been designed with equity in mind. One system, ChatGPT, reached 100 million unique users just two months after its launch. The text-based tool engages users in interactive, friendly, AI-generated exchanges with a chatbot that has been developed to speak authoritatively on any subject it's prompted to address. In an interview with Michael Barbaro on The Daily podcast from The New York Times, tech reporter Kevin Roose described how an app similar to ChatGPT, Bing's AI chatbot, which is also built on OpenAI's GPT-3 language model, responded to his request for a suggestion for a side dish to accompany French onion soup for Valentine's Day dinner with his wife.
- South America > Colombia (0.05)
- North America > United States > New York (0.05)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Law (0.98)
- (2 more...)
Can the world's de facto tech regulator really rein in AI? - Coda Story
Artificial intelligence is creeping into every aspect of our lives. AI-powered software is triaging hospital patients to determine who gets which treatment, deciding whether an asylum seeker is lying or telling the truth in their application and even conjuring up weird conceits for sitcoms. Just lately, these kinds of tools have been helping killer robots select their targets in the war in Ukraine. AI systems have been shown, again and again, to carry systemic biases, and their increasing centrality to the way we live makes the debates around them ever more urgent. In typical tech fashion, AI-driven tools are advancing much faster than the laws that could theoretically govern them.
- Europe > Ukraine (0.24)
- North America > United States > California (0.04)
- Europe > Netherlands > South Holland > The Hague (0.04)
- (2 more...)
- Government > Regional Government (1.00)
- Information Technology > Security & Privacy (0.70)
- Law > Civil Rights & Constitutional Law (0.68)
The geopolitics of AI and the rise of digital sovereignty
On September 29, 2021, the United States and the European Union's (EU) new Trade and Technology Council (TTC) held its first summit. It took place in the old industrial city of Pittsburgh, Pennsylvania, under the leadership of the European Commission's Vice-President, Margrethe Vestager, and U.S. Secretary of State Antony Blinken. Following the meeting, the U.S. and the EU declared their opposition to artificial intelligence (AI) that does not respect human rights and referenced rights-infringing systems, such as social scoring systems.[1] During the meeting, the TTC clarified that "The United States and European Union have significant concerns that authoritarian governments are piloting social scoring systems with an aim to implement social control at scale. These systems pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems."[2] The implicit target of the criticism was China's "social credit" system, a big data system that uses a wide variety of data inputs to assess a person's social credit score, which determines social permissions in society, such as buying an air or train ticket.[3] The critique by the TTC indicates that the U.S. and the EU disagree with China's view of how authorities should manage the use of AI and data in society.[4]
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.24)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- (14 more...)
Responsible AI for an Era of Tighter Regulations
It is not just organizations based in the EU that need to pay attention. The regulation will apply to any provider that implements or develops AI systems in the EU or whose AI systems produce outputs that are used in the EU's jurisdiction, so it will affect many organizations based elsewhere. Moreover, the regulation, which is expected to come into force in 2023, is likely to bear similarities to rules currently being drawn up by other government authorities throughout the world.[2] Given the impending heightened focus on new regulations, as well as the potential financial and reputational damage resulting from noncompliance, organizations urgently need to adopt measures that enable them to comply with the requirements of the emerging EU regulation. A comprehensive RAI program, based on BCG's Responsible AI Leader Blueprint, will allow them to act in accordance with and adapt to the proposed EU AI Act and other regulations that will inevitably follow (such as the Algorithmic Accountability Act of 2022 in the US).[3]
Notes: [3] US Congress, 2022, "Algorithmic Accountability Act of 2022."
- North America > United States > Massachusetts (0.05)
- Europe > United Kingdom (0.05)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.91)
How does information about AI regulation affect managers' choices?
Artificial intelligence (AI) technologies have become increasingly widespread over the last decade. As the use of AI has become more common and the performance of AI systems has improved, policymakers, scholars, and advocates have raised concerns. Policy and ethical issues such as algorithmic bias, data privacy, and transparency have gained increasing attention, raising calls for policy and regulatory changes to address the potential consequences of AI (Acemoglu 2021). As AI continues to improve and diffuse, it will likely have significant long-term implications for jobs, inequality, organizations, and competition. Premature deployment of AI products can also aggravate existing biases and discrimination or violate data privacy and protection practices.
- North America > United States > California (0.05)
- North America > United States > Massachusetts > Hampshire County > Northampton (0.04)
- North America > United States > Colorado (0.04)
- Research Report > New Finding (0.95)
- Research Report > Experimental Study (0.69)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Law > Civil Rights & Constitutional Law (0.96)
Proposed US AI Bill Costs May Outweigh Benefits
Senator Ron Wyden (D-Ore.), with Senator Cory Booker (D-N.J.) and Representative Yvette Clarke (D-N.Y.), introduced the Algorithmic Accountability Act of 2022 in early February. The bill aims to bring transparency and oversight to software, algorithms and other automated systems that are used to make automated decisions. "As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalized communities," said Sen. Booker. The bill requires companies to conduct impact assessments for bias, effectiveness and other factors when using automated decision systems to make critical decisions. The bill also gives the Federal Trade Commission (FTC) the authority to require companies to comply with the bill and to create a public repository of these automated systems.
- North America > United States (1.00)
- Europe (0.05)
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)