Algorithmic tool


Amazon-hosted AI tool for UK military recruitment 'carries risk of data breach'

The Guardian

An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment. Data used in the automated system, which improves the drafting of defence job adverts and attracts more diverse candidates by making their language more inclusive, includes the names, roles and emails of military personnel and is stored by Amazon in the US. This means "a data breach may have concerning consequences, ie identification of defence personnel", according to documents detailing government AI systems, published for the first time today. The risk has been judged "low", and the MoD said "robust safeguards" have been put in place by the suppliers: Textio, Amazon Web Services and Amazon GuardDuty, a threat-detection service. But it is one of several risks the government has acknowledged about its use of AI tools in the public sector, in a tranche of documents released to improve transparency about central government's use of algorithms. Official declarations about how the algorithms work stress that mitigations and safeguards are in place to tackle risks, as ministers push to use AI to boost UK economic productivity and, in the words of the technology secretary, Peter Kyle, on Tuesday, "bring public services back from the brink".


Will AI ever be smart enough to decipher federal regulations?

FOX News

Center for AI Safety Director Dan Hendrycks explains concerns about how the rapid growth of artificial intelligence could impact society. A federal agency is pondering whether artificial intelligence might someday be used to help the government identify duplicative or overly burdensome federal rules that need to be cut back. But officials are already hearing from skeptics who doubt AI will ever be powerful enough to wade through and understand the hundreds of thousands of pages of detailed federal rules. The Administrative Conference of the United States (ACUS) is an independent federal agency that works to increase the efficiency and fairness of regulations. In early May, ACUS released a report it commissioned on how AI and other algorithmic tools might be used to conduct retrospective reviews of federal rules to improve them.


How do "technical" design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers?

Yeung, Karen, Harkens, Adam

arXiv.org Artificial Intelligence

This two-part paper argues that seemingly "technical" choices made by developers of machine-learning based algorithmic tools used to inform decisions by criminal justice authorities can create serious constitutional dangers, enhancing the likelihood of abuse of decision-making power and the scope and magnitude of injustice. Drawing on three algorithmic tools in use, or recently used, to assess the "risk" posed by individuals and so inform how they should be treated by criminal justice authorities, we integrate insights from data science and public law scholarship to show how public law principles, and the more specific legal duties rooted in them, are routinely overlooked in algorithmic tool-building and implementation. We argue that technical developers must collaborate closely with public law experts to ensure that, if algorithmic decision-support tools are to inform criminal justice decisions, those tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine, including respect for human rights, throughout the tool-building process.


Snooping on the police: can AI clean up the Met? - Raconteur

#artificialintelligence

Shamed and appalled by the brutal murder of Sarah Everard at the hands of a serving officer, the British public demanded a swift response from the Metropolitan Police Service. A subsequent review into the conduct of officers based at Charing Cross in London unearthed a toxic environment where colleagues bonded over jokes about rape, killing black children and beating their wives. Heads had to roll, starting with the former Met Police Service commissioner Dame Cressida Dick. The poor handling of the Everard case did little to assuage conclusions by its own watchdog that the Met is "systematically and institutionally corrupt". Inspector of Constabulary Matt Parr said that the Met had "sometimes behaved in ways that make it appear arrogant, secretive and lethargic" in response to investigations into dirty cops, and that it did "not have the capability to proactively monitor" communications with any effect, "despite repeated warnings from the inspectorate".


ARTIFICIAL INTELLIGENCE, ETHICS AND EDUCATION BY Aghemo Raffaella

#artificialintelligence

ARTIFICIAL INTELLIGENCE, ETHICS AND EDUCATION BY RAFFAELLA AGHEMO, LAWYER. DRAFT PAPER FOR CALL4PAPERS 2021. We live in a totally digital age. The pandemic crisis of recent years has exacerbated an approach to life that belongs increasingly to the virtual and less and less to the real. In this dimension, which Professor Floridi defines as 'onlife', we all come to terms with new realities, increasingly technological and less and less 'human'. This should not frighten us, but we must 'equip' users to collaborate and interface with new beings, no longer made of flesh and bones but of circuits and transistors. All this progress certainly preludes what is defined as the fourth industrial revolution. Every economic and social change, in order to be well understood and integrated into daily dynamics, must pass through the "school desks", which have themselves been overtaken by DAD (distance learning) systems, laptops and devices.


The Algorithmic Auditing Trap

#artificialintelligence

This op-ed was written by Mona Sloane, a sociologist and senior research scientist at the NYU Center for Responsible A.I. and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence. We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the disproportionate harm algorithms can inflict. This technology has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities, with applications ranging from health care to welfare, hiring, and education.


The Importance of Algorithmic Fairness - IT Peer Network

#artificialintelligence

Algorithmic fairness is a motif that plays throughout our podcast series: as we look to AI to help us make consequential decisions involving people, guests have stressed the risks that the automated systems that we build will encode past injustices and that these decisions may be too opaque. In episode twelve of the Intel on AI podcast, Intel AI Tech Evangelist and host Abigail Hing Wen talks with Alice Xiang, then Head of Fairness, Transparency, and Accountability Research at the Partnership on AI--a nonprofit in Silicon Valley founded by Amazon, Apple, Facebook, Google, IBM, Intel and other partners. With a background that includes both law and statistics, Alice's research has focused on the intersection of AI and the law. "A lot of the benefit of algorithmic systems, if used well, would be to help us detect problems rather than to help us automate decisions." Algorithmic fairness is the study of how algorithms might systemically perform better or worse for certain groups of people and the ways in which historical biases or other systemic inequities might be perpetuated by AI.
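The group-level disparities Xiang describes can be made concrete with a simple metric. The sketch below is illustrative and not drawn from the podcast: it computes the demographic parity gap, the difference in positive-outcome rates between two groups, for a hypothetical model's hiring decisions; the group labels and data are invented for the example.

```python
# Illustrative sketch: measuring how differently a model's positive
# decisions fall on two groups (demographic parity gap). The groups
# and decisions below are hypothetical, not real data.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'hire') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between groups A and B.

    A gap near 0 means both groups are selected at similar rates;
    a large gap flags a disparity worth investigating further.
    """
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical model outputs: 1 = selected, 0 = rejected.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 2/8 = 0.25
print(round(demographic_parity_gap(group_a, group_b), 3))  # 0.375
```

A metric like this detects a disparity but does not explain it; as the episode notes, that is where such systems are most useful: surfacing problems for human review rather than automating the decision.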


Applying Old Rules to New Tools: Employment Discrimination Law in the Age of Algorithms by Matthew U. Scherer, Allan King, Marko Mrkonich :: SSRN

#artificialintelligence

Companies, policymakers, and scholars alike are paying increasing attention to algorithmic recruitment and hiring tools that leverage artificial intelligence, machine learning, and Big Data. Advocates argue that algorithmic employee selection processes can be more effective at choosing the strongest candidates, increasing diversity, and reducing the influence of human prejudices. Many observers, however, express concern about other forms of bias that can infect algorithmic selection procedures, leading to fears that algorithms may create unintended discriminatory effects or mask more deliberate forms of discrimination. This article represents the most comprehensive analysis to date of the legal, ethical, and practical challenges associated with using these tools. The article begins with background on both the nature of algorithmic selection tools and the legal backdrop of antidiscrimination laws. It then breaks down the key reasons why employers, courts, and policymakers will struggle to fit these tools within the existing legal framework.


To decarbonize we must decomputerize: why we need a Luddite revolution

The Guardian

Our built environment is becoming one big computer. "Smartness" is coming to saturate our stores, workplaces, homes, cities. As we go about our daily lives, data is made, stored, analyzed and used to make algorithmic inferences about us that in turn structure our experience of the world. Computation encircles us as a layer, dense and interconnected. If our parents and our grandparents lived with computers, we live inside of them.


Optimization Algorithms in Machine Learning

#artificialintelligence

Optimization provides a valuable framework for thinking about, formulating, and solving many problems in machine learning. Since specialized techniques for the quadratic programming problem arising in support vector classification were developed in the 1990s, there has been more and more cross-fertilization between optimization and machine learning, with the large size and computational demands of machine learning applications driving much recent algorithmic research in optimization. This tutorial reviews the major computational paradigms in machine learning that are amenable to optimization algorithms, then discusses the algorithmic tools that are being brought to bear on such applications. We focus particularly on such algorithmic tools of recent interest as stochastic and incremental gradient methods, online optimization, augmented Lagrangian methods, and the various tools that have been applied recently in sparse and regularized optimization.
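Of the tools the tutorial lists, the stochastic gradient method is the simplest to illustrate. The sketch below is an assumed minimal example, not code from the tutorial: it minimizes an L2-regularized least-squares objective, f(w) = (1/2n) Σᵢ (xᵢ·w − yᵢ)² + (λ/2)‖w‖², by repeatedly sampling one training example and stepping along the gradient of that single term plus the regularizer.

```python
# Minimal sketch of stochastic gradient descent (SGD) on an
# L2-regularized least-squares objective:
#   f(w) = (1/2n) * sum_i (x_i . w - y_i)^2 + (lam/2) * ||w||^2
import random

def sgd(X, y, lam=0.1, lr=0.05, steps=200, seed=0):
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        i = rng.randrange(n)                              # sample one example
        pred = sum(wj * xj for wj, xj in zip(w, X[i]))
        err = pred - y[i]
        # step along the gradient of the sampled term plus the regularizer
        w = [wj - lr * (err * xj + lam * wj) for wj, xj in zip(w, X[i])]
    return w

# Toy data generated by y = 2*x; the regularizer shrinks the learned
# weight slightly below 2.
X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.0, 4.0, 6.0, 8.0]
print(sgd(X, y))
```

Each step costs O(d) regardless of the dataset size n, which is exactly the property that makes stochastic and incremental gradient methods attractive for the large-scale machine learning problems the tutorial discusses; with a fixed step size the iterates hover near the regularized solution rather than converging exactly.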