This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, examining the relationships among ethics, technology, and society in action. It engages with the normative and contested notion of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.
There is mounting public concern over the influence that AI-based systems have in our society. Coalitions across all sectors are acting worldwide to resist harmful applications of AI. From Indigenous peoples addressing the lack of reliable data, to smart city stakeholders, to students protesting academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. Biased, wrongful, and disturbing assumptions embedded in AI algorithms could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.
Ransomware represents the biggest threat to online security for most people and businesses in the UK, the head of GCHQ's cybersecurity arm is to warn. Lindy Cameron, chief executive of the National Cyber Security Centre, will say in a speech that the phenomenon, where hackers encrypt data and demand payment for it to be restored, is escalating and becoming increasingly professionalised. Speaking to the Rusi thinktank on Monday, Cameron will say that while spying online by Russia, China and other hostile states remains a "malicious strategic threat", it is the ransomware crisis that has become most urgent. "For the vast majority of UK citizens and businesses, and indeed for the vast majority of critical national infrastructure providers and government service providers, the primary key threat is not state actors but cybercriminals," Cameron is to say. Ransomware incidents have soared over the past two years globally as criminal gangs operating from countries such as Russia and other former Soviet states, which turn a blind eye to their activities, generate tens of millions of dollars by extorting money from companies. In May, a US oil network, Colonial Pipeline, was shut down after hackers obtained access via a compromised password, forcing the business to shut down for several days.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
Insurers are inadvertently funding organised crime by paying out claims from companies that have paid ransoms to regain access to data and systems after a hacking attack, Britain's former top cybersecurity official has warned. Ciaran Martin, who ran the National Cyber Security Centre until last August, said he feared that so-called ransomware was "close to getting out of control" and that there was a risk that NHS systems could be hit during the pandemic. The problem, he said, is being fuelled because there is no legal barrier to companies paying ransoms to cyber gangs – typically from Russia and some other former Soviet states – and claiming back on insurance. "People are paying bitcoin to criminals and claiming back cash," Martin said. "I see this as so avoidable. At the moment, companies have incentives to pay ransoms to make sure this all goes away," the former intelligence chief said.
Police forces in the UK are trialling the use of drones to provide air support to forces on the ground in cases where deploying a helicopter or an aeroplane might be less practical. The National Police Air Service (NPAS), the police aviation service that assists territorial police forces in England and Wales, is evaluating how drone technology might complement its existing national fleet of helicopters and aeroplanes. First trials of the technology kicked off at West Wales Airport near Aberporth, and included various typical scenarios that the NPAS's fleet might be confronted with. Typically, police forces request NPAS to assist them with tasks such as searching for suspects or missing people, vehicle pursuits, public order, counter-terrorism and firearms incidents.
Machine behavior based on learning algorithms can be significantly influenced by exposure to data of different qualities. Until now, those qualities have been measured solely in technical terms, not ethical ones, despite the significant role of training and annotation data in supervised machine learning. This study is the first to fill that gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that the different social and psychological backgrounds of individuals correlate in practice with different modes of human-computer interaction, the paper describes from an ethical perspective how the varying qualities of behavioral data that individuals leave behind while using digital technologies have socially relevant ramifications for the development of machine learning applications. The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from, establishing an innovative filter regime to transition from the big-data rationale n = all to a more selective way of processing data for training sets in machine learning. The overarching aim of this research is to promote methods for achieving beneficial machine learning applications that could be widely useful for industry as well as academia.
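The filter regime described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the paper's actual method: the `ethical_quality` scorer, the `consent_score` field, and the threshold value are all hypothetical stand-ins for whatever documented ethical assessment a real pipeline would encode.

```python
# Sketch: moving from the "n = all" rationale to a selective filter regime,
# where each training sample carries an ethical-quality assessment of the
# behavioral data it originates from. All names here are hypothetical.

def ethical_quality(sample: dict) -> float:
    """Hypothetical scorer returning a 0..1 rating of how ethically sound
    the behavioral data behind this sample is (e.g. whether it was collected
    with informed consent). A real scorer would encode a documented
    ethical assessment rather than read a single field."""
    return sample.get("consent_score", 0.0)

def select_training_data(samples: list[dict], threshold: float = 0.7) -> list[dict]:
    """Keep only samples whose ethical quality meets the threshold,
    instead of training on all available data (n = all)."""
    return [s for s in samples if ethical_quality(s) >= threshold]

corpus = [
    {"text": "a", "consent_score": 0.9},
    {"text": "b", "consent_score": 0.3},  # filtered out: below threshold
    {"text": "c", "consent_score": 0.8},
]
filtered = select_training_data(corpus)
print([s["text"] for s in filtered])  # ['a', 'c']
```

The design point is simply that the ethical assessment becomes an explicit, inspectable step before training, rather than an implicit property of whatever data happened to be available.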
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Artificial intelligence (AI) is a technology increasingly utilised in society and the economy worldwide, and its implementation is expected to become more prevalent in coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. Those concerns have stimulated a global conversation on AI ethics and have resulted in various actors from different countries and sectors issuing ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.