

Minority groups sound alarm on AI, urge feds to protect 'equity and civil rights'

FOX News

The growing use of artificial intelligence will likely lead to biased and discriminatory outcomes for minorities and disabled people, several groups warned the federal government this week. The National Artificial Intelligence Advisory Committee, an interagency group led by the Commerce Department, held a public hearing online Tuesday aimed at informing policymakers about how the government can best manage the use of AI. Most of the witnesses told panelists that bias and discrimination are the biggest fears for the people they represent. Patrice Willoughby, vice president of policy and legislative affairs at the NAACP, told panelists that technology has already been used as a means to disenfranchise and mislead voters, and said her group worries about AI for the same reason.

5 things conservatives need to know before AI wipes out conservative thought altogether

FOX News

The "Godfather of A.I.," Geoffrey Hinton, quit Google out of fear that his former employer intends to deploy artificial intelligence in ways that will harm human beings. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton recently told The New York Times. But stomping out the door does nothing to atone for his own actions, and it certainly does nothing to protect conservatives – who are the primary target of A.I. programmers – from being canceled. Here are five things to know as the battle over A.I. turns hot: Elon Musk recently revealed that Google co-founder Larry Page and other Silicon Valley leaders want AI to establish a "digital god" that "would understand everything in the world."

Axon's Taser Drone Plans Prompt AI Ethics Board Resignations


A majority of Axon's AI ethics board resigned in protest yesterday, following an announcement last week that the company planned to equip drones with Tasers and cameras as a way to end mass shootings in schools. The company backed down on its proposal Sunday, but the damage had been done. Axon had first asked the advisory board to consider a pilot program to outfit a select number of police departments with Taser-drones last year, and again last month. A majority of the ethics advisory board, which comprises AI ethics experts, law professors, and police reform and civil liberties advocates, opposed it both times. Advisory board chairman Barry Friedman told WIRED that Axon never asked the group to review any scenario involving schools, and that launching the pilot program without addressing previously stated concerns is dismissive of the board and its established process.

A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized

Artificial intelligence (AI) systems operate in increasingly diverse areas, from healthcare to facial recognition, the stock market, autonomous vehicles, and so on. While the underlying digital infrastructure of AI systems is developing rapidly, each area of implementation is subject to different degrees and processes of legitimization. By combining elements from institutional theory and information systems theory, this paper presents a conceptual framework to analyze and understand AI-induced field change. The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions, while existing institutional infrastructures determine the scope and speed at which organizational change is allowed to occur. Where institutional infrastructure and governance arrangements, such as standards, rules, and regulations, are still underdeveloped, the field can move fast but is also more likely to be contested. The institutional infrastructure surrounding AI-induced fields is generally sparse, which could be an obstacle to the broader institutionalization of AI systems going forward.