Surveillance technology


LAPD allowed to use drones as 'first responders' under new program

Los Angeles Times

Citing successes other police departments across the country have seen using drones, the Los Angeles Police Commission said it would allow the LAPD to deploy unmanned aircraft on routine emergency calls. The civilian oversight body approved an updated policy Tuesday allowing drones to be used in more situations, including "calls for service." The new guidelines listed other scenarios for future drone use -- "high-risk incident, investigative purpose, large-scale event, natural disaster" -- and transferred their command from the Air Support Division to the Office of Special Operations. Previously, the department's nine drones were restricted to a narrow set of dangerous situations, most involving barricaded suspects or explosives. Bryan Lium told commissioners the technology offers officers and their supervisors crucial, real-time information about the threats they might encounter while responding to an emergency.


Ordre public exceptions for algorithmic surveillance patents

Wernick, Alina

arXiv.org Artificial Intelligence

This chapter explores the role of patent protection in algorithmic surveillance and whether ordre public exceptions from patentability should apply to such patents, given their potential to enable human rights violations. It concludes that in most cases it is undesirable to exclude algorithmic surveillance patents from patentability, as the patent system is ill-equipped to evaluate the impacts of exploiting such technologies. Furthermore, the disclosure of such patents has positive externalities from a societal perspective, opening the black box of surveillance to public scrutiny.


Application of the NIST AI Risk Management Framework to Surveillance Technology

Swaminathan, Nandhini, Danks, David

arXiv.org Artificial Intelligence

This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement to facilitate companies in managing AI-related risks more robustly and ensuring ethical and responsible deployment of AI systems. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.

Surveillance technologies are increasingly widespread in both public and private spaces, often being developed and deployed with little engagement from relevant stakeholders. Most notably, the individuals subject to the surveillance technology are rarely included in creating that technology. As an illustration of both prominence and controversy, one may consider the AI system developed by Clearview AI Inc. to monitor and record the activities of individuals and groups, including rapid face identification.
Their system has come under close scrutiny for the ways that the organization scraped images and training data from the Internet; the company is currently under investigation in multiple jurisdictions for scraping billions of images from social media sites without users' consent [1, 2], and other companies like Facebook, Twitter, Venmo, and Google have issued cease and desist letters citing violations of their terms of service [3].


5 Years After San Francisco Banned Face Recognition, Voters Ask for More Surveillance

WIRED

San Francisco made history in 2019 when its Board of Supervisors voted to ban city agencies including the police department from using face recognition. About two dozen other US cities have since followed suit. But on Tuesday San Francisco voters appeared to turn against the idea of restricting police technology, backing a ballot proposition that will make it easier for city police to deploy drones and other surveillance tools. Proposition E passed with 60 percent of the vote and was backed by San Francisco Mayor London Breed. It gives the San Francisco Police Department new freedom to install public security cameras and deploy drones without oversight from the city's Police Commission or Board of Supervisors.


FTC bans Rite Aid from using facial surveillance systems for five years

Engadget

Rite Aid will not be able to use any kind of facial recognition security system for the next five years as part of its settlement with the Federal Trade Commission, which accused it of "reckless use of facial surveillance systems." The FTC said in its complaint that the drugstore chain deployed artificial intelligence-powered facial recognition technology from 2012 to 2020 to identify customers who may have previously shoplifted or engaged in problematic behavior. Apparently, the company had created a database with "tens of thousands" of customer images, along with their names, dates of birth and alleged crimes. Those photos were of poor quality, taken by its security cameras, employees' phones and even from news stories. As a result, the system generated thousands of false-positive alerts.


Smile, you're on camera! Self-driving cars are here and they're watching you

The Guardian

If you've spent any time in San Francisco, you might believe we're on the cusp of the self-driving future promised by car makers and the tech industry: a high-tech utopia where roving robot cars pick up and drop off passengers seamlessly and more safely than if they had a human behind the wheel. While the city certainly has one key element down – a small network of driverless cars – the reality is far different and much more awkward and invasive than what the people building the technology once portrayed. What companies pitched were ultra-smart, AI-driven vehicles that make people inside and outside of the cars safer. But in addition to reports that the cars are becoming a frequent impediment to public safety, the always-on, always-recording cameras also pose a risk to personal safety, experts say. A new report from Bloomberg reveals that one of the companies behind the self-driving cars operating in San Francisco, Google-owned Waymo, has been subject to law enforcement requests for footage captured while driving around.


Artificial intelligence without borders

Al Jazeera

Last year, the United States Department of Homeland Security advertised the impending "deployment" on the US-Mexico border of "robot dogs". According to a celebratory feature article published on the department's website, the goal of the programme was to "force-multiply" the presence of US Customs and Border Protection (CBP) as well as to "reduce human exposure to life-threatening hazards". In case there was any doubt as to which human lives were of concern, the article specified: "The American Southwest is a region that blends a harsh landscape, temperature extremes and various other non-environmental threats that can create dangerous obstacles for those who patrol the border." There is no denying that the US-Mexico border is an inhospitable place; just ask the countless refuge seekers who have died trying to navigate it, thanks in large part to ongoing US efforts to effectively criminalise the very right to asylum. And the terrain is becoming ever more hostile with the mad dash to run the entire world on artificial intelligence, border "security" operations to boot. The proliferation of AI-reliant surveillance technology has increasingly forced undocumented people into ever more dangerous territory, where "non-environmental threats" will apparently now also include canine robots.


The next world power will be the first to harness the power of AI, former defense official argues in new book

#artificialintelligence

The global battle for AI dominance is underway, according to author Paul Scharre, a former Army Ranger and current VP and director of studies at the Center for New American Security -- a think tank specializing in national security issues. Scharre previously served as a strategic planner at the Office of the Secretary of Defense, working to establish policies on unmanned and autonomous systems and emerging weapons technologies, and established DOD policies on intelligence, surveillance, and reconnaissance programs. In his latest book, "Four Battlegrounds: Power in the Age of Artificial Intelligence," Scharre explores how the international battle for the most powerful AI technology is changing global power dynamics. That battle, he says, is a global competition to seek the best and most efficient data, computing hardware, human talent, and institutions adopting AI technology -- which will determine the next global superpower. In your new book, you argue there's a battle for global power going on in the form of a revolution brought about by artificial intelligence.


Face Recognition Software Led to His Arrest. It Was Dead Wrong

#artificialintelligence

Maryland is a unique place to debate face recognition regulation, says Andrew Northrup, an attorney in the forensics division of the Maryland Office of the Public Defender. He calls Baltimore "a petri dish for surveillance technology," because the city spends more money per capita on police than any of 72 major US cities, according to a 2021 analysis by the nonprofit Vera Institute of Justice, and has a long history of surveillance technology in policing. The use of invasive surveillance technology including face recognition in Baltimore during protests following the 2015 death of Freddie Gray led former House Oversight and Reform Committee chair Elijah Cummings to interrogate the issue in Congress. And in 2021, the Baltimore City Council voted to place a one-year moratorium on face recognition use by public and private actors, but not police, which expired in December. Northrup spoke in favor of the bill and its requirement for proficiency testing at the same House of Delegates Judiciary Committee hearing addressed by Carronne Sawyer this month.


Top 10 technology and ethics stories of 2022

#artificialintelligence

A major focus of Computer Weekly's technology and ethics coverage in 2022 was on working conditions throughout the tech sector, from the issue of forced labour and slavery throughout technology supply chains, to UK Amazon workers staging spontaneous "wildcat" strikes in response to derisory pay rises and warehouse conditions. Other stories in this vein included coverage of accusations that "soft union-busting" tactics were used by app-based food delivery firm Deliveroo to scupper its workers' grassroots organising efforts, and the ongoing court case against five major tech firms for their alleged role in the maiming and deaths of people extracting raw materials in the Democratic Republic of Congo. Artificial intelligence (AI) also featured heavily in Computer Weekly's technology and ethics coverage in 2022, with stories published on the tech sector's lacklustre commitment to "ethical" AI, as well as on the pitfalls and challenges of auditing AI-powered algorithms. Police technology was another major focus of 2022, as policing bodies continue to push ahead with new tech deployments such as live facial recognition (LFR) despite serious concerns about its effectiveness, proportionality and efficacy. Other stories focused on how technology is developed and deployed, and the underlying power dynamics at play.