Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.
How would you feel being watched, tracked and identified by facial recognition cameras everywhere you go? Facial recognition cameras are now creeping onto the streets of Britain and the U.S., yet most people aren't even aware. As we walk around, our faces could be scanned and subjected to a digital police lineup we don't even know about. There are over 6 million surveillance cameras in the U.K. – more per citizen than any other country in the world, except China. In the U.K., biometric photos are taken and stored of people whose faces match with criminals – even if the match is incorrect. As director of the U.K. civil liberties group Big Brother Watch, I have been investigating the U.K. police's "trials" of live facial recognition surveillance for several years.
A new report details what privacy experts are calling a dangerous misapplication of facial recognition that uses photos of celebrities and digitally doctored images to comb for criminals. According to a detailed investigation by Georgetown Law's Center on Privacy and Technology, one New York Police Department detective attempted to identify a suspect by scanning the face of actor Woody Harrelson. After footage from a security camera failed to produce results in a facial recognition scan, the detective used Google images of what he concluded to be the suspect's celebrity doppelganger -- Woody Harrelson -- to run a test. The system turned up a match, the report says, and the man was eventually arrested on charges of petit larceny. In a new report from Georgetown University, an investigation shows that police have used celebrities to help their facial recognition software identify suspects.
AI encompasses an array of technologies, from fully automated or autonomous intelligence to assisted or augmented intelligence. Financial firms are already deploying some relatively simple AI tools, such as intelligent process automation (IPA), which handles non-routine tasks and processes that require judgment and problem-solving to free employees to work on more valuable jobs. Banks have been using AI to redesign their fraud detection and anti-money laundering efforts for a while, and investment firms are starting to use AI to execute trades, manage portfolios, and provide personalized service to their clients. Insurance organizations, in turn, have been turning to AI--and especially machine learning (ML)--to enhance products, pricing, and underwriting; strengthen the claims process; predict and prevent fraud; and improve customer service and billing. But before financial institutions can reap all of AI's benefits, they must first overcome challenges, including security, privacy, bias, and regulatory issues.
A self-driving shuttle got pulled over by police on its first day carrying passengers on a new Rhode Island route. Providence Police Chief Hugh Clements says an officer pulled over the odd-looking autonomous vehicle because he had never seen one before. The bus-like vehicle operated by Michigan-based May Mobility was dropping off passengers Wednesday morning when a police cruiser arrived with blinking lights and a siren. It was just hours after the public launch of a state-funded pilot shuttle service. The shuttle offers free rides on a 12-stop urban loop.
An advanced research arm of the U.S. government's intelligence community is looking to develop AI capable of tracking people across a vast surveillance network. As reported by Nextgov, the Intelligence Advanced Research Projects Activity (IARPA) has put out a call for more information on developing an algorithm that can be trained to identify targets by visually analyzing swaths of security camera footage. The goal, says the request, is to be able to identify and track subjects across areas as large as six miles in an effort to reconstruct crime scenes, protect military operations, and monitor critical infrastructure facilities. To develop the technology, IARPA will collect nearly 1,000 hours of video surveillance from at least 20 camera networks and then, using that sample, test various algorithms' effectiveness. The agency's interest in AI-based surveillance technology mirrors a broader movement from governments and intelligence communities around the globe, many of whom have ramped up efforts to develop and scale systems.
San Francisco supervisors approved a ban on police using facial recognition technology, making it the first city in the U.S. with such a restriction. SAN FRANCISCO – San Francisco supervisors voted Tuesday to ban the use of facial recognition software by police and other city departments, becoming the first U.S. city to outlaw a rapidly developing technology that has alarmed privacy and civil liberties advocates. The ban is part of broader legislation that requires city departments to establish use policies and obtain board approval for surveillance technology they want to purchase or are using at present. Several other local governments require departments to disclose and seek approval for surveillance technology. "This is really about saying: 'We can have security without being a security state. We can have good policing without being a police state.' And part of that is building trust with the community based on good community information, not on Big Brother technology," said Supervisor Aaron Peskin, who championed the legislation.
To gauge the accuracy of machine learning models we use various metrics. The metrics used here will be Average Accuracy, False Positive Rate and False Negative Rate. K-Means is excluded from these metrics as it is an unsupervised algorithm. Average Accuracy is defined as the ratio of correctly classified data points to the total number of data points. False Positives are benign cases that the model incorrectly flags as threats; False Negatives are the opposite: actual threats the model fails to flag.
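As a concrete illustration (a minimal sketch, not the study's actual evaluation code), the three metrics can be computed from binary labels, where 1 marks a threat; the function name and sample data below are invented for this example:

```python
# Hypothetical sketch: Average Accuracy, False Positive Rate, and
# False Negative Rate for a binary threat classifier (1 = threat).
def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # benign flagged as threat
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # threat missed
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # share of benign cases mis-flagged
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # share of threats missed
    return accuracy, fpr, fnr

# Invented sample labels for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
acc, fpr, fnr = classification_metrics(y_true, y_pred)
```

Note that FPR is normalized by the number of benign cases and FNR by the number of actual threats, so the two rates can diverge sharply on imbalanced data even when accuracy looks high.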
Chinese researchers have developed a new camera technology that can render human-sized subjects as far as 28 miles away. According to a paper from researcher Zheng-Ping Li published on the open-access preprint server arXiv, the camera technology can cut through smog and other pollution using a mixture of laser imaging and advanced AI software. While LIDAR technology, which stands for Light Detection and Ranging, has been used previously in other cameras and imaging techniques, researchers say new software helps to mitigate the noise of predecessors. In a technique called 'gating,' software helps ignore photons reflected by other objects in the camera's field of view.
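To illustrate the gating idea (a hypothetical sketch under simplified assumptions, not the researchers' implementation), range gating amounts to keeping only photon returns whose round-trip time falls inside a narrow window around the expected target distance; `gate_photons` and its parameters are invented for this example:

```python
# Illustrative sketch of time-of-flight "gating": a photon reflected from a
# target at range R returns after roughly t = 2R / c, so returns outside a
# narrow time window around that value are discarded as clutter or noise.
C = 299_792_458.0  # speed of light, m/s

def gate_photons(arrival_times_s, target_range_m, window_m):
    """Keep photon arrival times consistent with the target range +/- window."""
    lo = 2 * (target_range_m - window_m) / C
    hi = 2 * (target_range_m + window_m) / C
    return [t for t in arrival_times_s if lo <= t <= hi]

# Target at ~45 km (roughly 28 miles), 50 m gate window.
returns = [2 * 45_000 / C,   # photon from the target range: kept
           2 * 45_020 / C,   # within the 50 m window: kept
           2 * 40_000 / C]   # reflection from a nearer object: rejected
kept = gate_photons(returns, target_range_m=45_000, window_m=50)
```

In practice the gate is swept across many candidate ranges and combined with the paper's computational reconstruction, but the core filtering step is this simple time-window test.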
Washington, DC – In the future, artificial intelligence could augment the background investigative work performed by humans, cutting the time it takes and providing a more in-depth and realistic profile of the individual, the technical director for research and development and technology transfer at the Defense Security Service's National Background Investigative Services said recently. Mark Nehmer spoke at the "Genius Machines: The New Age of Artificial Intelligence" event, hosted by Nextgov and Defense One in Arlington, Virginia. Millions of service members, federal employees and contractors receive background checks and are issued clearances on a periodic basis. There are several problems with the current system of background investigations, Nehmer said. The use of artificial intelligence, or AI, could significantly reduce the time investigations take, ease the strain on already-overworked personnel and reduce the backlog of cases, Nehmer said.