Every few months, the U.S. National Institute of Standards and Technology (NIST) releases the results of benchmark tests it conducts on facial recognition algorithms submitted by companies, universities, and independent labs. A portion of these tests focuses on demographic performance -- that is, how often the algorithms misidentify a Black man as a white man, a Black woman as a Black man, and so on. Stakeholders are quick to say that the algorithms are constantly improving with regard to bias, but a VentureBeat analysis reveals a different story. In fact, our findings cast doubt on the notion that facial recognition algorithms are becoming better at recognizing people of color. That isn't surprising, as numerous studies have shown facial recognition algorithms are susceptible to bias.
Mashable's series Algorithms explores the mysterious lines of code that increasingly control our lives -- and our futures. From dating apps, to news feeds, to streaming and purchase recommendations, we have become accustomed to a subtle prodding by unseen instruction sets, themselves generated by unnamed humans or opaque machines. But there is another, not-so-gentle side to the way algorithms affect us. A side where the prodding is more forceful, and the consequences more lasting than a song not to your liking, a product you probably shouldn't have bought, or even a date that fell flat. Automated license plate readers have resulted in children being held at gunpoint. Algorithms have the power to drive pain and oppression, at scale, and unless there is an intentional, systematic effort to push back, our ever-increasing reliance on algorithmic decision-making will only lead us further down a dark path.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
It is hard to know what it means when a global religious figure, two iconic technology giants and the Pentagon all find themselves on the same side of an argument. The U.S. Department of Defense issued five principles Feb. 24 for its own use of artificial intelligence, including biometric systems like facial recognition. Systems need to be responsible, equitable, traceable, governable and reliable. Four days later, at the end of a Vatican workshop examining artificial intelligence ethics and law, Pope Francis, Microsoft Corp., IBM Corp. and other invited organizations called for "new forms of regulation" and six principles that overlap with the Defense Department's list. The document, titled Rome Call for AI Ethics and backed by the Pope, says every stage and aspect of artificial intelligence must adhere to ideals of transparency, inclusion, responsibility, impartiality, reliability, security and privacy.
The U.S. Special Operations Command (SOCOM) is developing a portable facial recognition system that can identify individuals from 1 kilometer (0.6 miles) away. The Advanced Tactical Facial Recognition at a Distance Technology project demonstrated a working prototype last year, and its use could be extended to drones. Long-range face-recognition device manufacturer Secure Planet is developing the system, which must render captured images clearly enough for software to identify the faces in them. Secure Planet bases its devices on digital single-lens reflex cameras paired with commercial face-recognition software running on a standard laptop.
The US military is developing a portable face-recognition device capable of identifying individuals from a kilometre away. The Advanced Tactical Facial Recognition at a Distance Technology project is being carried out for US Special Operations Command (SOCOM). It commenced in 2016, and a working prototype was demonstrated in December 2019, paving the way for a production version. SOCOM says the research is ongoing, but declined to comment further. Initially designed for hand-held use, the technology could also be used from drones.
The U.S. military is spending more than $4.5 million to develop facial recognition technology that reads the pattern of heat emitted by faces in order to identify specific people. The technology would work in the dark and across long distances, according to contracts posted on a federal spending database. Facial recognition is already employed by the military, which uses the technology to identify individuals on the battlefield. But existing facial recognition technology typically relies on images generated by standard cameras, such as those found in iPhones or CCTV networks. Now, the military wants to develop a facial recognition system that analyzes infrared images to identify individuals.
Over the last 15 years, the United States military has developed a new addition to its arsenal. The weapon is deployed around the world, largely invisible, and grows more powerful by the day. That weapon is a vast database, packed with millions of images of faces, irises, fingerprints, and DNA data -- a biometric dragnet of anyone who has come in contact with the U.S. military abroad. The 7.4 million identities in the database range from suspected terrorists in active military zones to allied soldiers training with U.S. forces. "Denying our adversaries anonymity allows us to focus our lethality. It's like ripping the camouflage netting off the enemy ammunition dump," wrote Glenn Krizay, director of the Defense Forensics and Biometrics Agency, in notes obtained by OneZero.
Today, many companies claim to help security firms, the military, and consumers prevent crime and protect their persons, homes, and buildings. This article aims to give business leaders in the security space a sense of what they can currently expect from AI in their industry. We hope this report allows company leaders in security to garner insights they can confidently relay to their executive teams so they can make informed decisions when considering AI adoption. At minimum, this article aims to reduce the time industry leaders in physical security spend researching AI companies with whom they might (or might not) be interested in working. Evolv Technology claims to offer a physical security system consisting of the Evolv Edge personnel threat screening machine, which works with the Evolv Pinpoint automated facial recognition application.
A coalition of activist groups representing more than 15 million combined members is pushing for a federal ban on law enforcement's use of facial recognition technology. The groups, which are planning to blanket lawmakers with emails and phone calls, are coming together under BanFacialRecognition, which was organized by the digital rights group Fight for the Future as a way to show the public exactly where and how the controversial surveillance technology is being used nationwide. "Facial recognition is one of the most authoritarian and invasive forms of surveillance ever created, and it's spreading like an epidemic. The technology has been banned by three cities -- Oakland and San Francisco in California, and Somerville, Mass. Fight for the Future, along with more than two dozen other organizations, is calling for a total ban on facial recognition technology at the federal level. Two tests of Amazon's facial recognition software, which the tech giant claims can now detect "fear," falsely labeled California state lawmakers and members of Congress as criminal suspects. Most of the false positives were people of color in both tests. The Jeff Bezos-led company has said that it encourages law enforcement agencies to use 99 percent confidence ratings for public safety applications of the technology. Amazon's Ring security service, which deploys facial recognition technology, is reportedly working with more than 200 police departments. "When using facial recognition to identify persons of interest in an investigation, law enforcement should use the recommended 99 percent confidence threshold, and only use those predictions as one element of the investigation" and not the sole determinant, the company said in a blog post earlier this year.
The grassroots coalition, which includes Consumer Action, Restore the Fourth, Electronic Privacy Information Center, Color of Change, United We Dream and Media Justice, is united in the belief that regulating the technology isn't enough. "We live in the land of the free and the home of the brave.