How AI can empower communities and strengthen democracy


Each Fourth of July for the past five years I've written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good. This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes.

Machine Learning and its Application in Accounts Payable


Automated tools for accounts payable processes were in place even before machine learning entered business software. The problem was that those automated tools could not adapt to changes and required regular adjustments to their implementation, which in turn called for retraining their operators. Optimized machine learning algorithms enable seamless scanning of electronic information received from vendors, such as emails, and assign general ledger codes faster. The machine learning software might require human assistance at the beginning, but with time it improves, eventually handling data entry accurately without assistance. Machine learning algorithms in accounts payable software are also useful for detecting fraud by spotting inconsistencies in vendor details.
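To make the idea concrete, here is a minimal, hypothetical sketch of the workflow described above: scoring invoice text against keyword associations learned from past human-coded invoices, and deferring to a human reviewer when the evidence is weak or ambiguous. The mapping, function names, and GL codes below are invented for illustration; a production system would learn these associations with a trained model rather than a hand-written table.

```python
from collections import Counter

# Hypothetical keyword-to-GL-code associations, standing in for what a
# trained model would learn from historical, human-coded invoices.
KEYWORD_TO_GL = {
    "software": "6500",  # IT expenses
    "license": "6500",
    "paper": "6100",     # office supplies
    "toner": "6100",
    "flight": "6300",    # travel
    "hotel": "6300",
}

def assign_gl_code(invoice_text):
    """Score each GL code by keyword matches; defer to a human when unsure."""
    votes = Counter()
    for word in invoice_text.lower().split():
        code = KEYWORD_TO_GL.get(word.strip(".,"))
        if code:
            votes[code] += 1
    if not votes:
        return None  # no evidence: route to a human reviewer
    ranked = votes.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie between codes: route to a human reviewer
    return ranked[0][0]

print(assign_gl_code("Annual software license renewal"))  # "6500"
print(assign_gl_code("Miscellaneous services"))           # None (human review)
```

The `None` fallback mirrors the "human assistance at the beginning" described above: low-confidence items stay in a review queue, and the corrected labels become new training data.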

Interesting AI/ML Articles You Should Read This Week (July 4)


Would you let a machine learning model with a failure rate of 98% and a false positive rate of 81% into production? Well, these claimed performance figures come from a facial recognition system in use by the police in South Wales and other parts of the United Kingdom. Dave Gershgorn's article opens with a description akin to the setting of a dystopian future in which an overseeing governing system monitors everyone; unnervingly, it reads as a foreshadowing of a foreseeable future. South Wales Police have been using facial recognition systems since 2017 and have done so openly, with no secrecy from the public. They have made arrests as a result of the facial recognition system.

Why IBM Decided to Halt all Facial Recognition Development


In a letter to Congress sent on June 8th, IBM's CEO Arvind Krishna made a bold statement regarding the company's policy toward facial recognition. "IBM no longer offers general purpose IBM facial recognition or analysis software," says Krishna. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." The company has halted all facial recognition development and disapproves of any technology that could lead to racial profiling. The ethics of face recognition technology have been in question for years. However, there has been little to no movement in the enactment of official laws barring the technology.

Detroit police chief cops to 96-percent facial recognition error rate


Detroit's police chief admitted on Monday that facial recognition technology used by the department misidentifies suspects about 96 percent of the time. It's an eye-opening admission given that the Detroit Police Department is facing criticism for arresting a man based on a bogus match from facial recognition software. Last week, the ACLU filed a complaint with the Detroit Police Department on behalf of Robert Williams, a Black man who was wrongfully arrested for stealing five watches worth $3,800 from a luxury retail store. Investigators first identified Williams by doing a facial recognition search with software from a company called DataWorks Plus. Under police questioning, Williams pointed out that the grainy surveillance footage obtained by police didn't actually look like him.

Fraud Detection with AI and Machine Learning


Clearly, the methods of past years have ceased to be effective. Even fraud detection with AI and machine learning is neither a magic pill nor an absolute guarantee of protection. However, nothing better has been invented so far, so it makes sense to learn how ML solutions and fraud detection analysis can make your business more secure and your customers more confident in your services. The very concept of detecting fraud using machine learning rests on the idea that legitimate and illegal actions have different characteristics. Moreover, these signs can be completely invisible to the human eye. A machine learning system for recognizing fraud proceeds from its knowledge of legitimate operations, compares that knowledge with events occurring in real time, and draws a conclusion about the validity or illegality of a given action.
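The "learn what legitimate looks like, then flag deviations" idea above can be sketched in a few lines. This is a toy illustration only, assuming a single numeric feature (transaction amount) and a simple z-score test against known-good history; the function names and threshold are invented, and real systems use many features and far more sophisticated models.

```python
import statistics

def fit_baseline(legit_amounts):
    """Learn what 'legitimate' looks like: mean and spread of known-good amounts."""
    return statistics.mean(legit_amounts), statistics.stdev(legit_amounts)

def is_suspicious(amount, baseline, threshold=3.0):
    """Flag an event whose amount deviates strongly from the learned pattern."""
    mean, stdev = baseline
    z_score = abs(amount - mean) / stdev
    return z_score > threshold

# Known-legitimate transaction amounts for one customer (illustrative data).
baseline = fit_baseline([42.0, 38.5, 45.2, 40.1, 39.9, 43.3])
print(is_suspicious(41.0, baseline))    # typical amount -> False
print(is_suspicious(5000.0, baseline))  # extreme outlier -> True
```

The key property, as the paragraph notes, is that the system compares incoming events against its model of legitimate behavior rather than against a fixed list of known fraud patterns.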

Council Post: AI Is Amazing But Complicated, And We Don't Necessarily Need To Plunge In Headfirst


Eric Hutto is President and Chief Operating Officer at Unisys Corporation. Artificial intelligence (AI) can help humans address many challenges, but it also creates challenges. We know AI has biases. We understand that AI may or may not draw fair and ethical conclusions all the time. Yet it's clear that AI is going to happen anyway.

'Face Recognition Risks Chilling Our Ability to Participate in Free Speech'


Janine Jackson interviewed the Center on Privacy and Technology's Clare Garvie about facial recognition rules for the June 26, 2020, episode of CounterSpin. This is a lightly edited transcript. Janine Jackson: Robert Williams, an African-American man in Detroit, was falsely arrested when an algorithm declared his face a match with security footage of a watch store robbery. Boston City Council voted this week to ban the city's use of facial recognition technology, part of an effort to move resources from law enforcement to community, but also out of concern about dangerous mistakes like that in Williams' case, along with questions about what the technology means for privacy and free speech. As more and more people go out in the streets and protest, what should we know about this powerful tool, and the rules--or lack thereof--governing its use?

Security firm Ring works with US police with 'deadly histories'

Daily Mail - Science & tech

Amazon may have banned police from using its facial recognition technology, but a new report shows the tech giant is providing thousands of departments with video and audio footage from Ring. The Electronic Frontier Foundation, a nonprofit that defends civil liberties, found over 1,400 agencies are working with the Amazon-owned company, and hundreds of them have 'deadly histories.' Data from sources reveals half of the agencies had at least one fatal encounter in the last five years and altogether are responsible for a third of fatal encounters nationwide. These departments are also involved in the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos and Sean Monterrosa.

Tools to Spot Deepfakes and AI-Generated Text - KDnuggets


With the emergence of incredibly powerful machine learning technologies, such as Deepfakes and Generative Neural Networks, it is all the easier now to spread false information. In this article, we will briefly introduce deepfakes and generative neural networks, as well as a few ways to spot AI-generated content and protect yourself against misinformation. I have many elderly relatives and some middle-aged relatives that just aren't well-versed with technology. Some of these people believe nearly anything they read, or at least believe it enough to share it on social media. While that doesn't sound so bad, it depends on what you are sharing.
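One crude, automatable signal sometimes used to spot machine-generated text is unusually repetitive phrasing, since some text generators fall into loops. Below is a hypothetical toy sketch of that heuristic: the fraction of n-grams in a passage that occur more than once. The function name and threshold interpretation are invented for illustration; this is nowhere near a reliable detector on its own.

```python
def repeated_ngram_ratio(text, n=3):
    """Fraction of word n-grams that occur more than once in the text.

    High values can hint at the looping, repetitive output of some
    text generators; low values are typical of varied human prose.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    repeated = sum(1 for gram in ngrams if ngrams.count(gram) > 1)
    return repeated / len(ngrams)

print(repeated_ngram_ratio("the cat sat on the mat the cat sat on the mat"))  # 0.8
print(repeated_ngram_ratio("machine learning helps detect fraud today"))     # 0.0
```

In practice, detectors combine many such statistical signals with learned models, and even then adversarial content can evade them; skepticism and source-checking remain the best defense.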