Law Enforcement & Public Safety


Facial recognition software company reveals it suffered a security breach exposing its entire client list

Daily Mail - Science & tech

Facial recognition software provider Clearview AI has revealed that its entire client list was stolen by someone who 'gained unauthorized access' to company documents and data. According to a notice sent to its customers, Clearview AI said that in addition to its client list, the intruder had gained access to the number of user accounts associated with each client, as well as the number of searches conducted through those accounts. The company didn't specify how the security breach had occurred nor who might have been responsible, and it claimed its servers and internal network hadn't been compromised. 'Unfortunately, data breaches are part of life in the 21st century,' Clearview attorney Tor Ekeland told The Daily Beast, which broke the story. 'Our servers were never accessed.'


The Challenge Of Analytics Growth In The Public Sector

#artificialintelligence

Although the opportunities to apply analytics in the public sector are abundant, cultural and technical challenges must be overcome before government agencies can claim to be fully developed, enterprise-wide, analytically competitive organizations. Building an analytical culture where data is widely used to evaluate hypotheses is crucial for an analytically competitive organization. Despite the successes that the public sector has seen in the past with analytics, data analysis is not integrated into most decision-making processes. This can partly be attributed to the enormous variety of tasks, across many different fields, that government organizations perform. In such varied environments, a one-size-fits-all approach to cultural change is often ineffective, and customized approaches to training, policies, and incentives are necessary.


Clearview AI has billions of our photos. Its entire client list was just stolen

#artificialintelligence

New York (CNN Business) Clearview AI, a startup that compiles billions of photos for facial recognition technology, said it lost its entire client list to hackers. The company said it has patched the unspecified flaw that allowed the breach to happen. In a statement, Clearview AI's attorney Tor Ekeland said that while security is the company's top priority, "unfortunately, data breaches are a part of life. Our servers were never accessed." He added that the company continues to strengthen its security procedures and that the flaw has been patched.


Controversial facial recognition company Clearview AI just had its entire client list stolen

#artificialintelligence

In recent months, Clearview AI has been attacked from all sides by lawmakers, tech giants, and privacy advocates for its business practices, which include scraping public images of people from sites like LinkedIn, Venmo, Facebook, and YouTube. Clearview AI's systems then allow clients to search for people in its database using these scraped images. While several law enforcement agencies are known to use Clearview AI's services, the breach of its entire client list may prove embarrassing for other client organizations that wish to remain unknown. For now, however, it looks like Clearview AI's client list hasn't been made public, at least not yet. Clearview AI disclosed the breach in an email to clients, saying an intruder "gained unauthorized access" to the client list.


Met police chief: facial recognition technology critics are ill-informed

The Guardian

The Metropolitan police commissioner, Cressida Dick, has attacked critics of facial recognition technology for using arguments she claims are highly inaccurate and ill-informed. The Met began operational use of the technology earlier this month despite concerns raised about its accuracy and privacy implications by civil liberties groups, including Amnesty International UK, Liberty and Big Brother Watch (BBW). On Monday, speaking at the Royal United Services Institute (Rusi) in central London, which has just launched its own report expressing reservations about the rollout of new technology in policing, Dick launched an impassioned defence of its use. "I and others have been making the case for the proportionate use of tech in policing, but right now the loudest voices in the debate seem to be the critics, sometimes highly incorrect and/or highly ill-informed," she said. "And I would say it is for the critics to justify to victims of crimes why police shouldn't use tech lawfully and proportionately to catch criminals."


Dummy cops to get cameras, Artificial Intelligence

#artificialintelligence

Police chief says mannequins will have facial recognition cameras to fight crime, spot traffic offenders and fine drunk drivers; American and French police show interest in the new tech. Disruptive technologies such as Artificial Intelligence (AI) will soon empower mannequins to fight crime, spot traffic offenders, fine drunk drivers and rein in criminals across the city, a top official said. "We will soon have artificial eyes fixed in mannequins as cameras with a small AI-linked computing device inside them for facial recognition through a well-connected central server," City Police Commissioner Bhaskar Rao said. The mannequins, however, will not be permanent fixtures at a given place but operate in a hide-and-seek mode. "The AI software will locate the culprits, tip off the police about the number of violations one has committed, count the traffic slips registered against the same vehicle, estimate the penalty amount and alert the police," said Rao. On how futuristic dummies and connected police officers work, Rao said a drunk driver caught on MG Road will be identified by the mannequin even at a far-away junction to relay information to the control room through facial recognition.
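As a rough illustration of the workflow Rao describes (on-device face match, central-server lookup of prior violations, penalty estimate, alert to the control room), here is a minimal Python sketch. Every name in it (match_face, ViolationRecord, FINE_PER_VIOLATION, the toy database) is a hypothetical stand-in under stated assumptions; nothing about the real system's design has been published.

# Hypothetical sketch of the mannequin-to-control-room workflow described above.
# All names and values are illustrative assumptions, not part of any real deployment.
import numpy as np
from dataclasses import dataclass

@dataclass
class ViolationRecord:
    plate: str
    prior_violations: int          # traffic slips registered against the vehicle

# Toy "central server": enrolled face embeddings mapped to violation records.
CENTRAL_DB = {
    "driver_001": (np.array([0.1, 0.9, 0.3]), ViolationRecord("KA-01-AB-1234", 3)),
}

FINE_PER_VIOLATION = 500  # assumed flat penalty per prior violation, for illustration

def match_face(embedding, threshold=0.9):
    # Return the best-matching enrolled identity if cosine similarity clears the threshold.
    best_id, best_score = None, 0.0
    for person_id, (enrolled, _) in CENTRAL_DB.items():
        score = float(np.dot(embedding, enrolled) /
                      (np.linalg.norm(embedding) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

def handle_sighting(embedding):
    # Mannequin camera -> central-server lookup -> alert to the control room.
    person_id = match_face(embedding)
    if person_id is None:
        return None
    record = CENTRAL_DB[person_id][1]
    alert = {
        "person_id": person_id,
        "vehicle": record.plate,
        "prior_violations": record.prior_violations,
        "estimated_penalty": record.prior_violations * FINE_PER_VIOLATION,
    }
    print("ALERT to control room:", alert)
    return alert

# A real camera frame would be turned into an embedding by an on-device model;
# here we reuse the enrolled vector to simulate a positive match.
handle_sighting(np.array([0.1, 0.9, 0.3]))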


Is artificial intelligence making racial profiling worse?

#artificialintelligence

REVERB is a new documentary series from CBSN Originals. Throughout its history, the LAPD has found itself embroiled in controversy over racially biased policing. In 1992, police violence and the acquittal of four police officers who beat black motorist Rodney King culminated in riots that killed more than 50 people. Many reforms have been instituted in the decades since then, but racial bias in LA law enforcement continues to raise concerns. A 2019 report found that the LAPD pulled over black drivers four times as often as white drivers, and Latino drivers three times as often as whites, despite white drivers being more likely to have weapons, drugs or other contraband.


What data should AI be trained on to avoid bias? - JAXenter

#artificialintelligence

As AI and machine learning permeate every sphere of our lives today, it gets easier to celebrate these technologies. From entertainment to customer support to law enforcement, they provide humans with considerable help. Certain things they are capable of are so amazing that they seem almost like magic to an outside observer. However, it's necessary to remember that as astonishing as machine learning-powered tech advancements are, they are still a product created by us, humans. And we can't simply shed our personalities when developing anything, much less an AI – an algorithm that has to think on its own.
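The question in the headline, what data a model should be trained on, often starts with a simple audit of how different groups are represented in the training set. The sketch below is a generic illustration of such an audit with made-up column names ("group", "label"); it is not taken from the article and is only one narrow slice of bias assessment.

# Hypothetical training-data audit: compare group sizes and positive-label rates.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "C"],
    "label": [1, 0, 1, 0, 0, 1],
})

summary = train.groupby("group")["label"].agg(count="size", positive_rate="mean")
print(summary)
# Large gaps in count or positive_rate across groups are a warning sign that a model
# may learn the sampling imbalance rather than the underlying task.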


Using Machine Learning Techniques for Fraud Detection: Machine prediction or Anomaly detection or Behavioral analytics - Saksoft

#artificialintelligence

Every now and then, a fraudulent activity masquerades as the genuine article, and the business world is no exception. If fraud detection means dealing with smoke and mirrors with barely any room for error, then machine learning and AI have grown into technology forces to be reckoned with, giving enterprises the hope of clearing the smoke and smashing the mirror. Faced with a genuinely hard decision – whether a given transaction is fraudulent or legitimate – and the need to combat even the most modern fraud tricks, organizations across banking, fintech, insurance, retail and other industries are using machine learning techniques for fraud detection to unearth subtle fraud patterns, detect anomalies and suspicious behaviors, and prevent fraud. When using machine learning for fraud detection, what determines the choice between machine prediction, anomaly detection and behavioral analytics? Once we are entrusted with this detection and prevention task, data is the first stop in framing the solution strategy.
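Since the snippet contrasts machine prediction with anomaly detection, a minimal sketch of the anomaly-detection route may help. It uses scikit-learn's IsolationForest on synthetic transactions; the feature names and contamination rate are assumptions for illustration, not Saksoft's method. A supervised "machine prediction" model would instead require labeled fraud cases, and behavioral analytics would add per-customer history as features.

# Minimal anomaly-detection sketch for transaction data (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.normal(60, 20, 1000),      # typical purchase amounts
    rng.integers(8, 22, 1000),     # daytime hours
    rng.uniform(0.0, 0.3, 1000),   # low-risk merchants
])

# A few suspicious transactions: large amounts, odd hours, risky merchants
suspicious = np.array([
    [950.0, 3, 0.9],
    [1200.0, 4, 0.8],
])

X = np.vstack([normal, suspicious])

# Unsupervised: no fraud labels are needed, unlike a supervised classifier.
# With contamination=0.01, roughly 1% of points are flagged, which should
# include the injected outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

flags = model.predict(X)           # -1 = anomaly, 1 = normal
print("indices flagged as anomalous:", np.where(flags == -1)[0])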


New Zealand's first AI police officer reports for duty

#artificialintelligence

New Zealand Police has recruited an unusual new officer to the force: an AI cop called Ella. Ella is a life-like virtual assistant that uses real-time animation to emulate face-to-face interaction in an empathetic way. Its first day of work will be next Monday, when Ella will be stationed in the lobby of the force's national headquarters in Wellington. Its chief duties there will be welcoming visitors to the building, telling staff that they've arrived, and directing them to collect their passes. It can also talk to visitors about certain issues, such as the force's non-emergency number and police vetting procedures.