law enforcement


Did you know this about IR35? We've also got a handful of excellent candidates!

#artificialintelligence

Hi First Name, Another week in what seems like a never-ending January, but we are finishing it with a bang! As you know, our annual RPA recruitment market report is being launched this Thursday. We have a teaser below, but to see all of it and get first access, you will need to come down for a drink and some networking on Thursday. In AI news, the Met Police have announced plans to deploy facial recognition cameras in London, while the EU is considering banning the use of facial recognition in public spaces for five years (it seems Brexit has come at just the right time for the Met!). Our recent blog, written in conjunction with Kingsbridge, highlights the importance of insurance for contractors with regard to IR35; this is something all of your contract resource needs to have, without question. If you have any concerns about the state of IR35 moving forward, or about your current contract resource, please don't hesitate to reach out.


AI License Plate Readers Are Cheaper--So Drive Carefully

#artificialintelligence

The town of Rotterdam, New York, has only 45 police officers, but technology extends their reach. Each day a department computer logs the license plates of around 10,000 vehicles moving through and around town, using software plugged into a network of cameras at major intersections and commercial areas. "Let's say for instance you had a bank robbed," says Jeffrey Collins, a lieutenant who supervises the department's uniform division. "You can look back and see every car that passed." Officers can search back in time for a specific plate, and also by color, make, and model of car.
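
To make the search capability concrete, here is a minimal, hypothetical sketch of how a plate log like this might be queried by plate, vehicle description, or time window. The record structure and function names are assumptions for illustration; the article does not describe the department's actual software or schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str           # normalized plate string, e.g. "ABC1234"
    timestamp: datetime  # when the camera logged the read
    camera_id: str       # which intersection camera produced it
    color: str
    make: str
    model: str

def search_reads(reads, plate=None, color=None, make=None, model=None,
                 start=None, end=None):
    """Filter logged reads by plate and/or vehicle description and time window."""
    results = []
    for r in reads:
        if plate and r.plate != plate:
            continue
        if color and r.color.lower() != color.lower():
            continue
        if make and r.make.lower() != make.lower():
            continue
        if model and r.model.lower() != model.lower():
            continue
        if start and r.timestamp < start:
            continue
        if end and r.timestamp > end:
            continue
        results.append(r)
    return results

# Example: every read of a specific plate around the time of an incident.
# hits = search_reads(reads, plate="ABC1234",
#                     start=datetime(2020, 1, 20, 9, 0),
#                     end=datetime(2020, 1, 20, 11, 0))
```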


Controversial facial recognition firm Clearview AI facing legal claims after damning NYT report

#artificialintelligence

Clearview AI, an artificial intelligence firm providing facial recognition technology to US law enforcement, may be overstating how effective its services are in catching terrorist suspects and preventing attacks, according to a report from BuzzFeed News. The company, which gained widespread recognition from a New York Times story published earlier this month, claims it was instrumental in identifying, from video footage, the suspect who placed three rice cookers disguised as explosive devices around New York City last August, creating panic and setting off a citywide manhunt. BuzzFeed News found via a public records request that Clearview AI has been claiming in promotional material that law enforcement linked the suspect to an online profile in only five seconds using its database. But city police now say this is simply false. "The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident," an NYPD spokesperson told BuzzFeed News.


Fraud detection: the problem, solutions and tools

#artificialintelligence

"Fraud is a billion-dollar business There are many formal definitions but essentially a fraud is an "art" and crime of deceiving and scamming people in their financial transactions. Frauds have always existed throughout human history but in this age of digital technology, the strategy, extent and magnitude of financial frauds is becoming wide-ranging -- from credit cards transactions to health benefits to insurance claims. Fraudsters are also getting super creative. Who's never received an email from a Nigerian royal widow that she's looking for trusted someone to hand over large sums of her inheritance? No wonder why is fraud a big deal.


London Cops Will Use Facial Recognition to Hunt Suspects

#artificialintelligence

There will soon be a new bobby on the beat in London: artificial intelligence. London's Metropolitan Police said Friday that it will deploy facial recognition technology to find wanted criminals and missing persons. It said the technology will be deployed at "specific locations," each with a "bespoke watch list" of wanted persons, mostly violent offenders. However, a spokesperson was unable to specify how many facial recognition systems will be used, where, or how frequently. The Met said use of the technology would be publicized beforehand and marked by signs on site.
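
For readers unfamiliar with how watch-list matching of this kind generally works, here is a hypothetical sketch: a face seen on camera is converted to an embedding and compared against embeddings of people on a watch list, with an alert raised above a similarity threshold. The embeddings, threshold and names below are invented for illustration; the Met's actual system and parameters are not described in the article.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(probe, watchlist, threshold=0.6):
    """Return (name, score) of the best match above threshold, else None."""
    best = None
    for name, ref in watchlist.items():
        score = cosine_similarity(probe, ref)
        if score >= threshold and (best is None or score > best[1]):
            best = (name, score)
    return best

# Usage with made-up 128-d embeddings (a real system would produce these
# with a face-embedding network run on the camera feed):
rng = np.random.default_rng(1)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.1, size=128)  # noisy sighting
print(check_against_watchlist(probe, watchlist))
```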


Rogue NYPD cops are using facial recognition app Clearview

#artificialintelligence

Rogue NYPD officers are using sketchy facial recognition software on their personal phones that the department's own facial recognition unit doesn't want to touch because of concerns about security and the potential for abuse, The Post has learned. Clearview AI, which has scraped millions of photos from social media and other public sources for its facial recognition program -- earning a cease-and-desist order from Twitter -- has been pitching itself to law enforcement organizations across the country, including to the NYPD. The department's facial recognition unit tried out the app in early 2019 as part of a complimentary 90-day trial but ultimately passed on it, citing a variety of concerns. Those include app creator Hoan Ton-That's ties to viddyho.com, which was involved in a widespread phishing scam in 2009, according to police sources and reports. The NYPD was also concerned because Clearview could not say who had access to images once police loaded them into the company's massive database, sources said.


London Police Roll Out Facial Recognition Technology

#artificialintelligence

The London Metropolitan Police has announced that it intends to begin using Live Facial Recognition (LFR) technology in various parts of the UK's capital city. The police explained that the technology will be "intelligence-led and deployed to specific locations in London," used for five to six hours at a time, with bespoke lists drawn up of "wanted individuals." As the BBC reports, the police claim the technology is able to identify 70 percent of wanted suspects while only generating false alerts once per 1,000 people detected by the system. The cameras will be rolled out within a month and clearly signposted. Police officers are going to hand out leaflets about the facial recognition technology and consult with local communities.
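
To put the claimed error rate in perspective, a quick back-of-the-envelope calculation. The 70 percent identification rate and the one-per-1,000 false-alert rate are taken from the article; the crowd sizes and number of wanted people passing the cameras are hypothetical.

```python
# Figures reported in the article.
false_alert_rate = 1 / 1000   # roughly one false alert per 1,000 people scanned
claimed_hit_rate = 0.70       # claimed share of watch-listed suspects identified

# Hypothetical crowd sizes for a five-to-six-hour deployment.
for people_scanned in (5_000, 20_000, 100_000):
    expected_false_alerts = people_scanned * false_alert_rate
    print(f"{people_scanned:>7} passers-by -> ~{expected_false_alerts:.0f} expected false alerts")

# And if, say, 10 wanted people actually walk past the cameras:
wanted_passing = 10
print(f"~{claimed_hit_rate * wanted_passing:.0f} of {wanted_passing} wanted people expected to be flagged")
```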


The battle for ethical AI at the world's biggest machine-learning conference

#artificialintelligence

Facial-recognition algorithms have been at the centre of privacy and ethics debates. Diversity and inclusion took centre stage at one of the world's major artificial-intelligence (AI) conferences in 2018. But last month's Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, once a meeting with a controversial reputation, saw attention shift to another big issue in the field: ethics. The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies, such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harming already vulnerable populations. "There is no such thing as a neutral tech platform," warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.
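
One very simple example of the kind of bias check researchers run is comparing a model's error rates across demographic groups. The data below is synthetic and the metric (per-group false-positive rate) is just one of many used in the fairness literature; it is not drawn from the NeurIPS talks themselves.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
# A deliberately skewed classifier that over-flags group B.
y_pred = np.where((groups == "B") & (rng.random(1000) < 0.3), 1, y_true)

for g in ("A", "B"):
    mask = groups == g
    print(g, round(false_positive_rate(y_true[mask], y_pred[mask]), 3))
```

A large gap between the two printed rates is the sort of disparity that prompts the debates described above.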


Quick, cheap to make and loved by police – facial recognition apps are on the rise | John Naughton

The Guardian

Way back in May 2011, Eric Schmidt, who was then the executive chairman of Google, said that the rapid development of facial recognition technology had been one of the things that had surprised him most in a long career as a computer scientist. But its "surprising accuracy" was "very concerning". Questioned about this, he said that a database using facial recognition technology was unlikely to be a service that the company would create, but went on to say that "some company … is going to cross that line". As it happens, Dr Schmidt was being economical with the actualité, as the MP Alan Clark used to say. He must surely have known that a few months earlier Facebook had announced that it was using facial recognition in the US to suggest names while tagging photos.