Deepfakes may have innocent and fun applications -- companies like RefaceAI and Morphin enable users to swap their faces with those of popular celebrities in a GIF or digital content format. But like a double-edged sword, the more realistic the content looks, the greater the potential for deception. Experts have ranked deepfakes as one of the most serious artificial intelligence (AI) crime threats, given the wide array of criminal and terrorist purposes to which they can be put. A study by University College London (UCL) identified 20 ways AI can be deployed for the greater evil, ranking these emerging technologies in order of concern according to the severity of the crime, the profit to be gained, and the difficulty of combating the threat. When the term was first coined, the idea of deepfakes triggered widespread concern, mostly centered on the misuse of the technology to spread misinformation, especially in politics.
"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or the first sentence of the target output may be necessary.
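The constraining technique described above can be sketched in code: frame the task in the prompt and then seed the completion with the first few words of the desired output, so the model continues in the intended mode rather than pivoting elsewhere. This is a minimal illustration; the helper name and example text are my own, not from the source, and the string returned would be passed to whatever completion API one is using.

```python
def build_constrained_prompt(passage: str, output_stub: str) -> str:
    """Frame a passage as a summarization request and seed the answer.

    The prompt ends mid-way through the desired output (output_stub),
    so a language model completing the text is nudged to continue the
    summary rather than drift into another mode of completion.
    """
    return (
        'My second grader asked me what this passage means:\n\n'
        f'"{passage}"\n\n'
        'I rephrased it for him, in plain language a second grader '
        'can understand:\n\n'
        f'"{output_stub}'
    )


prompt = build_constrained_prompt(
    passage="Deepfakes are synthetic media in which a person's likeness "
            "is replaced with someone else's.",
    output_stub="It means that computers can make fake videos",
)
print(prompt)
```

Because the prompt deliberately ends inside an open quotation with the stub, any continuation the model produces extends the summary itself, which is the "going further" the passage recommends.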
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
As the first step on the road to a powerful, high tech surveillance apparatus, it was a little underwhelming: a blue van topped by almost comically intrusive cameras, a few police officers staring intently but ineffectually at their smartphones and a lot of bemused shoppers. As unimpressive as the moment may have been, however, the decision by London's Metropolitan Police to expand its use of live facial recognition (LFR) marks a significant shift in the debate over privacy, security and surveillance in public spaces. Despite dismal accuracy results in earlier trials, the Metropolitan Police Service (MPS) has announced they are pushing ahead with the roll-out of LFR at locations across London. MPS say that cameras will be focused on a small targeted area "where intelligence suggests [they] are most likely to locate serious offenders," and will match faces against a database of individuals wanted by police. The cameras will be accompanied by clear signposting and officers handing out leaflets (it is unclear why MPS thinks that serious offenders would choose to walk through an area full of police officers handing out leaflets to passersby).
National guidance is urgently needed to oversee the police's use of data-driven technology amid concerns it could lead to discrimination, a report has said. The study, published by the Royal United Services Institute (Rusi) on Sunday, said guidelines were required to ensure that data analytics, artificial intelligence (AI) and computer algorithms are developed "legally and ethically". Forces' expanding use of digital technology to tackle crime was in part driven by funding cuts, the report said. Officers are battling against "information overload" as the volume of data around their work grows, while there is also a perceived need to take a "preventative" rather than "reactive" stance to policing. Such pressures have led forces to develop tools to forecast demand in control centres, "triage" investigations according to their "solvability" and to assess the risks posed by known offenders.
Jokingly dubbed "deal prevention units" by some front-office staff, compliance teams now have the third most-stressful City jobs, after those of an investment banker and a trader. Pre-crisis, pre-Brexit and pre-cybercrime, compliance used to be (almost!) a stress-free job with regular hours. As regulatory pressure intensifies and personal liability mounts, compliance officers are under increased pressure to do the right thing every time, personally and professionally. Our latest research, The Cost of Compliance and How to Reduce It, shows that a typical European bank, serving 10 million customers, could save up to €10 million annually and avoid growing regulatory fines by implementing technology to improve the "Know Your Customer" (KYC) processes. Following new EU Anti-Money Laundering (AML4/5) and Counter-Terrorist Financing (CTF) rules extending the scope of KYC requirements, the annual cost of punitive non-compliance fines is now €3.5 million.
The last day of January 2019 was sunny, yet bitterly cold in Romford, east London. Shoppers scurrying from retailer to retailer wrapped themselves in winter coats, scarves and hats. The temperature never rose above three degrees Celsius. For police officers positioned next to an inconspicuous blue van, just metres from Romford's Overground station, one man stood out among the thin winter crowds. The man, wearing a beige jacket and blue cap, had pulled his jacket over his face as he moved in the direction of the police officers.
British privacy activist Ed Bridges is set to appeal a landmark ruling that endorses the "sinister" use of facial recognition technology by the police to hunt for suspects. In what is believed to be the world's first case of its kind, Bridges told the High Court in Wales that the local police breached his rights by scanning his face without consent. "This sinister technology undermines our privacy and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance," Bridges said in a statement. But judges said the police's use of facial recognition technology was lawful and legally justified. Civil rights group Liberty, which represented 36-year-old Bridges, said it would appeal the "disappointing" decision, while police chiefs said they understood the fears of the public.
A growing backlash against face recognition suggests the technology has reached a crucial tipping point, as battles over its use are erupting on numerous fronts. Face-tracking cameras have been trialled in public by at least three UK police forces in the last four years. A court case against one force, South Wales Police, began earlier this week, backed by human rights group Liberty. Ed Bridges, an office worker from Cardiff whose image was captured during a test in 2017, says the technology is an unlawful violation of privacy, an accusation the police force denies. Avoiding the camera's gaze has got others in trouble.