SMEDEREVO, SERBIA – Chinese special police took part in their first joint training drills in Europe on Thursday, joining Serbia's elite anti-terrorist unit and local police in an exercise at a Chinese-owned steel mill outside Belgrade. Machine gun fire and stun grenade blasts shook the plant in Smederevo, some 60 km east of the Serbian capital, as police from the two countries used three helicopters and 20 armored vehicles in a staged raid to rescue hostages. Serbia's interior minister, Nebojsa Stefanovic, said he planned more cooperation with Chinese law enforcement agencies, saying Serbia was "learning from a bigger and stronger" country. "China is not only our strategic partner, but also … a friendly and a brotherly country," he told reporters. Some 180 special police officers from China's Henan province participated in the exercise at the mill, which was bought by China's Hesteel in 2016 and employs a number of Chinese citizens.
I have been pondering security technology's encroachment into public life, particularly CCTV monitoring. There was a time, it seems very long ago now, when the UK became awash in CCTV. Hundreds of millions of dollars, and over four million (and counting) CCTV cameras later, the UK is the most surveilled society on earth. We were assured that would never happen in the US or other developed countries. Still, if you have nothing to hide… Today, London and Beijing each have over 400,000 CCTV cameras (proving politics is no guarantee either way).
We already knew that Moscow is saturated with CCTV cameras, but we have only just learned the extent to which the city can conduct surveillance on its citizens. NTechLab is a bold Russian company at the forefront of the most talked-about technology around: facial recognition. Its app, FindFace, which can match a photo of a face to a profile on VKontakte, the Russian equivalent of Facebook, caused an outcry in and outside Russia after it was used to identify and harass sex workers and porn actresses through their personal profiles. Later, the company launched an emotion-reading recognition system, reigniting concerns over citizens' privacy and personal data. Despite rumours, nobody really knew who was using this state-of-the-art technology, as NTechLab does not disclose the identity of its clients.
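To see why a service like FindFace is so unnerving, it helps to understand the basic mechanism. NTechLab has not published its method, but face-matching systems generally reduce each face photo to a numeric "embedding" vector and then run a nearest-neighbour search over a database of profile-photo embeddings. The sketch below illustrates that idea only; the vectors, profile names, and similarity threshold are all made up for illustration (a real system would produce embeddings with a trained neural network and search millions of profiles):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical database: profile name -> embedding of that profile's photo.
# Real embeddings have hundreds of dimensions; these are toy 3-D vectors.
profiles = {
    "profile_a": [0.9, 0.1, 0.3],
    "profile_b": [0.2, 0.8, 0.5],
    "profile_c": [0.4, 0.4, 0.9],
}

def identify(query_embedding, database, threshold=0.9):
    """Return the best-matching profile, or None if nothing is close enough."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A street photo whose embedding is very close to profile_a's photo.
print(identify([0.88, 0.12, 0.28], profiles))  # → profile_a
```

The privacy problem is visible even in this toy: once the embeddings exist, linking a face in a crowd to a named profile is a cheap lookup, which is why the technology scales so easily from one search to mass surveillance.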
AI that could thwart illegal activity by identifying criminals before they act is set to be rolled out in India. The aim of the Minority Report-style CCTV surveillance system is to prevent offences such as sexual assault by analysing people's body language to predict what they are about to do. An Israeli security and AI research company will soon use AI to analyse the terabytes of data streamed from CCTV cameras in public areas in India. The partnership has been formed between the Tel Aviv-based company Cortica and Best Group in India, according to Digital Trends.
Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country's only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.

"Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it," Porter said. "Police are increasingly wearing body cameras. What are the appropriate limits for their use?

"The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency."

The watchdog's comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making. Lord Evans, a former MI5 chief, told the Sunday Telegraph that "it was very difficult to find out where AI is being used in the public sector" and that "at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms".

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful. Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate.
Durham police have spent three years evaluating an AI tool devised by Cambridge University to predict whether an arrested person is likely to reoffend and so should not be released on bail. Similar technologies used in the US, where they also guide sentencing, have been accused of concluding that black people are more likely to be future criminals, but the results of the British trial are yet to be made public.

The committee is due to report to Boris Johnson in February, but Porter said the task was urgent because of the rapid pace of technological change and an unclear system of regulation in which no single body has oversight. The information commissioner is responsible for the use of personal data but not surveillance, while Porter's office regulates the use of CCTV systems and all technologies attached to them, including facial recognition and lip-reading software.

"We've been calling for a wider review for months," Porter said. "The SCC, for example, is the only surveillance regulator in England and Wales, and we date back to when the iPhone 5 was new and exciting."