Members of the public have said there is no justification for the use of facial recognition technology in CCTV systems operated by a private developer at a 67-acre site in central London. It emerged on Monday that the property developer Argent was using the cameras "in the interests of public safety" in King's Cross, mostly north of the railway station across an area including the Google headquarters and the Central Saint Martins art school, but the precise uses of the technology remained unclear. "For law enforcement purposes, there is some justification, but personally I don't think a private developer has the right to have that in a public place," said Grant Otto, who lives in London. He questioned the legality of facial data collection by a private entity and said he was unaware of any protections that would allow people to request the removal of their information from a database, similar to the rights enshrined in the GDPR. Jack Ramsey, a tourist from New Zealand, echoed his concerns.
New Zealand is a leader in government use of artificial intelligence (AI). It is part of a global network of countries that use predictive algorithms in government decision making, for tasks ranging from the optimal scheduling of public hospital beds, to the efficient processing of simple insurance claims, to whether an offender should be released from prison based on their likelihood of reoffending. But the official use of AI algorithms in government has come under the spotlight in recent years. On the plus side, AI can enhance the accuracy, efficiency and fairness of day-to-day decision making. But concerns have also been raised about transparency, meaningful human control, data protection and bias.
Facebook Inc.'s chief artificial intelligence scientist said the company is years away from being able to use software to automatically screen live video for extreme violence. Yann LeCun's comments follow the March livestream of the Christchurch mosque shootings in New Zealand. "This problem is very far from being solved," LeCun said Friday during a talk at Facebook's AI Research Lab in Paris. Facebook was criticised for allowing the Christchurch attacker to broadcast the shootings live without adequate oversight that could have enabled quicker take-downs of the video. It also struggled to prevent other users from re-posting the attacker's footage.
In March, a gunman walked into two mosques in Christchurch, New Zealand, opened fire, and killed dozens of worshippers. According to a police official, the suspected gunman was arrested 36 minutes after police were called to the scene. Now, a tech company believes its smart security cameras can prevent attacks like the tragedy in Christchurch, and says it plans to install its AI-powered systems in mosques around the world. Athena Security, the tech company behind the security system, and Al-Ameri International Trading announced the Keep Mosques Safe initiative last week. Al-Ameri International Trading, along with several Islamic non-profit groups, will fund the Keep Mosques Safe effort.
While the subcontracting model used for New Zealand's Ultra-Fast Broadband (UFB) network was appropriate to meet the uptick in fibre deployment, as was the use of migrant workers, a review has found that Chorus, Visionstream, and UCG did not manage the model well or understand how it became vulnerable to such a risk. "There is evidence that the 'UFB Connect' part of the UFB work programme is where the model is exposed to breaches of labour standards and migrant exploitation," the review by MartinJenkins said. "These problems relate to services delivered by two of the service companies, Visionstream and UCG, through a range of subcontracted delivery partners." In October, the Labour Inspectorate arm of Employment New Zealand announced that it had completed 75 visits alongside Immigration New Zealand and Inland Revenue in June 2018 and identified 73 subcontractors in Auckland in breach of minimum employment standards.
Facebook said on Wednesday night that its artificial intelligence systems failed to automatically detect the New Zealand mosque shooting video. A senior executive at the social media giant responded in a blog post to criticism that it didn't act quickly enough to take down the gunman's livestream video of his attack in Christchurch that left 50 people dead, allowing it to spread rapidly online. Facebook's vice president of integrity, Guy Rosen, said "this particular video did not trigger our automatic detection systems." "AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove," Rosen said. One reason is that artificial intelligence systems are trained on large volumes of similar content, but in this case there was not enough training data because such attacks are rare.