AI That Generates Police Sketches

#artificialintelligence

In recent years, there have been significant advances in artificial intelligence (AI) technology that have enabled computers to generate realistic images of human faces. One application of this technology is the creation of police sketches, which traditionally have been created by artists based on eyewitness descriptions. The use of AI to generate police sketches has the potential to speed up investigations and help police identify suspects more quickly. However, there are also concerns about the potential drawbacks of using this technology. One of the main concerns is accuracy.


UN calls for moratorium on Artificial Intelligence tech that threatens human rights- Technology News, Firstpost

#artificialintelligence

The UN called Wednesday for a moratorium on artificial intelligence systems like facial recognition technology that threaten human rights until "guardrails" are in place against violations. UN High Commissioner for Human Rights Michelle Bachelet warned that "AI technologies can have negative, even catastrophic effects if they are used without sufficient regard to how they affect people's human rights." She called for assessments of how great a risk various AI technologies pose to things like rights to privacy and freedom of movement and of expression. Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition. She said countries should ban or heavily regulate the ones that pose the greatest threats.


A Texas jury found him guilty of murder. A computer algorithm proved his innocence.

#artificialintelligence

Nearly a decade into his life sentence for murder, Lydell Grant was escorted out of a Texas prison in November with his hands held high, free on bail, all thanks to DNA re-examined by a software program. "The last nine years, man, I felt like an animal in a cage," Grant, embracing his mother and brother, told the crush of reporters awaiting him in Houston. "Especially knowing that I didn't do it." Now, Grant, 42, is on a fast track to exoneration after a judge recommended in December that Texas' highest criminal court vacate his conviction. His attorneys are hopeful a ruling will be made in the coming weeks.


China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win)

#artificialintelligence

Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull equal with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030. There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions, such as Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities, and Huawei in chips and software. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests.


Police use of facial recognition is legal, Cardiff high court rules

The Guardian

Police use of automatic facial recognition technology to search for people in crowds is lawful, the high court in Cardiff has ruled. Although the mass surveillance system interferes with the privacy rights of those scanned by security cameras, a judge has concluded, it is not illegal. The legal challenge was brought by Ed Bridges, a former Liberal Democrat councillor from Cardiff, who noticed the cameras when he went out to buy a lunchtime sandwich. He was supported by the human rights organisation Liberty. Bridges said he was distressed by police use of the technology, which he believes captured his image while out shopping and later at a peaceful protest against the arms trade.


Deepfake evidence so realistic 'innocent people will go to jail' warns expert

#artificialintelligence

Deepfake material including fabricated evidence will become so realistic it will land innocent people in jail, an expert has warned. Shamir Allibhai, CEO of video verification company Amber, believes content including CCTV and voice recordings will be subject to gross manipulation. He spoke amid alarming concerns over deepfake technology raising eyebrows online, including a recent viral video of comedian Bill Hader morphing into actor Tom Cruise. And Shamir warns it is only a matter of time before the technology creeps its way into the global judicial system. He told Daily Star Online: "Initially, deepfakes will be manipulations of existing audio/video evidence, such as that from CCTV, voice recorders, police body cams, and bystanders' cell phones. "Humans have notoriously weak hearing as compared to our sight: I would bet that we get fooled by fake audio first. "It is also much easier to create believable fake audio than it is to create believable fake video. "In the future, video will be generated from scratch, with no basis in actual footage."


Why we should be very scared by the intrusive menace of facial recognition John Naughton

The Guardian

On 18 July, the House of Commons select committee on science and technology published an assessment of the work of the biometrics commissioner and the forensic science regulator. My guess is that most citizens have never heard of these two public servants, which is a pity because what they do is important for the maintenance of justice and the protection of liberty and human rights. The current biometrics commissioner is Prof Paul Wiles. His role is to keep under review the retention and use by the police of biometric material. This used to be just about DNA samples and custody images, but digital technology promises to increase his workload significantly.


Artificial Intelligence – A Counterintelligence Perspective: Part IV

#artificialintelligence

In my first post in this series, I wrote that one definition of artificial intelligence (AI) is a machine that thinks. Several people with technical backgrounds in the AI field reached out to me after reading that post. One comment I received that I found striking is that AI is neither A nor I. Instead, it is just computer code. Nothing is thinking; a computer is just following directions. AI is simply a mapping from inputs to outputs in pursuit of a goal.


Face recognition police tools 'staggeringly inaccurate'

BBC News

The accuracy of police facial recognition systems has been criticised by a UK privacy group. Two forces have been testing facial recognition cameras at public events in an effort to catch wanted criminals. Big Brother Watch said its investigation showed the technology was "dangerous and inaccurate" as it had wrongly flagged up a "staggering" number of innocent people as suspects. But police have defended its use and say additional safeguards are in place. Police facial recognition cameras have been trialled at events such as football matches, festivals and parades.


'The AI body snatchers have already taken over'

#artificialintelligence

Until rules and guidelines are written that govern how artificial intelligence software makes decisions, there will be grave risks to using it, including utter ineffectiveness, warns Nicolas Economou of The Future Society. The society is a nonprofit that began at the Harvard Kennedy School, and Economou is the founder of the society's Science, Law and Society Initiative, an international forum that works on AI governance and policy to ensure that humanity reaps the benefits of AI while mitigating its risks. Economou, who is also the CEO of the legal tech company H5, participated last week in a Global Governance of AI Roundtable in Dubai, at which policymakers crafted recommendations and began creating a road map for global cooperation on setting standards. Participants included execs from global tech companies including Microsoft and Facebook, as well as representatives from government and academia. Economou discussed the kinds of issues he and others raised at the roundtable. He urges that careful work be done to vet AI technology for accuracy and fairness and that a methodical, multidisciplinary approach be taken to establish a legal and moral framework for the powerful technology.