

Real-life Minority Report: Argentina will use AI to 'predict future crimes'

Daily Mail - Science & tech

Argentinian security forces have announced plans to use artificial intelligence to 'predict future crimes', but experts warn the move could threaten citizens' rights. Far-right president Javier Milei has created the Artificial Intelligence Applied to Security Unit, which will use algorithms to analyse historical crime data. The data produced will then be used to predict future crimes, The Guardian has reported. The security unit is also expected to be able to use facial recognition software to track down wanted persons and detect suspicious activity. However, the Minority Report-esque resolution has concerned human rights campaigners, who fear certain groups in society may be over-scrutinised by the AI technology.


Argentina will use AI to 'predict future crimes' but experts worry for citizens' rights

The Guardian

Argentina's security forces have announced plans to use artificial intelligence to "predict future crimes" in a move experts have warned could threaten citizens' rights. The country's far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use "machine-learning algorithms to analyse historical crime data to predict future crimes". It is also expected to deploy facial recognition software to identify "wanted persons", patrol social media, and analyse real-time security camera footage to detect suspicious activities. While the ministry of security has said the new unit will help to "detect potential threats, identify movements of criminal groups or anticipate disturbances", the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations. Experts fear that certain groups of society could be overly scrutinised by the technology, and have also raised concerns over who – and how many security forces – will be able to access the information.


TechScape: can AI really predict crime?

The Guardian

In 2011, the Los Angeles police department rolled out a novel approach to policing called Operation Laser. Laser – which stood for Los Angeles Strategic Extraction and Restoration – was the first predictive policing programme of its kind in the US, allowing the LAPD to use historical data to predict with laser precision (hence the name) where future crimes might be committed and who might commit them. But it was anything but precise. The programme used historical crime data such as arrests, calls for service, field interview cards – which police filled out with identifying information every time they stopped someone, regardless of the reason – and more to map out "problem areas" for officers to focus their efforts on, or to assign criminal risk scores to individuals. Information collected during these policing efforts was fed into computer software that further helped automate the department's crime-prediction efforts.
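The scoring approach described above can be sketched as a simple points system: each category of historical contact contributes a weighted count to a person's or area's total, and the highest totals get the most police attention. This is a minimal illustrative sketch only; the feature names, weights, and data below are hypothetical, not the LAPD's actual values or software.

```python
# Hypothetical points-based risk scoring, loosely modelled on the kind of
# system described in the article. Weights and records are made up.
def risk_score(record, weights):
    """Sum weighted counts of historical contacts for one area or person."""
    return sum(weights[k] * record.get(k, 0) for k in weights)

# Assumed weights: arrests count more heavily than routine contacts.
weights = {"arrests": 5, "calls_for_service": 1, "field_interview_cards": 1}

records = [
    {"id": "area-01", "arrests": 3, "calls_for_service": 12, "field_interview_cards": 4},
    {"id": "area-02", "arrests": 0, "calls_for_service": 2, "field_interview_cards": 1},
]

# Rank areas by descending score -- the kind of prioritised list such
# software hands to officers.
ranked = sorted(records, key=lambda r: risk_score(r, weights), reverse=True)
for r in ranked:
    print(r["id"], risk_score(r, weights))
```

Note the feedback loop critics point to: because field interview cards are generated by stops, heavily policed areas accumulate more contacts and therefore higher scores, which in turn attracts more policing.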


AI for Crime Prevention and Detection - 5 Current Applications

#artificialintelligence

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders. Companies and cities all over the world are experimenting with using artificial intelligence to reduce and prevent crime, and to respond more quickly to crimes in progress. The idea behind many of these projects is that crimes are relatively predictable; it just requires being able to sort through a massive volume of data to find patterns that are useful to law enforcement. This kind of data analysis was technologically impossible a few decades ago, but the hope is that recent developments in machine learning are up to the task.


Justice Can't Be Colorblind: How to Fight Bias with Predictive Policing

@machinelearnbot

Originally published by Scientific American. Law enforcement's use of predictive analytics recently came under fire again. Dartmouth researchers made waves by reporting that simple predictive models, as well as non-expert humans, predict crime just as well as the leading proprietary analytics software. That the leading software achieves (only) human-level performance might not actually be a deadly blow, but a flurry of press from dozens of news outlets quickly followed. In any case, even as this disclosure raises questions about one software tool's credibility, a more enduring, inherent quandary continues to plague predictive policing.


Artificial Intelligence as a Weapon for Hate and Racism

#artificialintelligence

The stunning advancement of artificial intelligence and machine learning has brought advances to society. These technologies have improved medicine and how quickly doctors can diagnose disease, for example. IBM's AI platform Watson helps reduce water waste in drought-stricken areas. AI even entertains us: the more you use Netflix, the more it learns what your viewing preferences are and makes suggestions based on what you like to watch. However, there is a very dark side to AI, and it is worrying many social scientists and some in the tech industry.