pre-crime
'Pre-Crime' AI Is Driving 'Industrial-Scale Human Rights Abuses' In China's Xinjiang Province - Slashdot
Long-time Slashdot reader clawsoon writes: Among Sunday's releases from the International Consortium of Investigative Journalists on leaked Chinese documents about the detention of Xinjiang Uighurs -- which they are calling the largest mass internment of an ethnic-religious minority since World War II -- is a section on detention by algorithm, which "is more than a 'pre-crime' platform, but a 'machine-learning, artificial intelligence (AI), command and control' platform that substitutes artificial intelligence for human judgment...." "The Chinese have bought into a model of policing where they believe that through the collection of large-scale data run through AI and machine learning that they can, in fact, predict ahead of time where possible incidents might take place, as well as identify possible populations that have the propensity to engage in anti-state, anti-regime action," says James Mulvenon, director of intelligence integration at SOS International LLC, an intelligence and information technology contractor for several U.S. government agencies. "And then they are preemptively going after those people using that data." The Chinese government responded by calling the leaked documents "fake news."
- Law > Civil Rights & Constitutional Law (0.85)
- Media > News (0.64)
- Government > Regional Government (0.64)
PRE-CRIME I Official trailer
Welcome to the real "Minority Report". PRE-CRIME is a wake-up call for all of us. A science fiction scenario, both fascinating and scary, has arrived in our everyday life. To predict a future crime scene and to prevent a murder seems like something from a sci-fi movie. It is, but it's also real – and happening right now.
- North America > United States > Illinois > Cook County > Chicago (0.08)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.08)
- Media > Film (0.58)
- Leisure & Entertainment (0.58)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.41)
Pre-crime, algorithms, artificial intelligence, and ethics
I just binge-listened to an outstanding podcast, LifeAfter, which, without giving too much away, is about artificial intelligence and its impact on people. The ethical issues that this podcast raises are fascinating and riff on some of the AI-related issues we're starting to appreciate. One of the big issues in the real world we're just getting to grips with lies in the way we humans create intelligent systems: whoever does the design and coding brings their own world views, biases, misunderstandings, and, most crucially, prejudices to the party.
Pre-crime, algorithms, artificial intelligence, and ethics
I just binge-listened to an outstanding podcast, LifeAfter, which, without giving too much away, is about artificial intelligence and its impact on people. When you die in the digital age, pieces of you live on forever. In your emails, your social media posts and uploads, in the texts and videos you've messaged, and for some – even in their secret online lives few even know about. But what if that digital existence took on a life of its own? The ethical issues that this podcast raises are fascinating and riff on some of the AI-related issues we're starting to appreciate.
The era of pre-crime? Chinese AI can tell you're a criminal by looking at your photo
The greatest danger we face is underestimating the power of today's, let alone tomorrow's, technology. As we enter an era of persistent online and offline surveillance, it is becoming easier to see how one day police forces around the world could create a Minority Report-style "Pre-crime" unit – or how governments could use technology to "enforce" a status quo. You know when you look at someone in the street, and, even though you know nothing about them, something about their look makes you cross the road to avoid them? Or when you look at police mug shots and subconsciously think "Yeah, they look like the criminal type"? Well, now a pair of Chinese researchers have dived straight into this controversial area by trying to see if artificial intelligence (AI) can determine who's a criminal and who isn't – using nothing more than a photograph. And then there is the more dangerous question: if it can, could it go one step further and determine an innocent person's predisposition to criminality and how likely they are, eventually, to commit a crime?