Criminal Law


DCGAN from Scratch with Tensorflow Keras -- Create Fake Images from CELEB-A Dataset

#artificialintelligence

Generator: the generator produces new data instances that are "similar" to the training data, in our case CelebA images. It takes a random latent vector and outputs a "fake" image of the same size as our reshaped CelebA image. Discriminator: the discriminator evaluates the authenticity of the provided images; it classifies the images coming from the generator against the original images. It takes real or fake images and outputs a probability estimate between 0 and 1. Here, D refers to the discriminator network, while G refers to the generator.
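
Under the article's setup (CelebA images, TensorFlow Keras), a minimal sketch of the two networks might look like the following. The 64x64x3 image size, 100-dimensional latent vector, and layer widths are illustrative assumptions, not the article's exact architecture.

```python
# Illustrative DCGAN sketch in TensorFlow Keras -- sizes are assumptions
# (64x64x3 CelebA crops, 100-dim latent vector), not the article's exact model.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100

def build_generator():
    """G: random latent vector -> 64x64x3 'fake' image in [-1, 1]."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(8 * 8 * 256, use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    """D: real or fake 64x64x3 image -> probability estimate in [0, 1]."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        layers.Conv2D(64, 5, strides=2, padding="same"),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
```

The tanh output of G assumes training images rescaled to [-1, 1], and D's sigmoid output is the probability estimate between 0 and 1 described above.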


Queensland police to trial AI tool designed to predict and prevent domestic violence incidents

#artificialintelligence

Queensland police are preparing to begin trials of an artificial intelligence system to identify high-risk domestic violence offenders, and officers intend to use the data to "knock on doors" before serious escalation. The "actuarial tool" uses data from the police Qprime computer system to develop a risk assessment of all potential domestic and family violence offenders. The algorithm has been in development for about three years and practical trials will begin in some police districts before the end of 2021. "With these perpetrators, we will not wait for a triple-zero phone call and for a domestic and family violence incident to reach the point of crisis," acting Supt Ben Martain said. "Rather, with this cohort of perpetrators, who our predictive analytical tools tell us are most likely to escalate into further DFV offending, we are proactively knocking on doors without any call for service."


Banking on Bots: Mitigating Algorithmic Bias in Financial Services

#artificialintelligence

When developing new technologies, we must ensure that they operate fairly. At a time when identity is increasingly being used as the key to digital access, any technology based on identity must function fairly and equally for everyone, regardless of race, age, gender, or other characteristics of human physical diversity. While digital services have proliferated across many industries, this issue is particularly relevant in the financial sector, as Covid-19 accelerates a shift towards automated platforms delivered remotely by banks and other providers – and bias in AI can have stark consequences, unfairly rewarding certain groups over others. How does AI bias creep into machine learning models? Algorithmic decision making relies on machine learning techniques that recognise patterns in historical data.
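
To make that last point concrete, here is a small, entirely synthetic sketch (hypothetical features and labels) of how a model that never sees a protected attribute can still reproduce historical bias through correlated proxy features:

```python
# Illustrative sketch with synthetic data: a model trained on biased
# historical decisions reproduces that bias via proxy features, even
# though the protected attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
income = rng.normal(50 + 10 * group, 15, n)   # proxy correlated with group
postcode = group + rng.normal(0, 0.3, n)      # another strong proxy

# Historical approvals were partly driven by group membership itself.
historical_approval = (income + 20 * group + rng.normal(0, 10, n)) > 70

X = np.column_stack([income, postcode])       # the model never sees `group`...
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
# ...yet predicted approval rates differ sharply by group, because the
# proxies in the historical data encode the original bias.
```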


The Growing Importance of Data and AI Literacy – Part 1

#artificialintelligence

This is the first part of a 2-part series on the growing importance of teaching Data and AI literacy to our students. The material will be included in a module I am teaching at Menlo College, but I wanted to share the blog first to help validate the content before presenting it to my students. Apple plans to introduce new iPhone software that uses artificial intelligence (AI) to churn through the vast collection of photos that people have taken with their iPhones to detect and report child sexual abuse. See the Wall Street Journal article "Apple Plans to Have iPhones Detect Child Pornography, Fueling Priva..." for more details on Apple's plan. Apple has a strong history of working to protect its customers' privacy.


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


Study finds growing government use of sensitive data to 'nudge' behaviour

The Guardian

A new form of "influence government", which uses sensitive personal data to craft campaigns aimed at altering behaviour has been "supercharged" by the rise of big tech firms, researchers have warned. National and local governments have turned to targeted advertisements on search engines and social media platforms to try to "nudge" the behaviour of the country at large, the academics found. The shift to this new brand of governance stems from a marriage between the introduction of nudge theory in policymaking and an online advertising infrastructure that provides unforeseen opportunities to run behavioural adjustment campaigns. Some of the examples found by the Scottish Centre for Crime and Criminal Justice (SCCCJ) range from a Prevent-style scheme to deter young people from becoming online fraudsters to tips on how to light a candle properly. While targeted advertising is common across business, one researcher argues that the government using it to drive behavioural change could create a perfect feedback loop.


Apple gives more detail on new iPhone photo scanning feature as controversy continues

The Independent - Tech

Apple has released yet more details on its new photo-scanning features, as the controversy over whether they should be added to the iPhone continues. Earlier this month, Apple announced that it would be adding three new features to iOS, all of which are intended to fight child sexual exploitation and the distribution of abuse imagery. One adds new information to Siri and search, another checks messages sent to children to see if they might contain inappropriate images, and the third compares photos on an iPhone with a database of known child sexual abuse material (CSAM) and alerts Apple if a match is found. It is the last of those three features that has proven especially controversial. Critics say that the feature contravenes Apple's commitment to privacy, and that it could in the future be used to scan for other kinds of images, such as political pictures on the phones of people living in authoritarian regimes.
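
At a high level, the third feature compares fingerprints of on-device photos against a database of known hashes. Apple's actual design uses a perceptual hash (NeuralHash) combined with cryptographic matching protocols; the deliberately simplified sketch below shows only the bare matching idea, with the hash function, file paths, and database contents as placeholders.

```python
# Deliberately simplified sketch of matching photo fingerprints against a
# database of known hashes. The hash function, paths, and database contents
# are placeholders; Apple's real system uses a perceptual hash (NeuralHash)
# plus cryptographic matching protocols, not a plain SHA-256 set.
import hashlib
from pathlib import Path

def fingerprint(image_path):
    # Placeholder: a real system would use a perceptual hash that survives
    # resizing and re-encoding, not an exact byte-level hash.
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

known_hashes = set()  # hypothetical database of known-image fingerprints

def matching_photos(photo_dir):
    """Return photos whose fingerprint appears in the known-hash database."""
    return [p for p in Path(photo_dir).glob("*.jpg")
            if fingerprint(p) in known_hashes]
```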


Detection of Illicit Drug Trafficking Events on Instagram: A Deep Multimodal Multilabel Learning Approach

arXiv.org Artificial Intelligence

Social media such as Instagram and Twitter have become important platforms for marketing and selling illicit drugs. Detection of online illicit drug trafficking has become critical to combating the online trade of illicit drugs. However, the legal status of drugs often varies spatially and temporally; even for the same drug, federal and state legislation can have different regulations about its legality. Meanwhile, more drug trafficking events are disguised in a novel form of advertising comments, leading to information heterogeneity. Accordingly, accurate detection of illicit drug trafficking events (IDTEs) from social media has become even more challenging. In this work, we conduct the first systematic study on fine-grained detection of IDTEs on Instagram. We propose a deep multimodal multilabel learning (DMML) approach to detect IDTEs and demonstrate its effectiveness on a newly constructed dataset called multimodal IDTE (MM-IDTE). Specifically, our model takes text and image data as input and combines multimodal information to predict multiple labels of illicit drugs. Inspired by the success of BERT, we have developed a self-supervised multimodal bidirectional transformer by jointly fine-tuning pretrained text and image encoders. We have constructed the large-scale MM-IDTE dataset with manually annotated multiple drug labels to support fine-grained detection of illicit drugs. Extensive experimental results on the MM-IDTE dataset show that the proposed DMML methodology can accurately detect IDTEs even in the presence of special characters and style changes attempting to evade detection.
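
The abstract describes fusing text and image inputs to predict multiple drug labels. A minimal Keras sketch of that general pattern (multimodal fusion with a sigmoid multilabel head) is shown below; it is an illustration only, not the authors' DMML model, which jointly fine-tunes pretrained BERT-style text and image encoders. The vocabulary size, sequence length, image resolution, and label count are assumed.

```python
# Minimal sketch of multimodal, multilabel fusion in Keras -- an illustration
# of the general idea, not the paper's DMML architecture.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_DRUG_LABELS = 20_000, 128, 50  # assumed sizes

# Text branch: token ids -> embedding -> pooled representation.
text_in = tf.keras.Input(shape=(MAX_LEN,), dtype="int32", name="caption_tokens")
t = layers.Embedding(VOCAB_SIZE, 128)(text_in)
t = layers.GlobalAveragePooling1D()(t)

# Image branch: small CNN over the post image.
img_in = tf.keras.Input(shape=(224, 224, 3), name="post_image")
v = layers.Conv2D(32, 3, activation="relu")(img_in)
v = layers.MaxPooling2D()(v)
v = layers.Conv2D(64, 3, activation="relu")(v)
v = layers.GlobalAveragePooling2D()(v)

# Fuse modalities and predict multiple labels with independent sigmoids,
# so each drug label is scored separately (multilabel, not multiclass).
h = layers.concatenate([t, v])
h = layers.Dense(256, activation="relu")(h)
out = layers.Dense(NUM_DRUG_LABELS, activation="sigmoid", name="drug_labels")(h)

model = tf.keras.Model([text_in, img_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```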


AI Identifies Instagram Drug Dealers With Near 95% Accuracy

#artificialintelligence

Researchers in the US have developed a multimodal machine learning system capable of identifying the accounts and posts of drug dealers on Instagram by analyzing a variety of content, including images. The research, entitled Identifying Illicit Drug Dealers on Instagram with Large-scale Multimodal Data Fusion, is a collaboration between three researchers at West Virginia University and one from Case Western Reserve University. To facilitate the project, the researchers created a database called Identifying Drug Dealers on Instagram (IDDIG), featuring 4,000 user accounts, 1,400 of them belonging to drug dealers and the remainder serving as a control group to test the identification process. The model draws on posted images and comments, as well as the profile image and biography text from each account's homepage. Initial testing of the technique reports an accuracy of almost 95% in identifying Instagram-based drug dealers. The framework has also led to a hashtag-based community detection project designed to discover changing signifiers of activity related to the sale of illegal drugs, utilizing geographical factors and identification of specific drug types.
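
The hashtag-based community detection mentioned at the end can be sketched as a co-occurrence graph over hashtags, with a standard modularity-based community algorithm applied on top. The hashtags below are hypothetical, and the pipeline illustrates the general technique rather than the researchers' implementation.

```python
# Sketch of hashtag-based community detection over hypothetical posts:
# build a weighted co-occurrence graph of hashtags, then group hashtags
# that frequently appear together into communities.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

posts = [  # hypothetical hashtag sets extracted from captions/comments
    {"#weedforsale", "#420", "#dm"},
    {"#420", "#dm", "#plug"},
    {"#fitness", "#gym", "#health"},
    {"#gym", "#health", "#workout"},
]

G = nx.Graph()
for tags in posts:
    for a, b in itertools.combinations(sorted(tags), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # count how often the pair co-occurs
        else:
            G.add_edge(a, b, weight=1)

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```

Tracking how these hashtag communities shift over time (and by location or drug type) is one way to surface the "changing signifiers" the article refers to.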


How AI-powered tech landed man in jail with scant evidence

#artificialintelligence

Michael Williams' wife pleaded with him to remember their fishing trips with the grandchildren, how he used to braid her hair, anything to jar him back to his world outside the concrete walls of Cook County Jail. His three daily calls to her had become a lifeline, but when they dwindled to two, then one, then only a few a week, the 65-year-old Williams felt he couldn't go on. He made plans to take his life with a stash of pills he had stockpiled in his dormitory. Williams was jailed last August, accused of killing a young man from the neighborhood who asked him for a ride during a night of unrest over police brutality in May. But the key evidence against Williams didn't come from an eyewitness or an informant; it came from a clip of noiseless security video showing a car driving through an intersection, and a loud bang picked up by a network of surveillance microphones. Prosecutors said technology powered by a secret algorithm that analyzed noises detected by the sensors indicated Williams shot and killed the man. "I kept trying to figure out, how can they get away with using the technology like that against me?" said Williams, speaking publicly for the first time about his ordeal. Williams sat behind bars for nearly a year before a judge dismissed the case against him last month at the request of prosecutors, who said they had insufficient evidence.