The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Human gait is a daily motion that not only reflects mobility but can also be used to identify the walker, whether by human observers or by computers. Recent studies suggest that gait even conveys information about the walker's emotion: individuals in different emotional states may show different gait patterns. This mapping between emotions and gait patterns provides a new source for automated emotion recognition. Compared with traditional biometrics for emotion detection, such as facial expression, speech, and physiological parameters, gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject. These advantages make gait a promising source for emotion detection. This article reviews current research on gait-based emotion detection, particularly how gait parameters are affected by different emotional states and how those states can be recognized from distinct gait patterns. We focus on the methods and techniques applied throughout the emotion recognition process: data collection, preprocessing, and classification. Finally, we discuss possible future developments of efficient and effective gait-based emotion recognition using state-of-the-art techniques in intelligent computation and big data.
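The three-stage process the abstract names (data collection, preprocessing, classification) can be sketched as a standard supervised pipeline. This is a minimal illustration, not any specific study's method: the gait parameters, the synthetic data, and the toy emotion labels below are all assumptions made for the example.

```python
# Minimal sketch of a gait-based emotion classification pipeline:
# collected gait features -> preprocessing (normalization) -> classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical gait parameters per walking sample:
# [stride_length, cadence, arm_swing_amplitude, head_pitch]
n = 400
X = rng.normal(size=(n, 4))

# Toy labels (0 = neutral, 1 = happy, 2 = sad), synthetic for illustration:
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] < -1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing (z-score normalization) and classification in one pipeline
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In real systems the feature vector would come from motion capture, wearable sensors, or pose estimation on video rather than random draws, and the classifier choice (SVM here) is just one common option.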
Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when the computer scientist Alan Turing began to believe that machines could one day have an unlimited impact on humanity through a process that mimicked evolution. Is AI a threat to humans? When Oxford University professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. In my recent conversation with Bostrom, however, he also acknowledged that there is an enormous upside to artificial intelligence technology.
Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools coming out of research labs at MIT. Through the Undergraduate Research Opportunities Program (UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below. Nicole Thumma met her first robot when she was 5, at a museum. "It was incredible that I could have a conversation, even a simple conversation, with this machine," she says.
We seek to determine whether state-of-the-art, black-box face recognition techniques can learn first-impression appearance bias from human annotations. With FaceNet, a popular face recognition architecture, we train a transfer learning model on human subjects' first impressions of personality traits in other faces. We measure the extent to which this appearance bias is embedded and benchmark learning performance for six different perceived traits. In particular, we find that our model is better at judging a person's dominance from their face than other traits like trustworthiness or likeability, even for emotionally neutral faces. We also find that our model tends to predict emotions for deliberately manipulated faces with higher accuracy than for randomly generated faces, just as a human subject does. Our results lend insight into the manner in which appearance biases may be propagated by standard face recognition models.
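The transfer-learning setup the abstract describes can be sketched as follows: freeze a face-embedding network (FaceNet in the paper) and fit a small head that predicts human first-impression trait ratings from the embeddings. Since neither the network weights nor the annotations are reproduced here, the 512-dimensional "embeddings" and the trait ratings below are synthetic stand-ins, and the ridge-regression head is an illustrative choice, not the paper's exact model.

```python
# Hedged sketch: probe whether frozen face embeddings carry signal for
# a perceived trait (e.g. dominance) by fitting a regression head on them.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_faces, emb_dim = 300, 512
# Stand-in for frozen FaceNet outputs (one embedding per face image)
embeddings = rng.normal(size=(n_faces, emb_dim))

# Synthetic mean annotator ratings for one perceived trait, constructed to
# depend weakly on the embedding so the head has signal to recover.
w_true = rng.normal(size=emb_dim)
ratings = embeddings @ w_true * 0.05 + rng.normal(scale=0.5, size=n_faces)

# The "transfer" head: a ridge regression trained on frozen embeddings.
head = Ridge(alpha=10.0)
scores = cross_val_score(head, embeddings, ratings, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```

A cross-validated R² meaningfully above zero would indicate the embeddings encode something annotators respond to when rating the trait, which is the sense in which such a probe measures embedded appearance bias.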
SEOUL (Reuters) – In cram school-obsessed South Korea, students fork out for classes in everything from K-pop auditions to real estate deals. Now, as top Korean firms roll out artificial intelligence in hiring, job seekers want to learn how to beat the bots. From his basement office in downtown Gangnam, career consultant Park Seong-jung is among those in a growing business of offering lessons on how to handle recruitment screening by computers instead of people. Video interviews using facial recognition technology to analyze character are key, according to Park. "Don't force a smile with your lips," he told students looking for work in a recent session, one of many he said he has conducted for hundreds of people.
Michael Kwet is a Visiting Fellow of the Information Society Project at Yale Law School. He is the author of Digital Colonialism: US Empire and the New Imperialism in the Global South, and hosts the Tech Empire podcast. "Beggars" and "vagrants" are not welcome in Parkhurst, South Africa, a mostly white, middle-class suburb of about 5,000 on the outskirts of Johannesburg's inner city. Criminals are on the prowl, residents warn, threatening neighborhood security. To combat crime, the locals came up with a solution: place CCTV surveillance cameras everywhere. However, these are not the camera networks of times past. Thanks to advances in machine learning and AI, CCTV systems are now equipped with sophisticated video analytics that can track a wide range of behaviors, objects, and patterns, in addition to individual faces. Armed with this powerful new technology, communities of color can be watched, flagged, policed, and intimidated into submission. I've spent the past several years studying the video surveillance industry in South Africa. During that time, a private corporation called Vumacam has been quietly assembling a "smart" CCTV surveillance network in the suburbs of Johannesburg. Earlier this year, the company announced it would blanket Joburg with 15,000 cameras.
I started developing AI algorithms for handwriting recognition at my part-time student job while completing my undergraduate degree in computer science. Since then, over the last 20 years or so, I have strived to combine industry work with academic research. I earned my graduate degree in computer vision and completed my Ph.D. in machine learning while pursuing quite an intensive industry career in parallel with my studies. In industry, I've worked on all kinds of data and applications, including medical imaging, educational multimedia, mobile advertising, financial time series, and video, text, and speech processing for public safety, among other projects. When I began working on the product and business aspects of R&D, I felt I needed to strengthen the relevant skills, so I went back to school and earned an additional master's degree in technology management.
Facebook AI Research says it has created a machine learning system for de-identifying individuals in video. Startups like D-ID and a number of previous works have built de-identification technology for still images, but this is reportedly the first that works on video. In initial tests, the method was able to thwart state-of-the-art facial recognition systems. The AI for automatic video modification does not need to be retrained for each video. It maps a slightly distorted version onto a person's face, making it difficult for facial recognition technology to identify the person.