Two men arrested over deepfake pornography videos

The Japan Times

Tokyo's Metropolitan Police Department has arrested two men on defamation and other charges for distributing pornography videos on the internet that they had doctored to swap the faces of the original actresses with those of female celebrities, it was learned Friday. Takumi Hayashida, a 21-year-old university student in Kumamoto, and Takanobu Otsuki, a 47-year-old system engineer in Sanda, Hyogo Prefecture, admitted to the charges, police sources said. The suspects used an artificial intelligence technology called deep learning to produce so-called deepfake pornography videos. The case is the first involving deepfake pornography videos to be handled by police in Japan. Otsuki told the police that he wanted to be praised by others, the sources said.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
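
Below is a minimal Python sketch of the prompt-constraining technique described above: rather than asking an open-ended question, the prompt imitates the target format and begins the desired output itself. It assumes the legacy (pre-1.0) `openai` completions interface; the model name, sampling settings, and exact prompt wording are illustrative assumptions, not GPT-3's required usage.

```python
# Sketch of prompt constraining: end the prompt with the first words of the
# desired output so the model continues in the intended mode instead of
# pivoting to another kind of completion. Legacy `openai` package (pre-1.0);
# model name and sampling settings are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

passage = "..."  # the text to be summarized

# Unconstrained prompt (shown for contrast): the model may drift into
# other completion modes, e.g. continuing the instructions instead of answering.
loose_prompt = f"Summarize the following passage:\n\n{passage}\n"

# Constrained prompt: imitate the target format and begin the answer,
# in the spirit of the "My second grader asked me..." summarization framing.
constrained_prompt = (
    f'My second grader asked me what this passage means:\n\n"{passage}"\n\n'
    'I rephrased it for him, in plain language a second grader can understand:\n\n"'
)

response = openai.Completion.create(
    engine="davinci",          # a GPT-3 base model in the original API
    prompt=constrained_prompt,
    max_tokens=150,
    temperature=0.7,
    stop='"',                  # stop when the quoted rephrasing closes
)
print(response["choices"][0]["text"].strip())
```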


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media

arXiv.org Artificial Intelligence

Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive amount of personal information and content available on Twitter, presents a promising opportunity to extract data-driven insights that complement ongoing survey-based studies of sexual harassment in college. In this paper, we analyze the influence of the #MeToo trend on a pool of college followers. The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories, and that there is a significant correlation between the prevalence of this trend and official reports in several major geographical regions. Furthermore, we uncover the salient sentiments of the #MeToo tweets using deep semantic meaning representations, and examine their implications for affected users experiencing different types of sexual harassment. We hope this study can raise further awareness regarding sexual misconduct in academia.
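
As a hedged illustration of the region-level comparison the abstract mentions (not the authors' actual pipeline), the following Python sketch computes a Pearson correlation between per-region #MeToo tweet counts and officially reported incident counts. All region names and numbers are placeholder data.

```python
# Compare the prevalence of #MeToo tweets with officially reported incident
# counts across regions. The data below is made up for illustration only.
from scipy.stats import pearsonr

# Hypothetical per-region counts: (#MeToo tweets, officially reported incidents)
regions = {
    "Region A": (1250, 310),
    "Region B": (980, 240),
    "Region C": (430, 120),
    "Region D": (2100, 505),
    "Region E": (660, 150),
}

tweet_counts = [v[0] for v in regions.values()]
report_counts = [v[1] for v in regions.values()]

r, p_value = pearsonr(tweet_counts, report_counts)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```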


How Big Tech Manipulates Academia to Avoid Regulation

#artificialintelligence

The irony of the ethical scandal enveloping Joichi Ito, the former director of the MIT Media Lab, is that he used to lead academic initiatives on ethics. After the revelation of his financial ties to Jeffrey Epstein, the financier charged with sex trafficking underage girls as young as 14, Ito resigned from multiple roles at MIT, a visiting professorship at Harvard Law School, and the boards of the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, and the New York Times Company. Many spectators are puzzled by Ito's influential role as an ethicist of artificial intelligence. Indeed, his initiatives were crucial in establishing the discourse of "ethical AI" that is now ubiquitous in academia and in the mainstream press. In 2016, then-President Barack Obama described him as an "expert" on AI and ethics. Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.


Unwanted Advances in Higher Education: Uncovering Sexual Harassment Experiences in Academia with Text Mining

arXiv.org Machine Learning

Sexual harassment in academia is often a hidden problem because victims are usually reluctant to report their experiences. Recently, a web survey was developed to provide an opportunity to share thousands of sexual harassment experiences in academia. Using an efficient approach, this study collected and investigated more than 2,000 sexual harassment experiences to better understand these unwanted advances in higher education. This paper utilized text mining to disclose hidden topics and explore their weight across three variables: harasser gender, institution type, and victim's field of study. We mapped the topics on five themes drawn from the sexual harassment literature and found that more than 50% of the topics were assigned to the unwanted sexual attention theme. Fourteen percent of the topics were in the gender harassment theme, in which insulting, sexist, or degrading comments or behavior was directed towards women. Five percent of the topics involved sexual coercion (a benefit is offered in exchange for sexual favors), 5% involved sex discrimination, and 7% of the topics discussed retaliation against the victim for reporting the harassment, or for simply not complying with the harasser. Findings highlight the power differential between faculty and students, and the toll on students when professors abuse their power. While some topics did differ based on type of institution, there were no differences between the topics based on gender of harasser or field of study. This research can be beneficial to researchers in further investigation of this paper's dataset, and to policymakers in improving existing policies to create a safe and supportive environment in academia.
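
The sketch below illustrates the general text-mining step the abstract describes, extracting latent topics from free-text reports, using scikit-learn's LDA implementation. The example documents, topic count, and preprocessing are assumptions for illustration and are not the paper's actual configuration.

```python
# Minimal topic-modeling sketch: discover hidden topics in a collection of
# free-text harassment reports and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "professor made repeated unwanted comments about my appearance",
    "advisor threatened my funding after I declined his invitations",
    "colleague sent degrading sexist jokes to the group chat",
    # ... one string per reported experience
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(reports)

# Five components is an arbitrary choice here, loosely echoing the five
# literature-derived themes mentioned above.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(doc_term)

# Print the top words that characterize each discovered topic.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"Topic {idx}: {', '.join(top_terms)}")
```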


A Guide to Solving Social Problems with Machine Learning

#artificialintelligence

You sit down to watch a movie and ask Netflix for help. ("Zoolander 2?") The Netflix recommendation algorithm predicts what movie you'd like by mining data on millions of previous movie-watchers using sophisticated machine learning tools. And then the next day you go to work and every one of your agencies will make hiring decisions with little idea of which candidates would be good workers; community college students will be largely left to their own devices to decide which courses are too hard or too easy for them; and your social service system will implement a reactive rather than preventive approach to homelessness because they don't believe it's possible to forecast which families will wind up on the streets. You'd love to move your city's use of predictive analytics into the 21st century, or at least into the 20th century. You just hired a pair of 24-year-old computer programmers to run your data science team. But should they be the ones to decide which problems are amenable to these tools? Or to decide what success looks like?


Discrimination in the Age of Algorithms

arXiv.org Artificial Intelligence

But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.
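
To make the transparency argument concrete, here is a hedged Python sketch of the kind of interrogation an explicit algorithm permits: a fully specified scoring rule is applied to hypothetical applicants and selection rates are compared across groups, an audit that is far harder to run on opaque human judgments. All group labels, features, weights, and thresholds are invented for illustration.

```python
# Because the decision rule is written down, the whole process can be re-run,
# examined, and re-checked under alternative rules or thresholds.
from collections import defaultdict

def score(applicant):
    """The explicit (and therefore auditable) decision rule."""
    return 0.6 * applicant["test"] + 0.4 * applicant["experience"]

THRESHOLD = 70.0

applicants = [
    {"group": "A", "test": 85, "experience": 60},
    {"group": "A", "test": 72, "experience": 80},
    {"group": "B", "test": 78, "experience": 55},
    {"group": "B", "test": 65, "experience": 90},
    {"group": "B", "test": 90, "experience": 40},
]

selected = defaultdict(int)
total = defaultdict(int)
for a in applicants:
    total[a["group"]] += 1
    if score(a) >= THRESHOLD:
        selected[a["group"]] += 1

# Selection rates per group can be computed exactly, which is the kind of
# specificity the abstract argues algorithms force into the open.
for g in sorted(total):
    print(f"Group {g}: selection rate {selected[g] / total[g]:.2f}")
```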


Alexa and Google Home have capacity to predict if couple are struggling and can interrupt arguments, finds study

The Independent - Tech

Virtual assistants such as Amazon's Alexa and Google Home have the capacity to analyse how happy and healthy a couple's relationship is, research has found. In-home listening devices will soon be able to judge how functional relationships are, as well as interrupt an argument with an idea for how to resolve it, the study said. The research, by Imperial College Business School, stated that within the next two to three years, digital assistants could predict with 75 per cent accuracy the likelihood of a relationship or marriage being a success. The technology would reach a verdict through acoustic analysis of communication between couples – examining everything from everyday encounters to arguments. The virtual assistants would then be able to provide relationship advice, something the researchers describe as "democratising counselling".