Goto

Collaborating Authors

Unwanted Advances in Higher Education: Uncovering Sexual Harassment Experiences in Academia with Text Mining

arXiv.org Machine Learning

Sexual harassment in academia is often a hidden problem because victims are usually reluctant to report their experiences. Recently, a web survey gave thousands of people the opportunity to share their experiences of sexual harassment in academia. Using an efficient approach, this study collected and investigated more than 2,000 of these experiences to better understand unwanted advances in higher education. The paper uses text mining to uncover hidden topics and to explore how their weights vary across three variables: harasser gender, institution type, and the victim's field of study. We mapped the topics onto five themes drawn from the sexual harassment literature and found that more than 50% of the topics fell under unwanted sexual attention. Fourteen percent of the topics fell under gender harassment, in which insulting, sexist, or degrading comments or behavior are directed at women. Five percent of the topics involved sexual coercion (a benefit offered in exchange for sexual favors), 5% involved sex discrimination, and 7% discussed retaliation against the victim for reporting the harassment or simply for not complying with the harasser. The findings highlight the power differential between faculty and students, and the toll on students when professors abuse that power. While some topics differed by type of institution, there were no differences in topics by harasser gender or field of study. This research can help researchers investigate the paper's dataset further and help policymakers improve existing policies to create a safe and supportive environment in academia.
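The abstract does not name a specific algorithm, but the "hidden topics" framing suggests topic modeling. The sketch below is a minimal, hypothetical illustration of such a pipeline using LDA in scikit-learn; the toy narratives and the `harasser_gender` metadata are invented placeholders, not data from the study.

```python
# Minimal sketch of the kind of topic-modeling pipeline the abstract
# describes. The paper does not specify its algorithm here; LDA via
# scikit-learn is one common choice. The tiny docs/harasser_gender
# lists below are placeholder data, not the study's dataset.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "professor made repeated unwanted advances after class",
    "sexist degrading comments directed at women in the lab",
    "supervisor offered authorship in exchange for favors",
    "department retaliated after the incident was reported",
]
harasser_gender = ["male", "male", "male", "female"]  # illustrative metadata

# Bag-of-words representation of the narratives.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a small LDA model; the study would use far more topics and documents.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: documents, columns: topic weights

# Top words per topic, for manual mapping onto literature-derived themes
# (e.g., unwanted sexual attention, gender harassment, sexual coercion).
terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")

# Compare mean topic weight across a metadata variable, as the paper
# does for harasser gender, institution type, and field of study.
for g in set(harasser_gender):
    mask = np.array(harasser_gender) == g
    print(g, doc_topics[mask].mean(axis=0).round(2))
```

In practice, each discovered topic's top words would be read and mapped by hand onto the five literature-derived themes, and the per-document topic weights then compared across the three metadata variables.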


Discrimination in the Age of Algorithms

arXiv.org Artificial Intelligence

But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.
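To make the essay's transparency claim concrete: when the decision rule is code, it can be probed directly rather than inferred from ambiguous human behavior. The following hypothetical sketch (not from the article) audits a stand-in scoring function by comparing selection rates across two groups and applying the four-fifths disparate-impact heuristic.

```python
# Hypothetical sketch of the kind of audit the essay argues algorithms
# make possible: given a decision function and applicant data, measure
# selection rates by group. The decide() function and the synthetic
# data below are stand-ins, not anything from the essay.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)            # synthetic applicant scores
group = rng.choice(["A", "B"], size=1000)  # synthetic group labels

def decide(score):
    # Stand-in for an opaque scoring model's accept/reject decision.
    return score > 0.3

decisions = decide(scores)

# Selection rate per group: the quantity a discrimination inquiry
# would want to inspect directly.
rates = {g: decisions[group == g].mean() for g in ("A", "B")}
print("selection rates:", rates)

# Four-fifths rule: flag if one group's rate falls below 80% of the
# other's (a common screening heuristic, not a legal determination).
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}",
      "FLAG" if ratio < 0.8 else "ok")
```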


US appeals court says Tinder Plus pricing is discriminatory

Engadget

They say all's fair in love and war, but those who have used Tinder will probably disagree. That includes Allan Candelore, a man suing the dating app over the pricing of its premium service, Tinder Plus. Candelore and his lawyers argue that charging $9.99 a month to users under 30 and $19.99 a month to those over 30 is age discrimination and violates two California laws: the Unruh Civil Rights Act and the Unfair Competition Law.


AI Will Transform The Field Of Law

#artificialintelligence

The field of law has evolved surprisingly little since the days of Oliver Wendell Holmes, Jr. (1841-1935), considered by many to be the greatest U.S. Supreme Court justice in history. Virtually everything that companies do--sales, purchases, partnerships, mergers, reorganizations--they do via legally enforceable contracts. Innovation would grind to a halt without a well-developed body of intellectual property law. Day to day, whether we recognize it or not, each of us operates against the backdrop of our legal regime and the implicit possibility of litigation. At close to $1T globally, the legal services market is one of the largest in the world.


AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

arXiv.org Artificial Intelligence

Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, that is, contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I illustrate the challenges and pitfalls of determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem? Who defines the problem? What is the role of knowledge? And what are important side effects and dynamics? The illustration uses an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even though the importance of these questions may be recognized at an abstract level, they are not asked often enough in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. To turn these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice that can make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in a proposal for ethics pen-testing as a method for helping AI designs better contribute to the Common Good.