British athletes given AI app as shield from online abuse

BBC News

Team GB Olympic and Paralympic athletes are being offered a new form of artificial intelligence-based protection from online abuse. UK Sport, the body that funds Olympic and Paralympic sports, has signed a contract worth more than £300,000 to give thousands of athletes access to an app that detects and hides abusive posts sent by other users on social media. Athletes are able to sign up for free and can protect their accounts throughout the Games cycle up to Los Angeles 2028. "The level of abuse our athletes are facing online is unacceptable - to do nothing about this is not an option," UK Sport director of performance Kate Baker said of a deal that is the first of its kind in British sport. The app, called Social Protect, uses AI to try to ensure athletes see as few abusive messages sent their way as possible.


'The chilling effect': how fear of 'nudify' apps and AI deepfakes is keeping Indian women off the internet

The Guardian

A new report has found an increase in AI tools being used to create digitally manipulated images or videos of women in India. Gaatha Sarvaiya would like to post on social media and share her work online. An Indian law graduate in her early 20s, she is in the earliest stages of her career and trying to build a public profile. The problem is, with AI-powered deepfakes on the rise, there is no longer any guarantee that the images she posts will not be distorted into something violating or grotesque.


Protecting your daughter from deepfakes and online abuse

FOX News

Most of us have at least one young woman in our lives that we cherish -- a daughter, niece or goddaughter, for example. Well, this International Women's Day, I learned something that should be concerning to us all. Fully 96% of all deepfakes -- artificial intelligence-generated images and videos that use someone's likeness -- are pornographic and target women without their consent. One well-known case involved an Australian law student who discovered that manipulated pornographic images of her were being shared online when she was just 18. But this isn't an isolated incident.


Wimbledon employs AI to protect players from online abuse

The Guardian

The All England Lawn Tennis Club is using artificial intelligence for the first time to protect players at Wimbledon from online abuse. An AI-driven service monitors players' public-facing social media profiles and automatically flags death threats, racism and sexist comments in 35 different languages. High-profile players who have been targeted online such as the former US Open champion Emma Raducanu and the four-time grand slam winner Naomi Osaka have previously spoken out about having to delete Instagram and Twitter, now called X, from their phones. Harriet Dart, the British No 2, has said she only uses social media from time to time because of online "hate". Speaking on Thursday after her triumph against Katie Boulter, the British No 1, Dart said: "I just think there's a lot of positives for it [social media] but also a lot of negatives. I'm sure today, if I open one of my apps, regardless if I won, I'd have a lot of hate as well."


The Unappreciated Role of Intent in Algorithmic Moderation of Social Media Content

Wang, Xinyu, Koneru, Sai, Venkit, Pranav Narayanan, Frischmann, Brett, Rajtmajer, Sarah

arXiv.org Artificial Intelligence

As social media has become a predominant mode of communication globally, the rise of abusive content threatens to undermine civil discourse. Recognizing the critical nature of this issue, a significant body of research has been dedicated to developing language models that can detect various types of online abuse, e.g., hate speech and cyberbullying. However, there is a notable disconnect between platform policies, which often consider the author's intention as a criterion for content moderation, and the current capabilities of detection models, which typically make no attempt to capture intent. This paper examines the role of intent in content moderation systems. We review state-of-the-art detection models and benchmark training datasets for online abuse to assess their awareness of, and ability to capture, intent. We propose strategic changes to the design and development of automated detection and moderation systems to improve alignment with ethical and policy conceptualizations of abuse.
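The disconnect the abstract describes can be illustrated with a toy sketch: a purely surface-level detector flags any text containing words from an abuse lexicon, so counter-speech that quotes an insult is treated the same as a direct attack. The mini-lexicon and example sentences below are invented for illustration and do not reflect any real moderation system or the paper's own models.

```python
# Toy surface-level abuse detector: flags text purely on word matches,
# with no model of the author's intent (the gap the paper highlights).
# Lexicon and examples are invented for illustration only.
ABUSE_LEXICON = {"idiot", "loser", "trash"}

def detect_abuse(text: str) -> bool:
    """Return True if any lexicon word appears, regardless of intent."""
    tokens = {t.strip(".,!?\"'").lower() for t in text.split()}
    return not ABUSE_LEXICON.isdisjoint(tokens)

attack = "You are an idiot and a loser."
counter_speech = 'He called me an "idiot" - that kind of abuse is not okay.'

# Both are flagged, even though the second condemns abuse rather than
# committing it: an intent-blind detector cannot tell them apart.
print(detect_abuse(attack))          # True
print(detect_abuse(counter_speech))  # True
```

This is exactly the failure mode intent-aware moderation would need to address: the two sentences share surface vocabulary but differ entirely in intent.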


French Open 2023: Grand Slam using AI to protect players from online abuse

BBC News

As the French Open introduces a new technology to help players filter out social media abuse, BBC Sport looks at the issues tennis players encounter online.


Radical AI podcast: featuring Seyi Akiwowo

AIHub

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Seyi Akiwowo about staying safe online. How can technology be designed to fight online abuse and harassment? What is the difference between cancel culture and appropriate accountability? How can you stay safe online?


What is Computer Vision? Know Computer Vision Basic to Advanced & How Does it Work?

#artificialintelligence

Computer vision is a field of study which enables computers to replicate the human visual system. It's a subset of artificial intelligence which collects information from digital images or videos and processes them to define the attributes. The entire process involves image acquiring, screening, analysing, identifying and extracting information. This extensive processing helps computers to understand any visual content and act on it accordingly. You can also take a free computer vision course to understand the basics of this artificial intelligence domain.
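The acquire-analyse-extract pipeline described above can be sketched with a minimal example. The code below is a toy illustration, not any particular library's API: it builds a tiny synthetic grayscale "image" as a list of pixel rows and extracts one simple attribute, horizontal gradient magnitude, which highlights vertical edges.

```python
# Minimal sketch of the pipeline described above (acquire -> analyse ->
# extract), on a tiny synthetic grayscale image. Pure Python, no CV library.

def horizontal_edges(img):
    """Return per-pixel horizontal gradient magnitude (central difference)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            out[y][x] = abs(img[y][x + 1] - img[y][x - 1])
    return out

# Acquire: a 4x6 image, dark left half (0), bright right half (255).
image = [[0, 0, 0, 255, 255, 255] for _ in range(4)]

# Analyse/extract: strong responses mark the dark-to-bright boundary.
edges = horizontal_edges(image)
print(edges[0])  # [0, 0, 255, 255, 0, 0]
```

Real systems replace the hand-written gradient with learned filters, but the flow - raw pixels in, descriptive attributes out - is the same.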


Directions in Abusive Language Training Data: Garbage In, Garbage Out

Vidgen, Bertie, Derczynski, Leon

arXiv.org Artificial Intelligence

Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data.