
UK to bring into force law to tackle Grok AI deepfakes this week

BBC News

The UK will bring into force a law that will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk's Grok AI chatbot. The Technology Secretary, Liz Kendall, said the government would also seek to make it illegal for companies to supply the tools designed to create such images. Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person's consent, were not harmless images but weapons of abuse. The BBC has approached X for comment. It previously said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."


Ofcom investigates Elon Musk's X over Grok AI sexual deepfakes

BBC News

Ofcom has launched an investigation into Elon Musk's X over concerns its AI tool Grok is being used to create sexualised images. In a statement, the UK watchdog said there had been deeply concerning reports of the chatbot being used to create and share undressed images of people, as well as sexualised images of children. If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater. The BBC has approached X for comment. Elon Musk previously said the UK government wanted any excuse for censorship in response to a post questioning why other AI platforms were not being looked at.


UK presses X to address intimate deepfake images

Al Jazeera

The United Kingdom has urged Elon Musk's X to urgently address a proliferation of intimate "deepfake" images created on demand via its built-in AI chatbot Grok, joining a European outcry over a surge in nonconsensual imagery on the platform. The comments, made on Tuesday, follow reporting that Grok, prompted by users, was creating a flood of nonconsensual images of women and minors in skimpy clothing. "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online," Kendall said. "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." "X needs to deal with this urgently," Kendall said.


Elon Musk's X should deal with 'appalling' Grok AI deepfakes, government demands

BBC News

Technology Secretary Liz Kendall has called on Elon Musk's X to urgently deal with its artificial intelligence chatbot Grok being used to create non-consensual sexualised deepfake images of women and girls. The BBC has seen several examples on X of people asking the bot to digitally undress people to make them appear in bikinis without their consent, as well as putting them in sexual situations. Kendall said the situation was "absolutely appalling", adding: "We cannot and will not allow the proliferation of these degrading images. It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary." On Monday, the regulator Ofcom said it had made urgent contact with Elon Musk's company xAI and was investigating concerns that Grok has been producing undressed images of people.


DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation

Neural Information Processing Systems

Few-shot learning aims to adapt models trained on a base dataset to novel tasks whose categories were not seen by the model before. This often leads to a relatively concentrated distribution of feature values across channels on novel classes, posing challenges in determining channel importance for novel tasks. Standard few-shot learning methods employ geometric similarity metrics such as cosine similarity and negative Euclidean distance to gauge the semantic relatedness between two features. However, features with high geometric similarity may carry distinct semantics, especially in the context of few-shot learning. In this paper, we demonstrate that the importance ranking of feature channels is a more reliable indicator for few-shot learning than geometric similarity metrics. We observe that replacing the geometric similarity metric with Kendall's rank correlation only during inference improves the performance of few-shot learning across a wide range of methods and datasets spanning different domains. Furthermore, we propose a carefully designed differentiable loss for meta-training to address the non-differentiability of Kendall's rank correlation. By replacing geometric similarity with differentiable Kendall's rank correlation, our method can integrate with numerous existing few-shot approaches and is ready to integrate with future state-of-the-art methods that rely on geometric similarity metrics.
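The abstract hinges on Kendall's rank correlation, which compares features by the ordering of their channels rather than their geometry, and on a differentiable surrogate for it. The paper's exact loss is not reproduced here; a common way to soften the non-differentiable sign function is to replace it with tanh, so the sketch below (plain NumPy, with the sharpness parameter `alpha` as an illustrative assumption) shows both the exact pairwise form and that smoothed variant:

```python
import numpy as np

def kendall_tau(x, y):
    # Exact Kendall's rank correlation between two feature vectors:
    # mean of sign-agreement over all n*(n-1)/2 channel pairs
    # (concordant pairs contribute +1, discordant pairs -1).
    n = len(x)
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    iu = np.triu_indices(n, k=1)       # each unordered pair once
    return float((dx[iu] * dy[iu]).mean())

def soft_kendall_tau(x, y, alpha=10.0):
    # Differentiable surrogate: the non-differentiable sign(.) is
    # replaced by tanh(alpha * .). As alpha grows, this approaches
    # the exact tau while keeping gradients for meta-training.
    n = len(x)
    dx = np.tanh(alpha * (x[:, None] - x[None, :]))
    dy = np.tanh(alpha * (y[:, None] - y[None, :]))
    iu = np.triu_indices(n, k=1)
    return float((dx[iu] * dy[iu]).mean())
```

Because the surrogate depends only on channel-value differences, it can stand in for cosine similarity wherever a method scores query features against class prototypes, which is what lets it plug into existing few-shot pipelines.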


Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection

Neural Information Processing Systems

Model attributions are important in deep neural networks as they aid practitioners in understanding the models, but recent studies reveal that attributions can be easily perturbed by adding imperceptible noise to the input. The non-differentiable Kendall's rank correlation is a key performance index for attribution protection. In this paper, we first show that the expected Kendall's rank correlation is positively correlated to cosine similarity and then indicate that the direction of attribution is the key to attribution robustness. Based on these findings, we explore the vector space of attribution to explain the shortcomings of attribution defense methods using $\ell_p$ norm and propose integrated gradient regularizer (IGR), which maximizes the cosine similarity between natural and perturbed attributions. Our analysis further exposes that IGR encourages neurons with the same activation states for natural samples and the corresponding perturbed samples. Our experiments on different models and datasets confirm our analysis on attribution protection and demonstrate a decent improvement in adversarial robustness.
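The core objective the abstract describes is maximizing the cosine similarity between natural and perturbed attributions, since direction, not magnitude, drives Kendall's rank correlation in expectation. Below is a minimal NumPy sketch of that alignment term; the `igr_penalty` name and the `lam` weight are illustrative assumptions, and the integrated-gradients computation that would produce the attribution vectors is omitted:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    # Direction-only agreement between two attribution vectors;
    # eps guards against division by zero for degenerate inputs.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def igr_penalty(nat_attr, adv_attr, lam=1.0):
    # Hypothetical regularizer in the spirit of IGR: penalize
    # misalignment between natural and perturbed attributions,
    # i.e. minimize lam * (1 - cosine similarity). Perfectly
    # aligned attributions incur zero penalty.
    return lam * (1.0 - cosine_similarity(nat_attr, adv_attr))
```

In training, a term like this would be added to the task loss so that gradients push the perturbed attribution's direction toward the natural one, which is how direction-based alignment differs from the $\ell_p$-norm defenses the paper critiques.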