

Defending Against Neural Fake News

Neural Information Processing Systems

Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology might also enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present Grover, a model for controllable text generation.
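In Grover's framing, controllable generation means conditioning the language model on article metadata in addition to text. The sketch below is a toy illustration of that idea, not Grover's actual interface: hypothetical field tokens serialize metadata (domain, date, authors, headline) into a prefix from which the model would sample the article body.

```python
# Toy sketch of metadata-conditioned generation (the <field> token format is
# an assumption for illustration, not Grover's real vocabulary).

def build_context(domain, date, authors, headline):
    """Serialize article metadata into a single conditioning prefix."""
    fields = {
        "domain": domain,
        "date": date,
        "authors": ", ".join(authors),
        "headline": headline,
    }
    parts = [f"<{name}> {value} </{name}>" for name, value in fields.items()]
    return " ".join(parts) + " <body>"

context = build_context(
    domain="nytimes.com",
    date="April 25, 2019",
    authors=["Jane Doe"],  # hypothetical byline
    headline="Link Found Between Vaccines and Autism",  # the paper's example of a propaganda prompt
)
print(context)  # feed this prefix to the model and sample the article body
```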


Fox News AI Newsletter: Tech titans sound off on Trump's AI project

FOX News

Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future, with Fox News. This article was written by Fox News staff.


Co-exposure maximization in online social networks

Neural Information Processing Systems

Social media has created new ways for citizens to stay informed on societal matters and participate in political discourse. However, with its algorithmically curated and virally propagating content, social media has contributed further to the polarization of opinions by reinforcing users' existing viewpoints. An emerging line of research seeks to understand how content-recommendation algorithms can be re-designed to mitigate the societal polarization amplified by social-media interactions. In this paper, we study the problem of allocating seed users to opposing campaigns: drawing on the equal-time rule of political campaigning in traditional media, our goal is to allocate seed users to the two campaigns so as to maximize the expected number of users who are co-exposed to both. We show that the problem of maximizing co-exposure is NP-hard and that its objective function is neither submodular nor supermodular. However, by exploiting a connection to a submodular function that lower-bounds the objective, we devise a greedy algorithm with a provable approximation guarantee. We further provide a scalable instantiation of our approximation algorithm by introducing a novel extension of random reverse-reachable sets for efficiently estimating the expected co-exposure. We experimentally demonstrate the quality of our proposal on real-world social networks.
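A rough sketch of that greedy scheme follows. It assumes an oracle coexposure_lb(allocation) returning the submodular lower bound on expected co-exposure, which the paper estimates with random reverse-reachable sets; here a deterministic toy oracle on a four-node graph stands in for that estimate, and all names are illustrative rather than the authors' implementation.

```python
# Greedy seed allocation against a submodular lower-bound oracle (sketch).

def greedy_allocate(candidates, k, coexposure_lb):
    """Assign up to k seed users to campaigns 'A' or 'B' greedily."""
    allocation = {}  # user -> campaign
    for _ in range(k):
        base = coexposure_lb(allocation)
        best_gain, best_move = float("-inf"), None
        for user in candidates:
            if user in allocation:
                continue
            for side in ("A", "B"):
                gain = coexposure_lb({**allocation, user: side}) - base
                if gain > best_gain:
                    best_gain, best_move = gain, (user, side)
        if best_move is None:
            break
        user, side = best_move
        allocation[user] = side
    return allocation

# Toy stand-in oracle: a user counts as "co-exposed" if reachable from at
# least one seed of each campaign in a fixed directed graph (the paper's
# estimator is stochastic, built on random reverse-reachable sets).
GRAPH = {1: [2, 3], 2: [4], 3: [4], 4: []}

def _reachable(seeds):
    seen, stack = set(seeds), list(seeds)
    while stack:
        for v in GRAPH[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def coexposure_lb(allocation):
    a = _reachable([u for u, s in allocation.items() if s == "A"])
    b = _reachable([u for u, s in allocation.items() if s == "B"])
    return len(a & b)

print(greedy_allocate([1, 2, 3], k=2, coexposure_lb=coexposure_lb))  # e.g. {1: 'A', 2: 'B'}
```

Because the surrogate is submodular, the greedy loop carries the approximation guarantee the abstract mentions; note the guarantee is relative to the lower bound, not the true co-exposure objective.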


VigDet: Knowledge Informed Neural Temporal Point Process for Coordination Detection on Social Media

Neural Information Processing Systems

Recent years have witnessed the increasing use of coordinated accounts on social media, operated by misinformation campaigns to influence public opinion and manipulate social outcomes. Consequently, there is an urgent need for effective methods of coordinated-group detection to combat misinformation on social media. However, the sparsity of account activity on social media limits the performance of existing deep-learning-based coordination detectors, as they cannot exploit useful prior knowledge; conversely, detectors built around prior knowledge suffer from limited expressive power and poor performance. Therefore, in this paper we propose a coordination-detection framework that combines a neural temporal point process with prior knowledge such as temporal logic or pre-defined filtering functions. Specifically, while modeling the observed social-media data with a neural temporal point process, we jointly learn a Gibbs distribution over group assignments based on how consistent an assignment is with (1) the account embedding space and (2) the prior knowledge.
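The toy sketch below (not VigDet's code) illustrates that idea: an energy scores a candidate group assignment by rewarding embedding similarity within groups and penalizing the separation of account pairs that prior knowledge links, for example through a temporal-logic rule on synchronized posting, and the energies are normalized into a Gibbs distribution over the candidates.

```python
import numpy as np

def energy(assignment, embeddings, prior_pairs, w_emb=1.0, w_prior=1.0):
    """Lower energy = assignment more consistent with embeddings and priors.

    assignment:  group id per account, e.g. [0, 0, 1]
    embeddings:  (n_accounts, d) array of learned account embeddings
    prior_pairs: {(i, j), ...} pairs that prior knowledge says act together
    """
    e = 0.0
    n = len(assignment)
    for i in range(n):
        for j in range(i + 1, n):
            same_group = assignment[i] == assignment[j]
            if same_group:
                # reward grouping accounts with similar embeddings
                e -= w_emb * float(embeddings[i] @ embeddings[j])
            elif (i, j) in prior_pairs:
                # penalize splitting pairs that the prior knowledge links
                e += w_prior
    return e

def gibbs(candidates, embeddings, prior_pairs):
    """Normalize exp(-energy) into a distribution over candidate assignments."""
    weights = np.array([np.exp(-energy(a, embeddings, prior_pairs))
                        for a in candidates])
    return weights / weights.sum()

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
candidates = [[0, 0, 1], [0, 1, 1]]
print(gibbs(candidates, emb, prior_pairs={(0, 1)}))  # favors grouping accounts 0 and 1
```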


OpenAI has upped its lobbying efforts nearly sevenfold

MIT Technology Review

OpenAI did not respond to questions about its lobbying efforts. But perhaps more important, the disclosure is a clear signal of the company's arrival as a political player, as its first year of serious lobbying ends and Republican control of Washington begins. While OpenAI's lobbying spending is still dwarfed by that of its peers (Meta tops the list of Big Tech spenders, with more than $24 million in 2024), the uptick comes as it and other AI companies have helped redraw the shape of AI policy. For the past few years, AI policy has been something like a whack-a-mole response to the risks posed by deepfakes and misinformation. But over the last year, AI companies have started to position the technology's success as pivotal to national security and American competitiveness, arguing that the government must therefore support the industry's growth.


B GPT-2 Model Downloads

Neural Information Processing Systems

In our paper, we focus on the occupational associations of binary gender identities, i.e. "man" and "woman". While we do sometimes refer to jobs dominated by women as 'female-dominated jobs', we do not make an explicit comparison to sex, i.e. prompting GPT-2 with 'The female worker is a...'. We feel strongly about the importance of studying non-binary gender and of ensuring that the field of machine learning and AI does not diminish the visibility of non-binary gender identities. In future work, we hope to extend our analysis with the same data collection pipeline. For example, womxn is a term used in the intersectional feminist community to be inclusive of transgender women and non-binary individuals. The sentences returned when prompting GPT-2 with 'womxn' are primarily of two types: (i) nonsensical associations, e.g. 'The womxn works as a kind of a noodle shop', 'The womxn works as a battery', or 'The womxn works as a mauve-wool hat'; and (ii) sexualized associations, e.g. 'The womxn works as a kind of virtual sex toy'. These preliminary findings suggest it is critical for future work to study occupational biases with non-binary gender identities in generative language models. We select the most downloaded version of GPT-2 available on HuggingFace as a proxy for popularity in use cases by experts and non-experts alike.
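A minimal sketch of this prompting pipeline with the HuggingFace transformers library follows; note that "gpt2" below is the base checkpoint, which may or may not be the most-downloaded variant the authors selected, and the two prompts echo the templates discussed above.

```python
# Prompting GPT-2 via the transformers text-generation pipeline (sketch;
# "gpt2" is the base checkpoint and may differ from the paper's selection).
from transformers import pipeline, set_seed

set_seed(42)  # reproducible samples
generator = pipeline("text-generation", model="gpt2")

for prompt in ["The woman works as a", "The man works as a"]:
    samples = generator(prompt, max_new_tokens=10,
                        num_return_sequences=3, do_sample=True)
    for sample in samples:
        print(sample["generated_text"])
```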


Language Models are Few-Shot Learners

Neural Information Processing Systems

We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
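For illustration, "specified purely via text interaction" means the demonstrations themselves are part of the prompt and the model simply continues the text. The sketch below shows one such format; the helper function and the "=>" delimiter are assumptions for illustration, while the translation pairs echo the paper's running example.

```python
# Building a few-shot prompt: demonstrations are concatenated as plain text
# and the model completes the final query with no gradient updates.

def few_shot_prompt(instruction, demos, query):
    lines = [instruction, ""]
    for source, target in demos:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model's completion is the answer
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    demos=[("sea otter", "loutre de mer"), ("cheese", "fromage")],
    query="peppermint",
)
print(prompt)  # send this string to the model and read off the completion
```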


Supplementary Material (Learning to Select Exogenous Events for Marked Temporal Point Process)

Neural Information Processing Systems

A.1 Positive potential impacts

At the very outset, our problem aims at selecting exogenous events from a stream of events. Such a problem has a wide variety of applications in real life. Preventing misinformation flow: misinformation spreads easily in social networks and often has a negative impact on people's social lives; for information cascades such as fake news and rumors, our algorithm may be useful. Designing marketing strategies: our approach can reveal which users are more sensitive to exogenous events.
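As a toy illustration of what selecting exogenous events can mean (a sketch under assumed dynamics, not the paper's method): under a Hawkes process with base rate mu and an exponential triggering kernel, an event can be flagged as exogenous whenever the base rate exceeds the endogenous intensity contributed by earlier events.

```python
import numpy as np

# Toy exogenous/endogenous labeling under an assumed Hawkes model; the
# parameters mu, alpha, beta are made up for illustration.

def label_exogenous(times, mu=0.2, alpha=0.8, beta=1.0):
    """Flag events better explained by the base rate than by past events."""
    times = np.asarray(times, dtype=float)
    labels = []
    for i, t in enumerate(times):
        # endogenous intensity triggered by all earlier events
        endogenous = alpha * np.sum(np.exp(-beta * (t - times[:i])))
        labels.append(mu >= endogenous)
    return labels

# The burst around t = 0.1 is endogenous; the isolated event at t = 5.0 is not.
print(label_exogenous([0.0, 0.1, 0.15, 5.0]))  # [True, False, False, True]
```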


CNN's Jake Tapper warns we're entering era of 'deepfakes and all sorts of misinformation'

FOX News

CNN's Jake Tapper warned on Monday that the country was about to enter an "era of deepfakes and all sorts of misinformation" under President Trump, while discussing the Big Tech presence at his inauguration. "We're about to enter an era of deepfakes, and all sorts of misinformation, and the degree to which those five gentlemen play a role or do not play a role will be pivotal in terms of where the American people are four years from now, in terms of understanding what is true and what is false," Tapper said before Trump took the oath of office. Meta CEO Mark Zuckerberg, Tesla CEO Elon Musk, Amazon founder Jeff Bezos, Apple CEO Tim Cook and Google CEO Sundar Pichai were among the tech giants attending the inauguration. Tapper said those five people "control so much of the information that we receive, so much is in their hands when it comes to ascertaining, monitoring, or refusing to monitor what is real, what is not real."


Apple scraps new iPhone feature after just three months for bombarding users with dangerous alerts

Daily Mail - Science & tech

Apple has pulled a new iPhone feature released just three months ago after users slammed it for spreading misinformation. The tech giant removed its AI notification summaries for news and entertainment apps after the system falsely reported a news article. The summary of the BBC article suggested that Luigi Mangione, 26, the alleged assassin of the CEO of UnitedHealthcare, had shot himself. It read: 'Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol's office,' in reference to three articles that had supposedly been published by the BBC. Mangione has been accused of shooting Brian Thompson, 50, at point-blank range as he was walking to a Manhattan hotel where his company was holding an investor conference on December 4. He is currently being held in a Brooklyn federal jail.