Vo, Nguyen
STRUM-LLM: Attributed and Structured Contrastive Summarization
Gunel, Beliz, Wendt, James B., Xie, Jing, Zhou, Yichao, Vo, Nguyen, Fisher, Zachary, Tata, Sandeep
Users often struggle with decision-making between two options (A vs. B), as it usually requires time-consuming research across multiple web pages. We propose STRUM-LLM, which addresses this challenge by generating attributed, structured, and helpful contrastive summaries that highlight key differences between the two options. STRUM-LLM identifies helpful contrast: the specific attributes along which the two options differ significantly and that are most likely to influence the user's decision. Our technique is domain-agnostic and does not require any human-labeled data or a fixed attribute list as supervision. STRUM-LLM attributes all extractions back to the input sources along with textual evidence, and it imposes no limit on the length of the input sources it can process. STRUM-LLM Distilled achieves 100x higher throughput than models with comparable performance while being 10x smaller. In this paper, we provide extensive evaluations for our method and lay out future directions for our currently deployed system.
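A minimal sketch of the kind of output the abstract describes: structured per-attribute contrasts with source attribution, filtered down to the attributes where the two options actually differ. The class and function names here are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class AttributeContrast:
    """One attribute along which options A and B may differ."""
    attribute: str
    value_a: str
    value_b: str
    # Source URLs / verbatim evidence spans backing each extraction
    # (attribution back to the input sources, per the abstract).
    evidence_a: list = field(default_factory=list)
    evidence_b: list = field(default_factory=list)

def helpful_contrasts(candidates, differ, max_items=5):
    """Keep only attributes where the two options differ significantly.

    `differ` is a placeholder predicate (in the real system this judgment
    would come from an LLM) deciding whether two values form a
    meaningful contrast.
    """
    kept = [c for c in candidates if differ(c.value_a, c.value_b)]
    return kept[:max_items]

candidates = [
    AttributeContrast("battery life", "10 h", "18 h"),
    AttributeContrast("weight", "1.2 kg", "1.2 kg"),  # no contrast
]
summary = helpful_contrasts(candidates, differ=lambda a, b: a != b)
```

Here the identical "weight" values are dropped, leaving only the contrast that could influence a decision.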
Hierarchical Multi-head Attentive Network for Evidence-aware Fake News Detection
Vo, Nguyen, Lee, Kyumin
The proliferation of biased news, misleading claims, disinformation and fake news has caused heightened negative effects on modern society in various domains ranging from politics and economics to public health. A recent study showed that maliciously fabricated and partisan stories possibly caused citizens' misperception about political candidates (Allcott and Gentzkow, 2017) during the 2016 U.S. presidential elections. In economics, the spread of fake news has manipulated stock prices. To detect fake news, researchers proposed to use linguistics and textual content (Castillo et al., 2011; Zhao et al., 2015; Liu et al., 2015). Since textual claims are usually deliberately written to deceive readers, it is hard to detect fake news by relying solely on the content of claims. Therefore, multiple works utilized other signals such as temporal spreading patterns (Liu and Wu, 2018), network structures (Wu and Liu, 2018; Vo and Lee, 2018; Shu et al., 2020) and users' feedback (Vo and Lee, 2019; Shu et al., 2019; Vo and Lee, 2020a).
Where Are the Facts? Searching for Fact-checked Information to Alleviate the Spread of Fake News
Vo, Nguyen, Lee, Kyumin
Although many fact-checking systems have been developed in academia and industry, fake news is still proliferating on social media. These systems mostly focus on fact-checking but usually neglect online users, who are the main drivers of the spread of misinformation. How can we use fact-checked information to improve users' awareness of the fake news to which they are exposed? How can we stop users from spreading fake news? To tackle these questions, we propose a novel framework to search for fact-checking articles that address the content of an original tweet (which may contain misinformation) posted by online users. The search can directly warn fake news posters and online users (e.g. the posters' followers) about misinformation, discourage them from spreading fake news, and scale up verified content on social media. Our framework uses both text and images to search for fact-checking articles, and achieves promising results on real-world datasets. Our code and datasets are released at https://github.com/nguyenvo09/EMNLP2020.
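A toy sketch of the multimodal retrieval idea the abstract describes: score each candidate fact-checking article by a weighted combination of a text match and an image match against the tweet. The similarity functions and weighting below are illustrative stand-ins (the paper's actual model is learned), and the field names are assumptions.

```python
import math

def jaccard(text_a, text_b):
    """Token-overlap similarity between a tweet and an article (toy text match)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    """Cosine similarity between precomputed image embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_articles(tweet_text, tweet_img, articles, alpha=0.5):
    """Rank fact-checking articles by a weighted text + image score."""
    scored = [
        (alpha * jaccard(tweet_text, art["text"])
         + (1 - alpha) * cosine(tweet_img, art["img"]), art["id"])
        for art in articles
    ]
    return [aid for _, aid in sorted(scored, reverse=True)]
```

With `alpha` the retriever can trade off textual against visual evidence; the image signal helps when the tweet text alone is too short or vague to match an article.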