Benchmarking zero-shot stance detection with FlanT5-XXL: Insights from training data, prompting, and decoding strategies into its near-SoTA performance

Rachith Aiyappa, Shruthi Senthilmani, Jisun An, Haewoon Kwak, Yong-Yeol Ahn

arXiv.org Artificial Intelligence 

Stance detection is a fundamental computational language task that is widely used across many disciplines such as political science and communication studies (Wang et al., 2019b; Küçük and Can, 2020). Its goal is to extract the standpoint or stance (e.g., Favor, Against, or Neutral) towards a target from a given text. Given that modern democratic societies make societal decisions by aggregating people's explicit stances through voting, estimation of people's stances is a useful task. While a representative survey is the gold standard, it falls short in scalability and cost (Salganik, 2019). Surveys can also produce biased results due to people's tendency to report more socially acceptable positions even in …

Such fine-tuning approaches can benefit from both the general language understanding from the pre-training as well as the problem-specific training, even without spending a huge amount of computing resources (Wang et al., 2022a). More recently, the GPT family of models (Radford et al., 2019; Brown et al., 2020) birthed another powerful and even simpler paradigm of in-context learning ("few-shot" or "zero-shot"). Instead of tuning any parameters of the model, it simply uses the input to guide the model to produce the desired output for downstream tasks. For instance, a few examples related to the task can be fed as the context to the LLM.
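To make the in-context idea concrete, the following is a minimal sketch of zero-shot stance detection with an instruction-tuned model such as FlanT5, using the Hugging Face transformers library. The prompt template, label set, and checkpoint choice (google/flan-t5-xxl) are illustrative assumptions, not necessarily the exact prompt or setup benchmarked in the paper.

```python
# Minimal zero-shot stance detection sketch with FlanT5 via Hugging Face transformers.
# No parameters are tuned; the instruction in the prompt alone steers the model.
# Prompt wording and label set are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-xxl"  # smaller checkpoints (e.g., flan-t5-large) also run

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


def detect_stance(text: str, target: str) -> str:
    """Return the model's predicted stance (Favor / Against / Neutral) toward `target`."""
    prompt = (
        f'Text: "{text}"\n'
        f'What is the stance of the text toward "{target}"? '
        "Answer with exactly one word: Favor, Against, or Neutral."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding as the simplest default; the paper's title points to a
    # comparison of decoding strategies, which would vary this step.
    output_ids = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()


if __name__ == "__main__":
    print(detect_stance("Wind farms are ruining our coastline.", "renewable energy"))
```

Because the model emits free text rather than a class index, some post-processing (e.g., lowercasing and matching the first word against the label set) is typically needed to map generations back to stance labels.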
