For Those Who May Find Themselves on the Red Team
This position paper argues that literary scholars must engage with large language model (LLM) interpretability research. While doing so will involve ideological struggle, if not outright complicity, the necessity of this engagement is clear: the abiding instrumentality of current approaches to interpretability cannot be the only standard by which we measure interpretation with LLMs. One site at which this struggle could take place, I suggest, is the red team.
arXiv.org Artificial Intelligence
Nov-25-2025