

Grammarly pulls AI author-impersonation tool after backlash

BBC News

Writing tool Grammarly has disabled an AI feature that mimicked the personas of prominent writers, including Stephen King and scientist Carl Sagan, following a backlash from the people impersonated. The Expert Review function, which offered writing feedback inspired by the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm that runs Grammarly. The feature was met with resistance, including a multi-million-dollar lawsuit, from writers who found their names and reputations used as AI personas without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging the tool had misrepresented the voices of experts. Investigative journalist Julia Angwin, a New York Times contributing opinion writer, is the lead plaintiff in a class-action lawsuit filed against Superhuman and Grammarly in the Southern District of New York.





Supplementary Materials: In-Context Impersonation Reveals Large Language Models' Strengths and Biases

Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata

Neural Information Processing Systems

In this supplementary material we show additional results mentioned in the main paper. First, we give experimental details in Section A, including the prompt variations generated by meta-prompting (Section A.1) and the amount of compute required to reproduce our experiments (Section A.2). Next, we show results for Llama 2 on the bandit task in Section B. Afterwards, we show additional quantitative results for the expertise-based experiments in Section C.1. Section D provides additional details about the vision and language tasks. For more details on the code, please refer to the README.md. For all Vicuna-13B based experiments (bandit, reasoning and vision) we used a single Nvidia A100-40GB GPU.





Who's asking? User personas and the mechanics of latent misalignment

Neural Information Processing Systems

Studies show that safety-tuned models may nevertheless divulge harmful information. In this work, we show that whether they do so depends significantly on who they are talking to, which we refer to as user persona. In fact, we find manipulating user persona to be more effective for eliciting harmful content than certain more direct attempts to control model refusal. We study both natural language prompting and activation steering as intervention methods, and show that activation steering is significantly more effective at bypassing safety filters. We shed light on the mechanics of this phenomenon by showing that even when model generations are safe, harmful content can persist in hidden representations and can be extracted by decoding from earlier layers. We also show we can predict a persona's effect on refusal given only the geometry of its steering vector. Finally, we show that certain user personas induce the model to form more charitable interpretations of otherwise dangerous queries.
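The activation-steering intervention the abstract mentions can be illustrated with a minimal numeric sketch. Everything here is illustrative: the function names and toy dimensions are hypothetical, and a real implementation would collect activations from, and add the vector into, a transformer's hidden states at a chosen layer rather than operating on random arrays.

```python
import numpy as np

def persona_steering_vector(persona_acts, neutral_acts):
    """Difference-of-means steering vector: mean activation under a
    persona-conditioned prompt minus mean activation under a neutral one."""
    return persona_acts.mean(axis=0) - neutral_acts.mean(axis=0)

def apply_steering(hidden, vector, alpha=1.0):
    """Add the scaled steering vector to every token's hidden state."""
    return hidden + alpha * vector

# Toy example: 8 activation samples of dimension 4 per condition.
rng = np.random.default_rng(0)
persona_acts = rng.normal(loc=1.0, size=(8, 4))
neutral_acts = rng.normal(loc=0.0, size=(8, 4))

v = persona_steering_vector(persona_acts, neutral_acts)   # shape (4,)
steered = apply_steering(rng.normal(size=(3, 4)), v, alpha=0.5)
```

The paper's observation that a persona's effect on refusal is predictable from the geometry of its steering vector suggests why a simple additive intervention like this can be so effective: the direction of `v`, not the prompt wording, carries the persona.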


IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering

Neural Information Processing Systems

To evaluate Large Language Models (LLMs) for question answering (QA), traditional methods typically focus on directly assessing the immediate responses generated by the models given the question and context. In the common use case of humans seeking an AI assistant's help in finding information, these non-interactive evaluations do not account for the dynamic nature of human-model conversations, and interaction-aware evaluations have shown that accurate models are not necessarily preferred by humans (Lee et al.). Recent works in human-computer interaction (HCI) have employed human evaluators to conduct interactions and evaluations, but they are often prohibitively expensive and time-consuming to scale. In this work, we introduce an automated evaluation framework, IQA-EVAL, for Interactive Question Answering Evaluations. More specifically, we introduce an LLM-based Evaluation Agent (LEA) that can: (1) simulate human behaviors to generate interactions with IQA models; and (2) automatically evaluate the generated interactions. Moreover, we propose assigning personas to LEAs to better simulate groups of real human evaluators. We show that: (1) our evaluation framework with GPT-4 (or Claude) as the backbone model achieves a high correlation with human evaluations on the IQA task; and (2) assigning personas to LEAs to better represent the crowd further significantly improves correlations. Finally, we use our automated metric to evaluate five recent LLMs with over 1000 questions from complex and ambiguous question answering tasks, an evaluation that would cost $5k if conducted by humans.
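Assigning a persona to an LLM-based evaluation agent, as IQA-EVAL proposes, typically amounts to conditioning the agent's prompt on a persona description before it interacts with the QA model. The sketch below is a hypothetical illustration of that idea only: the persona texts, template, and function names are invented here, not taken from the paper.

```python
# Illustrative persona descriptions (not the paper's actual personas).
PERSONAS = {
    "expert": "You are a domain expert who asks precise follow-up questions.",
    "novice": "You are a curious novice who needs jargon explained simply.",
}

def build_evaluator_prompt(persona: str, question: str) -> str:
    """Compose a prompt for a persona-conditioned evaluation agent (LEA)
    that will interact with a QA model and then score the interaction."""
    return (
        f"{PERSONAS[persona]}\n"
        f"Interact with the assistant to answer: {question}\n"
        "Afterwards, rate the interaction for helpfulness and fluency (1-5)."
    )

prompt = build_evaluator_prompt("novice", "Why is the sky blue?")
```

Because the persona lives entirely in the prompt, sampling evaluation agents from a distribution over personas is a cheap way to approximate a crowd of human evaluators with differing expertise.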