Supplementary Materials: In-Context Impersonation Reveals Large Language Models' Strengths and Biases

Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata

37th Conference on Neural Information Processing Systems (NeurIPS 2023)

In this supplementary material we show additional results mentioned in the main paper. First, we give experimental details in Section A, covering the prompt variations generated by meta-prompting (Section A.1) and the amount of compute required to reproduce our experiments (Section A.2): for all Vicuna-13B based experiments (bandit, reasoning, and vision), we used a single Nvidia A100-40GB GPU. Next, we show results for Llama 2 on the bandit task in Section B. Afterwards, we show additional quantitative results for the expertise-based impersonation in Section C.1. Section D provides additional details about the vision and language tasks. For more details on the code, please refer to the README.md.

Work done whilst visiting the University of Tübingen.

A.1 Prompt variations generated by meta-prompting
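As a rough illustration of how persona-conditioned prompt variations can be assembled, the sketch below instantiates a few base templates per persona. The template wording and persona list here are illustrative assumptions, not the actual prompts from the paper, which were generated via meta-prompting.

```python
# Hedged sketch: assembling in-context impersonation prompt variations.
# The personas and templates below are illustrative placeholders, not the
# paper's meta-prompted variations.

PERSONAS = ["4 year old", "20 year old", "task expert"]

# Hypothetical base templates; meta-prompting would instead ask an LLM to
# paraphrase a seed prompt, which we emulate here with fixed variants.
TEMPLATES = [
    "If you were a {persona}, how would you answer the following question?",
    "Imagine you are a {persona}. Answer the question below.",
    "Take the role of a {persona} and respond to this question.",
]

def prompt_variations(persona: str) -> list[str]:
    """Return every template instantiated for one persona."""
    return [t.format(persona=persona) for t in TEMPLATES]

# Cross product of personas and templates: one prompt per (persona, template).
all_prompts = [p for persona in PERSONAS for p in prompt_variations(persona)]
```

Each experiment would then prepend one of these impersonation prompts to the task description before querying the language model.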