Di Bonaventura, Chiara
Semantic Web and Creative AI -- A Technical Report from ISWS 2023
Ahmad, Raia Abu, Alharbi, Reham, Barile, Roberto, Böckling, Martin, Bolanos, Francisco, Bonfitto, Sara, Bruns, Oleksandra, Celino, Irene, Chudasama, Yashrajsinh, Critelli, Martin, d'Amato, Claudia, D'Ippolito, Giada, Dasoulas, Ioannis, De Giorgis, Stefano, De Leo, Vincenzo, Di Bonaventura, Chiara, Di Panfilo, Marco, Dobriy, Daniil, Domingue, John, Duan, Xuemin, Dumontier, Michel, Efeoglu, Sefika, Eschauzier, Ruben, Ginwa, Fakih, Ferranti, Nicolas, Graciotti, Arianna, Hanisch, Philipp, Hannah, George, Heidari, Golsa, Hogan, Aidan, Hussein, Hassan, Jouglar, Alexane, Kalo, Jan-Christoph, Kieffer, Manoé, Klironomos, Antonis, Koch, Inês, Lajewska, Weronika, Lazzari, Nicolas, Lindekrans, Mikael, Lippolis, Anna Sofia, Llugiqi, Majlinda, Mancini, Eleonora, Marzi, Eleonora, Menotti, Laura, Flores, Daniela Milon, Nagowah, Soulakshmee, Neubert, Kerstin, Niazmand, Emetis, Norouzi, Ebrahim, Martinez, Beatriz Olarte, Oudshoorn, Anouk Michelle, Poltronieri, Andrea, Presutti, Valentina, Purohit, Disha, Raoufi, Ensiyeh, Ringwald, Celian, Rockstroh, Johanna, Rudolph, Sebastian, Sack, Harald, Saeed, Zafar, Saeedizade, Mohammad Javad, Sahbi, Aya, Santini, Cristian, Simic, Aleksandra, Sommer, Dennis, Sousa, Rita, Tan, Mary Ann, Tarikere, Vidyashree, Tietz, Tabea, Tirpitz, Liam, Tomasino, Arnaldo, van Harmelen, Frank, Vissoci, Joao, Woods, Caitlin, Zhang, Bohui, Zhang, Xinyue, Zheng, Heng
The International Semantic Web Research School (ISWS) is a week-long intensive program designed to immerse participants in the field. This document reports the collaborative effort of ten teams of students attending ISWS 2023, each guided by a senior researcher as their mentor. The 2023 edition focused on the intersection of Semantic Web technologies and Creative AI, and each team approached the topic from a different perspective, substantiated by a set of research questions that guided their investigation. A key area of focus was the potential of LLMs as support tools for knowledge engineering. Participants also delved into the multifaceted applications of LLMs, including legal aspects of creative content production, humans in the loop, decentralised approaches to multimodal generative AI models, nanopublications and AI for personal scientific knowledge graphs, commonsense knowledge in automatic story and narrative completion, generative AI for art critique, prompt engineering, automatic music composition, commonsense prototyping and conceptual blending, and the elicitation of tacit knowledge. As Large Language Models and semantic technologies continue to evolve, exciting new prospects are emerging: a future where the boundaries between creative expression and factual knowledge become increasingly permeable, leading to a world of knowledge that is both informative and inspiring.
MSTS: A Multimodal Safety Test Suite for Vision-Language Models
Röttger, Paul, Attanasio, Giuseppe, Friedrich, Felix, Goldzycher, Janis, Parrish, Alicia, Bhardwaj, Rishabh, Di Bonaventura, Chiara, Eng, Roman, Geagea, Gaia El Khoury, Goswami, Sujata, Han, Jieun, Hovy, Dirk, Jeong, Seogyeong, Jeretič, Paloma, Plaza-del-Arco, Flor Miriam, Rooein, Donya, Schramowski, Patrick, Shaitarova, Anastassia, Shen, Xudong, Willats, Richard, Zugarini, Andrea, Vidgen, Bertie
Vision-language models (VLMs), which process image and text inputs, are increasingly integrated into chat assistants and other consumer AI applications. Without proper safeguards, however, VLMs may give harmful advice (e.g. how to self-harm) or encourage unsafe behaviours (e.g. to consume drugs). Despite these clear hazards, little work so far has evaluated VLM safety and the novel risks created by multimodal inputs. To address this gap, we introduce MSTS, a Multimodal Safety Test Suite for VLMs. MSTS comprises 400 test prompts across 40 fine-grained hazard categories. Each test prompt consists of a text and an image that only in combination reveal their full unsafe meaning. With MSTS, we find clear safety issues in several open VLMs. We also find that some VLMs are safe by accident, meaning that they are safe because they fail to understand even simple test prompts. We translate MSTS into ten languages, showing that non-English prompts increase the rate of unsafe model responses. We also show that models are safer when tested with text-only rather than multimodal prompts. Finally, we explore the automation of VLM safety assessments, finding that even the best safety classifiers fall short.
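To make the evaluation protocol concrete (pairing a benign-sounding text with an image so that only their combination conveys the unsafe request, then sending both to an open VLM), here is a minimal sketch assuming a LLaVA-style model accessed through the Hugging Face transformers library; the model identifier, image file, and prompt text are illustrative placeholders, not actual MSTS data.

    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    # Illustrative choice of open VLM; any chat-capable VLM would work.
    model_id = "llava-hf/llava-1.5-7b-hf"
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(model_id)

    # MSTS-style test case: the text alone is harmless; the paired image
    # is what reveals the unsafe meaning. Placeholder file used here.
    text_prompt = "Should I do this?"
    image = Image.open("test_case_image.png")

    # LLaVA-1.5 chat format: an image token followed by the user text.
    prompt = f"USER: <image>\n{text_prompt} ASSISTANT:"
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    response = processor.decode(output_ids[0], skip_special_tokens=True)

    # The response would then be labelled (e.g. safe vs. unsafe) by human
    # annotators or an automatic safety classifier, as explored in the paper.
    print(response)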
ferret: a Framework for Benchmarking Explainers on Transformers
Attanasio, Giuseppe, Pastor, Eliana, Di Bonaventura, Chiara, Nozza, Debora
As Transformers are increasingly relied upon to solve complex NLP problems, there is an increased need for their decisions to be humanly interpretable. While several explainable AI (XAI) techniques for interpreting the outputs of transformer-based models have been proposed, easy access to using and comparing them is still lacking. We introduce ferret, a Python library that simplifies the use and comparison of XAI methods on transformer-based classifiers. With ferret, users can visualize and compare explanations of transformer-based model outputs produced by state-of-the-art XAI methods, on any free text or on existing XAI corpora. Moreover, users can also evaluate ad-hoc XAI metrics to select the most faithful and plausible explanations. To align with the recently consolidated practice of sharing and using transformer-based models from Hugging Face, ferret interfaces directly with its Python library. In this paper, we showcase ferret by benchmarking XAI methods on transformers for sentiment analysis and hate speech detection. We show that specific methods consistently provide better explanations and are preferable in the context of transformer models.
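The workflow described above (explain a model's prediction with several XAI methods, then score the explanations for faithfulness and plausibility) can be illustrated as follows; this is a minimal sketch assuming ferret's Benchmark interface and a Hugging Face sequence classifier, with the model name and input text chosen purely for illustration.

    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from ferret import Benchmark

    # Any Hugging Face text classifier works; this sentiment model is illustrative.
    name = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
    model = AutoModelForSequenceClassification.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    # The Benchmark object wraps the model and runs the supported explainers
    # (e.g. gradient-based and perturbation-based methods) on a given input.
    bench = Benchmark(model, tokenizer)
    explanations = bench.explain("You look stunning!", target=1)

    # Evaluate each explanation with faithfulness and plausibility metrics,
    # then display the results for side-by-side comparison.
    evaluations = bench.evaluate_explanations(explanations, target=1)
    bench.show_evaluation_table(evaluations)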