Kowald, Dominik
Practical Application and Limitations of AI Certification Catalogues in the Light of the AI Act
Autischer, Gregor, Waxnegger, Kerstin, Kowald, Dominik
In this work-in-progress, we investigate the certification of AI systems, focusing on the practical application and limitations of existing certification catalogues in the light of the AI Act by attempting to certify a publicly available AI system. We aim to evaluate how well current approaches support the effective certification of an AI system, and how publicly accessible AI systems that might not be actively maintained or initially intended for certification can be selected and used for a sample certification process. Our methodology involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive tool to systematically assess an AI model's compliance with certification standards. We find that while the catalogue effectively structures the evaluation process, it can also be cumbersome and time-consuming to use. We observe the limitations of an AI system that no longer has an active development team and highlight the importance of complete system documentation. Finally, we identify some limitations of the certification catalogues used and propose ideas on how to streamline the certification process.
De-centering the (Traditional) User: Multistakeholder Evaluation of Recommender Systems
Burke, Robin, Adomavicius, Gediminas, Bogers, Toine, Di Noia, Tommaso, Kowald, Dominik, Neidhardt, Julia, Özgöbek, Özlem, Pera, Maria Soledad, Tintarev, Nava, Ziegler, Jürgen
Expanding the frame of evaluation to include other parties, as well as the ecosystem in which the system is deployed, leads us to a multistakeholder view of recommender system evaluation as defined in [2]: "A multistakeholder evaluation is one in which the quality of recommendations is assessed across multiple groups of stakeholders." In this article, we provide (i) an overview of the types of recommendation stakeholders that can be considered in conducting such evaluations, (ii) a discussion of the considerations and values that enter into developing measures that capture outcomes of interest for a diversity of stakeholders, (iii) an outline of a methodology for developing and applying multistakeholder evaluation, and (iv) three examples of different multistakeholder scenarios including derivations of evaluation metrics for different stakeholder groups in these different scenarios. The variety of possible stakeholders that are part of the general recommendation ecosystem is suggested in Figure 1 and defined here, using the terminology from [1, 2]: Recommendation consumers are the traditional recommender system users to whom recommendations are delivered and toward whom typical forms of recommender system evaluation are oriented. Item providers form the general class of individuals or entities who create or otherwise stand behind the items being recommended.
Establishing and Evaluating Trustworthy AI: Overview and Research Challenges
Kowald, Dominik, Scher, Sebastian, Pammer-Schindler, Viktoria, Müllner, Peter, Waxnegger, Kerstin, Demelius, Lea, Fessl, Angela, Toller, Maximilian, Estrada, Inti Gabriel Mendoza, Simic, Ilija, Sabol, Vedran, Truegler, Andreas, Veas, Eduardo, Kern, Roman, Nad, Tomislav, Kopeinik, Simone
However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable manners. As a result, there has been a surge in public and academic discussions about aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: 1) human agency and oversight, 2) fairness and non-discrimination, 3) transparency and explainability, 4) robustness and accuracy, 5) privacy and security, and 6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to 1) interdisciplinary research, 2) conceptual clarity, 3) context-dependency, 4) dynamics in evolving systems, and 5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.
Reproducibility in Machine Learning-based Research: Overview, Barriers and Drivers
Semmelrock, Harald, Ross-Hellauer, Tony, Kopeinik, Simone, Theiler, Dieter, Haberl, Armin, Thalmann, Stefan, Kowald, Dominik
Research in various fields is currently experiencing challenges regarding the reproducibility of results. This problem is also prevalent in machine learning (ML) research. The issue arises, for example, due to unpublished data and/or source code and the sensitivity of ML training conditions. Although different solutions have been proposed to address this issue, such as using ML platforms, the level of reproducibility in ML-driven research remains unsatisfactory. Therefore, in this article, we discuss the reproducibility of ML-driven research with three main aims: (i) identifying the barriers to reproducibility when applying ML in research and categorizing them according to different types of reproducibility (description, code, data, and experiment reproducibility), (ii) discussing potential drivers such as tools, practices, and interventions that support ML reproducibility, and distinguishing between technology-driven drivers, procedural drivers, and drivers related to awareness and education, and (iii) mapping the drivers to the barriers. With this work, we hope to provide insights and to contribute to the decision-making process regarding the adoption of different solutions to support ML reproducibility.
Take the aTrain. Introducing an Interface for the Accessible Transcription of Interviews
Haberl, Armin, Fleiß, Jürgen, Kowald, Dominik, Thalmann, Stefan
aTrain is an open-source and offline tool for transcribing audio data in multiple languages with CPU and NVIDIA GPU support. It is specifically designed for researchers using qualitative data generated from various forms of speech interactions with research participants. aTrain requires no programming skills, runs on most computers, does not require an internet connection, and was verified not to upload data to any server. aTrain combines OpenAI's Whisper model with speaker recognition to provide output that integrates with the popular qualitative data analysis software tools MAXQDA and ATLAS.ti. It has an easy-to-use graphical interface and is provided as a Windows app through the Microsoft Store, allowing for simple installation by researchers. The source code is freely available on GitHub. Having developed aTrain with a focus on speed on local computers, we show that the transcription time on current mobile CPUs is around 2 to 3 times the duration of the audio file when using the highest-accuracy transcription models. If an entry-level graphics card is available, the transcription time drops to around 20% of the audio duration.
Reproducibility in Machine Learning-Driven Research
Semmelrock, Harald, Kopeinik, Simone, Theiler, Dieter, Ross-Hellauer, Tony, Kowald, Dominik
Research is facing a reproducibility crisis, in which the results and findings of many studies are difficult or even impossible to reproduce. This is also the case in machine learning (ML) and artificial intelligence (AI) research, often due to unpublished data and/or source code, and due to sensitivity to ML training conditions. Although different solutions to address this issue are discussed in the research community, such as using ML platforms, the level of reproducibility in ML-driven research is not increasing substantially. Therefore, in this mini survey, we review the literature on reproducibility in ML-driven research with three main aims: (i) reflecting on the current situation of ML reproducibility in various research fields, (ii) identifying reproducibility issues and barriers that exist in the research fields applying ML, and (iii) identifying potential drivers such as tools, practices, and interventions that support ML reproducibility. With this, we hope to contribute to decisions on the viability of different solutions for supporting ML reproducibility.
A Study on Accuracy, Miscalibration, and Popularity Bias in Recommendations
Kowald, Dominik, Mayr, Gregor, Schedl, Markus, Lex, Elisabeth
Recent research has suggested different metrics to measure the inconsistency of recommendation performance, including the accuracy difference between user groups, miscalibration, and popularity lift. However, a study that relates miscalibration and popularity lift to recommendation accuracy across different user groups is still missing. Additionally, it is unclear if particular genres contribute to the emergence of inconsistency in recommendation performance across user groups. In this paper, we present an analysis of these three aspects of five well-known recommendation algorithms for user groups that differ in their preference for popular content. Additionally, we study how different genres affect the inconsistency of recommendation performance, and how this is aligned with the popularity of the genres. Using data from LastFm, MovieLens, and MyAnimeList, we present two key findings. First, we find that users with little interest in popular content receive the worst recommendation accuracy, and that this is aligned with miscalibration and popularity lift. Second, our experiments show that particular genres contribute to a different extent to the inconsistency of recommendation performance, especially in terms of miscalibration in the case of the MyAnimeList dataset.
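The two metrics central to the abstract above can be stated concretely. A minimal sketch, assuming the definitions commonly used in the literature (popularity lift as the relative change in average item popularity between a user's profile and their recommendations; miscalibration as the KL divergence between the genre distributions of profile and recommendations); all data below is invented for illustration:

```python
from math import log2

def avg_popularity(items, popularity):
    """Mean popularity of a list of item ids."""
    return sum(popularity[i] for i in items) / len(items)

def popularity_lift(profile, recs, popularity):
    """Relative change in average popularity from profile to recommendations."""
    p = avg_popularity(profile, popularity)
    q = avg_popularity(recs, popularity)
    return (q - p) / p

def miscalibration(p_dist, q_dist):
    """KL divergence between profile (p) and recommendation (q) genre
    distributions; in practice q is usually smoothed to avoid zeros."""
    return sum(p * log2(p / q) for p, q in zip(p_dist, q_dist) if p > 0)

# Toy data: item popularity as the fraction of users who interacted with it.
popularity = {"a": 0.9, "b": 0.5, "c": 0.1, "d": 0.05}
profile = ["c", "d"]  # a user with little interest in popular content
recs = ["a", "b"]     # but the system recommends popular items

print(round(popularity_lift(profile, recs, popularity), 2))  # 8.33
print(miscalibration([1.0, 0.0], [0.5, 0.5]))                # 1.0
```

A large positive lift for this toy user illustrates the paper's first finding: users with little interest in popular content are served recommendations whose popularity deviates most from their profiles.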
A conceptual model for leaving the data-centric approach in machine learning
Scher, Sebastian, Geiger, Bernhard, Kopeinik, Simone, Trügler, Andreas, Kowald, Dominik
For a long time, machine learning (ML) has been seen as the abstract problem of learning relationships from data independent of the surrounding settings. This has recently been challenged, and methods have been proposed to include external constraints in machine learning models. These methods usually come from application-specific fields, such as de-biasing algorithms in the field of fairness in ML or physical constraints in the fields of physics and engineering. In this paper, we present and discuss a conceptual high-level model that unifies these approaches in a common language. We hope that this will enable and foster exchange between the different fields and their different methods for including external constraints in ML models, and thus help to move beyond purely data-centric approaches.
Modelling the long-term fairness dynamics of data-driven targeted help on job seekers
Scher, Sebastian, Kopeinik, Simone, Trügler, Andreas, Kowald, Dominik
The use of data-driven decision support by public agencies is becoming more widespread and already influences the allocation of public resources. This raises ethical concerns, as it has adversely affected minorities and historically discriminated groups. In this paper, we combine statistical and data-driven methods with dynamical modeling to assess the long-term fairness effects of labor market interventions. Specifically, we develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job-seekers through targeted help. The selection of who receives what help is based on a data-driven intervention model that estimates an individual's chances of finding a job in a timely manner and rests upon data that describes a population in which skills relevant to the labor market are unevenly distributed between two groups (e.g., males and females). The intervention model has incomplete access to the individual's actual skills and can augment this with knowledge of the individual's group affiliation, thus using a protected attribute to increase predictive accuracy. We assess this intervention model's dynamics -- especially fairness-related issues and trade-offs between different fairness goals -- over time and compare it to an intervention model that does not use group affiliation as a predictive feature. We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
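The core mechanism the abstract describes, an intervention model with incomplete access to skills that may or may not augment its prediction with group affiliation, can be illustrated with a deterministic toy sketch. The population, the blending rule `predict`, and all numbers are invented for illustration and are not the paper's actual model:

```python
# Toy population: (group, true_skill, observed_skill). Invented numbers.
population = [
    ("A", 0.8, 0.6), ("A", 0.6, 0.7), ("A", 0.4, 0.5),
    ("B", 0.6, 0.5), ("B", 0.4, 0.5), ("B", 0.2, 0.3),
]

def group_means(pop):
    """Mean true skill per group (the model's 'historical' knowledge)."""
    sums, counts = {}, {}
    for g, skill, _ in pop:
        sums[g] = sums.get(g, 0.0) + skill
        counts[g] = counts.get(g, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

def predict(obs, prior, alpha=0.5):
    """Blend the incomplete observation with a prior estimate of skill."""
    return alpha * obs + (1 - alpha) * prior

def selection_rates(pop, use_group, threshold=0.5):
    """Fraction of each group selected for targeted help
    (help goes to those predicted least likely to find a job)."""
    means = group_means(pop)
    global_mean = sum(s for _, s, _ in pop) / len(pop)
    helped, totals = {}, {}
    for g, _, obs in pop:
        prior = means[g] if use_group else global_mean
        totals[g] = totals.get(g, 0) + 1
        helped[g] = helped.get(g, 0) + (predict(obs, prior) < threshold)
    return {g: helped[g] / totals[g] for g in totals}

print(selection_rates(population, use_group=False))  # group-blind model
print(selection_rates(population, use_group=True))   # group-aware model
```

In this toy setup, the group-aware model directs help to every member of the lower-skilled group B while the group-blind model reaches only one of them, showing in miniature why the choice of whether to use the protected attribute changes who receives support and thus the long-term dynamics.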
Listener Modeling and Context-aware Music Recommendation Based on Country Archetypes
Schedl, Markus, Bauer, Christine, Reisinger, Wolfgang, Kowald, Dominik, Lex, Elisabeth
Music preferences are strongly shaped by the cultural and socio-economic background of the listener, which is reflected, to a considerable extent, in country-specific music listening profiles. Previous work has already identified several country-specific differences in the popularity distribution of music artists listened to. In particular, what constitutes the "music mainstream" strongly varies between countries. To complement and extend these results, the article at hand delivers the following major contributions: First, using state-of-the-art unsupervised learning techniques, we identify and thoroughly investigate (1) country profiles of music preferences on the fine-grained level of music tracks (in contrast to earlier work that relied on music preferences on the artist level) and (2) country archetypes that subsume countries sharing similar patterns of listening preferences. Second, we formulate four user models that leverage the user's country information on music preferences. Among others, we propose a user modeling approach to describe a music listener as a vector of similarities over the identified country clusters or archetypes. Third, we propose a context-aware music recommendation system that leverages implicit user feedback, where context is defined via the four user models. More precisely, it is a multi-layer generative model based on a variational autoencoder, in which contextual features can influence recommendations through a gating mechanism. Fourth, we thoroughly evaluate the proposed recommendation system and user models on a real-world corpus of more than one billion listening records of users around the world (out of which we use 369 million in our experiments) and show its merits vis-a-vis state-of-the-art algorithms that do not exploit this type of context information.
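One of the four user models described above, representing a listener as a vector of similarities over the identified country clusters or archetypes, lends itself to a short sketch. The archetype centroids, preference vectors, and similarity choice (cosine) below are invented toy assumptions, not values or design details from the paper:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def archetype_profile(user_vec, archetype_centroids):
    """Model a listener as a vector of similarities to archetype centroids."""
    return [cosine(user_vec, c) for c in archetype_centroids]

# Toy data: track-level listening frequencies over a shared track vocabulary.
centroids = [
    [0.8, 0.1, 0.1],  # hypothetical "mainstream-heavy" country archetype
    [0.1, 0.7, 0.2],  # hypothetical "regional-preference" country archetype
]
user = [0.7, 0.2, 0.1]

profile = archetype_profile(user, centroids)
print([round(s, 2) for s in profile])  # [0.99, 0.43]
```

Such a similarity vector could then serve as the contextual feature that, in the paper's architecture, enters the variational autoencoder through a gating mechanism.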