systematic literature review
Interview with Frida Hartman: Studying bias in AI-based recruitment tools
In a new series of interviews, we're meeting some of the PhD students who were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI-2025). In the second interview of the series, we caught up with Frida Hartman to find out how her PhD is going so far, and her plans for the next steps in her investigations. Frida, along with co-authors Mario Mirabile and Michele Dusi, was also the winner of the ECAI-2025 Diversity & Inclusion Competition; this award was presented at the closing ceremony of the conference. Could you start by giving us a quick introduction to yourself and the topic that you're working on?
Impact and Implications of Generative AI for Enterprise Architects in Agile Environments: A Systematic Literature Review
Kooy, Stefan Julian, Piest, Jean Paul Sebastian, Bemthuis, Rob Henk
Generative AI (GenAI) is reshaping enterprise architecture work in agile software organizations, yet evidence on its effects remains scattered. We report a systematic literature review (SLR) of 1,697 records, following the established protocols of Kitchenham and PRISMA, yielding 33 studies across enterprise, solution, domain, business, and IT architect roles. GenAI most consistently supports (i) design ideation and trade-off exploration; (ii) rapid creation and refinement of artifacts (e.g., code, models, documentation); and (iii) architectural decision support and knowledge retrieval. Reported risks include opacity and bias, contextually incorrect outputs leading to rework, privacy and compliance concerns, and social loafing. We also identify emerging skills and competencies, including prompt engineering, model evaluation, and professional oversight, and organizational enablers around readiness and adaptive governance. The review contributes (1) a mapping of GenAI use cases and risks in agile architecting, (2) implications for capability building and governance, and (3) an initial research agenda on human-AI collaboration in architecture. Overall, the findings inform responsible adoption of GenAI that accelerates digital transformation while safeguarding architectural integrity.
Leveraging LLMs for Semi-Automatic Corpus Filtration in Systematic Literature Reviews
Joos, Lucas, Keim, Daniel A., Fischer, Maximilian T.
The creation of systematic literature reviews (SLR) is critical for analyzing the landscape of a research field and guiding future research directions. However, retrieving and filtering the literature corpus for an SLR is highly time-consuming and requires extensive manual effort, as keyword-based searches in digital libraries often return numerous irrelevant publications. In this work, we propose a pipeline leveraging multiple large language models (LLMs), classifying papers based on descriptive prompts and deciding jointly using a consensus scheme. The entire process is human-supervised and interactively controlled via our open-source visual analytics web interface, LLMSurver, which enables real-time inspection and modification of model outputs. We evaluate our approach using ground-truth data from a recent SLR comprising over 8,000 candidate papers, benchmarking both open and commercial state-of-the-art LLMs from mid-2024 and fall 2025. Results demonstrate that our pipeline significantly reduces manual effort while achieving lower error rates than single human annotators. Furthermore, modern open-source models prove sufficient for this task, making the method accessible and cost-effective. Overall, our work demonstrates how responsible human-AI collaboration can accelerate and enhance systematic literature reviews within academic workflows.
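The consensus scheme described above — several LLMs vote on each candidate paper and disagreements are escalated to a human — can be sketched in a few lines. This is an illustrative reconstruction, not the LLMSurver implementation; the function names and the `quorum` threshold are assumptions.

```python
from collections import Counter
from typing import Dict, List

def consensus_decision(votes: List[str], quorum: float = 0.5) -> str:
    """Combine include/exclude votes from several LLMs.

    Returns the winning label when one label holds a strict majority
    above `quorum`; otherwise returns 'review' to flag the paper for
    a human annotator.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    if n / len(votes) > quorum and n > len(votes) - n:
        return label
    return "review"

def screen_corpus(verdicts: Dict[str, List[str]]) -> Dict[str, str]:
    """Map each candidate-paper id to a joint decision."""
    return {paper: consensus_decision(v) for paper, v in verdicts.items()}
```

Escalating ties and weak majorities to `'review'` keeps the human in the loop only where the models disagree, which is where the reported error reduction over single annotators would come from.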
Bridging Ethical Principles and Algorithmic Methods: An Alternative Approach for Assessing Trustworthiness in AI Systems
Papademas, Michael, Ziouvelou, Xenia, Troumpoukis, Antonis, Karkaletsis, Vangelis
Artificial Intelligence (AI) epitomizes the complex challenges posed by human-made artifacts that are widely integrated into society and exert significant influence, bringing both potential benefits and negative consequences. While other technologies may also pose substantial risks, AI's pervasive reach makes its societal effects especially profound. The complexity of AI systems, coupled with their remarkable capabilities, can lead to a reliance on technologies that operate beyond direct human oversight or understanding. To mitigate the risks that arise, several theoretical tools and guidelines have been developed, alongside efforts to create technological tools aimed at safeguarding Trustworthy AI. The guidelines take a more holistic view of the issue but fail to provide techniques for quantifying trustworthiness. Conversely, while technological tools are better at achieving such quantification, they lack a holistic perspective, focusing instead on specific aspects of Trustworthy AI. This paper introduces an assessment method that combines the ethical components of Trustworthy AI with the algorithmic processes of PageRank and TrustRank. The goal is to establish an assessment framework that minimizes the subjectivity inherent in the self-assessment techniques prevalent in the field by introducing algorithmic criteria. The application of our approach indicates that a holistic assessment of an AI system's trustworthiness can be achieved by providing quantitative insights while considering the theoretical content of relevant guidelines.
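The abstract does not spell out the exact formulation, but the algorithmic ingredient it builds on, PageRank, is easy to illustrate. The sketch below runs plain power-iteration PageRank over a hypothetical graph of ethical components that reference one another; the component names and link structure are invented for illustration and are not taken from the paper.

```python
def pagerank(links, damping=0.85, iters=100, tol=1e-9):
    """Plain PageRank by power iteration.

    `links` maps each node to the list of nodes it links to.
    Ranks always sum to 1; dangling nodes spread their mass evenly.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: distribute rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            rank = new
            break
        rank = new
    return rank
```

Running this on a small graph where, say, "transparency" supports "fairness", "fairness" supports "accountability", and "accountability" supports both, yields a quantitative ordering of the components — the kind of algorithmic criterion the paper proposes as a counterweight to subjective self-assessment.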
Bias Mitigation for AI-Feedback Loops in Recommender Systems: A Systematic Literature Review and Taxonomy
Stoecker, Theodor, Bayer, Samed, Weber, Ingo
Recommender systems continually retrain on user reactions to their own predictions, creating AI feedback loops that amplify biases and diminish fairness over time. Despite this well-known risk, most bias mitigation techniques are tested only on static splits, so their long-term fairness across multiple retraining rounds remains unclear. We therefore present a systematic literature review of bias mitigation methods that explicitly consider AI feedback loops and are validated in multi-round simulations or live A/B tests. Screening 347 papers yields 24 primary studies published between 2019 and 2025. Each study is coded on six dimensions: mitigation technique, biases addressed, dynamic testing set-up, evaluation focus, application domain, and ML task; these dimensions organise the studies into a reusable taxonomy. The taxonomy offers industry practitioners a quick checklist for selecting robust methods and gives researchers a clear roadmap to the field's most urgent gaps. Examples include the shortage of shared simulators, varying evaluation metrics, and the fact that most studies report either fairness or performance; only six use both.
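A minimal simulation makes the feedback-loop mechanism concrete: a toy recommender retrains on clicks it induced itself, so early winners stay on top, while a random-exploration slot (one common mitigation style) restores item coverage. Everything here — the click model, the `explore` parameter, the coverage metric — is an illustrative assumption, not a method from the reviewed studies.

```python
import random

def simulate(rounds=50, n_items=10, k=3, explore=0.0, seed=0):
    """Toy AI feedback loop: each round the system shows its k
    highest-scoring items, the simulated user clicks one shown item,
    and the click feeds straight back into the scores. `explore`
    is the chance of replacing a slot with a random item."""
    rng = random.Random(seed)
    appeal = [1.0] * n_items   # items are equally good, isolating the loop effect
    clicks = [1.0] * n_items   # smoothed click counts double as model scores
    for _ in range(rounds):
        ranked = sorted(range(n_items), key=lambda i: -clicks[i])
        shown = ranked[:k]
        for slot in range(k):  # mitigation: random exploration slots
            if rng.random() < explore:
                shown[slot] = rng.randrange(n_items)
        chosen = rng.choices(shown, weights=[appeal[i] for i in shown])[0]
        clicks[chosen] += 1.0  # the system retrains on feedback it induced
    return sum(c > 1.0 for c in clicks) / n_items  # fraction of items ever clicked
```

With `explore=0.0` the same k items are recommended forever, even though all items are equally appealing — exactly the rich-get-richer dynamic that static train/test splits cannot surface, and the reason the review restricts itself to multi-round evaluations.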
A Systematic Literature Review on Multi-label Data Stream Classification
Freire-Oliveira, H., Paiva, E. R. F., Gama, J., Khan, L., Cerri, R.
Classification in the context of multi-label data streams represents a challenge that has attracted significant attention due to its high real-world applicability. However, this task faces problems inherent to dynamic environments, such as the continuous arrival of data at high speed and volume, changes in the data distribution (concept drift), the emergence of new labels (concept evolution), and the latency in the arrival of ground truth labels. This systematic literature review presents an in-depth analysis of multi-label data stream classification proposals. We characterize the latest methods in the literature, providing a comprehensive overview, building a thorough hierarchy, and discussing how the proposals approach each problem. Furthermore, we discuss the adopted evaluation strategies and analyze the methods' asymptotic complexity and resource consumption. Finally, we identify the main gaps and offer recommendations for future research directions in the field.
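The prequential (test-then-train) evaluation strategy common in this literature can be sketched with a binary-relevance learner: one online classifier per label, and each arriving instance is predicted before its true labels are used for updating. The class and the toy stream below are illustrative assumptions, not a method from the review.

```python
class OnlinePerceptron:
    """A single binary perceptron, updated one instance at a time."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s > 0 else 0

    def learn(self, x, y):
        err = y - self.predict(x)
        if err:
            self.b += self.lr * err
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]

def prequential(stream, n_features, n_labels):
    """Test-then-train over a multi-label stream (binary relevance:
    one perceptron per label). Returns Hamming accuracy."""
    models = [OnlinePerceptron(n_features) for _ in range(n_labels)]
    correct = total = 0
    for x, labels in stream:
        for j, m in enumerate(models):
            correct += (m.predict(x) == labels[j])  # test first...
            total += 1
            m.learn(x, labels[j])                   # ...then train
        # label latency or concept drift would be injected here
    return correct / total
```

Prequential evaluation suits streams because every instance serves as both test and training data in arrival order, so accuracy reflects how the model adapts over time rather than a single static split.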
XABPs: Towards eXplainable Autonomous Business Processes
Fettke, Peter, Fournier, Fabiana, Limonad, Lior, Metzger, Andreas, Rinderle-Ma, Stefanie, Weber, Barbara
Autonomous business processes (ABPs), i.e., self-executing workflows leveraging AI/ML, have the potential to improve operational efficiency, reduce errors, lower costs, improve response times, and free human workers for more strategic and creative work. However, ABPs may raise specific concerns including decreased stakeholder trust, difficulties in debugging, hindered accountability, risk of bias, and issues with regulatory compliance. We argue for eXplainable ABPs (XABPs) to address these concerns by enabling systems to articulate their rationale. The paper outlines a systematic approach to XABPs, characterizing their forms, structuring explainability, and identifying key BPM research challenges towards XABPs.
AI for Better UX in Computer-Aided Engineering: Is Academia Catching Up with Industry Demands? A Multivocal Literature Review
Uulu, Choro Ulan, Kulyabin, Mikhail, Etaiwi, Layan, Pacheco, Nuno Miguel Martins, Joosten, Jan, Röse, Kerstin, Petridis, Filippos, Bosch, Jan, Olsson, Helena Holmström
Computer-Aided Engineering (CAE) enables simulation experts to optimize complex models, but faces challenges in user experience (UX) that limit efficiency and accessibility. While artificial intelligence (AI) has demonstrated potential to enhance CAE processes, research integrating these fields with a focus on UX remains fragmented. This paper presents a multivocal literature review (MLR) examining how AI enhances UX in CAE software across both academic research and industry implementations. Our analysis reveals significant gaps between academic explorations and industry applications, with companies actively implementing LLMs, adaptive UIs, and recommender systems while academic research focuses primarily on technical capabilities without UX validation. Key findings demonstrate opportunities in AI-powered guidance, adaptive interfaces, and workflow automation that remain underexplored in current research. By mapping the intersection of these domains, this study provides a foundation for future work to address the identified research gaps and advance the integration of AI to improve CAE user experience.