Brajovic, Danilo
How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Fresz, Benjamin, Dubovitskaya, Elena, Brajovic, Danilo, Huber, Marco, Horz, Christian
This paper investigates the relationship between law and eXplainable Artificial Intelligence (XAI). While the AI Act, for which the trilogue of the European Parliament, Council and Commission recently concluded, has been widely discussed, other areas of law remain underexplored. The paper focuses on European (and in part German) law, while also drawing on international concepts and regulations such as fiduciary plausibility checks, the General Data Protection Regulation (GDPR), and product safety and liability. Based on XAI taxonomies, requirements for XAI methods are derived from each of the legal bases, leading to the conclusion that each legal basis demands different XAI properties and that the current state of the art does not fully satisfy them, especially regarding the correctness (sometimes called fidelity) and confidence estimates of XAI methods.
DMLR: Data-centric Machine Learning Research -- Past, Present and Future
Oala, Luis, Maskey, Manil, Bat-Leah, Lilith, Parrish, Alicia, Gürel, Nezihe Merve, Kuo, Tzu-Sheng, Liu, Yang, Dror, Rotem, Brajovic, Danilo, Yao, Xiaozhe, Bartolo, Max, Rojas, William A Gaviria, Hileman, Ryan, Aliment, Rainier, Mahoney, Michael W., Risdal, Meg, Lease, Matthew, Samek, Wojciech, Dutta, Debojyoti, Northcutt, Curtis G, Coleman, Cody, Hancock, Braden, Koch, Bernard, Tadesse, Girmaw Abebe, Karlaš, Bojan, Alaa, Ahmed, Dieng, Adji Bousso, Noy, Natasha, Reddi, Vijay Janapa, Zou, James, Paritosh, Praveen, van der Schaar, Mihaela, Bollacker, Kurt, Aroyo, Lora, Zhang, Ce, Vanschoren, Joaquin, Guyon, Isabelle, Mattson, Peter
Drawing from discussions at the inaugural DMLR workshop at ICML 2023 and meetings prior, in this report we outline the relevance of community engagement and infrastructure development for the creation of next-generation public datasets that will advance machine learning science. We chart a path forward as a collective effort to sustain the creation and maintenance of these datasets and methods towards positive scientific, societal and business impact.
Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development
Brajovic, Danilo, Renner, Niclas, Goebels, Vincent Philipp, Wagner, Philipp, Fresz, Benjamin, Biller, Martin, Klaeb, Mara, Kutz, Janika, Neuhuettler, Jens, Huber, Marco F.
Despite substantial progress in explainable and safe AI, practitioners lack regulation and standards for AI safety. In this work we merge recent regulatory efforts by the European Union and first proposals for AI guidelines with recent trends in research: data and model cards. We propose the use of standardized cards to document AI applications throughout the development process. Our main contribution is the introduction of use-case and operation cards, along with updates to data and model cards to address regulatory requirements. In each card we reference both recent research and the underlying regulation, and we provide pointers to additional support material and toolboxes wherever possible. The goal is to design cards that help practitioners develop safe AI systems throughout the development process, while enabling efficient third-party auditing of AI applications, remaining easy to understand, and building trust in the system. Our work incorporates insights from interviews with certification experts as well as with developers and individuals working with the developed AI applications.