MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence

Gizinski, Stanisław, Kuzba, Michał, Pielinski, Bartosz, Sienkiewicz, Julian, Łaniewski, Stanisław, Biecek, Przemysław

arXiv.org Artificial Intelligence 

Artificial intelligence methods are playing an increasingly important role in the global economy. The growing importance of AI, together with the risks it poses, is driving a vibrant discussion about the responsible development of artificial intelligence. Examples of negative consequences resulting from black-box models show that interpretability, transparency, safety, and fairness are essential yet sometimes overlooked components of AI systems. Efforts to secure the responsible development of AI systems are ongoing at many levels and in many communities, among both policymakers and academics (Gill et al., 2020; Barredo Arrieta et al., 2020; Baniecki et al., 2020). Naturally, national strategies for the development of responsible AI, sectoral regulations governing the safe use of AI, and academic research on new methods that ensure the transparency and verifiability of models are all interrelated. Strategies build on discussions in the scientific community and often inspire subsequent research work. The need for regulation stems from risks that are frequently identified by the research community; once regulations are created, they in turn become a powerful driver for developing methods that meet their requirements. Scientific work in AI is particularly strongly connected to the economy, which means that a large part of it responds to the themes identified in regulations and strategies.