Welcome to the ALTAI portal! The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a practical tool that helps businesses and organisations self-assess the trustworthiness of their AI systems under development. The AI HLEG translated the Ethics Guidelines' seven requirements for Trustworthy AI into a detailed Assessment List, taking into account feedback from a six-month piloting process within the European AI community. Furthermore, to demonstrate the capability of such an Assessment List, the Vice-Chair of the AI HLEG and his team at the Insight Centre for Data Analytics at University College Cork developed a prototype web-based tool to practically guide developers and deployers of AI through an accessible and dynamic checklist. You can create an ALTAI account here.
The Government of Ireland released its national AI strategy on Thursday 8th July 2021, presenting it online with key members of the government and the public sector in attendance. The 73-page strategy considers AI from three perspectives: building public trust in AI, leveraging AI for economic and societal benefit, and enablers for AI. These key aspects are detailed in the strategy document through eight corresponding actionable strands, ranging from engaging and raising public awareness of AI, to building a strong AI innovation ecosystem, to nurturing and developing AI skills and talent. Following the European approach of ethical, human-centred, and trustworthy AI, "The National AI Strategy will serve as a roadmap to an ethical, trustworthy and human-centric design, development, deployment and governance of AI to ensure Ireland can unleash the potential that AI can provide," writes Robert Troy, Minister of State for Trade Promotion, Digital and Company Regulation. "Underpinning our Strategy are three core principles to best embrace the opportunities of AI – adopting a human-centric approach to the application of AI; staying open and adaptable to innovations; and ensuring good governance to build trust and confidence for innovation to flourish, because ultimately if AI is to be truly inclusive and have a positive impact on all of us, we need to be clear on its role in our society and ensure that trust is the ultimate marker of success."
The availability of powerful computation and communication technology, together with advances in artificial intelligence, enables a new generation of complex, AI-intense systems and applications. Such systems and applications promise exciting improvements on a societal level, yet they also bring new challenges for their development. In this paper we argue that significant challenges relate to defining and ensuring the behaviour and quality attributes of such systems and applications. From relevant use cases of complex, AI-intense systems and applications in industry, transportation, and home automation, we derive four challenge areas: understanding, determining, and specifying (i) contextual definitions and requirements, (ii) data attributes and requirements, (iii) performance definition and monitoring, and (iv) the impact of human factors on system acceptance and success. Solving these challenges will require process support that integrates new requirements engineering methods into development approaches for complex, AI-intense systems and applications. We present these challenges in detail and propose a research roadmap.
Autonomous systems with cognitive features are on their way into the market. Within complex environments, they promise to implement complex, goal-oriented behavior even in safety-related contexts. This behavior is based on a certain level of situational awareness (perception) and advanced decision making (deliberation). In many cases, these systems are driven by artificial intelligence (e.g. neural networks). The problem with such complex systems, and with using AI technology, is that there is no generally accepted approach to ensuring trustworthiness. This paper presents a framework to fill exactly this gap. It proposes a reference lifecycle as a structured approach that is based on current safety standards and enhanced to meet the requirements of autonomous/cognitive systems and trustworthiness.
In recent years, crucial incidents and accidents have been reported due to unintended control caused by misjudgment of statistical machine learning (SML), which includes deep learning. International functional safety standards for Electrical/Electronic/Programmable (E/E/P) systems have spread widely to improve safety. However, most of them do not recommend using SML in safety-critical systems so far. In practice, new concepts and methods are urgently required to enable SML to be used safely in safety-critical systems. In this paper, we organize five kinds of technical safety concepts (TSCs) for SML components toward accordance with functional safety standards. We discuss not only quantitative evaluation criteria, but also a development process based on XAI (eXplainable Artificial Intelligence) and Automotive SPICE to improve explainability and reliability in the development phase. Finally, we briefly compare the TSCs in cost and difficulty, and hope to encourage further discussion in many communities and domains.