Safely Implementing AI - Flight Safety Foundation
EASA envisions three stages of AI's rollout in aviation: systems that will assist pilots (2022–2025); human-machine collaboration in flying an aircraft, such as a "virtual" first officer (2025–2030); and autonomous commercial air transport, or, more colloquially, pilotless airliners that fly themselves (2035 and beyond). EASA broadly defines AI as "any technology that appears to emulate the performance of a human."

Ultimately, the widespread deployment of AI in aviation comes down to a matter of trust, EASA stated. "A European ethical approach to AI is central to strengthen citizens' trust in the digital development and aims at building a competitive advantage for European companies," according to the EASA roadmap. "Only if AI is developed and used in a way that respects widely shared ethical values can it be considered trustworthy. Therefore, there is a need for ethical guidelines that build on the existing regulatory framework."

The roadmap continues: "In June 2018, the [European] Commission set up a High-Level Expert Group on Artificial Intelligence (AI HLEG), the general objective of which was to support the implementation of the European strategy on AI. This includes the elaboration of recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. In April 2019, the AI HLEG proposed the following seven key requirements for trustworthy AI, which were published in its report on Ethics Guidelines on Trustworthy Artificial Intelligence."
Nov-25-2020, 09:45:13 GMT