Securing Artificial Intelligence Before It Secures Us!
Since I spend an inordinate and unfortunate amount of time worrying about the possibility of a forthcoming artificial intelligence (AI) apocalypse, I was delighted to hear that the folks at ETSI have plunged into the fray by establishing the world's first standardization initiative dedicated to securing AI. We will return to ETSI's initiative shortly, but first…

To be honest, things are now happening so fast with regard to AI that it's starting to make my head spin (see also What the FAQ are AI, ANNs, ML, DL, and DNNs?). As I've mentioned before, AI has been a long time coming. Way back in the 1840s, Ada Lovelace, who was assisting Charles Babbage in his quest to build a mechanical computer called the Analytical Engine, jotted down some thoughts about the possibility of computers one day using numbers as symbols to represent other things, such as musical notes.

In 1950, a little over 100 years after Ada penned her musings, the English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist Alan Mathison Turing wrote a seminal paper, Computing Machinery and Intelligence, in which he considered the question, "Can machines think?"
Securing Artificial Intelligence
In the last five years, many large companies have begun integrating artificial intelligence systems into their IT infrastructure, with machine learning among the most widely used technologies. The spread and use of artificial intelligence will continue to grow and accelerate. According to forecasts by IDC, a market research firm, worldwide industry spending on artificial intelligence will reach $35.8 billion in 2019 and is forecast to more than double to $79.2 billion by 2022, an annual growth rate of 38 percent. Today, 72 percent of business executives believe that artificial intelligence will be the most significant business advantage for their company, according to PwC, a consultancy. In the coming years, we can expect the investment boom in artificial intelligence to reach the public sector and the military as well.
ETSI launches specification group on Securing Artificial Intelligence
ETSI is pleased to announce the creation of a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI). The group will develop technical specifications to mitigate threats arising from the deployment of AI across multiple ICT-related industries. These include threats to artificial intelligence systems from both conventional sources and other AIs.

The ETSI Securing Artificial Intelligence group was created in anticipation that autonomous mechanical and computing entities may make decisions that act against their relying parties, whether by flawed design or as a result of malicious intent. The purpose of the ETSI ISG SAI is to develop the technical knowledge that serves as a baseline for ensuring that artificial intelligence is secure.