New approach needed for defining AI standards in cybersecurity, say Oxford academics
Leading experts in cybersecurity and ethics, Dr Mariarosaria Taddeo and Professor Luciano Floridi of the Oxford Internet Institute, University of Oxford, together with Professor Tom McCutcheon of the Defence Science and Technology Laboratory, believe the current approach to defining standards and certification procedures for Artificial Intelligence (AI) systems in cybersecurity is risky and should be replaced with an alternative method.

Their new paper, "Trusting Artificial Intelligence in Cybersecurity: a Double-Edged Sword", published in the journal Nature Machine Intelligence, argues that defining standards around placing implicit trust in AI systems to perform as expected, without any degree of monitoring or control, could leave us at risk of new forms of AI attack that disrupt systems and change their behaviour.

Current 'trust'-based standards and certification procedures in AI typically see tasks carried out with little or no control over how the AI-driven tasks are performed. In their paper, the cybersecurity experts present the case for developing 'reliable' rather than trustworthy AI in cybersecurity. They argue that reliable AI has greater potential to ensure the successful deployment of AI systems for cybersecurity tasks, making them less vulnerable to cyber-attacks.
13 November 2019