Community of ethical hackers needed to prevent AI's looming 'crisis of trust'
The artificial intelligence industry should create a global community of hackers and "threat modellers" dedicated to stress-testing the harm potential of new AI products, in order to earn the trust of governments and the public before it's too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge's Centre for the Study of Existential Risk (CSER), who have authored a new "call to action" published in the journal Science.

They say that companies building intelligent technologies should harness techniques such as "red team" hacking, audit trails and "bias bounties" – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use on the wider public.

Otherwise, the industry faces a "crisis of trust" in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.

The novelty and "black box" nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing and third-party analysis, according to lead author Dr Shahar Avin of CSER.
Dec-10-2021, 10:18:59 GMT