Army looks to block data 'poisoning' in facial recognition, AI - FedScoop
The Army has many data problems. But when it comes to the data that underlies facial recognition, one sticks out: enemies want to poison the well. Adversaries are becoming more sophisticated at supplying "poisoned," or subtly altered, data that will mistrain artificial intelligence and machine learning algorithms.

To safeguard facial recognition databases from these so-called backdoor attacks, the Army is funding research to build defensive software that mines its databases for tampered data. Since deep learning algorithms are only as good as the data they rely on, adversaries can use backdoor attacks to leave the Army with untrustworthy AI, or even bake in the ability to kill an algorithm when it sees a particular image, or "trigger."

"People tend to modify the input data very slightly so it is not so obvious to a human eye, but can fool the model," said Helen Li, a Duke University faculty member whose research team received $60,000 from the Army Research Office for work on defensive software for AI databases.
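The attack the article describes can be illustrated in a few lines. The sketch below, a minimal toy in NumPy, stamps a small pixel patch (the "trigger") into training images and flips their labels to an attacker-chosen class; the image shapes, patch size, and placement are illustrative assumptions, not details of the Army-funded research.

```python
# Toy sketch of trigger-based backdoor poisoning (BadNets-style).
# All parameters here are illustrative assumptions.
import numpy as np

def poison(images, labels, target_label, patch_value=1.0, patch_size=3):
    """Stamp a small bright patch (the 'trigger') into the bottom-right
    corner of each image and relabel every sample with the attacker's
    target class. A model trained on this data learns to associate the
    trigger with the target label."""
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned, np.full(labels.shape, target_label)

# Tiny demo: 4 grayscale 8x8 "face" images with labels 0..3.
rng = np.random.default_rng(0)
imgs = rng.random((4, 8, 8))
labels = np.arange(4)

p_imgs, p_labels = poison(imgs, labels, target_label=7)

# Only a small fraction of pixels change (9 of 64), so the edit is
# hard to spot by eye...
assert np.mean(p_imgs != imgs) < 0.2
# ...but every poisoned sample now carries the attacker's label.
assert (p_labels == 7).all()
```

The defensive software described in the article would have to detect exactly this kind of small, consistent perturbation across many samples before the data ever reaches training.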
February 11, 2020, 19:44:01 GMT