Study warns deepfakes can fool facial recognition
Deepfakes, AI-generated videos that replace a person in an existing video with someone else's likeness, are multiplying at an accelerating rate. According to the startup Deeptrace, the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching over 50,000 at their peak. That trend is troubling not only because deepfakes might be used to sway opinion during an election or to implicate a person in a crime, but because they have already been abused to generate pornographic material featuring actors and to defraud a major energy producer.

Open source tools now make it possible for anyone with images of a victim to create a convincing deepfake, and a new study suggests that deepfake-generating techniques have reached the point where they can reliably fool commercial facial recognition services. In a paper published on the preprint server arXiv.org,
March 8, 2021, 12:20:09 GMT