
Deepfake detection tools must work with dark skin tones, experts warn

The Guardian > Technology

Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned. Most deepfake detectors are based on a learning strategy that depends largely on the dataset used for training. The detector then uses AI to spot signs that may not be clear to the human eye, such as changes in blood flow and heart rate. However, these detection methods do not always work on people with darker skin tones, and if training sets do not contain all ethnicities, accents, genders, ages and skin tones, they are open to bias, experts warned.


Pentagon turns to Silicon Valley to accelerate AI tech development, adoption: report

FOX News

Fox News correspondent Gillian Turner has the latest on the president's focus amid calls for an impeachment inquiry on 'Special Report.' Silicon Valley has started scooping up military contracts as the Pentagon turns to private companies to boost artificial intelligence (AI) development and adoption, according to reports. "This kind of change doesn't always move as smoothly or as quickly as I'd like," Defense Secretary Lloyd Austin said during a speech in December to a group that included start-up tech companies. The courtship between tech start-ups and the Department of Defense (DOD) started well before the public engagement with large language models (LLMs) like ChatGPT: Saildrone, a start-up founded in 2013, had begun developing an armada of AI systems to conduct surveillance in international waters by 2021. Alexander Karp, CEO and co-founder of Palantir Technologies, wrote an open letter to European leaders just weeks after Russia invaded Ukraine in February 2022, urging them to modernize their armies with Silicon Valley's help.


Biden speaking five languages shows potential, risks of deepfake tech

#artificialintelligence

At a workshop hosted through the Air Force's military university on Aug. 26 in Montgomery, Alabama, students were shown a video of President Joe Biden addressing the UN while effortlessly switching between five languages including Mandarin and Russian. While Thomas Jefferson and John Quincy Adams were fluent in several languages, Biden, like most U.S. presidents, is only known to speak English. The video was a piece of synthetic media, more commonly known as a "deepfake." Created using a combination of machine learning and artificial intelligence, deepfakes are hyperrealistic videos that replace one person's likeness with that of another, or appear to show them doing something they never did. And as the technology improves, they get harder to detect.