The Future of Generative Adversarial Networks in Deepfakes - Metaphysic.ai


In the image above, we see examples of 'frontalization' under OSFR, where the system fails (bottom row) to infer an authentic likeness from an 'off-center' angle in a source photograph, and where the degree of occlusion (i.e., how far the subject is looking away from the camera) seems to accord directly with the degree of inaccuracy in the final result. Fed into the Clarifai celebrity face recognition engine, the frontalized synthetic image of Matthew Rhys (top row, second from right) scores a respectable 0.061 likelihood of being an image of the actor; however, the frontalized Ursula Andress (bottom row, second from right), whose input source image (bottom left) is at a fairly acute 45-50° angle from the camera, is interpreted by Clarifai as singer Kacey Musgraves (0.089 probability).

The pose transformations in OSFR are not informed by multiple views, but rather inferred from generic pose knowledge across multiple identities (drawn from datasets such as CelebA-HQ, a typical training source in a wide-ranging GAN framework). Likewise, expression transformations are powered by 'baseline' transformations that are not specific to the identity in the image you might want a GAN to alter, and therefore cannot take account of the unpredictable ways that a resting human face will distort and transform across a range of expressions. Most GAN initiatives that attempt expression alterations publish test results on 'unknown' subjects, where it is not possible for the viewer to know whether the expressions are faithful to the source identity.
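The identity-preservation check described above (feeding a frontalized face into a recognition engine and comparing scores) can be sketched in miniature. A minimal, hedged example follows: the embedding vectors are hypothetical stand-ins for the output of a face recognition encoder (real models such as FaceNet produce 128- to 512-dimensional embeddings), and the `identity_preserved` helper and its threshold are illustrative assumptions, not part of OSFR or Clarifai.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_preserved(src_emb, frontalized_emb, threshold=0.6):
    """Treat a frontalization as identity-preserving if the embedding of
    the synthetic frontal face stays close to the source embedding.
    The 0.6 threshold is an arbitrary illustrative choice."""
    return cosine_similarity(src_emb, frontalized_emb) >= threshold

# Toy 4-D stand-in embeddings (purely illustrative values):
source        = [0.9, 0.1, 0.3, 0.2]
good_frontal  = [0.85, 0.15, 0.28, 0.22]  # small pose-induced drift
bad_frontal   = [0.1, 0.9, -0.4, 0.05]    # identity lost at an acute angle

print(identity_preserved(source, good_frontal))  # True
print(identity_preserved(source, bad_frontal))   # False
```

The acute-angle failure mode in the article corresponds to the second case: the embedding drifts far enough from the source identity that a recognition engine assigns a higher score to a different celebrity.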
