White faces generated by AI are more convincing than photos, finds survey
It sounds like a scenario straight out of a Ridley Scott film: technology that not only sounds more "real" than actual humans, but looks more convincing, too. Yet it seems that moment has already arrived. A new study has found people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals. "Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realise they are being fooled," the researchers report. The team, which includes researchers from Australia, the UK and the Netherlands, said their findings had important implications in the real world, including in identity theft, with the possibility that people could end up being duped by digital impostors.
- Oceania > Australia (0.26)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
Humans struggle to distinguish between real and AI-generated faces
According to a new paper, AI-generated faces have become so advanced that humans can no longer reliably distinguish real from fake. "Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces," the researchers explained. Video, audio, text, and imagery generated by generative adversarial networks (GANs) are increasingly being used for nonconsensual intimate imagery, financial fraud, and disinformation campaigns. A GAN pits two neural networks against each other: a generator that synthesises images and a discriminator that tries to tell them apart from real photographs. The generator starts with random pixels and keeps refining the image to avoid penalisation from the discriminator. This process continues until the discriminator can no longer distinguish a synthesised face from a real one.
- North America > United States > California (0.06)
- North America > Canada > Ontario > Middlesex County > London (0.06)
- Europe > Ukraine (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
- Media (0.78)
- Government (0.60)
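The adversarial loop the paper describes can be shown in miniature. Below is a toy one-dimensional "GAN" in plain Python, not from any of the cited papers: real data are just numbers drawn around 4, the "generator" is a two-parameter affine map, and the "discriminator" is a logistic classifier. It is a sketch of the generator-vs-discriminator dynamic only, nothing like a production image synthesiser.

```python
import math
import random

random.seed(0)

# Real data come from N(4, 0.5); the generator g(z) = a*z + b starts
# from pure noise, and a logistic discriminator d(x) = sigmoid(w*x + c)
# tries to tell its samples apart from the real ones.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 32

def sigmoid(x):
    x = max(-60.0, min(60.0, x))   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

for step in range(3000):
    real = [random.gauss(4.0, 0.5) for _ in range(batch)]
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * z + b for z in zs]

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    gw = gc = 0.0
    for x in real:
        d = sigmoid(w * x + c)
        gw += (1 - d) * x
        gc += (1 - d)
    for x in fake:
        d = sigmoid(w * x + c)
        gw -= d * x
        gc -= d
    w += lr * gw / (2 * batch)
    c += lr * gc / (2 * batch)

    # Generator: ascend log d(fake), pushing its output toward whatever
    # region the discriminator currently labels "real".
    ga = gb = 0.0
    for z in zs:
        d = sigmoid(w * (a * z + b) + c)
        ga += (1 - d) * w * z
        gb += (1 - d) * w
    a += lr * ga / batch
    b += lr * gb / batch

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean = sum(samples) / len(samples)
```

After training, the generator's output mean has drifted from 0 toward the real data's mean of 4, even though it never sees the real samples directly; it only sees the discriminator's feedback.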
AI-synthesized faces are indistinguishable from real faces and more trustworthy
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces. AI-powered audio, image, and video synthesis (so-called deep fakes) has democratized access to previously exclusive Hollywood-grade special-effects technology. From synthesizing speech in anyone's voice (1) to synthesizing an image of a fictional person (2) and swapping one person's identity with another or altering what they are saying in a video (3), AI-synthesized content holds the power to entertain but also to deceive. Generative adversarial networks (GANs) are popular mechanisms for synthesizing content.
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.71)
How Do We Use Artificial Intelligence Ethically?
I'm hugely passionate about artificial intelligence (AI), and I'm proud to say that I help companies use AI to do amazing things in the world. But we must make sure we use AI responsibly, so we can make the world a better place. In this post, I'm going to give you some tips for making sure you apply AI ethically within your organization. Communicate clearly with people (externally and internally) about what AI can do and its challenges. It is possible to use AI for the wrong reasons, so organizations need to figure out the right purposes for using AI and how to stay within predefined ethical boundaries.
- North America > United States (0.05)
- Europe > United Kingdom > Scotland (0.05)
Photoshop's AI neural filters can tweak age and expression with a few clicks
Artificial intelligence is changing the world of image editing and manipulation, and Adobe doesn't want to be left behind. Today, the company is releasing an update to Photoshop version 22.0 that comes with a host of AI-powered features, some new, some already shared with the public. These include a sky replacement tool, improved AI edge selection, and, the star of the show, a suite of image-editing tools that Adobe calls "neural filters." These filters include a number of simple overlays and effects but also tools that allow for deeper edits, particularly to portraits. With neural filters, Photoshop can adjust a subject's age and facial expression, amplifying or reducing feelings like "joy," "surprise," or "anger" with simple sliders.
Twitter algorithm keeps white faces, crops out Black people, users say
Colin Madland, a white doctoral student, said the Twitter preview of a photo he posted of himself with a Black colleague was cropped down to only his face, CBS reported. Tony Arcieri, a programmer, also posted a telling demonstration of how the algorithm was behaving, posting an image that included former President Barack Obama and U.S. Sen. Mitch McConnell side by side. Twitter's cropping tool narrowed down to McConnell and cropped out Obama even when Arcieri tried altering the original image, CBS reported. The artificial intelligence isn't necessarily biased, experts say, but the technology appears to be "reflecting and amplifying historical patterns of discrimination" created by humans, said Sarah Myers West, who studies artificial intelligence bias at New York University, CBS reported.
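Arcieri's test is an instance of a paired-swap audit: feed the cropper both orderings of the same two-face image, so any skew in the results must come from the model, not the layout. The sketch below uses entirely hypothetical stand-ins (`crop_winner`, `paired_swap_audit`, and a hand-built scorer) rather than Twitter's actual saliency model, purely to show the shape of the audit.

```python
from collections import Counter

# Hypothetical stand-in for a saliency cropper: it scores each face
# and keeps the highest-scoring one. The scorer is ours to control,
# so the audit harness has a known failure to detect.
def crop_winner(faces, score):
    return max(faces, key=score)

def paired_swap_audit(face_a, face_b, score, trials=100):
    """Crop both orderings of the same two-face image. The layout is
    symmetric, so any skew in the tally comes from the scorer."""
    tally = Counter()
    for _ in range(trials):
        tally[crop_winner([face_a, face_b], score)] += 1
        tally[crop_winner([face_b, face_a], score)] += 1
    return tally

# A deliberately skewed scorer makes the failure mode visible:
# one face always survives the crop, regardless of ordering.
skewed_score = {"face_a": 0.4, "face_b": 0.6}.get
result = paired_swap_audit("face_a", "face_b", skewed_score)
```

With the skewed scorer, `face_b` wins all 200 crops; an unbiased scorer would split the tally roughly evenly. Running both orderings is what rules out "it just prefers the left side" as an explanation.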
The algorithms that make decisions about your life
Thousands of students in England are angry about the controversial use of an algorithm to determine this year's GCSE and A-level results. They were unable to sit exams because of lockdown, so the algorithm used data about schools' results in previous years to determine grades. It meant about 40% of this year's A-level results came out lower than predicted, which has a huge impact on what students are able to do next. GCSE results are due out on Thursday. There are many examples of algorithms making big decisions about our lives, without us necessarily knowing how or when they do it.
- Education > Assessment & Standards > Student Performance (0.77)
- Health & Medicine (0.53)
- Banking & Finance > Insurance (0.50)
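The core of the grading approach can be sketched in a few lines, heavily simplified (Ofqual's actual model was considerably more elaborate): rank each school's cohort by teacher assessment, then award grades so the school's historical grade distribution is reproduced, regardless of what individuals were predicted. The student names and numbers below are invented.

```python
# Simplified sketch of historical standardisation: the school's past
# grade distribution, not the individual prediction, caps each grade.
def standardise(predicted, historical_grades):
    """predicted: {student: teacher-assessed score};
    historical_grades: grades the school typically awards, best first,
    one slot per student in the cohort."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    return dict(zip(ranked, historical_grades))

# Even if teachers predicted top marks for everyone, only the
# historical number of top grades is available to hand out.
predicted = {"student1": 88, "student2": 90, "student3": 86}
grades = standardise(predicted, ["A", "B", "C"])
```

Here `student3` receives a C despite a score of 86, which is exactly the mechanism behind results coming out lower than teachers' predictions for a large share of students.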
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias
It's a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first Black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: "This image speaks volumes about the dangers of bias in AI." But what's causing these outputs and what do they really tell us about AI bias?
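One contributing mechanism is dataset skew, which can be illustrated with a toy model that is not the actual depixelation algorithm: a "restorer" that maps a degraded input to the nearest face in its training data will inherit that data's imbalance. Faces here are reduced to a single hypothetical skin-tone value in [0, 1], and the training set is deliberately imbalanced, as many face datasets have been.

```python
# Nearest-neighbour "restorer": snap a degraded input to the closest
# training face. Whatever the training set over-represents, ambiguous
# inputs get pulled toward it.
def restore(degraded, training_faces):
    return min(training_faces, key=lambda f: abs(f - degraded))

training_faces = [0.75, 0.8, 0.85, 0.9, 0.3]   # mostly light tones
ambiguous_input = 0.55                          # mid-tone, detail lost
output = restore(ambiguous_input, training_faces)
```

The mid-tone input snaps to a light-tone output (0.75) simply because light tones dominate the training set; the restoration step fills in missing detail from what the model has seen most often.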
Is facial recognition tech RACIST? Expert says AI assigns more negative emotions to Black men's faces
Facial recognition technology has progressed to the point where it now interprets emotions in facial expressions. This type of analysis is increasingly used in daily life. For example, companies can use facial recognition software to help with hiring decisions. Other programs scan the faces in crowds to identify threats to public safety. Unfortunately, the technology struggles to interpret the emotions of Black faces.
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.43)
- Law > Civil Rights & Constitutional Law (0.40)
- Information Technology (0.30)