GPT-2 was a great success. OpenAI initially declined to publish the largest and most powerful version, with 1.5B parameters, claiming they feared it could be misused for unethical purposes. Later, they stated they had found no evidence of such misuse. The concern was legitimate, considering the volume of fake "news" generated with models like it; the truth is that it can be very effective at producing false news and stories.
Keith E. Sonderling is a commissioner on the U.S. Equal Employment Opportunity Commission. The views here are the author's own and should not be attributed to the EEOC or any other member of the commission. With 86 percent of major U.S. corporations predicting that artificial intelligence will become a "mainstream technology" at their company this year, management-by-algorithm is no longer the stuff of science fiction. AI has already transformed the way workers are recruited, hired, trained, evaluated and even fired. One recent study found that 83 percent of human resources leaders rely in some form on technology in employment decision-making.
In 2020, it was estimated that disinformation in the form of fake news costs around $78 billion annually. Deepfakes, once confined mainly to social media, have matured, and, fueled by increasingly sophisticated artificial intelligence, are moving into the business sector. In 2019, Deeptrace, a cybersecurity company, reported that the number of online deepfake videos had doubled, reaching close to 15,000 in under a year. Startups like Truepic, which has raised $26 million from M12, Microsoft's venture arm, have taken a different approach to deepfakes: rather than trying to identify what is fake, they track the authenticity of content from the point at which it is captured.
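The point-of-capture idea can be illustrated with a toy provenance scheme: the capture device computes a hash of the raw image bytes and signs it with a device secret, so any later manipulation invalidates the tag. This is only a minimal sketch of the general concept, not Truepic's actual protocol; the device key, and the use of an HMAC instead of a public-key signature, are simplifying assumptions.

```python
import hashlib
import hmac
import os

# Hypothetical per-device secret; a real system would use a
# hardware-backed key and public-key signatures instead of HMAC.
DEVICE_KEY = os.urandom(32)

def sign_at_capture(image_bytes: bytes) -> str:
    """Tag raw image bytes at capture time: HMAC over the SHA-256 digest."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Check that the bytes are unchanged since capture."""
    return hmac.compare_digest(sign_at_capture(image_bytes), tag)

# Simulated capture: the original bytes verify, an edited copy does not.
original = b"\x00\x01 raw sensor data ..."
tag = sign_at_capture(original)
```

Verification then reduces to recomputing the tag: `verify(original, tag)` succeeds, while any edited copy of the bytes fails the check.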
The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy,7 national security,1 and art.8,14 AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media.21 For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement;11,18,34,35,36 clone a speaker's voice with a few training samples and generate new natural-sounding audio of something the speaker never said;2 synthesize visually indicated sound effects;28 generate high-quality, relevant text based on an initial prompt;31 produce photorealistic images of a variety of objects from text inputs;5,17,27 and generate photorealistic videos of people expressing emotions from only a single image.3,40 The technologies for producing machine-generated, fake media online may outpace the ability to manually detect and respond to such media. We developed a neural network architecture that combines instance segmentation with image inpainting to automatically remove people and other objects from images.13,39 Figure 1 presents four examples of participant-submitted images and their transformations. The AI, which we call a "target object removal architecture," detects an object, removes it, and replaces its pixels with pixels that approximate what the background should look like without the object.
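The two-stage pipeline described above (segment the object, then inpaint the hole) can be sketched in miniature. The sketch below assumes the object mask has already been produced by a segmentation model, and it replaces the paper's learned inpainting network with crude iterative neighbor averaging, purely to make the remove-then-fill structure concrete; it is not the authors' architecture.

```python
import numpy as np

def remove_object(image: np.ndarray, mask: np.ndarray,
                  iters: int = 200) -> np.ndarray:
    """Zero out the masked object, then fill the hole by repeatedly
    averaging each hole pixel with its 4-neighbors (a diffusion-style
    stand-in for a learned inpainting model)."""
    out = image.astype(float).copy()
    hole = mask.astype(bool)
    out[hole] = 0.0  # stage 1: remove the detected object's pixels
    for _ in range(iters):
        # stage 2: propagate surrounding background into the hole
        avg = (np.roll(out, -1, axis=0) + np.roll(out, 1, axis=0) +
               np.roll(out, -1, axis=1) + np.roll(out, 1, axis=1)) / 4.0
        out[hole] = avg[hole]  # only hole pixels are ever updated
    return out

# Tiny grayscale demo: flat background (10.0) with a bright "object".
img = np.full((8, 8), 10.0)
img[3:5, 3:5] = 255.0                 # the object to remove
obj_mask = np.zeros((8, 8), dtype=bool)
obj_mask[3:5, 3:5] = True             # mask from a (hypothetical) segmenter

filled = remove_object(img, obj_mask)
```

On this flat background the hole converges to the surrounding value, so the bright object disappears; real scenes need a learned inpainter to hallucinate plausible texture rather than a smooth fill.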