manipulation


Deep Angel, The Artificial Intelligence of Absence

#artificialintelligence

Deep Angel is an artificial intelligence that erases objects from photographs. Part art, part technology, and part philosophy, Deep Angel shares Angelus Novus' gaze into the future. With this platform, you can explore the future of automated media manipulation by either uploading your own photos, submitting a public Instagram account to the AI, or trying to detect fake images. Beyond manipulation, Deep Angel enables you to uncover the aesthetics of absence. What happens when we can remove things from the world around us? Deep Angel is part of an ongoing research project.


Fake news is real — A.I. is going to make it much worse

USATODAY - Tech Top Stories

Deepfakes are video manipulations that can make people appear to say things they never said. Barack Obama and Nicolas Cage have both been featured in such videos. "The Boy Who Cried Wolf" has long been a staple on nursery room shelves for a reason: it teaches kids that crying wolf too often may lead people to ignore the warning when the threat becomes real. President Donald Trump has been warning about "fake news" throughout his political career. And now the real wolf might be just around the corner.


IBM's AI creates new labeled image sets using semantic content

#artificialintelligence

In a paper scheduled to be presented next week at the annual Conference on Computer Vision and Pattern Recognition (CVPR), scientists at IBM, Tel Aviv University, and Technion describe a novel AI model -- Label-Set Operations (LaSO) networks -- designed to combine pairs of labeled image examples (e.g., a picture of a dog annotated "dog" and a sheep annotated "sheep") to create new examples that incorporate the seed images' labels (a single picture of a dog and a sheep annotated "dog" and "sheep"). The coauthors believe that in the future, LaSO networks could be used to augment corpora that lack sufficient real-world data. "Our method is capable of producing a sample containing … labels present in two input samples," wrote the researchers. "The proposed approach might also prove useful for the interesting visual dialog use case, where the user can manipulate the returned query results by pointing out or showing visual examples of what she [or] he likes or doesn't like." LaSO networks learn to manipulate the label sets of given samples and to synthesize new samples corresponding to combined label sets: they take photos of different types as input, identify common semantic content, and can implicitly remove concepts present in one sample from another.
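The set operations LaSO learns can be illustrated on plain multi-hot label vectors. This is only a toy sketch of the *target* semantics -- the actual LaSO networks learn these operations in feature space from images, and the label vocabulary below is invented for illustration:

```python
import numpy as np

# Hypothetical label vocabulary for illustration.
labels = ["dog", "sheep", "cat", "car"]

def multi_hot(names):
    """Encode a set of label names as a multi-hot vector."""
    v = np.zeros(len(labels), dtype=int)
    for n in names:
        v[labels.index(n)] = 1
    return v

a = multi_hot(["dog"])    # an image annotated "dog"
b = multi_hot(["sheep"])  # an image annotated "sheep"

union = np.maximum(a, b)             # combined label set: {dog, sheep}
intersection = np.minimum(a, b)      # shared labels: {} (none here)
subtraction = np.clip(a - b, 0, 1)   # labels in a but not b: {dog}

print([labels[i] for i in np.flatnonzero(union)])  # ['dog', 'sheep']
```

LaSO's contribution is making these operations work on learned image representations, so that decoding the "union" vector yields a new labeled training example rather than just a label set.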


Control What You Can: Intrinsically Motivated Task-Planning Agent

arXiv.org Artificial Intelligence

We present a novel intrinsically motivated agent that learns how to control the environment in the fastest possible manner by optimizing learning progress. It learns what can be controlled, how to allocate time and attention, and the relations between objects using surprise-based motivation. The effectiveness of our method is demonstrated in both a synthetic and a robotic manipulation environment, yielding considerably improved performance and lower sample complexity. In a nutshell, our work combines several task-level planning agent structures (backtracking search on a task graph, probabilistic roadmaps, allocation of search efforts) with intrinsic motivation to achieve learning from scratch.


Explanations can be manipulated and geometry is to blame

arXiv.org Machine Learning

Explanation methods aim to make neural networks more trustworthy and interpretable. In this paper, we demonstrate a property of explanation methods which is disconcerting for both of these purposes. Namely, we show that explanations can be manipulated arbitrarily by applying barely perceptible perturbations to the input that keep the network's output approximately constant. We establish theoretically that this phenomenon can be related to certain geometric properties of neural networks. This allows us to derive an upper bound on the susceptibility of explanations to manipulation. Based on this result, we propose effective mechanisms to enhance the robustness of explanations.
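The geometric mechanism can be seen in a minimal toy example (weights invented here, not taken from the paper): in a ReLU network, a tiny input change near a ReLU "kink" flips the activation pattern, which changes the gradient-based explanation drastically while the output stays almost constant.

```python
import numpy as np

# A one-hidden-layer ReLU network with hand-picked toy weights.
W1 = np.array([[1.0, -1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, 2.0])

def forward(x):
    """Network output."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

def saliency(x):
    """Gradient of the output w.r.t. the input (a simple explanation map)."""
    mask = (W1 @ x + b1 > 0).astype(float)  # active ReLU pattern
    return W1.T @ (w2 * mask)

x_a = np.array([0.6, 0.399])  # second ReLU unit just inactive
x_b = np.array([0.6, 0.401])  # second ReLU unit just active
# forward(x_a) and forward(x_b) are nearly identical (~0.201 each),
# but saliency(x_a) = [1, -1] while saliency(x_b) = [3, 1].
```

Attacks in the paper exploit exactly this: they search for an imperceptible perturbation that crosses such boundaries to steer the explanation toward an arbitrary target map while pinning the output.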


Detecting Bias with Generative Counterfactual Face Attribute Augmentation

arXiv.org Machine Learning

We introduce a simple framework for identifying biases of a smiling attribute classifier. Our method poses counterfactual questions of the form: how would the prediction change if this face characteristic had been different? We leverage recent advances in generative adversarial networks to build a realistic generative model of face images that affords controlled manipulation of specific image characteristics. We introduce a set of metrics that measure the effect of manipulating a specific property of an image on the output of a trained classifier. Empirically, we identify several different factors of variation that affect the predictions of a smiling classifier trained on CelebA.
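The style of metric the abstract describes can be sketched as follows. Everything here is a stand-in, not the authors' code: the "classifier" is a linear probe on a latent code, and the "counterfactual" is a move along one attribute axis in latent space, mimicking a GAN latent-space edit.

```python
import numpy as np

def smiling_classifier(z):
    """Stand-in classifier: z[0] is a 'smile' direction, z[1] a
    smile-irrelevant direction (e.g. 'glasses') the probe leaks onto."""
    return float(1.0 / (1.0 + np.exp(-(2.0 * z[0] + 0.5 * z[1]))))

def counterfactual(z, attr_direction, delta):
    """Stand-in for a generative edit: change one attribute only."""
    return z + delta * attr_direction

def counterfactual_effect(z, attr_direction, delta=1.0):
    """Change in predicted probability when only the target
    attribute is manipulated -- the paper's style of bias metric."""
    return smiling_classifier(counterfactual(z, attr_direction, delta)) \
        - smiling_classifier(z)

z = np.array([0.0, 0.0])
glasses_axis = np.array([0.0, 1.0])
effect = counterfactual_effect(z, glasses_axis)
# A nonzero effect flags sensitivity to a smile-irrelevant
# attribute, i.e. a potential bias in the classifier.
```

In the paper, the generator is a GAN trained on face images and the effect is aggregated over many samples per attribute; this sketch only shows the shape of the measurement.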


Adobe Unveils AI Tool That Can Detect Photoshopped Faces

#artificialintelligence

Adobe, along with researchers from the University of California, Berkeley, have trained artificial intelligence (AI) to detect facial manipulation in images edited using the Photoshop software. At a time when deepfake visual content is becoming more common and more deceptive, the effort is also intended to make image forensics understandable for everyone. "This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations," the company wrote in a blog post on Friday. As part of the programme, the team trained a convolutional neural network (CNN) to spot changes in images made with Photoshop's "Face-Aware Liquify" feature, which was designed to alter facial features like the eyes and mouth. In testing, human observers correctly identified the altered faces 53 percent of the time, while the trained neural network achieved accuracy as high as 99 percent.


Adobe's prototype AI tool automatically spots Photoshopped faces

#artificialintelligence

The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe -- a name synonymous with edited imagery -- says it shares those concerns. Today, it's sharing new research in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated. It's the latest sign the company is committing more resources to this problem. Last year its engineers created an AI tool that detects edited media created by splicing, cloning, and removing objects. The company says it doesn't have any immediate plans to turn this latest work into a commercial product, but a spokesperson told The Verge it was just one of many "efforts across Adobe to better detect image, video, audio and document manipulations."


Adobe unveils new AI that can detect if an image has been 'deepfaked'

Daily Mail - Science & tech

Adobe researchers have developed an AI tool that could make spotting 'deepfakes' a whole lot easier. The tool is able to detect edits to images, including those that would likely go unnoticed by the naked eye, especially in doctored deepfake videos. It comes as deepfake videos, which use deep learning to digitally splice fake audio onto the mouth of someone talking, continue to rise. Deepfakes are so named because they utilise deep learning, a form of artificial intelligence, to create fake videos.


Joint Visual-Textual Embedding for Multimodal Style Search

arXiv.org Machine Learning

We introduce a multimodal visual-textual search refinement method for fashion garments. Existing search engines do not enable intuitive, interactive refinement of retrieved results based on the properties of a particular product. We propose a method to retrieve similar items based on a query item image and textual refinement properties. We believe this method can be leveraged to solve many real-life customer scenarios in which a similar item in a different color, pattern, length or style is desired. We employ a joint embedding training scheme in which product images and their catalog textual metadata are mapped closely in a shared space. This joint visual-textual embedding space enables manipulating catalog images semantically, based on textual refinement requirements. We propose a new training objective function, Mini-Batch Match Retrieval, and demonstrate its superiority over the commonly used triplet loss. Additionally, we demonstrate the feasibility of adding an attribute extraction module, trained on the same catalog data, and show how to integrate it within the multimodal search to boost performance. We introduce an evaluation protocol with an associated benchmark, and compare several approaches.
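An in-batch match-retrieval objective of the kind the abstract names can be sketched as below. This is one plausible reading of "Mini-Batch Match Retrieval" (a softmax cross-entropy where each image must retrieve its own catalog text from among all texts in the mini-batch), not the authors' implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def match_retrieval_loss(img_emb, txt_emb, temperature=0.1):
    """Cross-entropy over in-batch similarities: row i's positive is
    text i; every other text in the batch is a negative."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature            # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly aligned image/text pairs
loss_aligned = match_retrieval_loss(img, txt)
loss_random = match_retrieval_loss(img, rng.normal(size=(4, 8)))
# Aligned pairs yield a much lower loss than random pairings.
```

Compared with a triplet loss, this formulation uses every non-matching text in the batch as a negative at once, which is one common argument for such objectives being more sample-efficient.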