Expanding artistic frontiers in artificial intelligence
Dr. Mohamed Elhoseiny, assistant professor of computer science at KAUST, has carved a career out of teaching machines the art of creating art. After finishing his doctoral degree at Rutgers University in 2016, Elhoseiny went on to work for Adobe Research, Baidu Research, Facebook and now KAUST. His latest research paper, Creative Walk Adversarial Networks: Novel Art Generation with Probabilistic Random Walk Deviation from Style Norms, was accepted at the premier conference on computational creative artificial intelligence (AI), the International Conference on Computational Creativity (ICCC) 2022. The paper covers the work of Elhoseiny and his Vision-CAIR team on the use of Creative Walk Adversarial Networks (CWAN) for novel, or original, art generation. CWAN learns about existing art styles during training by being exposed to a large repository of paintings from various art movements and styles, spanning from roughly 5,000 years ago to the present.
Creativity Inspired Zero-shot Learning
Zero-shot learning (ZSL) aims to recognize unseen categories, for which no training examples exist, from class-level descriptions. With hundreds of thousands of object categories in the real world and countless undiscovered species, it is infeasible to maintain the hundreds of labeled examples per class that most existing recognition systems require for training. To improve the discriminative power of zero-shot learning, we model the visual learning process of unseen categories with inspiration from the psychology of human creativity in producing novel art. We relate ZSL to human creativity by observing that zero-shot learning is about recognizing the unseen, while creativity is about creating a likable unseen. We introduce a learning signal inspired by the creativity literature that explores the unseen space with hallucinated class descriptions and encourages a careful deviation of their generated visual features from seen classes, while still allowing knowledge transfer from seen to unseen classes.
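The "careful deviation" idea above can be illustrated with a minimal sketch: one common way to push generated features of hallucinated classes away from seen classes is to maximize the entropy of a seen-class classifier's predictions on them, so no single seen class claims the generation. The function names below (`creativity_deviation_loss`, etc.) are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of a probability matrix."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def creativity_deviation_loss(seen_class_logits):
    """Deviation term for hallucinated (unseen-like) generations.

    Maximizing the entropy of the seen-class classifier's output on a
    hallucinated generation discourages it from collapsing onto any seen
    class. Expressed as a loss to be minimized, this is the negated mean
    entropy: confident (low-entropy) predictions incur a higher loss.
    """
    probs = softmax(seen_class_logits)
    return -entropy(probs).mean()
```

In a full model this term would be weighted against a standard adversarial or classification loss on seen classes, balancing knowledge transfer against deviation. For example, logits that confidently pick one seen class yield a higher (worse) deviation loss than uniform logits.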
Sherlock: Scalable Fact Learning in Images
Elhoseiny, Mohamed (Rutgers University) | Cohen, Scott (Adobe Research) | Chang, Walter (Adobe Research) | Price, Brian (Adobe Research) | Elgammal, Ahmed (Rutgers University)
We study scalable and uniform understanding of facts in images. Existing visual recognition systems are typically modeled differently for each fact type, such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously, with the capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse>). Each fact has a semantic language view (e.g., <boy, playing>) and a visual view (an image depicting this fact). We show that learning visual facts in a structured way enables not only a uniform but also a generalizable visual understanding. We investigate recent, strong approaches from the multiview learning literature and also introduce two learning representation models as potential baselines. We apply the investigated methods to several datasets that we augmented with structured facts, as well as a large-scale dataset of more than 202,000 facts and 814,000 images. Our experiments show the advantage of relating facts by their structure with the proposed models, compared to the designed baselines, on bidirectional fact retrieval.
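The uniform fact representation described above can be sketched as a single typed tuple whose arity distinguishes objects, attributes/actions, and interactions. The `Fact` class and `parse_fact` helper below are hypothetical, written only to illustrate how one data structure can cover all fact types; they are not the paper's code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    """A structured fact <S, P, O> with optional slots.

    Objects use only the subject (<boy>); attributes and actions add a
    predicate (<boy, tall>, <boy, playing>); interactions fill all three
    slots (<boy, riding, horse>). One type covers every fact kind.
    """
    subject: str
    predicate: Optional[str] = None
    obj: Optional[str] = None

    def arity(self):
        """Number of filled slots: 1 (object), 2 (attribute/action), 3 (interaction)."""
        return 1 + (self.predicate is not None) + (self.obj is not None)

def parse_fact(text):
    """Parse a '<boy, riding, horse>' style string into a Fact (hypothetical helper)."""
    parts = [p.strip() for p in text.strip('<> ').split(',')]
    return Fact(*parts)
```

A retrieval system could then embed the language view of each `Fact` and the visual view of each image into a shared space, making bidirectional fact retrieval a nearest-neighbor search regardless of fact arity.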