What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
arXiv.org Artificial Intelligence
In this paper, we explore the unique modality of sketch for explainability, emphasising the profound impact of human strokes compared to conventional pixel-oriented studies. Beyond explanations of network behaviour, we discern the genuine implications of explainability across diverse downstream sketch-related tasks. We propose a lightweight and portable explainability solution -- a seamless plugin that integrates effortlessly with any pre-trained model, eliminating the need for re-training. Demonstrating its adaptability, we present four applications: the highly studied retrieval and generation, and the completely novel assisted drawing and sketch adversarial attacks. The centrepiece of our solution is a stroke-level attribution map that takes different forms when linked with downstream tasks. By addressing the inherent non-differentiability of rasterisation, we enable explanations at both the coarse stroke level (SLA) and the partial stroke level (P-SLA), each with its advantages for specific downstream tasks.
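The core idea of a stroke-level attribution map can be illustrated with a minimal sketch: given a pixel-level saliency map (precomputed by any gradient-based explainer, which we assume here rather than implement) and each stroke's rasterised pixel coordinates, per-stroke attribution is obtained by aggregating pixel attributions over each stroke's footprint. This sidesteps the rasterisation-differentiability machinery the paper addresses; the function and data below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def stroke_level_attribution(saliency, strokes):
    """Aggregate a pixel-level saliency map into per-stroke scores.

    saliency: 2-D array of pixel attributions (assumed precomputed).
    strokes:  list of (row, col) coordinate arrays, one per stroke.
    Returns one scalar per stroke: the mean attribution over its pixels.
    """
    scores = []
    for pts in strokes:
        rows, cols = pts[:, 0], pts[:, 1]
        scores.append(float(saliency[rows, cols].mean()))
    return scores

# Toy example: a 4x4 saliency map and two "strokes".
sal = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0],
                [0.1, 0.1, 0.0, 0.0],
                [0.1, 0.1, 0.0, 0.0]])
stroke_a = np.array([[0, 2], [0, 3], [1, 2], [1, 3]])  # lies on high-saliency pixels
stroke_b = np.array([[2, 0], [2, 1], [3, 0], [3, 1]])  # lies on low-saliency pixels
print(stroke_level_attribution(sal, [stroke_a, stroke_b]))  # → [1.0, 0.1]
```

Per-stroke scores like these are what the downstream applications (retrieval, generation, assisted drawing, adversarial attacks) would consume in place of raw pixel heatmaps.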
Mar-14-2024