SR-GNN: Spatial Relation-aware Graph Neural Network for Fine-Grained Image Categorization

Bera, Asish, Wharton, Zachary, Liu, Yonghuai, Bessis, Nik, Behera, Ardhendu

arXiv.org Artificial Intelligence 

Abstract--Over the past few years, significant progress has been made in deep convolutional neural network (CNN)-based image recognition. This is mainly due to the strong ability of such networks to mine discriminative object pose and parts information from texture and shape. This is often inadequate for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusions, deformation, illumination, etc. Thus, an expressive feature representation describing global structural information is key to characterizing an object/scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions, together with their importance in discriminating fine-grained categories, while avoiding bounding-box and/or distinguishable-part annotations. Our approach is inspired by recent advances in self-attention and graph neural networks (GNNs): it includes a simple yet effective relation-aware feature transformation and its refinement using a context-aware attention mechanism to boost the discriminability of the transformed feature in an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions.

(Figure caption fragment: "For clarity, 4 different regions are shown here.")

I. INTRODUCTION

THE advent of deep convolutional neural networks (CNNs) has significantly enhanced image recognition performance in the past decade. This is achieved mainly due to their ability to provide a high-level description (e.g., global shape and appearance) of image content by capturing discriminative object-pose and -parts information from texture and shape. A key step to address this challenge is to extract discriminating features from vital object-parts and combine them into a representation of a consistent, distinctive global structure for a given class. The current state-of-the-art (SotA) approaches are improving their performance in solving fine-grained visual classification; we refer the interested readers to [6] for a detailed survey.
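To make the idea concrete, the following is a minimal NumPy sketch of the general pattern the abstract describes: region features are first related to one another by a self-attention-style transformation over a fully connected region graph, and the transformed features are then pooled with a learned, context-aware importance weighting. All names (`relation_aware_pool`, the projection matrices `Wq`, `Wk`, `Wv`, the context vector `w_ctx`) are hypothetical illustrations, not the paper's exact SR-GNN formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_pool(X, Wq, Wk, Wv, w_ctx):
    """Sketch (assumed, not the authors' exact method): relate R region
    features to each other via pairwise attention, then aggregate them
    into one image-level descriptor using per-region importance weights."""
    d = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project region features
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (R, R) relation weights
    H = A @ V                                   # relation-aware transformed features
    alpha = softmax(H @ w_ctx, axis=0)          # (R,) region importance
    return alpha @ H                            # (d,) weighted aggregation

rng = np.random.default_rng(0)
R, d = 4, 8                                     # e.g., 4 regions, 8-dim features
X = rng.normal(size=(R, d))
out = relation_aware_pool(X,
                          rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                          rng.normal(size=(d, d)), rng.normal(size=d))
print(out.shape)  # (8,)
```

In an end-to-end model the projections and context vector would be learned jointly with the backbone; the sketch only shows the forward aggregation step.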
