
Collaborating Authors

 Agarwal, Akshay


A Novel Sector-Based Algorithm for an Optimized Star-Galaxy Classification

arXiv.org Artificial Intelligence

This is the age of data-driven astronomy: sky surveys generate large amounts of data, and many new surveys, such as the Large Synoptic Survey Telescope (LSST), are on the way. One of the key goals of such surveys is to classify objects as stars or galaxies. However, manual classification is infeasible for petabytes of data with large intra-class variation, which raises the need for an automated and robust classification model. Recently, several research works have been developed to help astronomers by automatically classifying galaxies (Soumagnac et al., 2015; Ba Alawi & Al-Roainy, 2021; Chaini et al., 2022; Kim & Brunner, 2016; Garg et al., 2022). These models perform well but are complex. In contrast to existing work, and owing to the complexity of the star-galaxy system, in this research we propose a classification approach that uses a sector-based division of the sky. The prime motivation for this division can be seen in Figure 1, which reflects the variation present across different sectors and the resulting classification difficulty. By exploiting these differences, we develop a star-galaxy classification system that surpasses existing algorithms at a low computational cost.
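A minimal sketch of the sector-based idea, assuming the sky is partitioned into a fixed RA/Dec grid and a separate lightweight classifier is trained per sector; the grid size, the random-forest model, and the tabular photometric features are illustrative assumptions, not the paper's exact pipeline:

```python
# Hypothetical per-sector star-galaxy classification sketch.
# Sector grid, feature set, and model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def assign_sector(ra, dec, n_ra=8, n_dec=4):
    """Map (RA, Dec) arrays in degrees onto a fixed rectangular sector grid."""
    i = np.clip((ra / 360.0 * n_ra).astype(int), 0, n_ra - 1)
    j = np.clip(((dec + 90.0) / 180.0 * n_dec).astype(int), 0, n_dec - 1)
    return i * n_dec + j

def train_per_sector(X, ra, dec, y):
    """Train one lightweight classifier per sky sector (y: 0 = star, 1 = galaxy)."""
    sectors = assign_sector(ra, dec)
    models = {}
    for s in np.unique(sectors):
        mask = sectors == s
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[mask], y[mask])
        models[s] = clf
    return models

def predict(models, X, ra, dec):
    """Route each object to its sector's classifier and return predictions."""
    sectors = assign_sector(ra, dec)
    y_pred = np.empty(len(X), dtype=int)
    for s, clf in models.items():
        mask = sectors == s
        if mask.any():
            y_pred[mask] = clf.predict(X[mask])
    return y_pred
```

Keeping one small model per sector is what gives this scheme its low computational cost: each classifier only has to cope with the intra-class variation of its own patch of sky.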


WaveTransform: Crafting Adversarial Examples via Input Decomposition

arXiv.org Artificial Intelligence

The frequency spectrum has played a significant role in learning unique and discriminating features for object recognition. Both low- and high-frequency information present in images has been extracted and learned by a host of representation learning techniques, including deep learning. Inspired by this observation, we introduce a novel class of adversarial attacks, namely 'WaveTransform', that creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately or in combination. The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to construct an adversarial example. Experiments are performed using multiple databases and CNN models to establish the effectiveness of the proposed WaveTransform attack and to analyze the importance of each frequency component. The robustness of the proposed attack is also evaluated through its transferability and its resiliency against a recent adversarial defense algorithm. Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
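A minimal sketch of the subband-corruption step, assuming a single-channel image in [0, 1] and a Haar wavelet; the actual attack optimizes the subband noise against the target CNN's loss, whereas here the perturbation is random, purely for illustration:

```python
# Hypothetical WaveTransform-style subband perturbation:
# decompose the image with a 2D wavelet transform, corrupt the chosen
# low- and/or high-frequency subbands, and reconstruct the adversarial image.
# The random noise stands in for the gradient-optimized noise of the real attack.
import numpy as np
import pywt

def subband_perturb(image, eps=0.02, wavelet="haar",
                    attack_low=True, attack_high=True, seed=0):
    """image: float array in [0, 1], shape (H, W) for a single channel."""
    rng = np.random.default_rng(seed)
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)

    def noise(band):
        return eps * rng.standard_normal(band.shape)

    if attack_low:       # corrupt the low-frequency (approximation) subband
        ll = ll + noise(ll)
    if attack_high:      # corrupt the high-frequency (detail) subbands
        lh, hl, hh = lh + noise(lh), hl + noise(hl), hh + noise(hh)

    adv = pywt.idwt2((ll, (lh, hl, hh)), wavelet)
    # idwt2 may pad odd-sized inputs; crop back and clip to a valid image.
    return np.clip(adv[: image.shape[0], : image.shape[1]], 0.0, 1.0)
```

Toggling `attack_low` and `attack_high` is what lets one study each frequency component separately or in combination, as the abstract describes.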


Unravelling Robustness of Deep Learning Based Face Recognition Against Adversarial Attacks

AAAI Conferences

Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerability to attacks inspired by commonly observed real-world distortions, which are well handled by shallow learning methods, along with learning-based adversaries; (ii) detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks, including OpenFace and VGG-Face, and two publicly available databases (MEDS and PaSC) demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. The proposed method is also compared with existing detection algorithms, and the results show that it detects the attacks with very high accuracy by suitably designing a classifier using the responses of the hidden layers in the network. Finally, we present several effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
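A minimal sketch of hidden-layer-response detection, assuming a PyTorch CNN: intermediate activations are recorded with forward hooks, summarized per filter, and a binary classifier is trained on clean versus attacked examples. The layer selection, the per-filter mean statistic, and the SVM detector are illustrative assumptions, not the paper's exact design:

```python
# Hypothetical detector built on hidden-layer responses: summarize
# convolutional activations per filter and separate clean from adversarial
# inputs with an SVM. Layer names, statistics, and the SVM are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

def layer_signature(model: nn.Module, layers, x: torch.Tensor):
    """Return per-filter mean activations from the named convolutional layers."""
    feats, hooks = [], []

    def hook(_module, _inp, out):
        feats.append(out.detach().mean(dim=(2, 3)))  # (N, C) for a conv output

    for name, module in model.named_modules():
        if name in layers:
            hooks.append(module.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return torch.cat(feats, dim=1).cpu().numpy()      # (N, total filters)

def fit_detector(model, layers, x_clean, x_adv):
    """Train an SVM that separates clean from adversarial hidden-layer responses."""
    sig_clean = layer_signature(model, layers, x_clean)
    sig_adv = layer_signature(model, layers, x_adv)
    X = np.vstack([sig_clean, sig_adv])
    y = np.r_[np.zeros(len(sig_clean)), np.ones(len(sig_adv))]
    return SVC(kernel="rbf").fit(X, y)
```

The design choice mirrors the abstract's observation: adversarial distortions tend to produce abnormal filter responses in hidden layers, so a classifier over those responses can flag attacked inputs without modifying the face recognition network itself.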