On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples
Douglas, P.K.; Farahani, Farzad Vasheghani
The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications. However, not all adversarial examples are crafted for malicious purposes. For example, real-world systems often contain physical, temporal, and sampling variability across instrumentation. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. Here, we studied the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
Chi-Squared For Feature Selection using SelectKBest
In this video, I'll show you how SelectKBest uses the chi-squared test for feature selection when both the features and the target are categorical. We compute a chi-squared statistic between each feature and the target, then keep the desired number of features with the highest chi-squared scores (equivalently, the lowest p-values). The chi-squared (χ²) test is used in statistics to test the independence of two events. In feature selection, we use it to test whether the occurrence of a specific feature and the target are independent. For each feature and target pair, a high χ² score (or, equivalently, a low p-value) indicates that the target column depends on that feature column.
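The workflow described above can be sketched with scikit-learn. This is a minimal sketch using the built-in iris dataset as a stand-in (the video's actual data is not specified here); note that scikit-learn's chi2 scorer requires non-negative feature values.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

# Load a small example dataset: 150 samples, 4 non-negative features, 3 classes.
X, y = load_iris(return_X_y=True)

# Score each feature against the target with chi2 and keep the k=2 best.
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)   # (150, 2)
print(selector.scores_)   # one chi2 score per feature; higher = more dependent
print(selector.pvalues_)  # low p-value suggests the target depends on the feature
```

After fitting, `selector.scores_` and `selector.pvalues_` let you inspect exactly which features were judged dependent on the target, which is useful for sanity-checking the selection rather than treating it as a black box.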