Projecting "better than randomly": How to reduce the dimensionality of very large datasets in a way that outperforms random projections
Wojnowicz, Michael, Zhang, Di, Chisholm, Glenn, Zhao, Xuan, Wolff, Matt
For very large datasets, random projections (RP) have become the tool of choice for dimensionality reduction, owing to the computational complexity of principal component analysis (PCA). However, the recent development of randomized principal component analysis (RPCA) has opened up the possibility of obtaining approximate principal components on very large datasets. In this paper, we compare the performance of RPCA and RP in dimensionality reduction for supervised learning. In Experiment 1, we study a malware classification task on a dataset with over 10 million samples, almost 100,000 features, and over 25 billion non-zero values, with the goal of reducing the dimensionality to a compressed representation of 5,000 features. In order to apply RPCA to this dataset, we develop a new algorithm called large sample RPCA (LS-RPCA), which extends the RPCA algorithm to work on datasets with arbitrarily many samples. We find that classification performance is much higher when using LS-RPCA for dimensionality reduction than when using random projections. In particular, across a range of target dimensionalities, we find that using LS-RPCA reduces classification error by between 37% and 54%. Experiment 2 generalizes the phenomenon to multiple datasets, feature representations, and classifiers. These findings have implications for a large number of research projects in which random projections were used as a preprocessing step for dimensionality reduction. As long as accuracy is at a premium and the target dimensionality is sufficiently less than the numeric rank of the dataset, randomized PCA may be a superior choice. Moreover, if the dataset has a large number of samples, then LS-RPCA provides a method for obtaining the approximate principal components.
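The two dimensionality-reduction approaches being compared can be sketched with off-the-shelf scikit-learn implementations. This is not the paper's LS-RPCA algorithm, and the toy dataset dimensions and target dimensionality below are placeholder values, but it illustrates the contrast: a random projection multiplies the data by a data-independent random matrix, while randomized PCA approximates the top principal components of the data itself.

```python
# Sketch: random projection vs. randomized PCA for dimensionality reduction.
# Toy sizes; the paper's dataset has ~10M samples and ~100K features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 200))  # toy stand-in for the real corpus
k = 20                                # toy target dimensionality

# Random projection: multiply X by a random Gaussian matrix (data-independent).
rp = GaussianRandomProjection(n_components=k, random_state=0)
X_rp = rp.fit_transform(X)

# Randomized PCA: approximate the top-k principal components of X.
rpca = PCA(n_components=k, svd_solver="randomized", random_state=0)
X_rpca = rpca.fit_transform(X)

print(X_rp.shape, X_rpca.shape)
```

Both methods map the data to k dimensions; the difference is that the RPCA basis is adapted to the data's covariance structure, which is the property the paper credits for the lower classification error.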
Suspiciously Structured Entropy: Wavelet Decomposition of Software Entropy Reveals Symptoms of Malware in the Energy Spectrum
Wojnowicz, Michael (Cylance) | Chisholm, Glenn (Cylance) | Wolff, Matt (Cylance)
Sophisticated malware authors can sneak hidden malicious code into portable executable files, and this code can be hard to detect, especially if it is encrypted or compressed. However, when an executable file shifts between native code, encrypted or compressed code, and padding, there are corresponding shifts in the file's representation as an entropy signal. In this paper, we develop a method for automatically quantifying the extent to which the patterned variations in a file's entropy signal make it "suspicious." A corpus of n = 39,968 portable executable files was studied, 50% of which were malicious. Each portable executable file was represented as an entropy stream, where each value in the stream describes the amount of entropy at a particular location in the file. Wavelet transforms were then applied to this entropy signal in order to extract the amount of entropic energy at multiple scales of code resolution. Based on this entropic energy spectrum, we derive a Suspiciously Structured Entropic Change Score (SSECS), a single scalar feature which quantifies the extent to which a given file's entropic energy spectrum makes the file suspicious as possible malware. We found that, based on SSECS alone, it was possible to predict with 68.7% accuracy whether a file in this corpus was malicious or legitimate (an 18.7% gain over random guessing). Moreover, we found that SSECS contains predictive information not contained in mean entropy alone. Thus, we argue that SSECS could be a useful single feature for machine learning models which attempt to identify malware based on millions of file features.
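The front end of this pipeline can be sketched as follows. The helper names, chunk size, and the use of a plain Haar wavelet are illustrative assumptions (the paper does not specify them here, and this is not the SSECS computation itself): compute a byte-level entropy stream over fixed-size chunks of a file, then measure the energy of wavelet detail coefficients at each scale.

```python
# Sketch: entropy stream of a file, then a multi-scale Haar "energy spectrum".
# Helper names, chunk size, and the Haar wavelet are illustrative assumptions.
import math
import numpy as np

def entropy_stream(data: bytes, chunk_size: int = 256) -> np.ndarray:
    """Shannon entropy (bits per byte) of each fixed-size chunk of the file."""
    values = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        counts = np.bincount(np.frombuffer(chunk, dtype=np.uint8), minlength=256)
        p = counts[counts > 0] / len(chunk)
        values.append(float(-np.sum(p * np.log2(p))))
    return np.array(values)

def haar_energy_spectrum(signal: np.ndarray) -> list:
    """Energy of Haar detail coefficients at each scale (fine to coarse)."""
    n = 2 ** int(math.log2(len(signal)))  # truncate to a power-of-2 length
    s = signal[:n].astype(float)
    energies = []
    while len(s) > 1:
        detail = (s[0::2] - s[1::2]) / math.sqrt(2)
        energies.append(float(np.sum(detail ** 2)))
        s = (s[0::2] + s[1::2]) / math.sqrt(2)
    return energies

# Toy file: zero padding (zero entropy) followed by random high-entropy bytes,
# mimicking the padding-to-encrypted-code transitions described above.
rng = np.random.default_rng(0)
data = bytes(4096) + rng.integers(0, 256, 4096, dtype=np.uint8).tobytes()
stream = entropy_stream(data)
spectrum = haar_energy_spectrum(stream)
print(spectrum)
```

A sharp transition between code regions shows up as energy concentrated at particular scales of this spectrum; a feature such as SSECS would then summarize how "malware-like" that distribution of energy is.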