
How it works…
Unlike the previous recipe, in which we analyzed a single file's N-grams, in this recipe, we look at a large collection of files to understand which N-grams are the most informative features. We start by specifying the folders containing our samples and our value of N, and by importing modules to enumerate files (step 1). We then count the N-grams of all files in our dataset (step 2), which allows us to determine the globally most frequent N-grams, and we keep only the K1=1000 most frequent of these (step 3). Next, we define a helper function, featurizeSample, which takes a sample and counts how many times each of the K1 most common N-grams appears in its byte sequence (step 4). We then iterate through our directories of files, using featurizeSample to featurize each sample and recording its label as malicious or benign (step 5). The labels matter because whether an N-gram is informative depends on how well it discriminates between the malicious and benign classes.
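The following is a minimal sketch of steps 1 through 5. The value of N, the value of K1, the directory names, and the byte_ngrams helper are assumptions chosen for illustration; only the featurizeSample name comes from the recipe itself:

```python
import os
import collections

N = 2     # length of each N-gram (assumed value)
K1 = 1000 # number of globally most frequent N-grams to keep

def byte_ngrams(file_path, n):
    """Yield successive n-byte sequences from a file (hypothetical helper)."""
    with open(file_path, "rb") as f:
        data = f.read()
    for i in range(len(data) - n + 1):
        yield data[i:i + n]

# Step 2: count every N-gram across all samples (directory names assumed).
directories = ["Benign PE Samples", "Malicious PE Samples"]
total_counts = collections.Counter()
for directory in directories:
    for file_name in os.listdir(directory):
        total_counts.update(byte_ngrams(os.path.join(directory, file_name), N))

# Step 3: keep only the K1 globally most frequent N-grams.
K1_most_common = [ngram for ngram, _ in total_counts.most_common(K1)]

# Step 4: featurize a sample by counting occurrences of each of the K1 N-grams.
def featurizeSample(file_path, K1_most_common, n):
    counts = collections.Counter(byte_ngrams(file_path, n))
    return [counts[ngram] for ngram in K1_most_common]

# Step 5: build the feature matrix X and the label vector y.
X, y = [], []
for label, directory in enumerate(directories):  # 0 = benign, 1 = malicious
    for file_name in os.listdir(directory):
        X.append(featurizeSample(os.path.join(directory, file_name), K1_most_common, N))
        y.append(label)
```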
We import the SelectKBest class, which selects the best features according to a score function, along with two score functions: mutual information and chi-squared (step 6). Finally, we apply three different feature selection schemes to select the most informative N-grams and use them to transform our features (step 7). In the first method, we simply select the K2 most frequent N-grams; this approach is often recommended in the literature, and it is the simplest, since it requires neither labels nor extensive computation. In the second method, we use mutual information to narrow the features down to K2, while in the third, we use chi-squared to do the same.
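A sketch of the three selection schemes, continuing from the X and y built above; the value of K2 is an assumed parameter. Since the columns of X are already ordered by global frequency, the frequency-based method reduces to taking the first K2 columns:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif, chi2

K2 = 100  # number of N-grams to select (assumed value)

X = np.asarray(X)
y = np.asarray(y)

# Method 1: frequency-based selection - the first K2 columns are the
# K2 globally most frequent N-grams.
X_frequency = X[:, :K2]

# Method 2: score each N-gram by its mutual information with the label.
mi_selector = SelectKBest(mutual_info_classif, k=K2)
X_mi = mi_selector.fit_transform(X, y)

# Method 3: score each N-gram with a chi-squared test against the label
# (valid here because the count features are non-negative).
chi2_selector = SelectKBest(chi2, k=K2)
X_chi2 = chi2_selector.fit_transform(X, y)
```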