In the SOREL-20M repository, in train.py, the train_network() function calls get_generator(), which initializes the Generator class, which in turn calls the Dataset class, which calls the LMDBReader class. LMDBReader has a function called features_postproc_func which, per my understanding, applies a logarithmic function to the EMBER features before they are used. This chain is not followed in the training of the LGB model, where the EMBER features are read directly from the numpy arrays and no pre-processing is applied (as expected).
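For reference, my understanding of the transform applied by features_postproc_func is a signed logarithmic scaling along these lines (a sketch of the idea, not a copy of the SOREL-20M code):

```python
import numpy as np

def features_postproc_func(x):
    """Sketch of a signed log scaling: log1p on positive entries,
    -log1p(-x) on negative ones, zeros left untouched."""
    x = np.asarray(x, dtype=np.float32).copy()
    pos, neg = x > 0, x < 0
    x[pos] = np.log1p(x[pos])
    x[neg] = -np.log1p(-x[neg])
    return x
```

This kind of scaling compresses the large-magnitude EMBER features (e.g. raw byte counts) into a range that the network was trained on, which would explain the bad results without it.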
Looking at the code in secml_malware, I see that the EMBER features are fed directly to the neural network without any preprocessing, and I'm wondering whether this step should be added to the feature extractor.
As a side note, in my testing of the SOREL models and data, if I don't apply features_postproc_func I get really bad results with the pretrained SOREL nets, so I think this step is needed.
Thank you for having opened the issue!
I will investigate. I naively thought that the models were trained on the plain EMBER features. I will have a look!
So, a small update: I included the feature post-processing function inside the feature extractor (thank you for making me notice!). I still have to bulk-test it on a larger dataset to see whether the performance matches the numbers described in the paper, but this is already a step forward.
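The integration described above could look something like the following wrapper (a hypothetical sketch; `base_extract` stands in for whatever callable in secml_malware produces the raw EMBER feature vector, and the transform is my understanding of SOREL's post-processing, not its exact code):

```python
import numpy as np

def features_postproc_func(x):
    # Signed log scaling (my understanding of SOREL-20M's post-processing).
    x = np.asarray(x, dtype=np.float32).copy()
    pos, neg = x > 0, x < 0
    x[pos] = np.log1p(x[pos])
    x[neg] = -np.log1p(-x[neg])
    return x

class PostprocFeatureExtractor:
    """Hypothetical wrapper: run the base EMBER extractor, then post-process."""

    def __init__(self, base_extract):
        # base_extract: callable mapping raw PE bytes to an EMBER feature
        # vector, e.g. an ember PEFeatureExtractor (assumed interface).
        self.base_extract = base_extract

    def __call__(self, bytez):
        return features_postproc_func(self.base_extract(bytez))
```

Note that only the SOREL net path would go through this wrapper; the LGB models were trained on the raw numpy arrays, so their features should stay unscaled to match training.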