
Problems that arise when training GMN models with other features #58

Open
zxhlbl opened this issue May 21, 2024 · 0 comments

zxhlbl commented May 21, 2024

Dear author,
I hope this message finds you well. I am currently working on retraining the Graph Matching Networks (GMN) model using different feature sets and have run into a problem that I hope you can help me with.

Specifically, I have noticed that during retraining the model exhibits a strong bias towards the positive class: the Euclidean distance between the encodings of function pairs is consistently less than 1. However, according to the loss function, the Euclidean distance between the encodings of negative-class function pairs should be greater than (1 + margin).
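For reference, my understanding of the loss is captured by the minimal sketch below. It assumes the margin-based pair loss on Euclidean distances described in the GMN paper; the function name `pair_loss` and the NumPy formulation are only illustrative and are not taken from this repository's code.

```python
import numpy as np

def pair_loss(x, y, labels, margin=1.0):
    """Sketch of a margin-based pair loss on Euclidean distances.

    x, y:   (batch, dim) embeddings of the two functions in each pair
    labels: (batch,) with +1 for positive (similar) pairs, -1 for negative pairs
    """
    d = np.linalg.norm(x - y, axis=-1)  # Euclidean distance per pair
    # Positive pairs (label = +1) are penalized until d < 1 - margin;
    # negative pairs (label = -1) are penalized until d > 1 + margin.
    return np.maximum(0.0, margin - labels * (1.0 - d)).mean()
```

Under this formulation, a model that always produces distances below 1 never satisfies the negative-pair condition, which is why the observed behavior looks like a collapse towards the positive class.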

I am trying to understand the potential reasons for this bias and how to adjust the training process or feature selection to correct it. Could you provide any insights or suggestions on how to address this issue?

Thank you very much for your time and assistance.
