
Question #34

Open
lin-uice opened this issue Sep 5, 2024 · 10 comments
Comments


lin-uice commented Sep 5, 2024

Hello, your benchmark and pipeline are very well written. However, while rerunning your code (`bash scripts/runfb.sh`) and debugging it to find the cause, I found that in the Linear part of runfb.sh, `model` is not defined (this bug is unresolved; if I modify it myself, the change may differ from your original intent).
[screenshot]
I tried modifying it, but it still raises errors (I have commented that part out and am running the AdjConv experiments; some of my MLP results look wrong and I am still checking them). (Still debugging.)
I'm very sorry to bother you with these bugs. I would greatly appreciate any clarification!


lin-uice commented Sep 5, 2024

An additional bug: `(r_train, r_val) = map(int, self.data_split.split('/')[:2])` raises
`ValueError: invalid literal for int() with base 10: 'Random 60'`.
After reading the code, commenting out `self.data_split = f"Random {self.data_split}"` makes the problem go away.
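
For reference, a minimal sketch of why the parse fails, assuming the split is given as a ratio string like `60/20/20` (the exact value is my assumption; the two quoted lines come from the traceback above):

```python
# Minimal reproduction sketch (assumed input value, not the actual loader code).
data_split = "60/20/20"                # ratio string, e.g. train/val/test percentages
data_split = f"Random {data_split}"    # the line in question prepends the split type

# Splitting on '/' now yields ['Random 60', '20', '20'], so int() fails on the
# first chunk with: ValueError: invalid literal for int() with base 10: 'Random 60'
(r_train, r_val) = map(int, data_split.split('/')[:2])
```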


nyLiao commented Sep 9, 2024

Hi, thanks for reporting the issue!

The `model` arg for linear models can be `DecoupledFixed` for the decoupled architecture or `Iterative` for iterative ones.

For the data split, we are refactoring the data loading pipeline, so this should be solved soon.


lin-uice commented Sep 9, 2024

Thank you for your reply! I have rerun your code and found some interesting results, though some of the records look off. The experiments are still running; I hope to consult with you after they finish.


lin-uice commented Sep 9, 2024

The dataset split issue may come from this line in your code: `self.data_split = f"Random {self.data_split}"` (the prepended word "Random" breaks the later parsing).
[screenshot]
If I comment out the line `self.data_split = f"Random {self.data_split}"`, it runs normally.
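
If the `Random` tag is needed elsewhere in the pipeline, another option (just a sketch under my own assumptions, not the repository's actual fix) would be to strip the tag before parsing the ratios instead of commenting the line out:

```python
# Sketch of an alternative workaround (assumed attribute name, not the actual repo code):
# keep "Random 60/20/20" as the stored value, but drop the tag when parsing the ratios.
split_str = self.data_split
if split_str.startswith("Random "):
    split_str = split_str[len("Random "):]   # -> "60/20/20"
(r_train, r_val) = map(int, split_str.split('/')[:2])
```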

@lin-uice

The results of my runs differ significantly from those reported in the paper. I will try running them again after you refactor the pipeline.


nyLiao commented Oct 14, 2024

Hi, we would like to follow up to see if you are still experiencing the reproducibility issues with the latest version. Kindly note that since we changed the data pipeline, the results may differ (usually they are more stable) from those in the arXiv version of the paper.

@lin-uice

Thank you for your reply!
I am currently busy with experiments for a conference in January next year, so I won't be able to provide you with results in a timely manner. I will try to reproduce your experiments this week, before next weekend (if I don't run into bugs). Your work is very meaningful.

@lin-uice

> Hi, we would like to follow up to see if you are still experiencing the reproducibility issues with the latest version. Kindly note that since we changed the data pipeline, the results may differ (usually they are more stable) from those in the arXiv version of the paper.

Hello!
I've reproduced the results from your paper, but I've noticed some discrepancies. I'm using a V100 GPU with 16 GB of memory, and my PyTorch and PyG installations both match CUDA 11.8. I'm not sure why the results differ when running Optuna.

Seeds Used: 20, 21, 22, 23, 24, 25

| Model | Convolution Type | Chameleon Filtered | Citeseer | Cora | Squirrel Filtered |
| --- | --- | --- | --- | --- | --- |
| DecoupledFixed | AdjConv-appr | 38.0234 | 72.5824 | 82.2378 | 23.2988 |
| DecoupledFixed | AdjConv-impulse | 37.6762 | 70.3392 | 84.0502 | 26.0318 |
| DecoupledFixed | AdjConv-mono | 35.3095 | 71.6144 | 77.8806 | 26.4272 |
| DecoupledFixed | AdjiConv-ones | 34.7247 | 71.6074 | 84.8014 | 30.0336 |
| DecoupledVar | AdjiConv | 39.752 | 67.507 | 84.8916 | 28.4191 |
| DecoupledVar | ChebConv | 30.595625 | 68.1796 | 64.4058 | 23.853 |
| DecoupledVar | ChebIIConv | 25.431 | 57.3408 | 82.632 | 25.8602 |
| DecoupledVar | ClenshawConv | 27.0886 | 68.4384 | 75.2602 | 20.4962 |
| DecoupledVar | HornerConv | 35.4326 | 73.12 | 84.1222 | 23.0216 |
| MLP | MLP | 29.6674 | 66.6186 | 69.33557143 | 19.2316 |

My experimental backbone is ChebNetII, and my research focuses on contrastive learning. The results I obtained are state-of-the-art, even surpassing the supervised GNNs in your table. I'm puzzled as to why this is the case.

Regarding Other GNNs

I'm also interested in experimenting with other models such as BernNet, but my GPU is currently occupied with my ongoing experiments. I expect to run those experiments once my current work is complete.


nyLiao commented Nov 4, 2024

Hi,

We have uploaded configurations for some of the datasets used in our experiments. Hopefully this can help you reproduce the results.

We observe that some of the convs (e.g. ChebII and Clenshaw) may be unstable under the DecoupledVar model due to their parameterization designs. If you intend to use these convs, you may consider the Iterative model, as in the example script runfb-iter.sh.

Also, let me note that since this implementation aims to evaluate these filters in a fair setting, it does not guarantee reaching SOTA. If you intend to apply some specific models, you may want to refer to their original code (given in the docstrings) and add some of their tricks (e.g. normalization, pre- and post-processing) for better performance.


lin-uice commented Nov 4, 2024

Thank you for your reply! I won't use it for my own experiments; I'm simply interested in your work. Iterative convolutions can indeed introduce some instability (in my experience).
