Question (请教) #34
Additional bug: `(r_train, r_val) = map(int, self.data_split.split('/')[:2])`
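The reported line would raise a `ValueError` if the split ratios are given as fractions (e.g. `"0.6/0.2/0.2"`), since `int()` cannot parse `"0.6"`. A minimal hedged sketch of a more forgiving parser — the function name and the assumed formats of `data_split` are illustrative, not from the repository:

```python
# Hypothetical sketch of a more robust split parser. Assumes
# data_split is a string like "60/20/20" (percentages) or
# "0.6/0.2/0.2" (fractions); names here are illustrative only.
def parse_data_split(data_split: str):
    parts = data_split.split('/')
    if len(parts) < 2:
        raise ValueError(
            f"Expected 'train/val[/test]', got {data_split!r}")
    # float() accepts both "60" and "0.6", unlike int()
    r_train, r_val = (float(p) for p in parts[:2])
    return r_train, r_val

parse_data_split("60/20/20")   # (60.0, 20.0)
parse_data_split("0.6/0.2")    # (0.6, 0.2)
```

Whether the ratios should ultimately be integers or floats depends on how the refactored pipeline consumes them; the point is only that the parser should not assume integer input.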
Hi, thanks for reporting the issue!
As for the data split, we are refactoring the data loading pipeline, so this should be resolved soon.
Thank you for your reply! I have rerun your code and obtained some interesting findings (though also some bad records). The experiments are still running; I hope to consult with you once they finish.
The results I obtained from your experiments differ significantly from those reported in the paper. I will try running them again after you refactor the pipeline.
Hi, we would like to follow up to see if you are still experiencing the reproducibility issues with the latest version. Kindly note that since we changed the data pipeline, the results may differ (usually they are more stable) from those in the arXiv version of the paper.
Thank you for your reply!
Hello! Seeds used: 20, 21, 22, 23, 24, 25.
My experimental backbone is ChebNetII, and my research focuses on contrastive learning. The results I obtained are state-of-the-art, even surpassing the supervised GNNs in your table, and I'm puzzled as to why. Regarding other GNNs, I'm also interested in experimenting with models such as BernNet, but my GPU is currently occupied with my current experiments. I expect to run those experiments once my current work is complete.
Hi, we have uploaded the configurations for some of the datasets used in our experiments; hopefully this helps you reproduce the results. We observe that some of the convs (e.g. ChebII and Clenshaw) may be unstable. Also, note that since this implementation aims to evaluate these filters in a fair setting, it does not guarantee reaching SOTA. If you intend to apply a specific model, you may want to refer to its original code (linked in the docstrings) and add some of its tricks (e.g. normalization, pre- and post-processing) for better performance.
Thank you for your reply! I won't use it for my experiments; I'm only interested in your work. Iterative convs can indeed be somewhat unstable, in my experience.
Hello, your benchmark and pipeline are very well written. However, when rerunning your code (`bash scripts/runfb.sh`) and debugging it to find the cause, I found that in the Linear part of runfb.sh, `model` is undefined. (This bug is unresolved; if I fix it myself, the fix may differ from your intent.)
I tried modifying it but still get errors. (I have commented that part out for now and am running the AdjConv experiments; some of my MLP results look wrong, and I am checking them.) Still debugging.
I'm sorry to bother you with these bugs; I would be very grateful for any guidance!
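An undefined `model` in one branch of a runner script is typically a missed case in a conditional. Purely as a hypothetical illustration of the failure mode and a fail-fast guard — none of these variable names or values are taken from the actual runfb.sh:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: fail fast when a branch forgets to set
# `model`, instead of silently passing an empty value downstream.
set -u  # treat any unset variable as a hard error

conv="Linear"   # illustrative input; not from the real script
case "$conv" in
  AdjConv) model="Decoupled" ;;
  Linear)  model="MLP" ;;      # the kind of branch that can be missing
  *) echo "unknown conv: $conv" >&2; exit 1 ;;
esac

echo "running with model=$model"
```

With `set -u`, referencing `model` in an uncovered branch aborts the script immediately with a clear error, which makes this class of bug much easier to localize.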