fix patch to be compatible with mg5 branch gpucpp_goodhel #955

Closed
wants to merge 5 commits

Conversation

oliviermattelaer
Member

This is a simple patch to make the plugin work with the gpucpp_goodhel branch
(removing the double list for the helicity initialization and allowing limhel to be controlled from the run_card).

The fact that limhel is now a user parameter means that it also needs to be handled correctly by the cudacpp interface, which is not done in this PR.
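
For illustration, exposing limhel as a user parameter means a run_card entry along these lines (the value and inline comment below are only a sketch, not the exact card text):

```
 0.0 = limhel ! threshold for the helicity-filtering optimization (0 keeps all contributing helicities)
```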

@oliviermattelaer
Member Author

@valassi

The PR obviously now crashes since limhel != 0, now that this parameter is part of the run_card setup.
I can do a quick/temporary fix, which is to set that parameter in the run_card by default when doing output in SIMD/CUDA mode, to get this PR through quickly.
But I guess that we need to either:

  1. change the test to enforce limhel=0
  2. change cudacpp to correctly support limhel (even if the algorithm is not 100% identical to the Fortran one, e.g. using a running sum rather than comparing one helicity to the average over helicities for that event, as done in Fortran); see the sketch after this comment.

Do you want to do 1 or 2 directly, or should I implement the different default value (which is super easy to do)?
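
As a reference for point 2, here is a minimal, hypothetical C++ sketch of the "compare each helicity to the per-event average" filtering described above (the function name, data layout and comparison details are assumptions, not the actual Fortran or cudacpp implementation):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: me2[ievt][ihel] is the |M|^2 contribution of helicity
// ihel to event ievt. A helicity is flagged "good" if, for at least one event,
// its contribution exceeds limhel times the average contribution over all
// helicities for that event (the per-event-average criterion).
std::vector<bool> selectGoodHelicities( const std::vector<std::vector<double>>& me2,
                                        double limhel )
{
  const std::size_t nevt = me2.size();
  const std::size_t nhel = nevt > 0 ? me2[0].size() : 0;
  std::vector<bool> isGood( nhel, false );
  for( std::size_t ievt = 0; ievt < nevt; ++ievt )
  {
    double sum = 0.;
    for( std::size_t ihel = 0; ihel < nhel; ++ihel ) sum += me2[ievt][ihel];
    const double avg = ( nhel > 0 ? sum / nhel : 0. );
    for( std::size_t ihel = 0; ihel < nhel; ++ihel )
      if( me2[ievt][ihel] > limhel * avg ) isGood[ihel] = true;
  }
  return isGood;
}
```

With limhel=0 this reduces to keeping any helicity that gives a non-zero contribution, which is what option 1 above would enforce.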

@oliviermattelaer
Member Author

I had to remove a useless call to reset_cumulative_call in the cudacpp code, since I now discard those calls in the Fortran.
And now all tests are passing.
We can either review/merge this now, or first fix the test and put some support for limhel back into cudacpp.

(maybe we can Skype on this tomorrow, if you are available)

@oliviermattelaer
Member Author

Hi,

Just to note that the check that MC over helicity is not used for the cudacpp plugin was already in place.

So there is no need to do anything here.

Olivier

@valassi (Member) left a comment

Hi @oliviermattelaer, this is good to go for me, but first we need to merge master into it and fix conflicts. This can be done by merging #986 into this. Can I go ahead?

valassi added a commit to valassi/madgraph4gpu that referenced this pull request Sep 15, 2024
…or360 madgraph5#992, which includes his/my goodhel madgraph5#955 and madgraph5#986) - all ok

STARTED  AT Sun Sep 15 09:27:42 AM CEST 2024
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Sun Sep 15 11:28:15 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Sun Sep 15 11:41:01 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Sun Sep 15 11:51:46 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Sun Sep 15 11:54:36 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Sun Sep 15 11:57:23 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common
ENDED(6) AT Sun Sep 15 12:00:17 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean
ENDED(7) AT Sun Sep 15 12:22:27 PM CEST 2024 [Status=0]
valassi added a commit to valassi/madgraph4gpu that referenced this pull request Sep 15, 2024
…r360 madgraph5#992, which includes his/my goodhel madgraph5#955 and madgraph5#986) - all ok

NB: with respect to my goodhel madgraph5#986, the xsec mismatch madgraph5#991 in ggtt is now fixed (by Olivier's additional commit 7d0a553 I assume)

The only failure is the expected LHE mismatch madgraph5#833 in heft_gg_bb fptype=f

STARTED  AT Sun Sep 15 12:22:27 PM CEST 2024
(SM tests)
ENDED(1) AT Sun Sep 15 04:18:02 PM CEST 2024 [Status=0]
(BSM tests)
ENDED(1) AT Sun Sep 15 04:28:37 PM CEST 2024 [Status=0]

24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt
@valassi
Member

valassi commented Sep 15, 2024

Both this #955 'goodhel' and the additional #986 'additional goodhel' (which would be needed to fix the master conflicts) are included in #992, which is about to be merged.

In any case, I have marked that #992 also fixes the issues #872 and #950 that are fixed in this #986 and in the underlying #955.

Closing this for simplicity: all this work will be merged via #992.
