Fix cudacpp getGoodHel to define a good helicity as ME>0 (fix mismatch in cross sections for fortran and cpp in gg_uu) #705
Conversation
…uce madgraph5#630 with only one P1
…processes (u d s c) in P1
…re including u d s c)
…ly 2 (u d, NO c s) processes in P1
…ly 1 (u, NO d c s) process in P1
…ly simply a u quark (madgraph5#630) Will add gu_ttu to the git repo to investigate further
…madgraph5#630 ./CODEGEN/generateAndCompare.sh gu_gu -c "generate g u > g u" --mad
…madgraph5#630 ./CODEGEN/generateAndCompare.sh gg_gg -c "generate g g > g g" --mad
…madgraph5#630 ./CODEGEN/generateAndCompare.sh gg_uu -c "generate g g > u u~" --mad
… some things are different
…umerators and denominators that vary a lot with helicities!?
… result from diagram cancellations, where num and denom from individual diagrams are still large! This completes most of the debugging phase, will start reverting to cleaner code
…EL=0 (NB this is NOT the same definition of LIMHEL as in matrix1.f)
…MHEL=0 pushd gg_uu.mad/SubProcesses/P1_gg_uux/; make -f cudacpp.mk avxall -j; popd; ./tmad/teeMadX.sh -gguu +10x
…rtran and cudacpp, but the code crashes...
To add to the list of things to do:
- understand why LIMHEL>0 crashes in fortran!
Will revert this and maybe try a different way to use a subset of helicities
Revert "[fvsc] in gguu, try to increase LIMHEL back from 0 to 1E-8 in both fortran and cudacpp, but the code crashes..." This reverts commit dcfabab. Revert "[fvsc] rerun gguu tmad, test ok with new getgoodhel comparing if > LIMHEL=0" This reverts commit 98902c8. Revert "[fvsc] in gguu.mad, rewrite getgoodhel in terms of comparison to LIMHEL=0 (NB this is NOT the same definition of LIMHEL as in matrix1.f)" This reverts commit daa69a8.
…ortran), which is a suppressed one... the xsecs both change, though not exactly in the same way
For the moment, will keep LIMHEL=0 and the new getgoodhel implementation... as for the rest of LIMHEL etc, this will be in another issue
Will revert this now
Revert "[fvsc] in gguu, last test for the moment: skip ihel=15 (c++) or 16 (fortran), which is a suppressed one... the xsecs both change, though not exactly in the same way" This reverts commit 774f5ea.
STARTED AT Sun Aug 13 22:41:05 CEST 2023
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Mon Aug 14 01:15:05 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Mon Aug 14 01:34:33 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Mon Aug 14 01:44:48 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Mon Aug 14 01:48:01 CEST 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Mon Aug 14 01:51:13 CEST 2023 [Status=0]
…but gqttq is still not ok
STARTED AT Mon Aug 14 01:54:30 CEST 2023
ENDED AT Mon Aug 14 06:16:31 CEST 2023 Status=0
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gguu_mad/log_gguu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gguu_mad/log_gguu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gguu_mad/log_gguu_mad_m_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
…graph5#630 and reasonably understood
Will include it back for further studies when/if needed
This MR is now ready for merging. The MR contains a lot of investigation into comparisons of fortran and cpp cross sections in tmad tests for several processes, including gq_ttq but especially focusing on gg_uu. Many of these investigations will result in new issues being opened shortly. One of the main issues identified, specifically for gguu and following up on work by @zeniheisser, is that the old getGoodHel implementation was wrong. The goal was to define a good helicity as one providing a non-zero ME contribution, but the implementation was unable to detect helicities whose relative contribution is lower than 1E-16 (or whatever the FP precision is). Therefore gguu in cudacpp was using 6 instead of 8 good helicities, resulting in cross sections different from fortran. This is now fixed: the contribution is compared to 0 (eventually, the same LIMHEL may be used in fortran and cudacpp). The main benefit of this MR is therefore the getGoodHel fix (this closes #630). During the investigation I had added gguu.mad to the repo, but I am now removing it to avoid having too many processes there. Note that gq_ttq cross sections are still wrong; this is now moved to a new issue #748.
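To make the fix concrete, below is a minimal standalone sketch (not the actual cudacpp code) contrasting the old and new good-helicity definitions described above. The names ncomb, calculate_me, getGoodHelOld and getGoodHelNew, and the mock contribution values, are hypothetical placeholders; the "old" check is only a paraphrase of the running-sum behaviour that loses contributions below FP precision.

```cpp
// Minimal sketch (NOT the actual cudacpp code) of the good-helicity selection fix.
#include <array>
#include <cstdio>

constexpr int ncomb = 16; // e.g. 16 helicity combinations for gg > u u~

// Hypothetical mock of the per-event, per-helicity ME contribution:
// one large and one strongly suppressed (but non-zero) helicity.
double calculate_me( int /*ievt*/, int ihel )
{
  if( ihel == 0 ) return 1.0;
  if( ihel == 1 ) return 1e-30; // suppressed but non-zero: should still be "good"
  return 0.0;
}

// Old idea (paraphrased): a helicity was flagged good only if adding its
// contribution visibly changed the running sum of MEs. Contributions below
// ~1E-16 of the accumulated sum are lost to FP rounding, so suppressed
// helicities were missed (6 instead of 8 good helicities in gg > u u~).
std::array<bool, ncomb> getGoodHelOld( int nevt )
{
  std::array<bool, ncomb> isGoodHel{}; // all false initially
  for( int ievt = 0; ievt < nevt; ievt++ )
  {
    double metot = 0;
    for( int ihel = 0; ihel < ncomb; ihel++ )
    {
      const double meold = metot;
      metot += calculate_me( ievt, ihel );
      if( metot != meold ) isGoodHel[ihel] = true; // misses tiny relative contributions
    }
  }
  return isGoodHel;
}

// New idea: flag a helicity as good if its own contribution is strictly above
// LIMHEL=0, independently of the size of the other helicity contributions.
std::array<bool, ncomb> getGoodHelNew( int nevt )
{
  constexpr double LIMHEL = 0; // NB: not (yet) the same LIMHEL as in fortran matrix1.f
  std::array<bool, ncomb> isGoodHel{}; // all false initially
  for( int ievt = 0; ievt < nevt; ievt++ )
    for( int ihel = 0; ihel < ncomb; ihel++ )
      if( calculate_me( ievt, ihel ) > LIMHEL ) isGoodHel[ihel] = true;
  return isGoodHel;
}

int main()
{
  const auto oldHel = getGoodHelOld( 1 );
  const auto newHel = getGoodHelNew( 1 );
  // The suppressed helicity ihel=1 is missed by the old check but kept by the new one.
  std::printf( "ihel=1 good? old=%d new=%d\n", (int)oldHel[1], (int)newHel[1] ); // old=0 new=1
  return 0;
}
```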
The CI tests have passed, I am self-merging this one.
…ng getgoodhel/uu_gg madgraph5#705 into upstream/master)
WIP: add gu_ttu (with different cross sections for fortran and cpp)
This is an MR to investigate #630. The issue does not only affect gq_ttq where q is any one of 8 quarks/antiquarks; it also affects the case where q is simply a u quark.