
Complete PR 955 for goodhel (resync to master, regenerate code) #986

Closed
wants to merge 519 commits

Conversation

valassi
Member

@valassi valassi commented Sep 2, 2024

This is a WIP PR to extend Olivier's PR #955 about master_goodhel.

It is meant to fix the pp_tt012j bug #872 using a different approach than my proposal in PR #935.

For the moment I just merged the latest master into Olivier's PR and regenerated gg_tt.mad, fixing all conflicts and adding a new mg5amcnlo branch.

Now I will run the CI and in parallel do more tests. Probably I will add bits and pieces of my 935 here.

…for MEK channelid processing (less verbose, add a tag)
…figSDE madgraph5#917

(Previously I had thought of adding it as CPPProcess::nconfig, but this is slightly more complex to code-generate)
…h an associated iconfig in the multichannel test madgraph5#917

This builds in C++ but gives a warning in CUDA

ccache /usr/local/cuda-12.0/bin/nvcc  -I. -I../../src -I../../../../../test/googletest/install_gcc11.3.1/include -I../../../../../test/googletest/install_gcc11.3.1/include  -Xcompiler -g  -Xcompiler -O0 -gencode arch=compute_70,code=compute_70 -gencode arch=compute_70,code=sm_70 -G -use_fast_math -Xcompiler -Wunused-parameter -I/usr/local/cuda-12.0/include/ -DUSE_NVTX  -std=c++17  -ccbin /usr/lib64/ccache/g++ -DMGONGPU_FPTYPE_DOUBLE -DMGONGPU_FPTYPE2_DOUBLE -Xcompiler -fPIC -DMGONGPU_CHANNELID_DEBUG -c -x cu runTest.cc -o runTest_cuda.o
runTest.cc(61): warning #20091-D: a __device__ variable "mgOnGpu::channel2iconfig" cannot be directly read in a host function
Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"
…uda builds madgraph5#917 by adding a second copy of the channel2iconfig array on the host

This is quite ugly; a better solution could eventually be implemented using a single host instance copied to the device with cudaMemcpyToSymbol.
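
A minimal sketch of that alternative, under placeholder assumptions (hostChannel2iconfig, devChannel2iconfig and NCHANNELS are hypothetical names, not the actual mgOnGpu definitions): keep a single host-side table as the only hand-written copy, and mirror it into the device symbol once at initialisation with cudaMemcpyToSymbol, so host code reads the host table and kernels read the mirrored symbol.

// Sketch only: one host-side source of truth mirrored into a device symbol,
// instead of two independently maintained copies. All names are placeholders.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int NCHANNELS = 3;                                     // placeholder channel count
static const int hostChannel2iconfig[NCHANNELS] = { 1, 2, -1 };  // host copy (the only one written by hand)

__device__ int devChannel2iconfig[NCHANNELS];                    // device mirror, filled once at startup

__global__ void dumpIconfig( int ichan )
{
  // device code reads the mirrored symbol
  printf( "ichan=%d iconfig=%d\n", ichan, devChannel2iconfig[ichan] );
}

int main()
{
  // one-time initialisation: copy the host table into the device symbol
  if( cudaMemcpyToSymbol( devChannel2iconfig, hostChannel2iconfig, sizeof( hostChannel2iconfig ) ) != cudaSuccess ) return 1;
  dumpIconfig<<<1, 1>>>( 1 );   // kernels now see the same values as the host table
  cudaDeviceSynchronize();
  return 0;
}

Whether this beats the explicit second host copy depends on where the table must be read during code generation, so treat it only as an illustration of the cudaMemcpyToSymbol idea.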
…after fixing the segfault madgraph5#917 in runTest

CUDACPP_RUNTEST_DUMPEVENTS=1 ./runTest_cuda.exe
\cp ../../test/ref/dump* ../../../CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/test/ref
…raph5#917, will now be able to regenerate the txt2 file

However, this fails at runtime:
- in C++, "runTest.cc:67: Assertion `channelId > 0' failed"
- in CUDA, "iconfig=1, channelId=4528" while "iconfig=2, channelId=2"?
…nnels and a sanity check madgraph5#910 that it equals CPPProcess::ndiagrams (but IT DOES NOT! bug madgraph5#919)
…tart idiagram loop at 0) and work around madgraph5#919 (loop until nchannels, not ndiagrams)

The test now proceeds without asserts in cppnone but needs a new reference txt2 file, which I will create on a CUDA build
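
A minimal illustration of the sanity check and loop-bound workaround described in the two commits above, with placeholder values (ndiagrams, nchannels and channel2iconfig here are stand-ins, not the generated CPPProcess constants):

// Sketch only: the original check assumed one channel per diagram, which bug #919 violates.
#include <cassert>

constexpr int ndiagrams = 3;                 // placeholder for CPPProcess::ndiagrams
constexpr int channel2iconfig[] = { 1, 2 };  // placeholder channel-to-iconfig map
constexpr int nchannels = sizeof( channel2iconfig ) / sizeof( channel2iconfig[0] );

int main()
{
  //assert( nchannels == ndiagrams );              // the sanity check that fails (bug #919)
  for( int ichan = 0; ichan < nchannels; ichan++ ) // workaround: loop until nchannels, not ndiagrams
    assert( channel2iconfig[ichan] > 0 );          // each channel must map to a valid iconfig
  return 0;
}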
…(after fixing madgraph5#885 for NB_WARP_USED)

./tmad/teeMadX.sh -ggtt +10x
… after fixing the assert madgraph5#920 in runTest

CUDACPP_RUNTEST_DUMPEVENTS=1 ./runTest_cuda.exe
\cp ../../test/ref/dump* ../../../CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/test/ref
…annelid in MatrixElementKernelBase, and replace OMPFLAGS by USEOPENMP madgraph5#758
…issues remain

STARTED  AT Thu Jul 18 01:32:29 AM CEST 2024
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Thu Jul 18 03:05:32 AM CEST 2024 [Status=2]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Thu Jul 18 03:22:07 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Thu Jul 18 03:29:29 AM CEST 2024 [Status=2]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Thu Jul 18 03:31:45 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Thu Jul 18 03:33:58 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common
ENDED(6) AT Thu Jul 18 03:36:17 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean
ENDED(7) AT Thu Jul 18 03:54:13 AM CEST 2024 [Status=2]

Note: the PASSED tag has disappeared (for a while now); the throughputX.sh script must be fixed madgraph5#922
…aster now?

Note: a printout from MEK about the channels has to be added

STARTED  AT Thu Jul 18 03:54:13 AM CEST 2024
(SM tests)
ENDED(1) AT Thu Jul 18 08:18:13 AM CEST 2024 [Status=0]
(BSM tests)
ENDED(1) AT Thu Jul 18 08:26:52 AM CEST 2024 [Status=0]

24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
0 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt
@valassi
Member Author

valassi commented Sep 2, 2024

Hi @oliviermattelaer, this is my completion of your master_goodhel.

There are many diffs, unfortunately, because your master_goodhel does not include the latest master.

In this PR I am not doing much anyway:

  • merge master into your PR
  • regenerate all code in the repo
  • rerun some manual tests

Specifically:

[avalassi@itscrd90 gcc11/usr] /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp> git log --oneline upstream/master..HEAD
4c35b61f3 (HEAD -> goodhel, origin/goodhel, ghav/goodhel) [goodhel] ** COMPLETE GOODHEL ** rerun 30 tmad tests on itscrd90 - all as expec>
48fa9bfeb [goodhel] rerun 102 tput tests on itscrd90 - all ok
016433bea [goodhel] regenerate all processes - erase them and add them from scratch, because many things changed in 3.6.0
94f9f9bf0 [goodhel] rerun tmad ggtt test - all ok (even if now reset_cumulative_variable is no longer called)
ae6652c46 [goodhel] regenerate gg_tt.mad - add to the repo couplings3.f and PDF/lep_densities, which I had previously forgotten
d42666dc2 [goodhel] regenerate CODEGEN patch from gg_tt.mad (note that genps.inc is now longer needed)
2cd6764ec [goodhel] regenerate gg_tt.mad - check all is ok, no change
002dee873 Merge remote-tracking branch 'upstream/master' into goodhel
d582f6c86 [goodhel] move to upstream/master codegen logs of gg_tt for easier merging
a08d91f43 [goodhel] update MG5AMC from ffb7a2e01 (gpucpp_goodhel) to 0c754d6d8 (valassi_goodhel, i.e. gpucpp plus the merge of gpucpp_goo>
b986a8258 [goodhel] regenerate gg_tt.sa just as a cross check: the only change is that 3.5.3 becomes 3.6.0, but the code is identical
763479db5 [goodhel] regenerate gg_tt.mad using the current CODEGEN from upstream/master_goodhel
c448e89db (upstream/master_goodhel) random test
b304cc905 fixing a patch that was relying on imirror
df130b077 temporary fix to set limhel to 0 for cudacpp
76ba27a50 change mg5amc version to use
a8262594c fix patch to be compatible with mg5 branch gpucpp_goodhel

If you agree, I will (EDITED! I was missing the mg5amcnlo PRs)

@oliviermattelaer
Member

So yes, this is good to be merged (I did not review the diff and just trust you on your changes).
So yes, all those 4 PRs can move forward.

Thanks,

Olivier

valassi and others added 16 commits September 3, 2024 10:58
…c9350 (latest gpucpp, including the merge of the former via mg5amcnlo#121)

NB: the contents of 4ef15cab1 and a696c9350 are identical - but a696c9350 is pointing again to the HEAD of gpucpp
Merge of master into master_june24 and channelid fixes/reimplementation
Merge master_june24 into master (including channelid fixes/reimplementation)
…72a63ab (valassi_goodhel_june24, i.e. merge of valassi_goodhel into gpucpp, with two conflict-fix commits)
… for easier merging

git checkout upstream/master $(git ls-tree --name-only upstream/master */CODEGEN*txt)
…master for easier merging

git checkout upstream/master $(git ls-tree --name-only upstream/master tput/logs* tmad/logs*)
…the latest upstream/master for easier merging

git checkout upstream/master $(git ls-tree --name-only upstream/master *.mad | grep -v ^gg_tt.mad)
…raph5#822 and madgraph5#985) into goodhel

Fix conflicts:
- MG5aMC/mg5amcnlo (keep current version)
- epochX/cudacpp/CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.P1 (use upstream/master, will rebuild anyway)
- epochX/cudacpp/gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f (keep goodhel comment style)
- epochX/cudacpp/gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f (keep goodhel comment style)
- epochX/cudacpp/gg_tt.mad/bin/internal/file_writers.py (keep goodhel version)
…robably need more iterations)

Build the patch from the following files, though some may no longer be necessary:
- 2 in patch.common: Source/genps.inc (can be removed?), SubProcesses/makefile
- 3 in patch.P1: auto_dsig1.f (can be removed?), driver.f, matrix1.f

./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/Source/genps.inc gg_tt.mad/SubProcesses/makefile > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.P1
git checkout gg_tt.mad
In particular, remove the patches resulting in a different style of comments

This is the equivalent of building the patch from the following files in gg_tt.mad:
- 1 in patch.common: SubProcesses/makefile
- 2 in patch.P1: driver.f, matrix1.f

./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/makefile > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.P1
git checkout gg_tt.mad
…: a few comments have changed, but this is functionally equivalent

(I do not like the comment style, but this will need to be changed upstream in mg5amcnlo: focus on minimising the patch size instead)
…nally equivalent, only a few line numbers change)

Only the following files are needed to build the patch:
- 1 in patch.common: SubProcesses/makefile
- 2 in patch.P1: driver.f, matrix1.f

./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/makefile > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/PLUGIN/CUDACPP_SA_OUTPUT/MG5aMC_patches/PROD/patch.P1
git checkout gg_tt.mad
…a 1 per mille cross section mismatch?! madgraph5#991

./tmad/teeMadX.sh -ggtt -makeclean +10x
…ne24) to 61fdd0d8e (new valassi_goodhel_june24, including commits cherry-picked from ghav_gpucpp_june24 to try and fix fortran xsec issue madgraph5#991)
…nitialization lines appear in dsample.f, will this fix madgraph5#991?
…o fails with a 1 per mille cross section mismatch madgraph5#991

./tmad/teeMadX.sh -ggtt -makeclean +10x
@valassi
Member Author

valassi commented Sep 3, 2024

So yes, this is good to be merged (I did not review the diff and just trust you on your changes).
So yes, all those 4 PRs can move forward.

Hi @oliviermattelaer , thanks.

Unfortunately, as discussed, now that the june24 stuff has been merged into gpucpp and master, there are conflicts to solve.

Anyway, here I have just added a few commits that address those merge conflicts.

Apart from the fact that the way I fixed the conflicts must be checked (especially for the tests), my main issue is that a tmad ggtt test now fails, see #991.

@oliviermattelaer
Member

So the issue with #991 is fixed and is part of PR #992.

valassi added a commit to valassi/madgraph4gpu that referenced this pull request Sep 15, 2024
…or360 madgraph5#992, which includes his/my goodhel madgraph5#955 and madgraph5#986) - all ok

STARTED  AT Sun Sep 15 09:27:42 AM CEST 2024
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Sun Sep 15 11:28:15 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Sun Sep 15 11:41:01 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Sun Sep 15 11:51:46 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Sun Sep 15 11:54:36 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Sun Sep 15 11:57:23 AM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common
ENDED(6) AT Sun Sep 15 12:00:17 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean
ENDED(7) AT Sun Sep 15 12:22:27 PM CEST 2024 [Status=0]
valassi added a commit to valassi/madgraph4gpu that referenced this pull request Sep 15, 2024
…r360 madgraph5#992, which includes his/my goodhel madgraph5#955 and madgraph5#986) - all ok

NB: with respect to my goodhel madgraph5#986, the xsec mismatch madgraph5#991 in ggtt is now fixed (by Olivier's additional commit 7d0a553 I assume)

The only failure is the expected LHE mismatch madgraph5#833 in heft_gg_bb fptype=f

STARTED  AT Sun Sep 15 12:22:27 PM CEST 2024
(SM tests)
ENDED(1) AT Sun Sep 15 04:18:02 PM CEST 2024 [Status=0]
(BSM tests)
ENDED(1) AT Sun Sep 15 04:28:37 PM CEST 2024 [Status=0]

24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuBis/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt
@valassi
Member Author

valassi commented Sep 15, 2024

So the issue with #991 is fixed and is part of PR #992.

Thanks a lot @oliviermattelaer! Yes, I confirm what you said.

Let's keep this open; it should automatically appear as merged when we merge #992.

In any case, I will mark that #992 also fixes issues #872 and #950, which are fixed in this #986 and in the underlying #955.

@valassi
Member Author

valassi commented Sep 15, 2024

No, actually this was into master_goodhel. This is no longer necessary. It has been included in master_for360 and will be included in master via #992. Closing this, to start a cleanup.

@valassi valassi closed this Sep 15, 2024
@valassi valassi mentioned this pull request Sep 15, 2024