
Adjust C768 resources for Hera #2819

Merged
DavidHuber-NOAA merged 26 commits into NOAA-EMC:develop from fix/c768_hera on Sep 20, 2024

Conversation

@DavidHuber-NOAA (Contributor) commented Aug 9, 2024

Description

This modifies the resources for gdasfcst (on all machines) and enkfgdaseupd (Hera only). For the fcst job, the number of write tasks is increased to prevent out-of-memory errors from the inline post. For the eupd job, the number of tasks is decreased to prevent out-of-memory errors; with the reduced count, the eupd runtime was just over 10 minutes. A rough sketch of the eupd settings follows the issue list below.

Resolves #2506
Resolves #2498
Resolves #2916
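
For illustration, the eupd change discussed in this thread boils down to roughly the following resource settings. This is a minimal sketch, not the actual diff: the if-block style mirrors the workflow's shell configs, and the values (ntasks=80, threads_per_task=20) come from the conversation below.

# Hypothetical sketch of the Hera-only eupd resource change (not the actual diff).
if [[ "${machine}" == "HERA" ]]; then
  export ntasks=80            # reduced task count to avoid out-of-memory errors
  export threads_per_task=20  # two tasks per 40-core Hera node
fi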

Type of change

  • Bug fix (fixes something broken)

Change characteristics

How has this been tested?

Successfully ran through the mentioned jobs at least once each. More testing to come.

Checklist

  • Any dependent changes have been merged and published
  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • My changes generate no new warnings
  • New and existing tests pass with my changes

* origin/develop:
  Stage atmospheric backgrounds and UFS cubed-sphere history files (NOAA-EMC#2792)
  Check that a PR driver is still running before trying to kill it (NOAA-EMC#2799)
  Feature/get arch adds an empty archive job to GEFS system (NOAA-EMC#2772)
  Marine DA updates (NOAA-EMC#2802)
  Revert MSU FIX_DIRs back to glopara (NOAA-EMC#2811)
  Bugfix for updating label states in Jenkins (NOAA-EMC#2808)
  Clean-up temporary rundirs - take 2. (NOAA-EMC#2753)
  Change land surface for HR4 (NOAA-EMC#2787)
  Run METplus serially and correct the name of prod tasks (NOAA-EMC#2804)
  Update Java Agent launching script for Jenkins connections (NOAA-EMC#2762)
  Fix erroneous cdump addition (NOAA-EMC#2803)
  Update ocean post-processing triggers (NOAA-EMC#2784)
  Update the gfs_utils repository hash (NOAA-EMC#2801)
  Add fixes for metplus jobs when gfs_cyc=2 or 4 (NOAA-EMC#2791)
  Simplify resource-related variables, remove CDUMP where unneeded (NOAA-EMC#2727)
  Remove f000 from atmos rocoto tasks for replay cases (NOAA-EMC#2778)
@spanNOAA

It seems that this pull request does not resolve the issue with eupd. For further details, you may review the log files located at:
/scratch2/BMC/wrfruc/Sijie.Pan/ufs-ar/comroot/stmp/RUNDIRS/C768_6hourly_0210/eupd.1499050/stderr
/scratch2/BMC/wrfruc/Sijie.Pan/ufs-ar/comroot/stmp/RUNDIRS/C768_6hourly_0210/eupd.2575750/stderr
These logs should provide more insight into the problem.

@DavidHuber-NOAA (Contributor, Author)

@spanNOAA You are correct. Somewhere along the way, I accidentally dropped the working configuration. It should be running two tasks per node (i.e., threads_per_task=20).

I am presently working to see if I can spread the tasks out differently so that more tasks can run on each node. The issue is that the first four tasks (the I/O tasks) must each hold an enormous amount of data in memory (about 40 GB), while the remaining tasks are much less memory intensive. One way to solve this would be an arbitrary distribution scheme rather than the default block distribution, something like:

srun -N <n nodes> ... --distribution=arbitrary -w tux[0,1,2,3,4,4,4,4,4,5,5,5,5,5,...,n-1, n-1, n-1, n-1, n-1]

But this is not trivial to program. I will likely go with threads_per_task=20 for the time being, but I want to run one more test first and will then update the PR.
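
For reference, SLURM's arbitrary distribution takes task placement from a host file (one hostname per task, in task order) supplied via the SLURM_HOSTFILE environment variable. Below is a rough, untested sketch of the layout described above, isolating the four memory-heavy I/O tasks on their own nodes and packing the remaining tasks five per node; the file name and per-node counts are illustrative.

# Build a host file for --distribution=arbitrary; tasks land on the listed
# hosts in order, so the first four (I/O) tasks each get a node to themselves.
hosts=($(scontrol show hostnames "${SLURM_JOB_NODELIST}"))
hostfile="./eupd_hostfile"
: > "${hostfile}"
for i in 0 1 2 3; do                        # one I/O task per node
  echo "${hosts[$i]}" >> "${hostfile}"
done
for ((n = 4; n < ${#hosts[@]}; n++)); do    # five tasks per remaining node
  for t in 1 2 3 4 5; do
    echo "${hosts[$n]}" >> "${hostfile}"
  done
done
export SLURM_HOSTFILE="${hostfile}"
srun --distribution=arbitrary ...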

@spanNOAA


@DavidHuber-NOAA Thanks for the update.

@guoqing-noaa (Contributor)

Thanks, @DavidHuber-NOAA!

@spanNOAA Does threads_per_task=20 work for your case?

@spanNOAA

@DavidHuber-NOAA ntasks=80 works now.

@guoqing-noaa (Contributor) left a comment

This PR solved our C768 issues on Hera.

Thanks, @DavidHuber-NOAA

@DavidHuber-NOAA (Contributor, Author)

@WalterKolczynski-NOAA @aerorahul @RussTreadon-NOAA Note that this PR (and the dependent gsi-utils PR NOAA-EMC/GSI-utils#49) contains changes that will affect the C768 forecast on all machines and the gdasanalcalc job at all resolutions. I have not yet performed testing to verify that results did not change; I should be able to do that on WCOSS2 next week. Opening for review in the meantime, but please do not merge until that testing is done.

@DavidHuber-NOAA added the blocked label Aug 20, 2024
@RussTreadon-NOAA (Contributor)

Thank you @DavidHuber-NOAA for running tests on WCOSS2. Hopefully the changes to gdasanalcalc via GSI-utils PR #49 do not alter results.

@RussTreadon-NOAA (Contributor)

@CatherineThomas-NOAA, tagging you for awareness. Not sure if or how these changes might impact plans for using Hera to run GFS v17 parallels.

@emcbot added the CI-Orion-Failed label and removed the CI-Orion-Failed, CI-Orion-Building labels Sep 19, 2024
@emcbot commented Sep 19, 2024

CI Failed on Orion in Build# 3
Built and ran in directory /work2/noaa/stmp/CI/ORION/2819


Experiment C48_S2SWA_gefs_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C48_S2SWA_gefs_1921288e Terminated: **
Experiment C96_atm3DVar_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C96_atm3DVar_1921288e Terminated: **
Experiment C48_S2SW_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C48_S2SW_1921288e Terminated: **
Experiment C96C48_hybatmDA_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C96C48_hybatmDA_1921288e Terminated: **

@WalterKolczynski-NOAA (Contributor)

The tracker is explicitly disabled on Hercules in the J-Jobs:

if [[ "${machine}" == 'HERCULES' ]]; then exit 0; fi

I'll remove this and place it in config.base instead for both Hercules and Orion.

We really should be setting this in the hosts file.
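
A minimal sketch of what such a config.base toggle might look like; the DO_TRACKER variable name is an assumption here, and the actual switch used by the workflow may differ.

# Hypothetical config.base toggle (DO_TRACKER is an assumed variable name).
case "${machine}" in
  "HERCULES" | "ORION")
    export DO_TRACKER="NO"    # tracker is not currently run on these machines
    ;;
  *)
    export DO_TRACKER="YES"
    ;;
esac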

@DavidHuber-NOAA (Contributor, Author)

@TerrenceMcGuinness-NOAA Can you tell what happened with the Orion CI tests? It almost looks like the EXPDIR directories were deleted mid-run.

@TerrenceMcGuinness-NOAA added the CI-Orion-Building label and removed the CI-Orion-Failed label Sep 19, 2024
@emcbot added the CI-Hercules-Running label and removed the CI-Hercules-Building label Sep 19, 2024
@TerrenceMcGuinness-NOAA (Collaborator) commented Sep 19, 2024

@DavidHuber-NOAA
> Can you tell what happened with the Orion CI tests? It almost looks like the EXPDIR directories were deleted mid-run.

Yes, it did...

The Jenkins custom workspace is designated per the PR name. When a PR's CI is rerun after a previous failure, it uses the same path and removes COMROOT and EXPDIR under its RUNDIRS directory. I couldn't tell from the logs what specifically happened with the mislabeling; my hunch is that the job was actually hung in the controller.

@DavidHuber-NOAA (Contributor, Author)

> The tracker is explicitly disabled on Hercules in the J-Jobs ... I'll remove this and place it in config.base instead for both Hercules and Orion. We really should be setting this in the hosts file.

This would require changing all of the host files and the CI defaults (which override whatever is in the host files) and would need to be tested on each platform. I would prefer to open a new issue to tackle this, as this PR is already addressing a few different issues and going beyond its original scope.

@DavidHuber-NOAA (Contributor, Author)

Opened #2942

@emcbot added the CI-Orion-Running and CI-Hercules-Passed labels and removed the CI-Orion-Building and CI-Hercules-Running labels Sep 19, 2024
@emcbot commented Sep 19, 2024

CI Passed on Hercules in Build# 4
Built and ran in directory /work2/noaa/stmp/CI/HERCULES/2819


Experiment C48_ATM_62ea3892 Completed 1 Cycles: *SUCCESS* at Thu Sep 19 13:24:53 CDT 2024
Experiment C48_S2SW_62ea3892 Completed 1 Cycles: *SUCCESS* at Thu Sep 19 14:31:32 CDT 2024
Experiment C96_atm3DVar_62ea3892 Completed 3 Cycles: *SUCCESS* at Thu Sep 19 14:43:42 CDT 2024
Experiment C96C48_hybatmDA_62ea3892 Completed 3 Cycles: *SUCCESS* at Thu Sep 19 14:56:08 CDT 2024
Experiment C48_S2SWA_gefs_62ea3892 Completed 1 Cycles: *SUCCESS* at Thu Sep 19 14:56:59 CDT 2024

@DavidHuber-NOAA (Contributor, Author) commented Sep 20, 2024

Looking through Orion's logs, I see that all experiments completed successfully. However, for some reason the C96_atm3DVar and C48_ATM tests, once completed, did not trigger SUCCESS notifications. Manually changing label to CI-Orion-Passed.

@DavidHuber-NOAA added the CI-Orion-Passed label and removed the CI-Orion-Running label Sep 20, 2024
@DavidHuber-NOAA merged commit 3c86873 into NOAA-EMC:develop Sep 20, 2024
5 checks passed
@DavidHuber-NOAA deleted the fix/c768_hera branch September 20, 2024 11:17
@emcbot added the CI-Orion-Failed label Sep 20, 2024
@emcbot commented Sep 20, 2024

Experiment C48_S2SWA_gefs FAILED on Orion in Build# 5 in
/work2/noaa/stmp/CI/ORION/2819/RUNTESTS/EXPDIR/C48_S2SWA_gefs_62ea3892

@emcbot added the CI-Orion-Failed label and removed the CI-Orion-Passed and CI-Orion-Failed labels Sep 20, 2024
@emcbot commented Sep 20, 2024

CI Failed on Orion in Build# 5
Built and ran in directory /work2/noaa/stmp/CI/ORION/2819


CI Failed on Orion in Build# 3
Built and ran in directory /work2/noaa/stmp/CI/ORION/2819

Experiment C48_S2SWA_gefs_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C48_S2SWA_gefs_1921288e Terminated: **
Experiment C96_atm3DVar_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C96_atm3DVar_1921288e Terminated: **
Experiment C48_S2SW_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C48_S2SW_1921288e Terminated: **
Experiment C96C48_hybatmDA_1921288e Terminated with tasks failed and dead at Thu Sep 19 11:12:47 AM CDT 2024
Experiment C96C48_hybatmDA_1921288e Terminated: **
Experiment C48_ATM_62ea3892 Completed 1 Cycles: SUCCESS at Thu Sep 19 07:33:08 PM CDT 2024
Experiment C96_atm3DVar_62ea3892 Completed 3 Cycles: SUCCESS at Fri Sep 20 05:59:17 AM CDT 2024
Experiment C96C48_hybatmDA_62ea3892 Completed 3 Cycles: SUCCESS at Fri Sep 20 06:48:06 AM CDT 2024
Experiment C48_S2SW_62ea3892 Completed 1 Cycles: SUCCESS at Fri Sep 20 07:06:25 AM CDT 2024
Experiment C48_S2SWA_gefs_62ea3892 Terminated with tasks failed and dead at Fri Sep 20 09:09:42 AM CDT 2024
Experiment C48_S2SWA_gefs_62ea3892 Terminated: **
