
Allow configuration to tell cat-log to fetch logs from (local) disk rather than remote host #506

Open
jarich opened this issue Sep 15, 2023 · 17 comments · May be fixed by #509

Comments

@jarich

jarich commented Sep 15, 2023

For jobs running via PBS, job logs are not available until the job has completed (yes, even if the job runs for 4 hours and generates GBs of logs). I don't know why this is the case, but it is the case. PBS spools log messages somewhere and then moves them into the correct locations when the job is done. As such, if remote log fetching is enabled, then pulling the job logs from the job host means additional SSH connections, network traffic and load on the login hosts for zero value. This is even more the case when those connections are left open by users who navigate to another tab or whatever.

It would be ideal if there was a configuration item that told cat-log to only serve files from local disk, if that is what the user wanted.

@jarich jarich changed the title Allow configuration to tell cat-log to fetches logs from (local) disk rather than remote host Allow configuration to tell cat-log to fetch logs from (local) disk rather than remote host Sep 15, 2023
@hjoliver
Member

Apparently (recent versions of?) PBS do allow you to write job stdout and stderr directly to their final destinations.

3.3.6 Writing Files Directly to Final Destination

If the MoM on the primary execution host can reach the final destination, she can write the job's standard output and
standard error files to that destination. To be reachable, the final destination host and path must either be on the
execution host, or be mapped from the primary execution host via the $usecp directive in the MoM configuration file.
To specify that standard output and/or standard error should be written directly to their final destinations, use the
d sub-option to the -k option to qsub or qalter. Indicate which files to write via the e and/or o sub-options.

For example, to directly write both output and error to their final destinations:
qsub -koed -o <output path> -e <error path> job.sh

To directly write output to its final destination, and let error go through normal spooling and staging:
qsub -kod -o <output path> job.sh

From https://2021.help.altair.com/2021.1.2/PBS%20Professional/PBSUserGuide2021.1.2.pdf

@dpmatthews is this used with Cylc at your site?

Not that this invalidates the Issue - it's a reasonable request regardless.

@jarich
Author

jarich commented Sep 18, 2023

I notice in flow/scripts/cat_log.py the documentation claims:

If remote job logs are retrieved to the workflow host on completion (global
config '[JOB-HOST]retrieve job logs = True') and the job is not currently
running, the local (retrieved) log will be accessed unless '-o/--force-remote'
is used.

If that is supposed to be the case, there may be a bug in the calling code: when I attempt to select files for jobs I know are complete (and which I can see in my cylc-run directory), I can't see these files through the log viewer.

@hjoliver This comes back to what we spoke about with respect to only showing the default file "job-activity.log" when SSH keys have not been set up properly.

[screenshot attached]

@dpmatthews
Contributor

@hjoliver We have a locally written command which provides access to the stdout and stderr of a job whilst it is running. For all PBS platforms we configure:

        err tailer = qcat -f -e %(job_id)s
        out tailer = qcat -f -o %(job_id)s
        err viewer = qcat -e %(job_id)s
        out viewer = qcat -o %(job_id)s
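
For reference, tailer/viewer commands like these are set per platform in Cylc's global configuration; a sketch of where they live (the platform name "pbs_cluster" is made up for illustration; qcat is the site-local command described above):

```
# global.cylc (sketch only; "pbs_cluster" is an illustrative platform name)
[platforms]
    [[pbs_cluster]]
        job runner = pbs
        err tailer = qcat -f -e %(job_id)s
        out tailer = qcat -f -o %(job_id)s
        err viewer = qcat -e %(job_id)s
        out viewer = qcat -o %(job_id)s
```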

@jarich

This comes back to what we spoke about with respect to only showing the default file "job-activity.log" when SSH keys have not been set up properly.

I think the UI server is currently using the cat-log --force-remote option, so I'm not surprised it isn't working correctly if SSH is failing.

@dpmatthews
Contributor

I think this issue can be resolved simply by removing the use of the --force-remote option. However, this will need a bit of investigation to make sure we understand the implications (@oliver-sanders originally thought it would be needed).

@hjoliver
Member

We have a locally written command which provides access to the stdout and stderr of a job whilst it is running.

@jarich - I had forgotten we supported that, because we've never needed it at my site. But that sounds like something you should do at yours?

@jarich
Author

jarich commented Sep 19, 2023

We have a locally written command which provides access to the stdout and stderr of a job whilst it is running.

@jarich - I had forgotten we supported that, because we've never needed it at my site. But that sounds like something you should do at yours?

I will mention it to those who are able to decide whether to do it or not. I'll also look at making a local edit to remove --force-remote for the time being.

@jarich
Author

jarich commented Sep 20, 2023

I commented out the line

    '--force-remote',

and removed the -o from

    cmd: List[str] = ['cylc', 'cat-log', '-m', 'l', '-o', id_.id]

and I can confirm that this achieves the goals I want without yet causing any problems.
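
The effect of that local patch can be sketched as follows (build_cat_log_cmd is a hypothetical wrapper for illustration, not the actual function name in the UI server):

```python
from typing import List

def build_cat_log_cmd(task_id: str) -> List[str]:
    """Build the UI server's cat-log invocation without -o/--force-remote,
    so retrieved local logs are served when they exist on disk."""
    # '-m l' selects cat-log's "list" mode; the '-o' flag is omitted
    return ['cylc', 'cat-log', '-m', 'l', task_id]
```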

Scott will be creating a PR to allow this to be set by configuration.

@hjoliver
Member

I think this issue can be resolved simply by removing the use of the --force-remote option. However, this will need a bit of investigation to make sure we understand the implications (@oliver-sanders originally thought it would be needed).

@oliver-sanders, in light of @jarich's findings, do you recall the reason for the use of --force-remote? Was it to ensure that we always get an up-to-date log file rather than a stale local one? Even if that's wanted, maybe we could skip remote retrieval if the task has finished and a local log exists. (Speculation from a chat with Jacinta and Scott today.)
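
That fallback could look something like this (a sketch only; the helper name and arguments are hypothetical, not Cylc internals):

```python
from pathlib import Path

def should_fetch_remote(task_finished: bool, local_log: Path) -> bool:
    """Only go to the remote host when the local copy cannot be trusted:
    the task is still running, or the retrieved log is missing."""
    return not (task_finished and local_log.is_file())
```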

@jarich
Author

jarich commented Sep 21, 2023

Establishing that the job has finished could be done by calling get_task_job_attrs, and I presume the required arguments for that could be determined by unwrapping the id with parse_ids.

@ScottWales linked pull request #509 on Sep 21, 2023 that will close this issue
@dpmatthews
Contributor

Establishing that the job has finished could be done by calling get_task_job_attrs and I presume the required arguments for that could be determined by unwrapping id with parse_ids

As far as I can see, cat-log already gets the log files from the remote platform for running jobs so we don't need to use the --force-remote option in the UI server (and we don't need it to be configurable). We just need to confirm this works correctly in all cases.

@oliver-sanders
Member

@oliver-sanders, in light of @jarich 's findings, do you recall the reason for the use of force-remote?

We got things set up so that the cylc-uiserver log view worked. We were aware that we were going remote in some situations where we didn't necessarily need to. Improving cat-log to avoid this is on the list but a much lower priority for us than other ongoing work.

Some edge cases to consider:

  • Platforms where job log retrieval is not configured.
  • Lag between a job completing and the log files being synced back.
  • Log files which exceed the maximum filesize so are not synced back, but which still need to be visible.
  • Offline log file viewing for tasks which were active when the scheduler shut down.

The easiest way to avoid these issues without having to look into the implementation in depth was to go to the remote platform for the logs.

@hjoliver
Member

@dpmatthews - given @oliver-sanders' response there, I think making it configurable as per #509 is a reasonable solution for the moment, to give BOM what they need. With the proviso that the config might not be needed after some future release. Agree?

@oliver-sanders
Member

There's no need for configuration here, we just need to invest the time to make any required improvements to cat-log such that the option isn't needed.

Personally I have not had the time to spare to think this through so have left the default as is.

@dpmatthews
Contributor

We should put the effort into checking whether removing the option causes any problems, rather than adding configuration we don't want.

@hjoliver
Member

hjoliver commented Sep 24, 2023

There's no need for configuration here, we just need to invest the time to make any required improvements to cat-log such that the option isn't needed.

We should put the effort into checking whether removal of the option causes any problems rather than adding configuration we don't want

The problem with that is it's needed quite urgently (apparently) and your approach requires work from us that we probably cannot spare the time for right now.

(Hence my comment "I think making it configurable as per #509 is a reasonable solution for the moment, to give BOM what they need.")

Of course I'm not keen on adding config that will later become unnecessary, but that can't be an absolute rule - it depends on how long it will take to get the feature fixed.

I guess the alternative is temporary local patching.

@dpmatthews
Contributor

I think the risk of removing this option now is fairly small and we can probably address any issues it causes quite quickly if necessary. So, I'd prefer to just remove the option rather than cause additional work by making it configurable.

@jarich
Author

jarich commented Sep 26, 2023 via email
