
Added more details to troubleshooting around restore errors and warnings #1222

Open · sseago wants to merge 1 commit into base: master
Conversation

@sseago (Contributor) commented Nov 9, 2023

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 9, 2023
@sseago (Contributor, Author) commented Nov 9, 2023

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 9, 2023
@weshayutin (Contributor) commented:
@TzahiAshkenazi let's get your eyes on this addition to troubleshooting. Can discuss in office hours as well o/

One or more errors in any of the above categories will cause a Restore to be `PartiallyFailed` rather than `Completed`. Warnings will not cause a change to the completion status.

Note that if there are "Velero" errors (but no resource-specific errors), it is possible that the restore completed without any actual problems with restoring workloads, but careful validation of post-restore applications is advisable.
For example, if there are PodVolumeRestore and/or Node Agent-related errors, check the status of PodVolumeRestores and DataDownloads -- if none of these are failed or still running, then volume data may have been fully restored.
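
For illustration, a minimal sketch of these checks, assuming the Velero/OADP namespace is `openshift-adp` (adjust for your install) and `<restore-name>` as a placeholder:

```bash
# Overall restore result: Completed vs. PartiallyFailed
oc get restore <restore-name> -n openshift-adp -o jsonpath='{.status.phase}'

# Confirm that no PodVolumeRestores or DataDownloads are Failed or still InProgress
oc get podvolumerestores -n openshift-adp
oc get datadownloads -n openshift-adp
```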
@weshayutin (Contributor) commented:

I would like to expand line 112 with some instructions on which resources to manually investigate in addition to PodVolumeRestore; perhaps this could just be a list of objects. wdyt?

@sseago (Contributor, Author) replied:

@weshayutin Which resources did you have in mind? For restores, we have the restore itself and the PVR/DD -- do you mean actual restored resources or velero CRs?

@weshayutin (Contributor) replied:

Well sir... correct me if I'm wrong, but here's what I'm thinking.
Either:

  • velero restore describe --details has a resource-by-resource listing along with velero's state. I wonder if anything in (failed) can be checked to see whether it exists, and the status could be (present - failed restore)
  • OR
  • We script out a helper script that cycles through anything in failed and gives the user the status or yaml output (rough sketch below). Not sure.
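
A rough sketch of what such a helper could look like: this is hypothetical, assumes the Velero/OADP namespace is `openshift-adp`, and assumes PodVolumeRestores and DataDownloads carry the `velero.io/restore-name` label:

```bash
#!/usr/bin/env bash
# Hypothetical helper sketch: print the YAML of any PodVolumeRestore or
# DataDownload for a given restore whose phase is not Completed.
# Assumptions: namespace openshift-adp; CRs labeled with velero.io/restore-name.
RESTORE_NAME="$1"
NS="openshift-adp"

for kind in podvolumerestores datadownloads; do
  for name in $(oc get "$kind" -n "$NS" \
      -l "velero.io/restore-name=${RESTORE_NAME}" \
      -o jsonpath='{.items[*].metadata.name}'); do
    phase=$(oc get "$kind" "$name" -n "$NS" -o jsonpath='{.status.phase}')
    if [ "$phase" != "Completed" ]; then
      echo "--- ${kind}/${name} (phase: ${phase}) ---"
      oc get "$kind" "$name" -n "$NS" -o yaml
    fi
  done
done
```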

Obviously not to be done in this PR, however for this PR:

  • adding a note to check velero restore describe --details
  • The user should have enough experience to then investigate further.

The rest of what you have IMHO is very helpful.

@sseago (Contributor, Author) replied:

Ahh ok. So instead of expanding what I'm saying in 111/112 (which is specifically addressing non-item "velero" errors), I should add another note around the other error types (cluster and namespace-scoped errors), which relate to restored items. So if there are item errors, then look at the yaml/status of those items -- I'll add a note around this as well.

@@ -99,6 +99,18 @@ This section includes how to debug a failed restore. For more specific issues re

<hr style="height:1px;border:none;color:#333;">

<h3 align="center">Restore Errors and Warnings<a id="creds"></a></h3>

Restore errors and warnings are shown in the `veleror describe` output in three groups:
A reviewer (Contributor) commented:

nit
s/veleror/velero


openshift-ci bot commented Feb 8, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sseago

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

1 similar comment


One or more errors in any of the above categories will cause a Restore to be `PartiallyFailed` rather than `Completed`. Warnings will not cause a change to the completion status.

For resource-specific errors ("Cluster" and "Namespaces" errors), the `restore describe --details` output should include the resource list which will list all resources that Velero succeeded in restoring. For any that errored out, check to see if the resource is actually in the cluster.
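
For illustration, a hedged sketch of that manual check; the namespace `openshift-adp` and the item names below are placeholders:

```bash
# Resource-by-resource list of what Velero attempted to restore
velero restore describe <restore-name> --details -n openshift-adp

# For any item reported with an error, check whether it actually exists in the cluster
oc get <kind> <item-name> -n <item-namespace> -o yaml
```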

A reviewer commented:

@sseago The word "should" is ambiguous -- it could be advice, it could mean "must" or "needs to" [Israelis often use "should" when they mean this option], or it could mean "if everything works as expected." Use "recommend" for the first, "must" or "needs to" for the second, and simply state it as a fact for the third.
I am reading "should" as "how it's supposed to work." If that's wrong, correct me on openshift/openshift-docs#67803, which I've requested you to review.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 11, 2024
@kaovilai (Member) commented:

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2024

openshift-ci bot commented Jun 19, 2024

@sseago: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/4.13-e2e-test-azure | 9e9a9f5 | link | true | /test 4.13-e2e-test-azure |
| ci/prow/4.12-e2e-test-aws | 9e9a9f5 | link | true | /test 4.12-e2e-test-aws |
| ci/prow/4.16-e2e-test-kubevirt-aws | 586df2a | link | true | /test 4.16-e2e-test-kubevirt-aws |
| ci/prow/4.15-e2e-test-aws | 586df2a | link | true | /test 4.15-e2e-test-aws |
| ci/prow/4.15-e2e-test-kubevirt-aws | 586df2a | link | true | /test 4.15-e2e-test-kubevirt-aws |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2024
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 19, 2024