Added more details to troubleshooting around restore errors and warnings #1222
base: master
Conversation
force-pushed from 66f92c4 to 9e9a9f5 (Compare)
/hold
@TzahiAshkenazi let's get your eyes on this addition to troubleshooting. Can discuss in office hours as well o/
One or more errors in any of the above categories will cause a Restore to be `PartiallyFailed` rather than `Completed`. Warnings will not cause a change to the completion status.

Note that if there are "Velero" errors (but no resource-specific errors), it is possible that the restore completed without any actual problems with restoring workloads, but careful validation of post-restore applications is advisable. For example, if there are PodVolumeRestore and/or Node Agent-related errors, check the status of PodVolumeRestores and DataDownloads -- if none of these are failed or still running, then volume data may have been fully restored.
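A quick way to check those phases (a minimal sketch, assuming the default OADP namespace `openshift-adp`; adjust the namespace for your install and filter the output to the restore in question):

```bash
# List PodVolumeRestores and DataDownloads with their phases; any entry that is
# Failed or still InProgress suggests volume data may not have been fully restored.
oc -n openshift-adp get podvolumerestores \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
oc -n openshift-adp get datadownloads \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```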
I would like to expand line 112 with pseudo-instructions on which resources to manually investigate in addition to PodVolumeRestore; perhaps this could be just a list of objects. wdyt?
@weshayutin Which resources did you have in mind? For restores, we have the restore itself and the PVR/DD -- do you mean actual restored resources or velero CRs?
Well sir... correct me if I'm wrong, but here's what I'm thinking.
Either:
- `velero restore describe --details` reports Velero's state resource by resource. I wonder if anything listed as (failed) could be checked to see whether it exists in the cluster, and its status reported as something like (present - failed restore)
- OR
- We write a helper script that cycles through anything marked failed and gives the user the status or yaml output (a rough sketch follows after this comment). Not sure.

Obviously not to be done in this PR; however, for this PR:
- add a note to check `velero restore describe --details`
- the user should have enough experience to then investigate further.

The rest of what you have IMHO is very helpful.
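A rough sketch of what such a helper could look like (hypothetical; it assumes you feed it `<kind> <namespace> <name>` lines copied from the failed entries in `velero restore describe --details` on stdin, and that `oc` is logged in to the target cluster):

```bash
#!/usr/bin/env bash
# Hypothetical helper sketch: read "<kind> <namespace> <name>" triples on stdin and
# report whether each object actually exists in the cluster, dumping its status if so.
# (Cluster-scoped kinds would need the namespace handling adjusted.)
while read -r kind namespace name; do
  [ -z "$kind" ] && continue
  if oc get "$kind" "$name" -n "$namespace" >/dev/null 2>&1; then
    echo "PRESENT (restore reported an error): $kind $namespace/$name"
    oc get "$kind" "$name" -n "$namespace" -o jsonpath='{.status}'; echo
  else
    echo "MISSING: $kind $namespace/$name"
  fi
done
```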
Ahh ok. So instead of expanding what I'm saying in 111/112 (which is specifically addressing non-item "velero" errors), I should add another note around the other error types (cluster and namespace-scoped errors, which relate to restored items). So if there are item errors, then look at the yaml/status of those items -- I'll add a note around this as well.
docs/TROUBLESHOOTING.md (Outdated)

@@ -99,6 +99,18 @@ This section includes how to debug a failed restore. For more specific issues re

<hr style="height:1px;border:none;color:#333;">

<h3 align="center">Restore Errors and Warnings<a id="creds"></a></h3>

Restore errors and warnings are shown in the `veleror describe` output in three groups:
nit
s/veleror/velero
force-pushed from 9e9a9f5 to 586df2a (Compare)
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sseago

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
For resource-specific errors ("Cluster" and "Namespaces" errors), the `restore describe --details` output should include the resource list which will list all resources that Velero succeeded in restoring. For any that errored out, check to see if the resource is actually in the cluster.
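For instance (a sketch with placeholder names; it assumes the `velero` CLI is available against the cluster, otherwise `oc -n openshift-adp get restore <restore-name> -o yaml` gives the same status information):

```bash
# Show the per-resource restore list, then check whether an item that errored out
# actually made it into the cluster and inspect its status/yaml if it did.
velero restore describe <restore-name> --details
oc get <kind> <resource-name> -n <namespace> -o yaml
```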
@sseago The word "should" is ambiguous -- it could be advice, could be "must" or "needs to" [Israelis often use "should" when they mean this option], or "if everything works as expected." Use "recommend" for the first, "must" or "needs to" for the second, and simply state it as a fact for the third.
I am reading "should" as "how it's supposed to work." If that's wrong, correct me on openshift/openshift-docs#67803, which I've requested you to review.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting `/remove-lifecycle stale`. If this issue is safe to close now please do so with `/close`.

/lifecycle stale
/remove-lifecycle stale
@sseago: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting `/remove-lifecycle stale`. If this issue is safe to close now please do so with `/close`.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. If this issue is safe to close now please do so with `/close`.

/lifecycle rotten
Corresponds to openshift/openshift-docs#67803