Replacing volume fails #154
Comments
Additionally, changing e.g. an attribute of the volume shouldn't tell you to 'try deleting and then recreating' (manually); it should just force a replacement in the plan.
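As a rough illustration of what "forces replacement" means here (the attribute and values below are hypothetical, not taken from the provider's current behaviour): if e.g. `region` were marked as requiring replacement, `terraform plan` would render the change roughly like this instead of failing at apply time:

```
  # fly_volume.db must be replaced
-/+ resource "fly_volume" "db" {
      ~ region = "ams" -> "fra" # forces replacement
        # (unchanged attributes hidden)
    }
```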
Regarding the 'cannot delete volume, still in use' error - I think the best way to address this would be a separate attachment resource. Currently the attachment is declared directly on the machine (via its mounts). With the attachment instead handled through a third resource, the forced replacement of which would allow it to be destroyed (volume detached from machine), thus allowing the volume itself to be replaced. This is how such relationships are typically handled in the AWS provider, for example.
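For reference, the AWS pattern mentioned above looks like the following (a generic example, not taken from this thread): the attachment is its own resource sitting between the volume and the instance, so it can be destroyed and recreated without either endpoint having to model the relationship. A hypothetical fly_volume_attachment could work the same way.

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_ebs_volume" "data" {
  availability_zone = aws_instance.web.availability_zone
  size              = 10
}

# The attachment is a third resource; forcing its replacement detaches the
# volume from the instance without destroying either of them.
resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.web.id
}
```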
Actually, it seems (superfly/flyctl#1758) that there's no way to detach a volume without destroying the machine anyway, so perhaps it should/needs to force replacement of the machine as well. I'll add it to #157.
There seems to be a delay of up to a minute after a machine has been destroyed before the volume is no longer registered as attached. I'm applying the following workaround, which has worked reliably over several runs now:

```hcl
resource "fly_volume" "db" {
  # ...
}

# Dummy resource whose only job is to delay destruction: it is replaced
# whenever the volume is replaced, and its destroy takes 60s, which pushes
# the old volume's deletion back until Fly registers it as detached.
resource "time_sleep" "db" {
  destroy_duration = "60s"

  lifecycle {
    replace_triggered_by = [fly_volume.db]
  }
}

resource "fly_machine" "db" {
  # ...
  mounts = [{
    # ...
    volume = fly_volume.db.id
  }]

  # Replacing the volume replaces time_sleep, which in turn replaces the
  # machine; destroy order is machine -> time_sleep (60s) -> volume.
  lifecycle {
    replace_triggered_by = [time_sleep.db]
  }
}
```

Run:
(This is in a CI environment where the downtime is acceptable.)
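A note on prerequisites for the workaround above (not spelled out in the comment): replace_triggered_by requires Terraform 1.2 or newer, and time_sleep comes from the hashicorp/time provider, so the configuration also needs something along these lines (the fly source address shown is the commonly used fly-apps/fly and may differ):

```hcl
terraform {
  # replace_triggered_by was added in Terraform 1.2.
  required_version = ">= 1.2"

  required_providers {
    fly = {
      source = "fly-apps/fly" # assumed registry address for the Fly provider
    }
    time = {
      source = "hashicorp/time" # provides the time_sleep resource
    }
  }
}
```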
When replacing a volume attached to a machine, the delete fails with an error like "cannot delete volume, still in use". The volume property on the machine should probably be marked as "forces replacement".