
increase size for root fs of ardana deployer #3963

Merged

Conversation

JanZerebecki (Contributor)

otherwise CI fails due to not enough free disk space

@skazi0 (Member) commented Dec 22, 2021

Wasn't it the ephemeral disk that had to be resized?

@JanZerebecki (Contributor, Author)

No, I made that mistake at first too: the ephemeral disk was already 70GB, but this value constrained the root LV to 44G.
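For context, a minimal shell sketch (not from the PR) of how this shows up on the deployer node; ardana-vg/root are placeholder names standing in for the actual root_vg/root_lv values:

sudo vgs                      # the VG on the 70GB ephemeral disk still has free space
sudo lvs /dev/ardana-vg/root  # but the root LV itself is capped at 44G by min_deployer_root_part_size
sudo df -h /                  # so the root filesystem fills up during CI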

@JanZerebecki (Contributor, Author) commented Dec 22, 2021

command: lvresize -L {{ min_deployer_root_part_size }}G /dev/{{ root_vg }}/{{ root_lv }}
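A hedged sketch of the manual equivalent of this task once the variable is raised (assuming the new value is 54 and an ext4 root filesystem; both are assumptions, and ardana-vg/root again stand in for root_vg/root_lv):

sudo lvresize -L 54G /dev/ardana-vg/root  # grow the logical volume to the new minimum size
sudo resize2fs /dev/ardana-vg/root        # grow the filesystem as well, if the playbook does not already do that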

@skazi0 (Member) left a comment

Lgtm

@JanZerebecki JanZerebecki merged commit a7dbe39 into SUSE-Cloud:master Dec 22, 2021
@JanZerebecki (Contributor, Author)

Bypassed requirement for 2nd review.

@JanZerebecki JanZerebecki deleted the ardana-root-fs-size-increase branch December 22, 2021 19:24
@JanZerebecki (Contributor, Author)

I had to increase the ephemeral disk size of the flavor to 90G so that 54G stays below 65% of the whole disk (see doc fix #3964). I did this as root on a compute node of engcloud by deleting the flavor and recreating it with:

openstack flavor delete cloud-ardana-job-lvm-compute
openstack flavor create --disk 90 --public --property hw_rng:allowed='True' --ram 8192 --vcpus 2 cloud-ardana-job-lvm-compute
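As a rough check of the arithmetic behind the 90G choice (my numbers, not from the PR), with a follow-up command to confirm the recreated flavor:

# 70G flavor: 65% of 70G = 45.5G, so a 54G root LV would not fit
# 90G flavor: 65% of 90G = 58.5G, so 54G fits with room to spare
openstack flavor show cloud-ardana-job-lvm-compute -c disk -c ram -c vcpus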
