
kvm: ref-count storage pool usage #9498

Open
wants to merge 1 commit into 4.19 from 4.19-kvm-refcount-storagepool-usage

Conversation

Contributor

@rp- rp- commented Aug 7, 2024

Description

If a storage pool is used by, e.g., two concurrent snapshot->template actions, the first action to finish removes the netfs mount point out from under the other.
Now the storage pools are usage ref-counted and are only deleted once there are no more users.

Fixes: #8899
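
For readers skimming the thread, here is a minimal sketch of the kind of usage ref-counting the description talks about (hypothetical class and method names, not the actual patch): each user of a pool bumps a per-UUID counter when it creates the pool and drops it when it deletes the pool, so the netfs mount is only torn down by the last user.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch, assuming one instance per agent. Hypothetical names;
// in the actual patch the logic lives in the KVM storage adaptors.
class PoolRefCounter {
    private final Map<String, Integer> refCounts = new ConcurrentHashMap<>();

    /** Called on createStoragePool: returns true if this is the first
     *  user, i.e. the pool and its mount must actually be set up. */
    synchronized boolean acquire(String poolUuid) {
        return refCounts.merge(poolUuid, 1, Integer::sum) == 1;
    }

    /** Called on deleteStoragePool: returns true if this was the last
     *  user, i.e. the pool and its mount may actually be torn down. */
    synchronized boolean release(String poolUuid) {
        Integer left = refCounts.computeIfPresent(poolUuid, (k, v) -> v - 1);
        if (left != null && left <= 0) {
            refCounts.remove(poolUuid);
            return true;
        }
        return false;
    }
}
```

With two concurrent snapshot->template actions, both acquire the same pool UUID; the first action's release only decrements the counter, and the actual unmount happens when the second action finishes.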

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

Run several snapshot-to-template actions that are executed on the same host.

How did you try to break this feature and the system with this change?

@rp- rp- self-assigned this Aug 7, 2024

codecov bot commented Aug 7, 2024

Codecov Report

Attention: Patch coverage is 6.06061% with 31 lines in your changes missing coverage. Please review.

Project coverage is 15.10%. Comparing base (03bdf11) to head (20735be).

| Files with missing lines | Patch % | Lines |
|---|---|---|
| .../hypervisor/kvm/storage/LibvirtStorageAdaptor.java | 7.40% | 24 Missing and 1 partial ⚠️ |
| ...hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java | 0.00% | 1 Missing ⚠️ |
| .../hypervisor/kvm/storage/KVMStoragePoolManager.java | 0.00% | 1 Missing ⚠️ |
| ...pervisor/kvm/storage/ManagedNfsStorageAdaptor.java | 0.00% | 1 Missing ⚠️ |
| ...pervisor/kvm/storage/MultipathSCSIAdapterBase.java | 0.00% | 1 Missing ⚠️ |
| .../hypervisor/kvm/storage/ScaleIOStorageAdaptor.java | 0.00% | 1 Missing ⚠️ |
| ...hypervisor/kvm/storage/StorPoolStorageAdaptor.java | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
```
@@             Coverage Diff             @@
##               4.19    #9498     +/-   ##
===========================================
  Coverage     15.10%   15.10%
- Complexity    11220    11225      +5
===========================================
  Files          5404     5404
  Lines        473460   473486     +26
  Branches      57728    59047   +1319
===========================================
+ Hits          71525    71541     +16
- Misses       393941   393948      +7
- Partials       7994     7997      +3
```
| Flag | Coverage Δ |
|---|---|
| uitests | 4.30% <ø> (ø) |
| unittests | 15.82% <6.06%> (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.


Contributor

@DaanHoogland DaanHoogland left a comment


clgtm. You do have a good test scenario for this, do you, @rp-? Or is it only intermittent (i.e. not automatable)?

@rp-
Contributor Author

rp- commented Aug 12, 2024

clgtm. You do have a good test scenario for this, do you, @rp-? Or is it only intermittent (i.e. not automatable)?

I'm not sure it's easy to automate a reproduction of that, as it's a timing/parallelism issue.
I haven't yet checked whether NFS primary storage uses the same code paths, but I might do that this week to see whether it would also be affected.

But so far, two of our customers have been using this without reporting any issues.

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10620

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11065)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 47025 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11065-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped test results are shown below:

| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_01_add_primary_storage_disabled_host | Error | 0.33 | test_primary_storage.py |
| test_01_primary_storage_nfs | Error | 0.37 | test_primary_storage.py |
| ContextSuite context=TestStorageTags>:setup | Error | 0.63 | test_primary_storage.py |
| test_01_primary_storage_scope_change | Error | 0.21 | test_primary_storage_scope.py |
| ContextSuite context=TestCpuCapServiceOfferings>:setup | Error | 0.00 | test_service_offerings.py |
| test_02_list_snapshots_with_removed_data_store | Error | 8.74 | test_snapshots.py |
| test_02_list_snapshots_with_removed_data_store | Error | 8.75 | test_snapshots.py |
| test_01_deploy_vm_on_specific_host | Error | 0.11 | test_vm_deployment_planner.py |
| test_04_deploy_vm_on_host_override_pod_and_cluster | Error | 0.14 | test_vm_deployment_planner.py |
| test_01_migrate_VM_and_root_volume | Error | 83.40 | test_vm_life_cycle.py |
| test_02_migrate_VM_with_two_data_disks | Error | 50.91 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 134.41 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 134.42 | test_vm_life_cycle.py |
| test_08_migrate_vm | Error | 0.06 | test_vm_life_cycle.py |

@rohityadavcloud rohityadavcloud added this to the 4.19.2.0 milestone Sep 3, 2024
@rohityadavcloud
Member

@blueorangutan package

@blueorangutan

@rohityadavcloud a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 10927

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✔️ debian ✖️ suse15. SL-JID 10950

@rp-
Contributor Author

rp- commented Sep 5, 2024

I guess the failed packaging is unrelated to this PR?

@DaanHoogland
Contributor

I guess the failed packaging is unrelated to this PR?

```
11:02:25 [ERROR] Failures: 
11:02:25 [ERROR]   VMSchedulerImplTest.testScheduleNextJobScheduleCurrentSchedule:262 expected:<Wed Sep 04 09:02:00 UTC 2024> but was:<Wed Sep 04 09:03:00 UTC 2024>
```

Looks like a test was too slow, so it might have to do with an overly busy container. Retrying, @rp-.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10988

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11364)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 56881 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11364-kvm-ol8.zip
Smoke tests completed. 125 look OK, 8 have errors, 0 did not run
Only failed and skipped test results are shown below:

| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_01_add_primary_storage_disabled_host | Error | 0.66 | test_primary_storage.py |
| test_01_primary_storage_nfs | Error | 0.33 | test_primary_storage.py |
| ContextSuite context=TestStorageTags>:setup | Error | 0.62 | test_primary_storage.py |
| test_01_primary_storage_scope_change | Error | 0.22 | test_primary_storage_scope.py |
| ContextSuite context=TestCpuCapServiceOfferings>:setup | Error | 0.00 | test_service_offerings.py |
| test_02_list_snapshots_with_removed_data_store | Error | 9.77 | test_snapshots.py |
| test_02_list_snapshots_with_removed_data_store | Error | 9.77 | test_snapshots.py |
| test_01_volume_usage | Failure | 848.98 | test_usage.py |
| test_01_deploy_vm_on_specific_host | Error | 0.10 | test_vm_deployment_planner.py |
| test_04_deploy_vm_on_host_override_pod_and_cluster | Error | 0.13 | test_vm_deployment_planner.py |
| test_01_migrate_VM_and_root_volume | Error | 87.68 | test_vm_life_cycle.py |
| test_02_migrate_VM_with_two_data_disks | Error | 52.01 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 316.92 | test_vm_life_cycle.py |
| test_02_unsecure_vm_migration | Error | 459.21 | test_vm_life_cycle.py |
| test_08_migrate_vm | Error | 0.09 | test_vm_life_cycle.py |
| test_06_download_detached_volume | Error | 310.28 | test_volumes.py |

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11033

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11408)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 44373 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11408-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped test results are shown below:

| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_01_add_primary_storage_disabled_host | Error | 0.31 | test_primary_storage.py |
| test_01_primary_storage_nfs | Error | 0.30 | test_primary_storage.py |
| ContextSuite context=TestStorageTags>:setup | Error | 0.60 | test_primary_storage.py |
| test_01_primary_storage_scope_change | Error | 0.21 | test_primary_storage_scope.py |
| ContextSuite context=TestCpuCapServiceOfferings>:setup | Error | 0.00 | test_service_offerings.py |
| test_02_list_snapshots_with_removed_data_store | Error | 8.63 | test_snapshots.py |
| test_02_list_snapshots_with_removed_data_store | Error | 8.63 | test_snapshots.py |
| test_01_deploy_vm_on_specific_host | Error | 0.09 | test_vm_deployment_planner.py |
| test_04_deploy_vm_on_host_override_pod_and_cluster | Error | 0.14 | test_vm_deployment_planner.py |
| test_01_migrate_VM_and_root_volume | Error | 82.29 | test_vm_life_cycle.py |
| test_02_migrate_VM_with_two_data_disks | Error | 50.76 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 134.37 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 134.37 | test_vm_life_cycle.py |
| test_08_migrate_vm | Error | 0.08 | test_vm_life_cycle.py |

@rp- rp- force-pushed the 4.19-kvm-refcount-storagepool-usage branch from c599a58 to b896bbc on November 6, 2024 07:31
@rp-
Contributor Author

rp- commented Nov 6, 2024

@blueorangutan package

@blueorangutan

@rp- a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11516

@rp-
Contributor Author

rp- commented Nov 6, 2024

@blueorangutan test

@DaanHoogland
Contributor

@blueorangutan test

The blue ape won't listen if you tell it to run tests, @rp- (to prevent our lab from getting too full, we keep control over regression test runs). Other commands (like 'package' and 'ui') will work. It should have told you so after your attempt.
@blueorangutan help will give the help output.

@blueorangutan

@DaanHoogland [SL] unsupported parameters provided. Supported mgmt server os are: ol8, ol9, debian12, rocky8, alma9, suse15, centos7, centos6, alma8, ubuntu18, ubuntu22, ubuntu20, ubuntu24. Supported hypervisors are: kvm-centos6, kvm-centos7, kvm-rocky8, kvm-ol8, kvm-ol9, kvm-alma8, kvm-alma9, kvm-ubuntu18, kvm-ubuntu20, kvm-ubuntu22, kvm-ubuntu24, kvm-debian12, kvm-suse15, vmware-55u3, vmware-60u2, vmware-65u2, vmware-67u3, vmware-70u1, vmware-70u2, vmware-70u3, vmware-80, vmware-80u1, vmware-80u2, vmware-80u3, xenserver-65sp1, xenserver-71, xenserver-74, xcpng74, xcpng76, xcpng80, xcpng81, xcpng82

@rp-
Contributor Author

rp- commented Nov 7, 2024

No, it didn't tell me, but I guessed that I don't have the "rights" to do it.

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@JoaoJandre
Contributor

@rp- did you test your solution with adding and removing primary storages?

@JoaoJandre
Contributor

@rp- did you test your solution with adding and removing primary storages?

My fear is that we might end up with many references to a primary storage, so that when trying to delete it, ACS will not unmount it even though it should.

@blueorangutan

[SF] Trillian test result (tid-11765)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 49600 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11765-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped test results are shown below:

| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_01_add_primary_storage_disabled_host | Error | 0.36 | test_primary_storage.py |
| test_01_primary_storage_nfs | Error | 0.34 | test_primary_storage.py |
| ContextSuite context=TestStorageTags>:setup | Error | 0.55 | test_primary_storage.py |
| test_01_primary_storage_scope_change | Error | 0.20 | test_primary_storage_scope.py |
| ContextSuite context=TestCpuCapServiceOfferings>:setup | Error | 0.00 | test_service_offerings.py |
| test_02_list_snapshots_with_removed_data_store | Error | 10.74 | test_snapshots.py |
| test_02_list_snapshots_with_removed_data_store | Error | 10.74 | test_snapshots.py |
| test_01_deploy_vm_on_specific_host | Error | 0.09 | test_vm_deployment_planner.py |
| test_04_deploy_vm_on_host_override_pod_and_cluster | Error | 0.13 | test_vm_deployment_planner.py |
| test_01_migrate_VM_and_root_volume | Error | 84.39 | test_vm_life_cycle.py |
| test_02_migrate_VM_with_two_data_disks | Error | 55.00 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 175.92 | test_vm_life_cycle.py |
| test_08_migrate_vm | Error | 0.07 | test_vm_life_cycle.py |

@rp-
Contributor Author

rp- commented Nov 8, 2024

@JoaoJandre
So yeah, this doesn't really work with NFS primary storage (and maybe other pool types handled through libvirt too).
Currently, ModifyStoragePool calls createStoragePool but never deleteStoragePool, which means it just increases the refCount but never actually removes the pool from libvirt (while the HA code would already have unmounted it...).
So the next createStoragePool asks libvirt whether it knows about the pool, and it still does, but the new "isNFSreallymounted" check would fail later on.

Adding a deleteStoragePool call to ModifyStoragePool has the negative effect that there would never be a mounted primary storage on the agent, which the current code assumes, so creating instances/copying templates would fail.

I'm not really sure how to move forward with this; maybe don't let the management server dispatch two jobs that use the same secondary storage pool to the same agent in parallel.
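
To make the failure mode concrete, here is a tiny simulation using the hypothetical PoolRefCounter sketch from earlier in this thread (all names assumed, not actual CloudStack code):

```java
// Simulates the primary-storage problem described above: acquire() is
// called on every ModifyStoragePool, but release() is never called for
// primary pools, so the count can never drop back to zero.
public class PrimaryPoolLeakDemo {
    public static void main(String[] args) {
        PoolRefCounter refs = new PoolRefCounter();
        String uuid = "primary-nfs-pool";   // hypothetical pool UUID

        refs.acquire(uuid);                 // ModifyStoragePool -> refCount = 1
        refs.acquire(uuid);                 // ModifyStoragePool -> refCount = 2

        // Even after one release the pool still looks "in use", although
        // the HA code may already have unmounted the share out-of-band.
        System.out.println("may unmount? " + refs.release(uuid));  // false
    }
}
```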

@JoaoJandre
Contributor

@JoaoJandre So yeah, this doesn't really work with NFS primary storage (and maybe other pool types handled through libvirt too). Currently, ModifyStoragePool calls createStoragePool but never deleteStoragePool, which means it just increases the refCount but never actually removes the pool from libvirt (while the HA code would already have unmounted it...). So the next createStoragePool asks libvirt whether it knows about the pool, and it still does, but the new "isNFSreallymounted" check would fail later on.

Adding a deleteStoragePool call to ModifyStoragePool has the negative effect that there would never be a mounted primary storage on the agent, which the current code assumes, so creating instances/copying templates would fail.

I'm not really sure how to move forward with this; maybe don't let the management server dispatch two jobs that use the same secondary storage pool to the same agent in parallel.

We could keep the ref-count for secondary storage only. com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager#createStoragePool(java.lang.String, java.lang.String, int, java.lang.String, java.lang.String, com.cloud.storage.Storage.StoragePoolType, java.util.Map<java.lang.String,java.lang.String>, boolean) already takes a parameter that indicates whether the pool is primary; we could pass it down to com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor#createStoragePool to decide whether to keep a ref-count.
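
A rough sketch of that suggestion for the create side (illustrative names and shapes, not the actual CloudStack signatures):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: ref-count secondary pools only, keyed by pool UUID, driven by
// the primaryStorage flag that KVMStoragePoolManager already receives.
class RefCountingAdaptor {
    private final Map<String, Integer> secondaryPoolUsers = new ConcurrentHashMap<>();

    void createStoragePool(String uuid, boolean primaryStorage) {
        if (!primaryStorage) {
            secondaryPoolUsers.merge(uuid, 1, Integer::sum);
        }
        // ... define/start the libvirt pool and mount it, as before ...
    }
}
```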

@rp-
Contributor Author

rp- commented Nov 8, 2024

@JoaoJandre
We could keep the ref-count for secondary storage only. com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager#createStoragePool(java.lang.String, java.lang.String, int, java.lang.String, java.lang.String, com.cloud.storage.Storage.StoragePoolType, java.util.Map<java.lang.String,java.lang.String>, boolean) already takes a parameter that indicates whether the pool is primary; we could pass it down to com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor#createStoragePool to decide whether to keep a ref-count.

And on delete? Only check whether the UUID is part of the refcount map?

@JoaoJandre
Contributor

@JoaoJandre
We could keep the ref-count for secondary storage only. com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager#createStoragePool(java.lang.String, java.lang.String, int, java.lang.String, java.lang.String, com.cloud.storage.Storage.StoragePoolType, java.util.Map<java.lang.String,java.lang.String>, boolean) already takes a parameter that indicates whether the pool is primary; we could pass it down to com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor#createStoragePool to decide whether to keep a ref-count.

And on delete? Only check whether the UUID is part of the refcount map?

@rp- Yeah, if it is not part of the map, we assume that it can be deleted. If it is, we check the refcount.
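
The matching delete side of the same illustrative sketch: a UUID that was never tracked belongs to a primary pool and may be deleted right away, while tracked pools are only torn down by their last user.

```java
// Delete-side counterpart inside the RefCountingAdaptor sketch above.
// Returns true when the caller may actually unmount/undefine the pool.
boolean deleteStoragePool(String uuid) {
    Integer left = secondaryPoolUsers.computeIfPresent(uuid, (k, v) -> v - 1);
    if (left == null) {
        return true;                // never ref-counted: primary pool
    }
    if (left <= 0) {
        secondaryPoolUsers.remove(uuid);
        return true;                // last user is gone
    }
    return false;                   // still used by another action
}
```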

@rp- rp- force-pushed the 4.19-kvm-refcount-storagepool-usage branch from b896bbc to 427ed5a on November 8, 2024 14:24
@rp-
Contributor Author

rp- commented Nov 8, 2024

@blueorangutan package

@rp-
Contributor Author

rp- commented Nov 8, 2024

@JoaoJandre This seems to work in my tests now.

@rp- rp- force-pushed the 4.19-kvm-refcount-storagepool-usage branch from 427ed5a to 7064436 on November 8, 2024 14:36
If a secondary storage pool is used by e.g.
two concurrent snapshot->template actions,
the first action to finish removed the netfs mount
point for the other action.
Now the storage pools are usage ref-counted and will only
be deleted if there are no more users.
@rp- rp- force-pushed the 4.19-kvm-refcount-storagepool-usage branch from 7064436 to 20735be on November 8, 2024 17:53
@rp-
Contributor Author

rp- commented Nov 8, 2024

@blueorangutan package

@blueorangutan

@rp- a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11533
