Describe the bug/feature
In 0.30.0 the latest version of the reclass fork is used; however, this version comes with severe performance penalties when class interpolation is used. After tracing with PyTrace we narrowed the issue down to the following line:
https://github.com/kapicorp/reclass/blob/5246e6af973df7b5375af2aa0765b7664d4e88d7/reclass/core.py
It looks like, before class interpolation happens, an earlier interpolation step runs to resolve the class references in the classes list. That step takes a deep copy of the class, which makes compilation time grow exponentially as targets are added. In a setup with over 100 targets, a compilation can reach 200 seconds. Hopefully someone more experienced can also take a look.
We use class interpolation extensively so that we can keep a single remote inventory and just version it, without changes to the inventory itself.

Expected behavior
Building the inventory usually takes about 5 seconds; it now takes around 50 seconds. Lowering the target count reduces the time.

If it's a bug (please complete the following information):
python --version: Python 3.9.9
pip3 --version: pip 21.2.4 from /.pyenv/versions/3.9.9/lib/python3.9/site-packages/pip (python 3.9)
Are you using pyenv or virtualenv? pyenv and venv
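For anyone who wants to confirm the bottleneck locally, here is a minimal profiling sketch using the standard-library cProfile/pstats modules instead of PyTrace. It assumes kapitan is installed in the current environment; the `kapitan.prof` filename is just an example:

```python
# Generate a profile of a compile run first, for example:
#   python -m cProfile -o kapitan.prof "$(command -v kapitan)" compile
# Then check whether copy.deepcopy dominates the cumulative time.
import pstats

stats = pstats.Stats("kapitan.prof")
# Sort by cumulative time and show only entries matching "deepcopy".
stats.sort_stats("cumulative").print_stats("deepcopy")
```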
This one is tricky: the performance issues are caused by the deepcopy that was introduced in kapicorp/reclass#3. We'd have to do quite a bit of work to refactor reclass so that we can remove the deepcopy without regressing to the state we had before merging that PR (see also my comments in https://kubernetes.slack.com/archives/C981W2HD3/p1654267512088039).
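To illustrate why that deep copy is so costly, here is a small standalone sketch. The nested dictionary is a made-up stand-in for a large class/parameter tree, not a real reclass data structure, and the loop only models the per-target copy described above:

```python
import copy
import time

# Hypothetical stand-in for a large class: a nested parameter tree of a
# size roughly comparable to a real inventory class.
big_class = {
    "parameters": {
        f"component_{i}": {f"key_{j}": list(range(20)) for j in range(100)}
        for i in range(50)
    }
}

for n_targets in (10, 50, 100):
    start = time.perf_counter()
    for _ in range(n_targets):
        # Model the per-target deep copy taken before interpolation,
        # the step identified above as the bottleneck.
        copy.deepcopy(big_class)
    print(f"{n_targets:>3} targets: {time.perf_counter() - start:.2f}s")
```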
A workaround for us was switching to relative-path class imports. We had been using class interpolation for versioning; now we separate the inventories and call only a single class per inventory to import everything into the target. Hopefully this can patch things up for the folks hitting this issue. We had teams reaching the 30-minute mark for compilation, just waiting on reclass to construct the inventory.
This issue is stale because it has been open for 1 year with no activity.
Remove the stale label or comment if this issue is still relevant for you.
If not, please close it yourself.