Serious performance degradation on compilation times with class interpolation #839

Open
Cyanopus opened this issue Jun 1, 2022 · 4 comments
Labels: performance (issue regarding performance loss), Stale

Comments

@Cyanopus (Contributor) commented Jun 1, 2022

Describe the bug/feature
In 0.30.0 the latest version of the reclass fork is used; however, this version comes with severe performance penalties when class interpolation is used. After tracing with PyTrace we narrowed the issue down to the following line:

https://github.com/kapicorp/reclass/blob/5246e6af973df7b5375af2aa0765b7664d4e88d7/reclass/core.py

It looks like, before class interpolation happens, an earlier interpolation step runs to resolve the interpolated class names in the classes list. This step takes a deep copy of the class data, which makes compilation time grow roughly exponentially with each new target. In a setup with over 100 targets, a compilation run can reach 200 seconds. Hopefully someone more experienced can also take a look.

We rely heavily on class interpolation to keep a single remote inventory and version it without changing the inventory itself:

classes:
  - example.${example:version}.app_a
  - example.${example:version}.app_b
  - example.${example:version}.app_c

To Reproduce
Steps to reproduce the behavior:

  1. Create a target with class interpolation (a sketch of the assumed version class follows these steps):
classes:
  - example.0.1.0.version
  - example.${example:version}.app_a
  - example.${example:version}.app_b
  - example.${example:version}.app_c
  2. Copy this target N times, with N greater than 20, then increase N by a factor of 2-3.
  3. Compile and track the compilation time.
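
For illustration only, here is a minimal sketch of what the example.0.1.0.version class might contain so that ${example:version} resolves; the file path and parameter value below are assumptions, not taken from an actual inventory:

# Hypothetical classes/example/0.1.0/version.yml (assumed reclass layout,
# where dots in the class name map to directories).
# It only pins the version parameter that the interpolated class names consume.
parameters:
  example:
    version: 0.1.0

With such a class included, example.${example:version}.app_a resolves to example.0.1.0.app_a, and likewise for the other app classes.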

Expected behavior
Compiling the inventory should usually take around 5 seconds; instead it takes around 50 seconds. Lowering the target count reduces the time.


If it's a bug (please complete the following information):

  • python --version: Python 3.9.9
  • pip3 --version: pip 21.2.4 from /.pyenv/versions/3.9.9/lib/python3.9/site-packages/pip (python 3.9)
  • Are you using pyenv or virtualenv? PyEnv and venv
@ademariag (Contributor) commented:

@simu is this the issue you mentioned?

@simu (Contributor) commented Nov 21, 2022

This one is tricky: the performance issues are caused by the deepcopy introduced in kapicorp/reclass#3. We'd have to do quite a bit of work to refactor reclass in such a way that we can remove the deepcopy without regressing to the state we had before merging that PR (see also my comments in https://kubernetes.slack.com/archives/C981W2HD3/p1654267512088039).

@Cyanopus (Contributor, Author) commented Dec 8, 2022

A workaround for us was to use relative-path class imports. Class interpolation was only used for versioning; now we separate by inventories and call a single class per inventory to import everything into the target (a rough sketch follows below). Hopefully this can patch things up for the folks hitting this issue. We had teams hitting the 30-minute mark for compilation, just waiting on reclass to construct the inventory.
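
For illustration only, a minimal sketch of what such a restructured setup might look like, assuming one inventory per version and a single aggregating class; the class names and file layout are assumptions, not taken from the actual setup, and the relative-import details mentioned above are not reproduced here:

# Hypothetical target file in the inventory pinned to version 0.1.0.
# Instead of interpolating ${example:version} into every class name,
# the target pulls in one aggregating class from this inventory.
classes:
  - example.all

# Hypothetical classes/example/all.yml in the same inventory, which in turn
# includes the concrete app classes with plain, non-interpolated names.
classes:
  - example.app_a
  - example.app_b
  - example.app_c

The idea is that with no interpolated class names, the costly pre-interpolation step described above is no longer needed for these targets.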

@MatteoVoges added the performance (issue regarding performance loss) label on Apr 20, 2023

github-actions bot commented Jul 6, 2024

This issue is stale because it has been open for 1 year with no activity.
Remove the stale label or comment if this issue is still relevant for you.
If not, please close it yourself.

@github-actions bot added the Stale label on Jul 6, 2024