
Versions & Next Steps #48

Closed
ccrvlh opened this issue Aug 10, 2022 · 3 comments

ccrvlh commented Aug 10, 2022

I'm sorry to open an issue; a discussion would probably be better, but discussions aren't enabled on the repo.

Wondering if there's any hard requirement on keeping deprecated Ansible (2.4, 2.5, 2.6, 2.7) and Python (2.7, 3.5) versions in the repo. The tox env has lots of combinations of older versions, even though setup.py requires 2.8 or higher. My initial suggestion would be to deprecate everything from 2.4 to 2.8, start testing against core 2.11-2.13, deprecate Python 2.7, and stop testing against 3.5.

While looking at this, I thought it would be worth taking a look at the open issues. My take on some of them so far:

#17: There seems to be an issue when using localhost. There's a part of the code that configures Ansible's connection to local if the host is localhost (or a variation). The issue, which I also ran into myself, is when the port is different (I was testing two Vagrant boxes: localhost:2222 and localhost:2200). If I also test the port before configuring Ansible to local, it works. So this could be closed.
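The fix I tried can be sketched like this (the function and parameter names here are hypothetical, not Suitable's actual internals):

```python
def should_use_local_connection(host: str, port: int = 22) -> bool:
    """Treat the target as 'local' only when it is a loopback address
    AND uses the default SSH port. A forwarded port (e.g. a Vagrant box
    reachable at localhost:2222) still needs a real SSH connection.
    """
    loopback_names = {"localhost", "127.0.0.1", "::1"}
    return host in loopback_names and port == 22
```

With this check, `should_use_local_connection("localhost", 2222)` is `False`, so the forwarded Vagrant box is reached over SSH instead of being short-circuited to a local connection.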

#21: There are a couple of (reasonable) ways to make this work. One is to use a module called import_playbook. I haven't tested it, but it's in the Ansible repository; I could try. The other is a bit more laborious, but shouldn't be all that difficult. The playbook is just a dictionary (the same dictionary generated at the ModuleRunner (PlaySource)), so it would probably be possible to either (1) create the playbook (which is something I'm particularly interested in) or (2) parse the YAML with Ansible's own DataLoader. This could stay on the roadmap; it would take some time to assess, but it's probably doable.
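Option (1) can be sketched as building the play as a plain dictionary, the same general shape a PlaySource produces before it is handed to the executor (the helper name below is made up; the keys follow Ansible's standard play schema):

```python
def build_play(hosts, tasks, name="generated play"):
    """Assemble an Ansible-style play as a plain dict. The keys mirror
    Ansible's play schema (name/hosts/gather_facts/tasks)."""
    return {
        "name": name,
        "hosts": hosts,
        "gather_facts": "no",
        "tasks": tasks,
    }

# A minimal play with a single ping task:
play = build_play(
    hosts="webservers",
    tasks=[{"name": "ping the host", "ping": {}}],
)
```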

#24: This is only an example; I'd suggest closing the issue and moving this to the README.

#32: There are a couple of ways to work around this. The first would be to set the connection_timeout parameter in Ansible's main ansible.cfg configuration (I'm not sure how to do that programmatically, probably through the VariableManager?). The second is to add the async param to the PlaySource. It should just be the async argument (not tested), something like async=180000, poll=60. The async would allow for time, and the poll would ping the connection to keep it alive.
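At the task level, `async` and `poll` are standard Ansible task keywords (`async` is the maximum runtime in seconds, `poll` is how often Ansible checks back on the job). A sketch of what adding them to a task dictionary might look like (the command path is a made-up example, and whether the PlaySource accepts these keys untouched is untested):

```python
# 'async' and 'poll' are real Ansible task keywords:
#   async -> maximum runtime in seconds before the job is considered failed
#   poll  -> interval in seconds at which Ansible polls the async job
long_task = {
    "name": "long-running command",
    "command": "/usr/local/bin/backup.sh",  # hypothetical command
    "async": 180000,
    "poll": 60,
}
```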

#33: This is a bit tricky since, as @herf mentioned, this happens at the file-loading layer (it is as if Ansible implemented its own Jinja logic through YAML). This could be done if there were more separation between the module loader and the executor, but I don't really think it would be worth dealing with it now. Perhaps keep this in the backlog and close the issue.

#34: I think the problem might again be the localhost port, plus the need to have sshpass installed when using SSH password authentication. I'll take a look at that in the tests as well.

#40: I agree with @href that it seems like an Ansible problem. It could probably be closed.

#41: This is interesting; it has to do with how the results Callback is implemented. I couldn't test on multiple servers, but I'll try to look at that as well.

#45: I found (I hadn't realized this before) that stdout is a "fake" method: it's actually a key in the dictionary, and the Results are implemented with a special __getattr__ that makes the key look like a method (which is pretty nice, I hadn't seen that before). One way to deal with this is probably to pass dry_run around and handle these cases in the result dictionary.
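The pattern being described can be sketched like this (a simplified toy, not Suitable's actual Results class):

```python
class Results(dict):
    """Result entries are plain dict keys, but __getattr__ exposes each
    key as a zero-argument callable, so `results.stdout()` works even
    though 'stdout' is just a dictionary key.

    Note: __getattr__ is only consulted when normal attribute lookup
    fails, so real dict methods like .items() are unaffected.
    """

    def __getattr__(self, name):
        if name in self:
            return lambda: self[name]
        raise AttributeError(name)

r = Results(stdout="ok", rc=0)
print(r.stdout())  # -> 'ok'
```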


This would still leave a couple of issues open which I really don't know how to fix, but I'd happily look at those once the above is done.

I ended up opening a new branch, and I've been working on a few proposals with a bit of refactoring, since the codebase hasn't been very active for the past year or so. The branch I've been working on is here.

I managed to make most of the tests pass, but I'm still missing the live connection test and a couple of tests that I couldn't get working either in the current repo or in the new one (interleaving and host_keys). The tests are still not running inside the container; I'll tackle that next.

Even though the diff won't be pretty, the changes are fairly trivial and mostly keep all of the business logic in place, just a bit of refactoring. At first glance I don't think it would be a breaking change, since it only moves internal pieces around, but it's a relevant change nonetheless. The CHANGELOG I've kept so far:

  1. Refactored options logic in the API to make the init a bit cleaner
  2. Refactored execute logic in the Runner to make the execute a bit cleaner
  3. Moved InventoryManager => inventory.py
  4. Added docstrings / types (still 2.7 style)
  5. Removed CHANGELOG from README (created CHANGELOG)
  6. Added a Quickstart section to the README
  7. Renamed module_runner => runner to avoid confusion with the results
  8. Renamed runner_result => results to avoid confusion with the module
  9. Moved code blocks (common => utils): fewer files, and it made sense
  10. Moved code block context_managers => utils to leave a file with only the class
  11. Inventory L35 sets "local" even if port != 22 (running Docker on 2222): BUG. Fixed it by testing for port 22 before setting local
  12. Created Dockerfile (to run tests)
  13. Created docker-compose (SSH into another container)
  14. Moved tests to top level
  15. Added Makefile
  16. Conftest: passing **options was not working; changed to options=options (probably my mistake during the refactor, I'll have to double-check that)

I do apologize for all of that, all at once. My initial intention was just to add the typing and the docstrings so I could better understand the logic, but I ended up doing more than that. I particularly like the result, but this is just personal style and preference. I hope you guys like it as well.

If you guys are interested in taking a look at the new branch, I could invite the maintainers to the fork as a way to keep this repo simple. If you think the diff is just too big, I would totally understand; I could create a new branch working only on the aspects you might be interested in (from this list/issues), no worries at all.

Cheers

PS: I'm particularly interested in all of this. I had actually opened a PyAnsible repo and organization in Aug/21 (!) to do exactly what Suitable does. So count me in to help in any way I can.

Daverball (Member) commented:

@lowercase00 There isn't really a hard requirement to support older versions of Python and Ansible. There are always older releases for people who want to keep using Suitable on older versions. I would probably do a bunch of pre-releases, though, before pushing a major production release.

Generally your changes look sensible to me, but I haven't really had the time to take a deep dive, and I probably won't have time to do that for at least another couple of months. I'm definitely happy to add type annotations to the codebase and clean up the API a bit.

I'm personally not super thrilled about using Black; I'd prefer to just keep relying on flake8 to catch linting errors and otherwise let the code keep its own style, within reason.

ccrvlh (Author) commented Aug 11, 2022

@Daverball Sure, I'll remove Black. I started adopting it after I found a combination of settings that I liked, but flake8 should be enough indeed. I suggest waiting for your review before splitting commits/PRs; after you have reviewed it, we can decide how to proceed for the next version. In the meantime I'll keep working on test scenarios; there are still a couple of tests missing. The current tests all run on localhost, so they interact directly with Python's os. This is fine, but it's not the most realistic case, since we'll be dealing with remote hosts the majority of the time.

Daverball (Member) commented:

Hi @ccrvlh

Sorry that this has been left sitting around for so long. I implemented some of your improvements independently and copied a couple of small fixes and improvements from your branch, but left out most of the refactors to avoid API instability.

I will close this as completed, even though some of the other referenced issues are still open.
