
[WIP] Adding middleware support in factory #195

Draft: wants to merge 9 commits into base: main
Conversation

almahmoud (Collaborator)

This adds support for middleware in factory, which then gets inherited by any provider object created by this factory.

Three options to add middleware from the factory:

  1. Direct middleware add, which keeps the same object shared across calls from every provider created by this factory after it is added (in this example, the count continues across all calls from all providers):

        class SomeDummyClass(object):
            count = 0  # one shared counter for all providers from this factory

            @intercept(event_pattern="*", priority=2499)
            def return_incremented(self, event_args, *args, **kwargs):
                self.count += 1
                return self.count

        some_obj = SomeDummyClass()
        factory = CloudProviderFactory()
        factory.middleware.add(some_obj)
  2. Middleware constructor add, which creates the middleware from the class and arguments each time a new middleware manager is made for a provider object (in this example, count only tracks calls within each provider):
        class SomeDummyClassWithArgs(object):
            def __init__(self, start, increment):
                self.count = start
                self.increment = increment

            @intercept(event_pattern="*", priority=2499)
            def return_incremented(self, event_args, *args, **kwargs):
                self.count += self.increment
                return self.count

        factory = CloudProviderFactory()
        factory.middleware.add_constructor(SomeDummyClassWithArgs, 0, 1)

  3. Adding events directly, which recreates a handler with the same callback function for each provider (bound methods remain bound to their original object):

        def return_hello(event_args, *args, **kwargs):
            return "hello"

        factory = CloudProviderFactory()
        factory.middleware.events.intercept("*", 2490, return_hello)

For all these scenarios, provider objects will inherit the events from the factory:

        aws = factory.create_provider(ProviderList.AWS, {})
        az = factory.create_provider(ProviderList.AZURE, {})
        # Both providers inherit all middleware/handlers added to the factory
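The three options above differ mainly in whether middleware state is shared or created per provider. A minimal, self-contained sketch of those semantics (`FactorySketch`, `ProviderSketch`, and `Counter` are illustrative stand-ins, not cloudbridge's real classes):

```python
# Illustrative stand-ins only; not cloudbridge's actual factory or provider.
class ProviderSketch(object):
    def __init__(self, middleware):
        self.middleware = middleware


class FactorySketch(object):
    def __init__(self):
        self.shared_middleware = []  # option 1: same object for every provider
        self.constructors = []       # option 2: fresh instance per provider

    def add(self, middleware):
        self.shared_middleware.append(middleware)

    def add_constructor(self, cls, *args):
        self.constructors.append((cls, args))

    def create_provider(self):
        # Middleware is snapshotted at creation time: option-1 objects are
        # shared as-is, option-2 classes are instantiated anew per provider.
        per_provider = [cls(*args) for cls, args in self.constructors]
        return ProviderSketch(list(self.shared_middleware) + per_provider)


class Counter(object):
    def __init__(self, start=0):
        self.count = start


factory = FactorySketch()
shared = Counter()
factory.add(shared)                  # option 1: shared across providers
factory.add_constructor(Counter, 0)  # option 2: one Counter per provider

p1 = factory.create_provider()
p2 = factory.create_provider()
```

With this sketch, `p1` and `p2` hold the identical option-1 object but distinct option-2 instances, which is the difference between the continuing count and the per-provider count described above.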


coveralls commented Feb 22, 2019

Pull Request Test Coverage Report for Build 1500

  • 361 of 377 (95.76%) changed or added relevant lines in 12 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+10.1%) to 88.855%

Changes missing coverage:
  cloudbridge/cloud/factory.py: 112 of 128 changed/added lines covered (87.5%)

Totals:
  Change from base Build 1470: +10.1%
  Covered Lines: 7292
  Relevant Lines: 7902

💛 - Coveralls

@nuwang (Contributor) left a comment


Let's discuss the details later on today. I think the idea of maintaining similarity to a local event manager when subscribing to factory level events is a good one, we just need to make it behave in a way that's unsurprising. Perhaps the one that's least surprising is to simply share the middleware object. Fundamentally, I guess we have to answer these questions:
a. Is factory-level middleware different from individual provider-level middleware, or should they be interchangeable? That is, can provider.middleware.add(middleware) behave the same as factory.middleware.add(middleware)?
b. How do we avoid middleware internal to providers accidentally subscribing to "global" events? Should we?
c. If the answer is no, how do provider-internal events relate to global external events? How would the chain of handlers work? Would a namespace work?

@@ -83,11 +83,11 @@ def debug_mode(self):


class BaseCloudProvider(CloudProvider):
def __init__(self, config):
def __init__(self, config, middleware_manager=None):
nuwang (Contributor):

This is great, better than sending in middleware as a list.

super(ParentMiddlewareManager, self).__init__(event_manager)
self.middleware_constructors = []

def add_constructor(self, middleware_class, *args):
nuwang (Contributor):

This could possibly be renamed to add_middleware_class, accepting the class in question, and both the constructor args and kwargs.
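A hedged sketch of that suggestion, capturing both positional and keyword constructor arguments; the manager class here is a minimal stand-in, not the real ParentMiddlewareManager:

```python
class MiddlewareManagerSketch(object):
    """Minimal stand-in illustrating the suggested add_middleware_class API."""

    def __init__(self):
        self.middleware_constructors = []

    def add_middleware_class(self, middleware_class, *args, **kwargs):
        # Record everything needed to build a fresh instance per provider
        self.middleware_constructors.append((middleware_class, args, kwargs))

    def instantiate_all(self):
        # Called when a new provider's middleware manager is created
        return [cls(*args, **kwargs)
                for cls, args, kwargs in self.middleware_constructors]


class SomeDummyClassWithArgs(object):
    def __init__(self, start, increment=1):
        self.count = start
        self.increment = increment


manager = MiddlewareManagerSketch()
manager.add_middleware_class(SomeDummyClassWithArgs, 10, increment=5)
instance = manager.instantiate_all()[0]
```

Storing `(class, args, kwargs)` tuples rather than pre-built objects is what keeps option-2 middleware state isolated per provider.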

new_handler = handler.__class__(handler.event_pattern,
handler.priority,
handler.callback)
new_manager.events.subscribe(new_handler)
nuwang (Contributor):

So this means that events added to the factory later won't be subscribed to previously created providers? This is ok, but I think we need to clarify how exactly factory event subscriptions are meant to work and what the intended use cases are, so we know for sure that this is the correct behaviour.

almahmoud (Collaborator, Author):

That was the original question that I had about this: whether we should make the provider object independent after it was created, or always treat it as a "child" of the factory. If the latter, it's definitely doable, but I would then suggest making the factory accessible through the provider as well.

def get_subscribed_handlers(self):
handlers = []
# Todo: Expose this better in pyeventsystem library
event_dict = self.events._SimpleEventDispatcher__events
nuwang (Contributor):

Not sure why we can't just access self.events?

almahmoud (Collaborator, Author):

self.events is the SimpleEventDispatcher, which currently doesn't have a property exposing the events dict. We definitely should add the property, but I was avoiding changing the other library for the time being, before we discuss this further.
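One possible shape for that missing property, sketched on a minimal stand-in for pyeventsystem's SimpleEventDispatcher (the name `subscribed_events` and the class body are assumptions, not the library's actual implementation):

```python
class DispatcherSketch(object):
    """Stand-in for SimpleEventDispatcher with a public view of its events."""

    def __init__(self):
        self.__events = {}  # event pattern -> list of subscribed handlers

    @property
    def subscribed_events(self):
        # A shallow copy avoids handing out the internal dict directly and
        # replaces name-mangled access like _SimpleEventDispatcher__events
        return dict(self.__events)

    def subscribe(self, pattern, handler):
        self.__events.setdefault(pattern, []).append(handler)


dispatcher = DispatcherSketch()
dispatcher.subscribe("provider.storage.*", lambda *a, **kw: None)
```

With such a property in the library, get_subscribed_handlers could iterate `self.events.subscribed_events` instead of reaching into the mangled attribute.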

@@ -27,9 +79,14 @@ class CloudProviderFactory(object):
"""

def __init__(self):
self._middleware = ParentMiddlewareManager()
nuwang (Contributor):

I think we need a different name, because ParentMiddlewareManager doesn't explain what it's doing. There is also the issue of namespacing. And finally, there's the issue of being able to intercept the behaviour of a particular cloud (say, OpenStack) and change behaviour for all classes of that cloud. One way would be to prefix the cloud name to the event. Another way would be to check the instance type of the sender: isinstance(event_args['sender'].provider, OpenStackProvider) or some such thing. I think namespacing would be good as long as isolation is maintained.
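The cloud-name-prefix idea could look roughly like this; the wildcard matching below is a simplified stand-in (stdlib fnmatch) for cloudbridge's actual event routing, used only to illustrate how a prefixed pattern isolates one cloud:

```python
import fnmatch


def handler_fires(pattern, event_name):
    # Simplified stand-in for real event dispatch: a handler registered
    # with `pattern` fires when an emitted event name matches it.
    return fnmatch.fnmatch(event_name, pattern)
```

A handler registered with the pattern "openstack.*" would then fire for an event named "openstack.compute.instances.create" but not for "aws.compute.instances.create", giving per-cloud interception without touching other providers.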

# Removing install step from add. Since this manager is meant to create
# other managers rather than run. This will also simplify separating
# handlers added through middleware from those subscribed directly
def add(self, middleware):
nuwang (Contributor), Feb 22, 2019:

I don't think we should provide dual semantics here - one for shared middleware objects, and another for independent middleware, just to keep it simple. I'm currently leaning in favour of the factory namespacing child event dispatchers, and forwarding them as necessary, so that we still maintain event isolation within a given provider. I think we probably just need to avoid a scenario where providers are unwittingly subscribing to events outside of the provider.


codecov bot commented Feb 22, 2019

Codecov Report

Merging #195 into master will decrease coverage by 0.07%.
The diff coverage is 94.42%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #195      +/-   ##
==========================================
- Coverage   88.03%   87.96%   -0.08%     
==========================================
  Files          36       36              
  Lines        7775     7902     +127     
  Branches      854      868      +14     
==========================================
+ Hits         6845     6951     +106     
- Misses        594      610      +16     
- Partials      336      341       +5
Impacted Files Coverage Δ
cloudbridge/cloud/providers/azure/services.py 88.2% <100%> (ø) ⬆️
cloudbridge/cloud/providers/aws/services.py 90.09% <100%> (ø) ⬆️
cloudbridge/cloud/providers/gcp/provider.py 91.95% <100%> (ø) ⬆️
cloudbridge/cloud/providers/aws/provider.py 96.22% <100%> (ø) ⬆️
cloudbridge/cloud/providers/mock/provider.py 100% <100%> (ø) ⬆️
cloudbridge/cloud/base/provider.py 88.23% <100%> (ø) ⬆️
cloudbridge/cloud/providers/openstack/provider.py 88.46% <100%> (ø) ⬆️
cloudbridge/cloud/base/services.py 88.23% <100%> (ø) ⬆️
cloudbridge/cloud/providers/openstack/services.py 82.08% <100%> (ø) ⬆️
cloudbridge/cloud/providers/gcp/services.py 83.31% <100%> (ø) ⬆️
... and 2 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ccafd0b...aa50481. Read the comment docs.
