
Enhancement: Move caching and other implementation details into Implementation / SPI #248

lprimak opened this issue Jul 26, 2024 · 6 comments

@lprimak

lprimak commented Jul 26, 2024

Currently, BeanELResolver (and possibly other classes) has implementation details, such as caching, built into the API.
There are a number of issues with this approach, including the inability to change these implementation details
without breaking existing API signatures.
It also makes it impossible to properly invalidate the cache, since invalidation depends on integration with frameworks and application servers.

I propose creating an SPI and moving the implementation details out of the API and into the EL implementation.

I also propose adding invalidation methods to the default caching implementation, such as a clearProperties(ClassLoader) method
that removes all cached beans belonging to a specific ClassLoader.
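
As a rough illustration, such an SPI could look something like this (the ElCache name and method signatures are hypothetical, not taken from any existing API):

```java
import java.util.function.Function;

// Hypothetical cache SPI sketch -- names and signatures are illustrative only.
public interface ElCache<K, V> {

    // Returns the cached value for the key, computing and storing it on a miss.
    V get(K key, Function<K, V> mappingFunction);

    // Removes a single entry.
    void remove(K key);

    // Removes every entry associated with classes loaded by the given
    // ClassLoader, so a container can evict per-application state on undeploy.
    void clearProperties(ClassLoader classLoader);
}
```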

Supersedes #218
Supersedes #214
Supersedes #215

@markt-asf
Contributor

Thanks for moving this into a single issue. It is easier to track than three separate issues for essentially the same problem.

The original issues discussed caching of the following:

  • the result of the expression factory lookup;
  • the result of looking up properties for a Java bean.

Caching also exists in the current Jakarta EL API in the ImportHandler.

A number of implementations (at least WildFly, JBoss EAP and Tomcat - there may well be others) have their own variant of the EL API at least in part so they can use an alternative cache implementation.

There appears to be general consensus that making the cache implementations pluggable would be a good thing.

There are differing opinions on how the caches should be implemented. The pluggable approach largely solves that by allowing different approaches to caching underneath the EL API.

As we start designing this solution for EL 6.1, I think we first need to discuss and agree on answers to the following (a rough sketch of one possible pluggable arrangement follows the list):

  1. Is a solution that can also be applied to EL 6.0 and released in a 6.0.x release (i.e. no public API changes) essential, nice to have or unnecessary?

  2. Should the selection of the cache implementation be under the control of the EL implementation (the container in a Jakarta EE environment) or the end user (the application in a Jakarta EE environment)?

  3. Are there other locations in the EL API where caching is being used and/or should be used that we want to consider as part of this issue?

  4. Should the EL API provide default cache implementations that the EL implementation can then override if desired or should the EL implementation be required to provide cache implementations?

  5. Anything else before we start thinking about designing the solution?
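
For illustration only, here is one way a pluggable arrangement could work: ServiceLoader discovery performed by the EL implementation, with a default factory shipped in the API as a fallback. All names below are hypothetical.

```java
import java.util.ServiceLoader;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical SPI type that a cache provider would implement.
interface ElCacheFactory {
    <K, V> ConcurrentMap<K, V> createCache(String name);
}

// Default implementation the EL API could ship, used when no provider is registered.
class DefaultElCacheFactory implements ElCacheFactory {
    @Override
    public <K, V> ConcurrentMap<K, V> createCache(String name) {
        return new ConcurrentHashMap<>();
    }
}

final class ElCacheFactoryLocator {
    // The EL implementation (not the application) performs the lookup,
    // keeping selection under container control (question 2), while the
    // API-provided default covers question 4.
    static ElCacheFactory load(ClassLoader cl) {
        return ServiceLoader.load(ElCacheFactory.class, cl)
                .findFirst()
                .orElseGet(DefaultElCacheFactory::new);
    }
}
```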

@markt-asf
Contributor

My current thoughts:

  1. Is a solution that can also be applied to EL 6.0 and released in a 6.0.x release (i.e. no public API changes) essential, nice to have or unnecessary?

Nice to have but I'd prioritise the 'right' solution for 6.1 onwards over a not quite as good solution that could be back-ported to 6.0.

  2. Should the selection of the cache implementation be under the control of the EL implementation (the container in a Jakarta EE environment) or the end user (the application in a Jakarta EE environment)?

The EL implementation, which can then - if it wishes - delegate to the end user. I am concerned that delegating cache selection to the end user (the application in a Jakarta EE container) will create a whole new set of issues around how to select the implementation in a Jakarta EE container when multiple applications want - or worse, require - different implementations.

  3. Are there other locations in the EL API where caching is being used and/or should be used that we want to consider as part of this issue?

None I am aware of.

  4. Should the EL API provide default cache implementations that the EL implementation can then override if desired or should the EL implementation be required to provide cache implementations?

Yes. We shouldn't force EL implementations to re-implement caching if they are happy with the current defaults provided by the EL API.

  5. Anything else before we start thinking about designing the solution?

Nothing comes to mind so far.

@OndroMih

OndroMih commented Jul 30, 2024

  1. Focus on 6.1. Implementations already have a workaround for 6.0.x: patching the API.
  2. Focus on an SPI for implementations. There is little need for apps to provide a caching mechanism.
  3. Other locations where caching is used? I don't know of any.
  4. If we were starting with a green field, I would avoid having any caching in the API. But since it's already there, it's OK to continue providing it for implementations that don't provide their own. It would be nice to extract the caching mechanism into a separate class that implementations can use and decorate if they need to (e.g. a separate cache for different classloaders).
  5. The SPI should be OSGi friendly (where ServiceLoader is not applicable) and should work with custom classloaders (e.g. the current thread's context classloader). As an example, we can follow the MicroProfile Config resolvers: in OSGi the global resolver can be set via a static setter, with ServiceLoader used as a fallback (see the sketch below).
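
A rough sketch of the static-setter-plus-ServiceLoader-fallback pattern mentioned in point 5, modelled loosely on MicroProfile Config's ConfigProviderResolver (the ElCacheResolver name is hypothetical):

```java
import java.util.ServiceLoader;

// Hypothetical resolver following the MicroProfile Config pattern: OSGi
// containers register an instance via the static setter, everything else
// falls back to ServiceLoader discovery on the context classloader.
public abstract class ElCacheResolver {

    private static volatile ElCacheResolver instance;

    // Explicit registration for environments (e.g. OSGi) where
    // ServiceLoader discovery is not applicable.
    public static void setInstance(ElCacheResolver resolver) {
        instance = resolver;
    }

    public static ElCacheResolver instance() {
        ElCacheResolver result = instance;
        if (result == null) {
            // Use the thread context classloader so custom classloader
            // setups can still contribute a provider.
            ClassLoader cl = Thread.currentThread().getContextClassLoader();
            result = ServiceLoader.load(ElCacheResolver.class, cl)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException(
                            "No ElCacheResolver implementation found"));
            instance = result;
        }
        return result;
    }
}
```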

@markt-asf
Contributor

Any more thoughts on these questions before I start thinking about an API for this?

@lprimak
Author

lprimak commented Aug 21, 2024

I'm good with any API that allows clearing the cache by a key's ClassLoader, thank you!
Everything else is described here pretty well!

@markt-asf
Contributor

I don't think the EL API is going to include explicit cache clearing. Individual implementations may well do.
