Caching mechanism for AcquisitionModel norm #1243

Open
paskino opened this issue Apr 25, 2024 · 5 comments

@paskino (Contributor) commented Apr 25, 2024

When the norm of an AcquisitionModel is requested, it is calculated and returned. However, every subsequent call recalculates the norm, because the value is not cached. This can lead to a significant increase in computation time.

I made a thin wrapper for the SIRF MR AcquisitionModel, linked below.

https://github.com/paskino/SIRF-Contribs/blob/b8e542d4f7af1ed1e47eaed8517d4be0517aca17/src/notebooks/utils.py#L147-L162

Python already provides a caching mechanism:
https://docs.python.org/3/library/functools.html#functools.lru_cache
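
For illustration, a minimal sketch of such a wrapper using exactly that mechanism (the class name is hypothetical; it only assumes the wrapped object exposes a norm() method, as a SIRF AcquisitionModel does):

import functools

class CachedNormModel:
    """Thin wrapper that caches the norm of an acquisition model (illustrative only)."""

    def __init__(self, model):
        self._model = model

    def __getattr__(self, name):
        # delegate everything else to the wrapped acquisition model
        return getattr(self._model, name)

    @functools.lru_cache(maxsize=None)
    def norm(self):
        # computed on the first call; subsequent calls return the cached value
        return self._model.norm()

Note that nothing ever invalidates this cache, so the wrapper has to be recreated if the underlying model changes.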

@KrisThielemans (Member) commented:

Ideally, this caching happens at the C++ level. However, the cache should be invalidated whenever a set_ function is called. The easiest way to check that is probably to test whether set_up() has been called. (In STIR, we have an already_set_up member for this.)

Unfortunately, anything with caching gets complicated in a multi-threaded context. We don't seem to have any multi-threading in SIRF though, so we probably don't care yet.

So, something like

bool already_set_up = false;
// cached value of the norm; negative means it hasn't been computed yet
mutable double cached_norm = -1.;

double norm() const
{
  if (cached_norm < 0)
    cached_norm = norm_calculation();  // placeholder for the actual computation
  return cached_norm;
}

@evgueni-ovtchinnikov (Contributor) commented:

I do not like the idea, I am afraid - we would need to invalidate the cache in every non-const method.

@KrisThielemans (Member) commented:

On the C++ side, we know what the non-const methods do, so we know when we have to invalidate (and that would just be when the underlying model changes). A bit painful, but not too much, I think.

On the Python side, const doesn't exist. People can mess up member variables as much as they want (people are supposed to be well-behaved!). One more reason not to do this in Python...

@KrisThielemans (Member) commented:

For PET, there are maybe 8 methods that would invalidate the cache (changing the additive/background term doesn't change the norm, but that's fine).

So, the strategy would be to

  • in non-const methods, set _already_set_up=false
  • in set_up(), set _already_set_up=true and _norm_cached=false
  • in most const methods, check _already_set_up==true
  • in norm(), if _already_set_up==false, throw an error; otherwise, if _norm_cached==true, return the cached value, else compute it, store it and set _norm_cached=true

Not painless :-( but far safer for the set_up part anyway (i.e. we need that to prevent crashes, I think). A rough sketch of this logic is below.
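
A minimal sketch of the flag logic above, written in Python purely for illustration (the actual implementation would live in C++, and all names below are illustrative):

class AcquisitionModelSketch:
    def __init__(self):
        self._already_set_up = False
        self._norm_cached = False
        self._cached_norm = 0.0

    def set_something(self, value):
        # stand-in for any non-const set_ method: invalidate the set-up state
        self._already_set_up = False
        self._something = value

    def set_up(self):
        # (re)build internal state, then allow cached results again
        self._already_set_up = True
        self._norm_cached = False

    def norm(self):
        if not self._already_set_up:
            raise RuntimeError("set_up() must be called before norm()")
        if not self._norm_cached:
            self._cached_norm = self._compute_norm()
            self._norm_cached = True
        return self._cached_norm

    def _compute_norm(self):
        return 1.0  # dummy value standing in for the expensive norm computation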

@evgueni-ovtchinnikov (Contributor) commented:

What if the user is after a more accurate computation of the norm, and just wants to re-run it with a larger number of iterations? (Or with a different subset, or a different number of subsets.)

I believe it is the responsibility of the user's code to decide whether to use the already-computed norm or to compute it again.
