Currently, the normalization factor for constraint priors is estimated via a fairly simple Monte Carlo integration that stops once a target number of accepted samples has been produced (roughly as in the sketch after the list below). This has two issues:
1. If the constraint only removes a small part of the unconstrained volume, the target number of accepted samples is reached comparatively fast. The total number of proposed samples will therefore be small, leading to a larger variance in the integral estimate than for constraints that remove, say, half of the prior volume.
2. On the flip side, if the constraint removes almost all of the prior volume, the integration routine takes a long time to reach the target number of accepted samples. This case is somewhat artificial, since for such priors a different parametrization should probably be used to improve sampling efficiency anyway, but especially in very high dimensions the prior volume removed by a constraint can be significantly larger than naively expected.
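For context, the current scheme is, as far as I understand it, roughly equivalent to the following sketch (the `sample_prior` and `constraint` callables, the target of 1000 accepted samples, and the toy constraint are placeholders for illustration, not the actual implementation):

```python
import numpy as np

def estimate_normalization(sample_prior, constraint, n_accept_target=1000, rng=None):
    """Draw from the unconstrained prior until a fixed number of accepted
    (constraint-satisfying) samples is reached, then estimate the
    normalization factor as the acceptance fraction."""
    rng = rng if rng is not None else np.random.default_rng()
    n_accepted = 0
    n_proposed = 0
    while n_accepted < n_accept_target:
        x = sample_prior(rng)      # one draw from the unconstrained prior
        n_proposed += 1
        if constraint(x):          # True if the sample satisfies the constraint
            n_accepted += 1
    return n_accepted / n_proposed

# Toy example: uniform prior on [0, 1]^2, constraint x0 + x1 < 1 (true factor: 0.5).
factor = estimate_normalization(lambda rng: rng.uniform(size=2),
                                lambda x: x[0] + x[1] < 1.0)
print(factor)
```

Both issues follow directly from the stopping rule: `n_proposed` ends up being whatever it happens to be once `n_accept_target` acceptances have been collected.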
For these reasons, I propose switching to an off-the-shelf stochastic integration routine that also reports the integration error, for instance the `qmc_quad` routine implemented in scipy. Alternatively, one could add a `max_iter` argument, or similar, to cap excessive runtimes in case 2.
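For illustration, a `qmc_quad` call for a toy version of the problem could look roughly like this (the uniform prior on the unit square and the constraint `x0 + x1 < 1` are made up for the example; recent scipy versions pass the quadrature points with shape `(d, n_points)`):

```python
from scipy.integrate import qmc_quad

def integrand(x):
    # x has shape (d, n_points); return the indicator of the constraint
    # (times the uniform prior density, which is 1 on the unit square)
    # evaluated at each point, shape (n_points,).
    return (x[0] + x[1] < 1.0).astype(float)

res = qmc_quad(integrand, a=[0.0, 0.0], b=[1.0, 1.0],
               n_estimates=8, n_points=4096)
print(res.integral, res.standard_error)  # normalization factor and its error estimate
```

The reported `standard_error` could then be checked against a tolerance, with the number of points increased until the target accuracy is met.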
If such changes are up for consideration, I would go ahead with an implementation.