mock MMM posterior with prior #582
Conversation
I have only two tests failing on my side, because the values are now on a different scale. However, the tests run much faster 🎉 ~22 min -> 17 min -> 11 min
I've mocked the posterior with a prior. I've removed the two tests that were based on a "good" fit to the input y:
If there are any alternative suggestions, let me know. Would love to hear your thoughts @ricardoV94. Working on the actual fit tests, which will be marked as slow. Are there any criteria that stick out in your mind to test?
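For reference, a minimal sketch of the mocking pattern, assuming a recent PyMC 5.x; the function name and defaults are illustrative, not the PR's actual helper:

```python
import pymc as pm


def mock_fit_with_prior(model: pm.Model, draws: int = 500, seed: int = 42):
    """Skip MCMC: sample the prior and relabel it as the posterior.

    Tests that only need a populated InferenceData run in seconds instead
    of minutes, at the cost of "posterior" values living on the prior's scale.
    """
    with model:
        idata = pm.sample_prior_predictive(draws=draws, random_seed=seed)
    # Reuse the prior group as a stand-in posterior group.
    idata.add_groups(posterior=idata.prior)
    return idata
```

The scale caveat is exactly why some value-based assertions stop passing: the mocked "posterior" was never conditioned on y.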
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #582      +/-   ##
==========================================
- Coverage   92.57%   91.11%    -1.47%
==========================================
  Files          24       21        -3
  Lines        2520     2172      -348
==========================================
- Hits         2333     1979      -354
- Misses        187      193        +6

☔ View full report in Codecov by Sentry.
# Original scale constraint
assert np.all(posterior_predictive_mean >= 0)

# Domain kept close
lower, upper = np.quantile(a=posterior_predictive_mean, q=[0.025, 0.975], axis=0)
assert lower < toy_y.mean() < upper
Does it make sense for these tests not to work with the changes? They seem like rather useful checks
Commented on this above. I will add these to the tests for the actual fit. Using the normal likelihood doesn't guarantee the prior will meet these constraints; they were geared toward a fitted model, not a prior predictive.
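A toy illustration of that point (a standalone example, not the MMM model itself): with a Normal likelihood, prior predictive draws can easily be negative, so a non-negativity assertion only makes sense after fitting.

```python
import numpy as np
import pymc as pm

# Illustrative toy model: Normal prior on the mean, Normal likelihood.
with pm.Model() as toy_model:
    mu = pm.Normal("mu", mu=0.5, sigma=1.0)
    pm.Normal("y", mu=mu, sigma=1.0, observed=np.zeros(10))
    prior = pm.sample_prior_predictive(draws=200, random_seed=1)

y_draws = prior.prior_predictive["y"].to_numpy()
# A sizable fraction of prior predictive draws is negative, so
# `assert np.all(y_draws >= 0)` would fail before any fitting.
print((y_draws < 0).mean())
```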
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.3.0 → v0.3.2](astral-sh/ruff-pre-commit@v0.3.0...v0.3.2)
- [github.com/pre-commit/mirrors-mypy: v1.8.0 → v1.9.0](pre-commit/mirrors-mypy@v1.8.0...v1.9.0)
- [github.com/nbQA-dev/nbQA: 1.8.3 → 1.8.4](nbQA-dev/nbQA@1.8.3...1.8.4)
Co-authored-by: Colt Allen <[email protected]>
I've added some slow tests to the MMM test suite that build on what already existed, plus some additional checks like parameters being close to actuals. Let me know what you think of the quality of these tests! @juanitorduz
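A hedged sketch of what such a slow recovery test could look like; `fitted_idata` and the true value are hypothetical placeholders, not the suite's actual fixtures:

```python
import numpy as np
import pytest


@pytest.mark.slow
def test_recovers_intercept(fitted_idata):
    # `fitted_idata` is a hypothetical fixture returning the InferenceData
    # from an actual (slow) model fit on simulated data.
    true_intercept = 2.0  # value assumed when simulating toy_y
    posterior_mean = fitted_idata.posterior["intercept"].mean().item()
    # Loose relative tolerance: recovery from noisy toy data, not exactness.
    np.testing.assert_allclose(posterior_mean, true_intercept, rtol=0.2)
```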
Probably you wanted
Fine if we just squash? Or shall I retry?
Don't retry :D
Only 1 test is failing when trying to recover the original parameters. I'm toying around with it, as I think the scaling of the X variables might be messing up the recovery. Maybe we can save this for a future issue, given the speed gain from the non-slow tests. Wonder if you have any thoughts @juanitorduz. Any suggestions would be helpful.
Test issue might be solved here @juanitorduz
Nice! I just left minor comments :) Can you please resolve the small conflicts with the main branch so that we can merge this one? :)
The test,
No idea why; it seems like a meaningful failure. Better to open an issue about it.
It is open: #620. I checked, and it seems related to a pymc 5.12 -> 5.13 change (I tested locally and it started failing after the release).
You confirmed it passes on main with PyMC 5.12 and fails with 5.13? Just to rule out changes in the codebase here as well.
For context:
def random_mask(df: pd.DataFrame, mask_value: float = 0.0) -> pd.DataFrame:
    shape = df.shape
    mask = rng.choice([0, 1], size=shape, p=[0.75, 0.25])
    return df.mul(mask)


@pytest.fixture(scope="module")
def masked_toy_X(toy_X) -> pd.DataFrame:
    return toy_X.set_index("date").pipe(random_mask).reset_index()
The rationale here is that since all of the toy_X columns are pretty much white noise, the generated y is also pretty much white noise, and the model doesn't fit well.
Here there are some clear stops in toy_X, making y notably depend on the covariates.
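A small self-contained sketch of that intuition (all names and numbers here are illustrative, not the suite's actual fixtures): masking covariates to zero most of the time creates on/off contrasts that make y visibly track X, which is what gives the fit something to recover.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

n = 100
toy_X = pd.DataFrame({"x1": rng.uniform(size=n), "x2": rng.uniform(size=n)})

# Zero out ~75% of entries, mirroring random_mask above.
mask = rng.choice([0, 1], size=toy_X.shape, p=[0.75, 0.25])
masked_X = toy_X.mul(mask)

betas = np.array([3.0, 2.0])
y = masked_X.to_numpy() @ betas + rng.normal(scale=0.1, size=n)
# Periods where the mask is zero pin y near the noise floor, giving the
# regression clear contrasts to identify the betas; with unmasked
# white-noise columns, y carries no such structure.
```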
📚 Documentation preview 📚: https://pymc-marketing--582.org.readthedocs.build/en/582/