
Add PC prior for Student-T degrees of freedom #6827

Closed

Conversation

@bwengals (Contributor) commented Jul 13, 2023

What is this PR about?
This PR adds a penalized complexity (PC) prior for the Student-T degrees-of-freedom parameter. It's useful in models where the likelihood would be normal, but you need some robustness, so you switch to a Student-T likelihood. It's already implemented in INLA.

See discussions here:

This is related to an issue I opened in pymc-examples: pymc-devs/pymc-examples#558. This is probably the only PC prior I've seen that benefits from having its own special class, rather than being a transformation of existing distributions.

The code is working, but the PR is WIP; I'm hoping to start a discussion on it. Would this be nice to have in PyMC? In pymc-experimental? Neither?
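For orientation, here is a minimal numerical sketch of the PC prior density (following the construction in Simpson et al. 2017, with the normal as the base model reached as ν → ∞). This is illustrative only, not the PR's implementation; `lam` is a user-chosen penalization rate, and the quadrature and finite-difference steps stand in for the kind of numerical approximations discussed later in this thread.

```python
# Illustrative sketch of the PC prior for the Student-T dof, assuming the
# standardized (unit-variance) Student-T, hence nu > 2, as in INLA.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kld_t_vs_normal(nu):
    """KL(t_nu || N(0, 1)) by quadrature, for the unit-variance t."""
    t = stats.t(df=nu, scale=np.sqrt((nu - 2) / nu))
    f = lambda x: t.pdf(x) * (t.logpdf(x) - stats.norm.logpdf(x))
    return quad(f, -np.inf, np.inf)[0]

def pc_prior_logpdf(nu, lam=1.0, eps=1e-3):
    """log pi(nu) = log(lam) - lam * d(nu) + log|d'(nu)|,
    with distance d(nu) = sqrt(2 * KLD(nu)); d' by central differences."""
    d = lambda v: np.sqrt(2.0 * kld_t_vs_normal(v))
    d_prime = (d(nu + eps) - d(nu - eps)) / (2.0 * eps)
    return np.log(lam) - lam * d(nu) + np.log(np.abs(d_prime))
```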

@ricardoV94 (Member) commented Jul 13, 2023

I think it's a perfect fit for pymc-experimental, and it sounds very interesting.

@bwengals (Contributor, Author) commented

I think it's a bit less cut-and-dried, since I'm not sure it fits any of the 4 reasons here well. But with pymc-experimental as a staging ground for PyMC code, then I definitely think it makes sense there.

@ricardoV94 (Member) commented

> But with pymc-experimental as a staging ground for PyMC code, then I definitely think it makes sense there.

Yes. The R2D2M2 prior also went there first.

@twiecki (Member) commented Jul 17, 2023

It sounds, though, like it's pretty well mapped out and thought through, and this is just the implementation, no?

@ricardoV94 (Member) commented

Just the two numerical approximations involved would flag it as "experimental" for me.

In my experience, it's the kind of thing that really benefits from being tried out, to build an understanding of whether it's precise/stable enough or not.

@ricardoV94 (Member) commented

Checking the idea itself, it also sounds quite "experimental". Anyway, let me know what you decide so that review comments can be left.

Among other things, it seems useful to add a default transform to (2, ∞) instead of relying on the logp for the constraint.
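A sketch of what such a default transform buys, assuming current PyMC's `Truncated` and `rvs_to_transforms` APIs (not code from this PR); the truncated Gamma here is only a stand-in for the proposed PC prior class:

```python
import pymc as pm

with pm.Model() as m:
    # Truncating at 2 both restricts the support and attaches an interval
    # transform, so the sampler moves on an unconstrained scale instead of
    # relying on the logp returning -inf below the bound.
    nu = pm.Truncated("nu", pm.Gamma.dist(alpha=2, beta=0.1), lower=2)

print(m.rvs_to_transforms[nu])  # the automatically assigned interval transform
```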

@bwengals (Contributor, Author) commented

Good call on the constraint, I'll add that!

I also think the idea is pretty well mapped out (regardless of my implementation). It looks like it's been in INLA for ~7 years now, and Stan recommends it even though they don't implement it. For a couple of comparisons: the R2D2M2 paper has 4 citations, while the PC prior paper has about 900. I don't know if the ZeroSumNormal idea has been written up anywhere specific (at least in a hierarchical modeling context), but I've seen it in the Stan user guide in section 31.5. So, comparatively, the backing in the literature is pretty strong.

I am still on the fence, though, about whether it should go here or in pymc-experimental, so I'll leave this up for a few days and hope more people weigh in! It's not in widespread use outside of INLA, and I don't know whether that's because people have tried it and Gamma(2, 0.1) just works better, or because it hasn't been implemented and tested widely.

Setting a prior on the degrees of freedom of a Student-T likelihood is tricky and a common use case, though, so I'm curious whether it would be helpful here.
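To make that use case concrete, a minimal robust-regression sketch with toy data and the common Gamma(2, 0.1) default (illustrative, not code from this PR):

```python
import numpy as np
import pymc as pm

# Toy data with a few outliers -- the situation that motivates swapping
# a normal likelihood for a Student-T one.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y_obs = 2 * x + 0.1 * rng.normal(size=50)
y_obs[::10] += 3

with pm.Model():
    beta = pm.Normal("beta", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)
    # The common default prior on the dof; the PC prior in this PR
    # would be a drop-in replacement for this line.
    nu = pm.Gamma("nu", alpha=2, beta=0.1)
    pm.StudentT("y", nu=nu, mu=beta * x, sigma=sigma, observed=y_obs)
```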

@twiecki (Member) commented Jul 18, 2023

My vote goes to PyMC rather than pymc-experimental. I don't really foresee lots of iteration on this, and I think that, if the implementation is correct, it's a nice addition; we'd also save ourselves porting it over later or forgetting about it. I don't think we've moved much from experimental into PyMC proper so far.

But I don't feel super strongly about it either.

@bwengals (Contributor, Author) commented

I talked a bit with @ricardoV94 offline, and I think we agree that it's probably a good strategy overall if most new features start in experimental. That has to hold regardless of their projected usefulness, though, or else it will never be clear where something should go, and then someone has to play goalkeeper, which isn't fun. But pymc vs. pymc-experimental is a larger issue than this PR and might be worth revisiting separately.
