
re-execute blackbox numpy notebook #496

Merged: 2 commits merged into pymc-devs:main from blackbox on Feb 16, 2023

Conversation

@OriolAbril (Member) commented Jan 10, 2023

re-execute blackbox numpy notebook, closes #268

@review-notebook-app (bot) commented:

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter notebooks.

@OriolAbril (Member, Author) commented:

cc @ricardoV94, not sure what is going on with the gradient comparison at the bottom. I assume I am computing a different quantity or there are normalization terms at play, but I can't really tell from the existing docs.

@ricardoV94 (Member) commented Jan 12, 2023

@OriolAbril the difference is that the gradient was being requested with respect to the untransformed parameters, which is no longer possible in versions > 4.0 (see the related discussion in pymc-devs/pymc#5443).

To obtain the values as they were computed before, you can run this snippet:

```python
# test the gradient that PyMC uses for the Normal log likelihood,
# requested with respect to the *untransformed* parameters (transform=None)
import pymc as pm

# x, sigma, data, mtrue and ctrue are defined earlier in the notebook
with pm.Model() as test_model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0, transform=None)
    c = pm.Uniform("c", lower=-10.0, upper=10.0, transform=None)
    pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)

gradfunc = test_model.compile_dlogp([m, c])
grad_vals_pymc = gradfunc({"m": mtrue, "c": ctrue})
print(f'Gradient returned by PyMC "Normal" distribution: {grad_vals_pymc}')
```
Gradient returned by PyMC "Normal" distribution: [8.32628482 2.02949635]
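
For context (an editorial gloss, not part of the original comment): PyMC v4+ differentiates the log density in the transformed (unconstrained) space. With the interval transform $z = \log\frac{m - a}{b - m}$ for $m \in (a, b)$, the transformed log density includes a Jacobian term, so the two gradients are related by the chain rule:

$$
\frac{\partial}{\partial z}\left[\log p\big(m(z)\big) + \log\left|\frac{dm}{dz}\right|\right]
= \frac{\partial \log p}{\partial m}\,\frac{dm}{dz} + \frac{\partial}{\partial z}\log\left|\frac{dm}{dz}\right|
$$

This is why passing `transform=None` above recovers the plain $\partial \log p / \partial m$ values.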

However, I think for teaching purposes it would be better to compare the model that uses the custom Op directly with the reference model, like this:

```python
# pymodel is the model defined above that uses the custom LogLikeWithGrad Op
ip = pymodel.initial_point()
print(f"Evaluating dlogp of model at point {ip}")
grad_vals_custom = pymodel.compile_dlogp()(ip)

# Compare with the gradient that PyMC uses for the Normal log likelihood
with pm.Model() as test_model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)
    pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)
grad_vals_pymc = test_model.compile_dlogp()(ip)

print(f'Gradient of model using a custom "LogLikeWithGrad": {grad_vals_custom}')
print(f'Gradient of model using a PyMC "Normal" distribution: {grad_vals_pymc}')
```
Evaluating dlogp of model at point {'m_interval__': array(0.), 'c_interval__': array(0.)}
Gradient of model using a custom "LogLikeWithGrad": [1286.63142412  250.14748176]
Gradient of model using a PyMC "Normal" distribution: [1286.63142412  250.14748176]
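
As a quick sanity check (added here for illustration, not from the original thread), the compiled gradient can be compared against a finite-difference estimate of the compiled logp at the same transformed point. Only `Model.compile_logp`, which is part of the public model API, is assumed:

```python
import numpy as np

# Approximate dlogp at `ip` by one-sided finite differences of the compiled
# logp, and compare with grad_vals_pymc from the snippet above.
logp_fn = test_model.compile_logp()
eps = 1e-6
fd_grad = []
for name in ("m_interval__", "c_interval__"):  # transformed variable names
    shifted = dict(ip)
    shifted[name] = shifted[name] + eps  # creates a new array; ip is untouched
    fd_grad.append((logp_fn(shifted) - logp_fn(ip)) / eps)
print(f"Finite-difference gradient: {np.asarray(fd_grad)}")
# Expected to agree with grad_vals_pymc up to O(eps) truncation error.
```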

@OriolAbril OriolAbril marked this pull request as ready for review January 13, 2023 20:12
@twiecki twiecki merged commit 14e044d into pymc-devs:main Feb 16, 2023
@OriolAbril OriolAbril deleted the blackbox branch February 16, 2023 09:22
Closes: black box external likelihood (numpy) (#268)