[Feature Request] 1) More details in render_model (esp. for pyro.params); 2) Returning dictionary from render_model #3023
Comments
Hi @nipunbatra, regarding

+1 for having an optional
I think another thing to potentially consider while addressing this issue could be LaTeX support in renders. I believe Graphviz doesn't support it, but Daft-PGM does. An older issue #2980 mentioned the possibility of using Graphviz for layout and then plotting with Daft-PGM. That would increase overhead, so perhaps it could be left as an example for advanced users who run
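Roughly, that could look like the sketch below, assuming a recent daft release and pygraphviz for the Graphviz "dot" layout; the two-node toy graph and all labels are made up purely for illustration.

    # Rough sketch: compute node positions with Graphviz (via networkx),
    # then draw with daft so node labels can use LaTeX.
    import daft
    import networkx as nx

    edges = [("mu", "obs"), ("sd", "obs")]          # toy dependency structure
    labels = {"mu": r"$\mu$", "sd": r"$\sigma$", "obs": r"$y_n$"}

    G = nx.DiGraph(edges)
    pos = nx.nx_agraph.graphviz_layout(G, prog="dot")   # positions in points
    pos = {k: (x / 72.0, y / 72.0) for k, (x, y) in pos.items()}  # rough rescale

    pgm = daft.PGM()
    for name, (x, y) in pos.items():
        pgm.add_node(name, labels[name], x, y, observed=(name == "obs"))
    for parent, child in edges:
        pgm.add_edge(parent, child)
    pgm.render()
    pgm.savefig("layout_with_daft.png")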
Hi @fritzo and team, my name is Karm. I am working with Prof. @nipunbatra and lab colleague @patel-zeel.

MLE Model 1

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.distributions import constraints
    from pyro.infer.inspect import get_model_relations, render_model

    def model_mle_1(data):
        mu = pyro.param('mu', torch.tensor(0.), constraint=constraints.unit_interval)
        sd = pyro.param('sd', torch.tensor(1.), constraint=constraints.greater_than_eq(0))
        with pyro.plate('plate_data', len(data)):
            pyro.sample('obs', dist.Normal(mu, sd), obs=data)

    data = torch.tensor([1., 2., 3.])
    get_model_relations(model_mle_1, model_args=(data,))
    # {'sample_sample': {'obs': []},

    render_model(model_mle_1, model_args=(data,), render_distributions=True)
    render_model(model_mle_1, model_args=(data,), render_distributions=True, render_params=True)

MAP Model 1

    def model_map_1(data):
        k1 = pyro.param('k1', torch.tensor(1.))
        mu = pyro.sample('mu', dist.Normal(0, k1))
        sd = pyro.sample('sd', dist.LogNormal(mu, k1))
        with pyro.plate('plate_data', len(data)):
            pyro.sample('obs', dist.Normal(mu, sd), obs=data)

    data = torch.tensor([1., 2., 3.])
    get_model_relations(model_map_1, model_args=(data,))
    # {'sample_sample': {'mu': [], 'sd': ['mu'], 'obs': ['sd', 'mu']},

    render_model(model_map_1, model_args=(data,), render_distributions=True)
    render_model(model_map_1, model_args=(data,), render_distributions=True, render_params=True)

MAP Model 2

    def model_map_2(data):
        t = pyro.param('t', torch.tensor(1.), constraints.integer)
        a = pyro.sample('a', dist.Bernoulli(t))
        b = pyro.param('b', torch.tensor(2.))
        with pyro.plate('plate_data', len(data)):
            pyro.sample('obs', dist.Beta(a, b), obs=data)

    data = torch.tensor([1., 2., 3.])
    get_model_relations(model_map_2, model_args=(data,))
    # {'sample_sample': {'mu': [], 'sd': ['mu'], 'obs': ['sd', 'mu']},

    render_model(model_map_2, model_args=(data,), render_distributions=True)
    render_model(model_map_2, model_args=(data,), render_distributions=True, render_params=True)

Changes made in code

Broadly, I made the following changes in pyro.infer.inspect.py.
    def _pyro_post_param(self, msg):
        if msg["type"] == "param":
            provenance = frozenset({msg["name"]})  # track only direct dependencies
            value = detach_provenance(msg["value"])
            msg["value"] = ProvenanceTensor(value, provenance)

Then, to add values for params in the node data, I added an additional key:

    node_data[param] = {
        "is_observed": False,
        "distribution": None,
        "constraint": constraint,
    }

Further, @fritzo, please give your feedback on this. Can I make a PR if the dictionary and graph meet your expectations?
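To illustrate how per-node metadata of this shape could drive the rendering difference between params and sample sites, here is a small hand-written sketch; the node_data contents and the label format below are invented for illustration and are not real Pyro output.

    # Hypothetical sketch: params (no distribution) get a plain label,
    # optionally annotated with their constraint, while sample sites get
    # a "name ~ Distribution" label. node_data here is made up.
    node_data = {
        "mu": {"is_observed": False, "distribution": None,
               "constraint": "unit_interval"},
        "obs": {"is_observed": True, "distribution": "Normal",
                "constraint": None},
    }

    def node_label(name, info):
        if info["distribution"] is None:  # a pyro.param
            suffix = f" : {info['constraint']}" if info["constraint"] else ""
            return name + suffix
        return f"{name} ~ {info['distribution']}"  # a pyro.sample site

    for name, info in node_data.items():
        print(node_label(name, info))
    # mu : unit_interval
    # obs ~ Normal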
@karm216 this looks great, we'd love a PR contributing this feature! Note there are rigorous tests for
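For what it's worth, the kind of test such a PR might add could look roughly like the sketch below; it assumes model_mle_1 from the comment above is in scope and only asserts the 'sample_sample' fragment quoted earlier in this thread.

    import torch
    from pyro.infer.inspect import get_model_relations

    def test_model_mle_1_relations():
        # model_mle_1 is assumed to be defined/imported as in the comment above
        data = torch.tensor([1.0, 2.0, 3.0])
        relations = get_model_relations(model_mle_1, model_args=(data,))
        assert relations["sample_sample"] == {"obs": []}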
Hi,
From the MLE-MAP tutorial, we have the following models:
MLE model
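For reference, the tutorial's MLE model treats the coin's fairness as a learnable parameter; a rough sketch (reconstructed from the tutorial, not the exact code) is:

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.distributions import constraints

    # Sketch of the MLE model: latent_fairness is a pyro.param
    # (a point estimate), not a latent random variable.
    def model_mle(data):
        f = pyro.param("latent_fairness", torch.tensor(0.5),
                       constraint=constraints.unit_interval)
        with pyro.plate("data", data.size(0)):
            pyro.sample("obs", dist.Bernoulli(f), obs=data)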
If we render this, we get something like the following image
MAP model
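The MAP model instead puts a Beta prior on the fairness, so it becomes a sample site; again a rough sketch rather than the exact tutorial code (reusing the imports above):

    # Sketch of the MAP model: latent_fairness now has a Beta prior
    # and shows up as a sample site rather than a param.
    def model_map(data):
        f = pyro.sample("latent_fairness", dist.Beta(10.0, 10.0))
        with pyro.plate("data", data.size(0)):
            pyro.sample("obs", dist.Bernoulli(f), obs=data)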
If we render this, we get something like the following image
Coin toss graphical model images from the popular Maths for ML book
This is from Figure 8.10
We'd expect our MLE model render to look like 8.10 b) and our MAP model to look like 8.10 c)
So, when we have latent_fairness as a parameter, it should perhaps just be written as latent_fairness, and under the MAP model it should be parameterised by the Beta distribution. From the pyro render of the MLE model, it is not easily visible how observations are related to latent_fairness.

Feature Requests

So, I have two questions/requests:

1) pyro.params should also show in renders. The difference in the renders between pyro.sample and pyro.parameter would be the associated distribution (and thus hyperparams) in pyro.sample.
2) Returning a dictionary from render_model? One can then use that dictionary to create their own graphical models, for example using tikz-bayesnet; the code below reproduces Figure 8.10 from the MML book shown above.
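The tikz-bayesnet code referred to above was in a collapsed block and is not reproduced here; as a rough stand-in, the Python sketch below shows how such a returned dictionary could be turned into a graph description. The 'sample_sample' structure mirrors the output quoted earlier in this thread, while the 'observed' key is an assumption added only for illustration.

    from graphviz import Digraph

    relations = {
        "sample_sample": {"mu": [], "sd": ["mu"], "obs": ["sd", "mu"]},
        "observed": ["obs"],  # assumed field, not confirmed Pyro output
    }

    dot = Digraph("model")
    for name in relations["sample_sample"]:
        if name in relations["observed"]:
            dot.node(name, name, style="filled", fillcolor="lightgray")
        else:
            dot.node(name, name)
    for child, parents in relations["sample_sample"].items():
        for parent in parents:
            dot.edge(parent, child)

    print(dot.source)  # DOT text; one could translate this by hand to tikz-bayesnet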