
Multi-level modeling support from AMRs #614

Open
djinnome opened this issue Sep 25, 2024 · 2 comments · May be fixed by #620
@djinnome (Contributor)
Ben Gyori has added support for expressions in distribution parameters, so we can now generate multi-level models. We need to topologically sort the expressions so that they are evaluated in dependency order.
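The required ordering can be sketched with the standard library's graphlib. This is an illustrative sketch only: the deps mapping and variable names are assumptions, not the actual sort_mira_dependencies implementation.

```python
# Illustrative sketch: a parameter whose distribution expression references
# another parameter must be compiled after that dependency. The `deps`
# mapping here is hypothetical example data, not real MIRA output.
from graphlib import TopologicalSorter

# parameter name -> names of parameters its expression depends on
deps = {
    "gamma_mean": set(),
    "beta_mean": {"gamma_mean"},  # e.g. alpha = 10 * gamma_mean
}

# static_order() yields each parameter only after all of its dependencies
order = list(TopologicalSorter(deps).static_order())
print(order)  # gamma_mean appears before beta_mean
```

With such an order in hand, each parameter's free symbols are guaranteed to have been compiled by the time its own expression is parsed.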

@djinnome djinnome self-assigned this Sep 25, 2024
@djinnome (Contributor, Author) commented Oct 2, 2024

Code to sample from parameter distributions should probably go here:

def _compile_param_values_mira(

@djinnome djinnome linked a pull request Oct 21, 2024 that will close this issue
@djinnome (Contributor, Author) commented Oct 28, 2024

Hi @SamWitty

I have a question about how to sample from a PyroSample object.

In the _compile_param_values_mira function below, I have topologically sorted the parameters in mira, and now I am compiling each parameter in order:

@_compile_param_values.register(mira.modeling.Model)
def _compile_param_values_mira(
    src: mira.modeling.Model,
) -> Dict[str, Union[torch.Tensor, pyro.nn.PyroParam, pyro.nn.PyroSample]]:
    values = {}
    for param_name in sort_mira_dependencies(src):
        param_info = src.parameters[param_name]
        if param_info.placeholder:
            continue
        param_dist = getattr(param_info, "distribution", None)
        if param_dist is None:
            param_value = param_info.value
        else:
            param_value = mira_distribution_to_pyro(param_dist, free_symbols=values)
        if isinstance(param_value, torch.nn.Parameter):
            values[param_name] = pyro.nn.PyroParam(param_value)
        elif isinstance(param_value, pyro.distributions.Distribution):
            values[param_name] = pyro.nn.PyroSample(param_value)
        elif isinstance(param_value, (numbers.Number, numpy.ndarray, torch.Tensor)):
            values[param_name] = torch.as_tensor(param_value, dtype=torch.float32)
        else:
            raise TypeError(f"Unknown parameter type: {type(param_value)}")
    return values

In particular, on line 99, I pass the values of the parameters that have already been compiled to Pyro as free symbols that are used to parse the sympy expressions:

param_value = mira_distribution_to_pyro(param_dist, free_symbols=values)

However, to my surprise, the values dictionary containing the parameter values has not been evaluated yet, so instead of receiving a tuple of tensors as its argument, the broadcast_tensors(*tensors) function receives a PyroSample object:

pyciemss/compiled_dynamics.py:26: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../.pyenv/versions/miniconda3-3.11-24.1.2-0/envs/pyciemss/lib/python3.12/functools.py:909: in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
pyciemss/mira_integration/compiled_dynamics.py:99: in _compile_param_values_mira
    param_value = mira_distribution_to_pyro(param_dist, free_symbols=values)
pyciemss/mira_integration/distributions.py:419: in mira_distribution_to_pyro
    k: safe_sympytorch_parse_expr(v, local_dict=free_symbols)
pyciemss/mira_integration/distributions.py:53: in safe_sympytorch_parse_expr
    return sympytorch.SymPyModule(expressions=[expr.args[0]])(**local_dict).squeeze()
../../../../.pyenv/versions/miniconda3-3.11-24.1.2-0/envs/pyciemss/lib/python3.12/site-packages/torch/nn/modules/module.py:1511: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../../../../.pyenv/versions/miniconda3-3.11-24.1.2-0/envs/pyciemss/lib/python3.12/site-packages/torch/nn/modules/module.py:1520: in _call_impl
    return forward_call(*args, **kwargs)
../../../../.pyenv/versions/miniconda3-3.11-24.1.2-0/envs/pyciemss/lib/python3.12/site-packages/sympytorch/sympy_module.py:265: in forward
    out = torch.broadcast_tensors(*out)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

tensors = (PyroSample(prior=Beta()),)

    def broadcast_tensors(*tensors):
        r"""broadcast_tensors(*tensors) -> List of Tensors
    
        Broadcasts the given tensors according to :ref:`broadcasting-semantics`.
    
        Args:
            *tensors: any number of tensors of the same type
    
        .. warning::
    
            More than one element of a broadcasted tensor may refer to a single
            memory location. As a result, in-place operations (especially ones that
            are vectorized) may result in incorrect behavior. If you need to write
            to the tensors, please clone them first.
    
        Example::
    
            >>> x = torch.arange(3).view(1, 3)
            >>> y = torch.arange(2).view(2, 1)
            >>> a, b = torch.broadcast_tensors(x, y)
            >>> a.size()
            torch.Size([2, 3])
            >>> a
            tensor([[0, 1, 2],
                    [0, 1, 2]])
        """
        # This wrapper exists to support variadic args.
        if has_torch_function(tensors):
            return handle_torch_function(broadcast_tensors, tensors, *tensors)
>       return _VF.broadcast_tensors(tensors)  # type: ignore[attr-defined]
E       TypeError: expected Tensor as element 0 in argument 0, but got PyroSample

The reason for this is described in the Pyro documentation:

https://docs.pyro.ai/en/stable/nn.html#pyro.nn.module.PyroSample

assert isinstance(my_module, PyroModule)
my_module.x = PyroSample(Normal(0, 1))                    # independent
my_module.y = PyroSample(lambda self: Normal(self.x, 1))  # dependent

Note that my_module.y will not evaluate the lambda expression in the PyroSample object until the attribute is accessed via getattr. Similarly, in the _compile_param_values_mira function, the pyro.distributions.Distribution object on line 103 is wrapped in a PyroSample object on line 104:

elif isinstance(param_value, pyro.distributions.Distribution):
    values[param_name] = pyro.nn.PyroSample(param_value)

If the pyro.distributions.Distribution object contains unevaluated dependencies, such as the dependency of beta_mean on gamma_mean in the example below, then when should this dependency get evaluated?

beta_mean = Parameter(
    name="beta_mean",
    distribution=Distribution(
        type="Beta1",
        parameters={
            "alpha": sympy.Integer(10) * sympy.Symbol("gamma_mean"),
            "beta": sympy.Integer(10),
        },
    ),
)
gamma_mean = Parameter(
    name="gamma_mean",
    distribution=Distribution(
        type="InverseGamma1",
        parameters={"alpha": sympy.Integer(10), "beta": sympy.Integer(10)},
    ),
)

The unevaluated PyroSample objects are currently being held in the values dictionary, so when should I compile the values dictionary to a dictionary of tensors (let's call it compiled_values) so that I can compile the parameters that depend on their values?

Here are some options:

  1. Inside mira_distribution_to_pyro(), where MIRA distribution parameters are passed to their corresponding Pyro distribution function:

    parameters = {
        k: safe_sympytorch_parse_expr(v, local_dict=free_symbols)
        if isinstance(v, SympyExprStr)
        else torch.as_tensor(v)
        for k, v in mira_dist.parameters.items()
    }

  2. Inside _compile_param_values_mira(), where distributions are wrapped in a PyroSample:

    elif isinstance(param_value, pyro.distributions.Distribution):
        values[param_name] = pyro.nn.PyroSample(param_value)

  3. Inside the pyciemss.compiled_dynamics.CompiledDynamics.__init__() method, where samples are evaluated:

    if isinstance(v, (torch.nn.Parameter, pyro.nn.PyroParam, pyro.nn.PyroSample)):
        setattr(self, f"persistent_{get_name(k)}", v)

It seems that I would need to perform options 1-3 on the gamma_mean parameter before I could perform them on the beta_mean parameter.

Any advice would be appreciated.
