
Model selection example #6

Open · wants to merge 2 commits into master

Conversation

@ivan-afonichkin commented Apr 24, 2018

No description provided.

@OssiGalkin

It is confusing that random states are defined in the models but not actually used later. Since random states (or fixed seeds) are not used, the results change between runs, and in both cases it does not seem uncommon for the ranking of the models to change as well. Using larger sample sizes or repeating the experiment more times might fix this, but running the notebook is incredibly slow with larger sample sizes because of batch_size=1. I don't blame anyone but ELFI for this, as using a batch_size above one is too hard at the moment.

In both cases "observed" data was generated with model that was one candidate in model selection. My intuition is that model selection would nearly always spot which one was used. However, this was not always the case, probably because experiment was repeated too few times or too few samples were used.

@vuolleko (Member) commented May 4, 2018

@OssiGalkin The random state is used internally by ELFI, but indeed if the user does not provide a seed, its significance is hidden. Whether a seed should be used or not can be debated; the overall results should remain unchanged anyway...

...and if they don't, a larger sample size could indeed help. Using a larger batch_size is very simple in this particular case; one just has to follow the rules of NumPy broadcasting. Reshaping the theta arguments to a suitable shape, e.g. simply via theta[:, None], suffices, and after that the inference can be run very fast. @destinityx2 please make this change.
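For reference, a minimal sketch of what such a vectorized simulator could look like. The Gaussian toy model, the parameter name theta, and the rejection-sampling call below are illustrative placeholders rather than the notebook's actual code; the relevant pattern is reshaping the parameter vector with theta[:, None] and then passing a large batch_size (and a seed) to the inference method.

```python
import numpy as np
import elfi

def simulator(theta, batch_size=1, random_state=None):
    # Toy Gaussian simulator, vectorized over the batch dimension.
    # theta arrives with shape (batch_size,); reshaping it to
    # (batch_size, 1) lets NumPy broadcasting draw all batches at once.
    random_state = random_state or np.random
    theta = np.asarray(theta)[:, None]
    return theta + random_state.randn(batch_size, 30)

# Hypothetical model: uniform prior, the simulator above, a mean summary
# and a Euclidean distance node.
theta = elfi.Prior('uniform', 0, 5)
y_obs = simulator(np.array([2.5]))
sim = elfi.Simulator(simulator, theta, observed=y_obs)
S = elfi.Summary(lambda y: np.mean(y, axis=1), sim)
d = elfi.Distance('euclidean', S)

# With the simulator vectorized, a large batch_size and an explicit seed
# can be given to the inference method.
rej = elfi.Rejection(d, batch_size=10000, seed=20180504)
result = rej.sample(1000, quantile=0.01)
```

With batch_size=10000 the graph is evaluated for 10000 parameter values per call, which is where the speedup comes from.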

@SidRama commented May 7, 2018

The notebook gives a good demonstration of model selection using ELFI. I have no major comments, except that it would be nice to have a bit more explanatory text between the code cells describing what is about to be done. I feel this would make the notebook a more helpful tutorial.
Overall, good effort! 👍

@ivan-afonichkin (Author)

@vuolleko Fixed batch_size. Please check it.

@vuolleko (Member) left a comment


A very concise notebook. A couple of comments:

  • It's a bit strange to have multiple separate graphs inside a single ELFI graph (model). On the other hand, this approach is graphically attractive and works, so why not.
  • Perhaps it would be more "notebook-style" to write the descriptions as Markdown cells instead of Python comments, but that's not a big deal.
  • Since the simulators are cheap, the study could use more samples. Also, are the samples "good", i.e. how was the fixed threshold chosen? (See the sketch below.)

Please fix the last point.
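On the threshold question, one way to avoid committing to an absolute threshold up front is to let Rejection.sample pick it from a quantile of the observed discrepancies. A small illustration, reusing the hypothetical rej sampler sketched above (the numbers are placeholders):

```python
# Accept the 1% of simulations with the smallest discrepancy; the
# effective threshold is then determined by the data, not fixed a priori.
result_q = rej.sample(1000, quantile=0.01)

# Fixed-threshold variant; 0.5 is purely illustrative and would need to be
# justified against the scale of the discrepancy.
result_t = rej.sample(1000, threshold=0.5)

result_q.summary()
```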
