Releases: blei-lab/edward
1.3.5
- Added automatic posterior approximations in variational inference (#775).
- Added use of `tf.GraphKeys.REGULARIZATION_LOSSES` to variational inference (#813); a sketch follows this list.
- Added multinomial classification metrics (#743).
- Added utility function to assess conditional independence (#791).
- Added custom metrics in evaluate.py (#809).
- Minor bug fixes, including automatic transformations (#808) and the acceptance ratio inside `ed.MetropolisHastings` (#806).
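As an illustration of the regularization support: any penalty registered in `tf.GraphKeys.REGULARIZATION_LOSSES`, for example by `tf.layers` via a `kernel_regularizer`, is now folded into the variational objective. A minimal sketch, with a hypothetical toy model:

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

y_train = np.random.randn(50, 1).astype(np.float32)

# Toy latent-variable model whose likelihood network carries an L2 penalty.
z = Normal(loc=tf.zeros([50, 2]), scale=tf.ones([50, 2]))
# tf.layers registers the penalty in tf.GraphKeys.REGULARIZATION_LOSSES;
# variational inference now adds such losses to its objective (#813).
h = tf.layers.dense(
    z, 1, kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-3))
y = Normal(loc=h, scale=tf.ones([50, 1]))

qz = Normal(loc=tf.Variable(tf.zeros([50, 2])),
            scale=tf.nn.softplus(tf.Variable(tf.zeros([50, 2]))))

inference = ed.KLqp({z: qz}, data={y: y_train})
inference.run(n_iter=500)
```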
Acknowledgements
- Thanks go to Baris Kayalibay (@bkayalibay), Christopher Lovell (@christopherlovell), David Moore (@davmre), Kris Sankaran (@krisrs1128), Manuel Haussmann (@manuelhaussmann), Matt Hoffman (@matthewdhoffman), Siddharth Agrawal (@siddharth-agrawal), William Wolf (@cavaunpeu), @gfeldman.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.3.4
This release comes with several new features, alongside a significant push for better documentation, examples, and unit testing.
- `ed.KLqp`'s score function gradient now performs more intelligent (automatic) Rao-Blackwellization for variance reduction.
- Automated transformations are enabled for all inference algorithms that benefit from them [tutorial].
- Added the wake-sleep algorithm (`ed.WakeSleep`); see the sketch after this list.
- Many minor bug fixes.
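A minimal sketch of the wake-sleep addition: `ed.WakeSleep` plugs into the same latent-variable and data dictionaries as `ed.KLqp`. The generative and recognition networks below are toy stand-ins.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Bernoulli, Normal

N, D, K = 100, 10, 2  # hypothetical data and latent dimensions
x_train = np.random.binomial(1, 0.5, size=(N, D)).astype(np.int32)

# Generative network p(x | z).
z = Normal(loc=tf.zeros([N, K]), scale=tf.ones([N, K]))
x = Bernoulli(logits=tf.layers.dense(z, D))

# Recognition network q(z | x).
hidden = tf.layers.dense(tf.cast(x_train, tf.float32), 16, tf.nn.relu)
qz = Normal(loc=tf.layers.dense(hidden, K),
            scale=tf.nn.softplus(tf.layers.dense(hidden, K)))

inference = ed.WakeSleep({z: qz}, data={x: x_train})
inference.run(n_iter=500)
```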
Examples
- All Edward examples now rely on the Observations library for data loading (no "official" public release yet; still in alpha).
- Added LSTM language model for text8 (`examples/lstm.py`).
- Added deep exponential family for modeling topics in NIPS articles (`examples/deep_exponential_family.py`).
- Added sigmoid belief network for Caltech-101 silhouettes (`examples/sigmoid_belief_network.py`).
- Added stochastic block model on the karate club network (`examples/stochastic_block_model.py`).
- Added Cox process on synthetic spatial data (`examples/cox_process.py`).
Documentation & Testing
- Sealed all undocumented functions and modules in Edward.
- Added a parser and BibTeX support to auto-generate the API docs.
- Added unit testing to (nearly) all Jupyter notebooks.
Acknowledgements
- Thanks go to Matthew Feickert (@matthewfeickert), Alp Kucukelbir (@akucukelbir), Romain Lopez (@romain-lopez), Emile Mathieu (@emilemathieu), Stephen Ra (@stephenra), Kashif Rasul (@kashif), Philippe Rémy (@philipperemy), Charles Shenton (@cshenton), Yuto Yamaguchi (@yamaguchiyuto), @evahlis, @samnolen, @seiyab.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.3.3
- Edward is updated to require a TensorFlow version of at least 1.2.0rc0.
- Miscellaneous bug fixes and revisions.
Acknowledgements
- Thanks go to Joshua Engelman (@jengelman), Matt Hoffman (@matthewdhoffman), Kashif Rasul (@kashif).
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.3.2
- More TensorBoard support, including default summaries (#598, #654, #653). See the tutorial and the sketch below.
- A batch training tutorial is added.
- Improved training of Wasserstein GANs via penalty (#626).
- Fixed an error in sampling for `DirichletProcess` (#652).
- Miscellaneous bug fixes, documentation, and speedups.
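As a sketch of the TensorBoard support: passing a `logdir` to `inference.run()` (or `initialize()`) writes the default summaries, which `tensorboard --logdir=log` can then display. The toy model is hypothetical.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

x_train = np.random.randn(50).astype(np.float32)

theta = Normal(loc=0.0, scale=1.0)
x = Normal(loc=theta, scale=1.0, sample_shape=50)

qtheta = Normal(loc=tf.Variable(0.0),
                scale=tf.nn.softplus(tf.Variable(0.0)))

# Writing summaries under log/ enables `tensorboard --logdir=log`.
inference = ed.KLqp({theta: qtheta}, data={x: x_train})
inference.run(n_iter=1000, logdir='log')
```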
Acknowledgements
- Thanks go to Janek Berger (@janekberger), Ian Dewancker (@iandewancker), Patrick Foley (@patrickeganfoley), Nitish Joshi (@nitishjoshi25), Akshay Khatri (@akshaykhatri639), Sean Kruzel (@closedLoop), Fritz Obermeyer (@fritzo), Lyndon Ollar (@lbollar), Olivier Verdier (@olivierverdier), @KonstantinLukaschenko, @meta-inf.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.3.1
- Fixed an error in 1.3.0 when importing the conjugacy submodule.
1.3.0
Edward now requires TensorFlow 1.1.0rc0 or above. This release includes several breaking API changes:
- All Edward random variables use English keyword arguments instead of Greek. For example, `Normal(loc=0.0, scale=1.0)` replaces the older syntax of `Normal(mu=0.0, sigma=1.0)`; see the example below.
- `MultivariateNormalCholesky` is renamed to `MultivariateNormalTriL`.
- `MultivariateNormalFull` is removed.
- `rv.get_batch_shape()` is renamed to `rv.batch_shape`.
- `rv.get_event_shape()` is renamed to `rv.event_shape`.
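In code, the renames look as follows:

```python
from edward.models import Normal

x = Normal(loc=0.0, scale=1.0)   # 1.3.0 and later
# x = Normal(mu=0.0, sigma=1.0)  # pre-1.3.0 syntax, no longer accepted

x.batch_shape  # replaces x.get_batch_shape()
x.event_shape  # replaces x.get_event_shape()
```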
Model
- Random variables accept an optional `sample_shape` argument. This lets a random variable's associated tensor represent more than a single sample (#591); see the sketch after this list.
- Added a `ParamMixture` random variable. It is a mixture of random variables where each component has the same distribution family (#592).
- `DirichletProcess` has persistent states across calls to `sample()` (#565, #575, #583).
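A short sketch of `sample_shape` and `ParamMixture`; the `ParamMixture` call follows the (mixing weights, component parameters, component distribution) pattern, and the particular arguments are illustrative.

```python
import tensorflow as tf
from edward.models import Normal, ParamMixture

# sample_shape: one random variable whose tensor holds 50 i.i.d. draws (#591).
x = Normal(loc=0.0, scale=1.0, sample_shape=50)
print(x.shape)  # (50,)

# ParamMixture: a two-component mixture of normals sharing one family (#592).
mix = ParamMixture([0.4, 0.6],
                   {'loc': [-1.0, 1.0], 'scale': [0.5, 0.5]},
                   Normal)
```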
Inference
- Added conjugacy & symbolic algebra. This includes a
ed.complete_conditional
function (#588, #605, #613). See a Beta-Bernoulli example. - Added Gibbs sampling (#607). See the unsupervised learning tutorial for a demo.
- Added
BiGANInference
for adversarial feature learning (#597). Inference
,MonteCarlo
,VariationalInference
are abstract classes, preventing instantiation (#582).
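A Beta-Bernoulli sketch of the two additions: `ed.complete_conditional` symbolically recovers the closed-form conditional, and `ed.Gibbs` samples from such conditionals, storing draws in an `Empirical` approximation.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Bernoulli, Beta, Empirical

x_train = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1], dtype=np.int32)

pi = Beta(1.0, 1.0)
x = Bernoulli(probs=pi, sample_shape=10)

# Symbolic algebra recognizes conjugacy and returns p(pi | x) as a Beta.
pi_cond = ed.complete_conditional(pi)

# Gibbs sampling over the same model.
qpi = Empirical(params=tf.Variable(tf.fill([500], 0.5)))
inference = ed.Gibbs({pi: qpi}, data={x: x_train})
inference.run(n_iter=500)
```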
Miscellaneous
- A more informative message appears if the TensorFlow version is not supported (#572).
- Added a `shape` property to random variables. It is the same as `get_shape()`; a short sketch follows this list.
- Added a `collections` argument to random variables (#609).
- Added `ed.get_blanket` to get the Markov blanket of a random variable (#590).
- The `ed.get_dims` and `ed.multivariate_rbf` utility functions are removed.
- Miscellaneous bug fixes and speedups (e.g., #567, #596, #616).
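A short sketch of the `shape` property, the `collections` argument, and `ed.get_blanket`; the collection name is hypothetical.

```python
import edward as ed
import tensorflow as tf
from edward.models import Normal

# Register the random variable in a named graph collection (#609).
x = Normal(loc=0.0, scale=1.0, collections=['my_model'])
print(x.shape)                        # same as x.get_shape()
print(tf.get_collection('my_model'))  # [x]

# Markov blanket of a random variable (#590).
y = Normal(loc=x, scale=1.0)
print(ed.get_blanket(y))  # [x]
```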
Acknowledgements
- Thanks go to Robert DiPietro (@rdipietro), Alex Lewandowski (@AlexLewandowski), Konstantin Lukaschenko (@KonstantinLukaschenko), Matt Hoffman (@matthewdhoffman), Jan-Matthis Lückmann (@jan-matthis), Shubhanshu Mishra (@napsternxg), Lyndon Ollar (@lbollar), John Reid (@JohnReid), @Phdntom.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.2.4
- Added a `DirichletProcess` random variable (#555); see the sketch after this list.
- Added progress bar for inference (#546).
- Improved type support and error messages (#561, #563).
- Miscellaneous bug fixes.
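A minimal sketch of the new random variable, written with a concentration parameter and a standard normal base distribution; the constructor form shown here follows later releases, so treat it as illustrative.

```python
import edward as ed
from edward.models import DirichletProcess, Normal

# DP with concentration 1.0 over a standard normal base distribution.
dp = DirichletProcess(1.0, Normal(loc=0.0, scale=1.0))

sess = ed.get_session()
print(sess.run(dp))  # a draw from a random measure distributed as the DP
```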
Documentation
- Added the Edward Forum (https://discourse.edwardlib.org).
- Added Jupyter notebook for all tutorials (#520).
- Added tutorial on linear mixed effects models (#539).
- Added example of probabilistic matrix factorization (#557).
- Improved API styling and reference page (#536, #548, #549).
- Updated website sidebar, including a community page (#533, #551).
Acknowledgements
- Thanks go to Mayank Agrawal (@timshell), Siddharth Agrawal (@siddharth-agrawal), Lyndon Ollar (@lbollar), Christopher Prohm (@chmp), Maja Rudolph (@mariru).
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.2.3
- Version release in sync with the published paper, "Deep Probabilistic Programming". A companion webpage is available (#510).
Models
- All support is removed for model wrappers (#514, #517).
- Direct fetching (`sess.run()` and `eval()`) is enabled for `RandomVariable` (#503); see the sketch after this list.
- Index, iterator, and boolean operators are overloaded for `RandomVariable` (#515).
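A sketch of direct fetching: a `RandomVariable` can now be passed straight to `sess.run()` or evaluated with `.eval()`, returning a sample. (Keyword arguments below use the Greek-letter names current as of this release.)

```python
import edward as ed
from edward.models import Normal

x = Normal(mu=0.0, sigma=1.0)

sess = ed.get_session()
print(sess.run(x))  # a draw from x (#503)
print(x.eval())     # equivalent shorthand
```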
Inference
- Variational inference is added for implicit probabilistic models (#491).
- Laplace approximation uses multivariate normal approximating families (#506).
- Removed need for manually specifying Keras session during inference (#490).
- Recursive graphs are properly handled during inference (#500).
Documentation & Examples
- Probabilistic PCA tutorial is added (#499).
- Dirichlet process with base distribution example is added (#508).
- Bayesian logistic regression example is added (#509).
Miscellanea
- Dockerfile is added (#494).
- Replaced some utility functions with TensorFlow's (#504, #507).
- A number of miscellaneous revisions and improvements (e.g., #422, #493, #495).
Acknowledgements
- Thanks go to Mayank Agrawal (@timshell), Paweł Biernat (@pwl), Tom Diethe (@tdiethe), Christopher Prohm (@chmp), Maja Rudolph (@mariru), @SnowMasaya.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.2.2
Models
- Operators are overloaded for `RandomVariable`. For example, this enables `x + y` (#445).
- Keras' neural net layers can now be applied directly to `RandomVariable` (#483); see the sketch after this list.
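A sketch of both features: combining random variables with overloaded operators yields a `tf.Tensor` in the graph, and a Keras layer can consume a `RandomVariable` directly. Assumes Keras is installed; keyword arguments use this release's Greek-letter names.

```python
import tensorflow as tf
from edward.models import Normal
from keras.layers import Dense  # assumes Keras is installed

x = Normal(mu=tf.zeros([3, 5]), sigma=tf.ones([3, 5]))
y = Normal(mu=tf.zeros([3, 5]), sigma=tf.ones([3, 5]))

z = x + y          # overloaded operator; z is a tf.Tensor (#445)
w = 2.0 * x - 1.0  # arithmetic with constants also works

h = Dense(4)(x)    # Keras layer applied directly to a RandomVariable (#483)
```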
Inference
- Generative adversarial networks are implemented, available as `GANInference`. There's a tutorial (#310); see also the sketch after this list.
- Wasserstein GANs are implemented, available as `WGANInference` (#448).
- Several integration tests are implemented (#487).
- The scale factor argument for `VariationalInference` is generalized to be a tensor (#467).
- `Inference` can now work with `tf.Tensor` latent variables and observed variables (#488).
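A toy sketch of the `GANInference` interface: the generated sample is keyed to real data in the data dictionary, and a discriminator function is passed alongside. The networks, sizes, and data here are hypothetical stand-ins.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

x_train = np.random.randn(100, 2).astype(np.float32)

def discriminative_network(x):
  # Named layers so variables are shared across the real/fake passes.
  h = tf.layers.dense(x, 16, tf.nn.relu, name="d_hidden")
  return tf.layers.dense(h, 1, name="d_logit")

# Generator: transform noise into samples.
eps = Normal(mu=tf.zeros([100, 2]), sigma=tf.ones([100, 2]))
x = tf.layers.dense(eps, 2, name="generator")

inference = ed.GANInference(data={x: x_train},
                            discriminator=discriminative_network)
inference.run(n_iter=200)
```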
Criticism
- A number of miscellaneous improvements are made to `ed.evaluate` and `ed.ppc`. This includes support for checking implicit models and proper Monte Carlo estimates of the posterior predictive density (#485); see the sketch below.
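A sketch of the criticism workflow these improvements feed into: fit a toy model, form the posterior predictive with `ed.copy`, then call `ed.evaluate` and `ed.ppc`. The metric and test statistic are illustrative.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

x_train = np.random.randn(50).astype(np.float32)
x_test = np.random.randn(50).astype(np.float32)

theta = Normal(mu=0.0, sigma=1.0)
x = Normal(mu=tf.ones(50) * theta, sigma=tf.ones(50))

qtheta = Normal(mu=tf.Variable(0.0),
                sigma=tf.nn.softplus(tf.Variable(0.0)))
ed.KLqp({theta: qtheta}, data={x: x_train}).run()

# Posterior predictive: the model with theta replaced by its approximation.
x_post = ed.copy(x, {theta: qtheta})

print(ed.evaluate('log_likelihood', data={x_post: x_test}))
print(ed.ppc(lambda xs, zs: tf.reduce_mean(xs[x_post]),
             data={x_post: x_test}))
```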
Documentation & Examples
- Edward tutorials are reorganized in the style of a flattened list (#455).
- Mixture density network tutorial is updated to use native modeling language (#459).
- Mixed effects model examples are added (#461).
- Dirichlet-Categorical example is added (#466).
- Inverse Gamma-Normal example is added (#475).
- Minor fixes have been made to documentation (#437, #438, #440, #441, #454).
- Minor fixes have been made to examples (#434).
Miscellanea
- To support both `tensorflow` and `tensorflow-gpu`, TensorFlow is no longer an explicit dependency (#482).
- The `ed.tile` utility function is removed (#484).
- Minor fixes have been made in the code base (#433, #479, #486).
Acknowledgements
- Thanks go to Janek Berger (@janekberger), Nick Foti (@nfoti), Patrick Foley (@patrickeganfoley), Alp Kucukelbir (@akucukelbir), Alberto Quirós (@bertini36), Ramakrishna Vedantam (@vrama91), Robert Winslow (@rw).
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
1.2.1
- Edward is compatible with TensorFlow 1.0. This provides significantly more distribution support. In addition, Edward now requires TensorFlow 1.0.0-alpha or above (#374, #426).
Inference
- Stochastic gradient Hamiltonian Monte Carlo is implemented (#415); see the sketch after this list.
- Leapfrog calculation is streamlined in HMC, providing speedups in the algorithm (#414).
- Inference now accepts `int` and `float` data types (#421).
- Order mismatch of latent variables during MCMC updates is fixed (#413).
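A sketch of the new sampler: `ed.SGHMC` follows the same `Empirical`-approximation pattern as `ed.HMC`; `step_size` and `friction` are the assumed tuning knobs, and the toy model is hypothetical.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Empirical, Normal

x_train = np.random.randn(50).astype(np.float32)

theta = Normal(mu=0.0, sigma=1.0)
x = Normal(mu=tf.ones(50) * theta, sigma=tf.ones(50))

# Store 1000 posterior draws in an Empirical approximation.
qtheta = Empirical(params=tf.Variable(tf.zeros(1000)))

inference = ed.SGHMC({theta: qtheta}, data={x: x_train})
inference.run(n_iter=1000, step_size=1e-3, friction=0.1)
```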
Documentation & Examples
- Rasch model example is added (#410).
- Collapsed mixture model example is added (#350).
- Importance weighted variational inference example is updated to use native modeling language.
- Lots of minor improvements to code and documentation (e.g., #409, #418).
Acknowledgements
- Thanks go to Gökçen Eraslan (@gokceneraslan), Jeremy Kerfs (@jkerfs), Matt Hoffman (@matthewdhoffman), Nick Foti (@nfoti), Daniel Wadden (@dwadden), Shijie Wu (@shijie-wu).
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.