Releases: Project-MONAI/GenerativeModels
Version 0.2.3
What's Changed
- Fixes model zoo scheduler args by @marksgraham in #398
- 373 add code for spade vae gan by @virginiafdez in #405
- Support different conditioning types in the inferer by @guopengf in #416
- Move kld loss into SPADE Network by @marksgraham in #413
- support convtranspose and activation checkpointing by @guopengf in #415
- add spatial rescaler by @guopengf in #414
- Update README.md by @marksgraham in #418
- Add padding and cropping options to the Latent Diffusion Inferer (+ test). by @virginiafdez in #421
- Replace deprecated monai transforms by @marksgraham in #430
- Fixes SPADE flake8 by @marksgraham in #431
- Add dropout for conditioning cross-attention blocks. by @virginiafdez in #407
- Added SPADE-LDM code taking into account new changes to main: by @virginiafdez in #436
- Added normal and latent inferers for ControlNet. Added tests (copied … by @virginiafdez in #439; sampling sketch below
- Added SPADE functionality on the decode call for the sample methods. by @virginiafdez in #441
- Update README.md by @marksgraham in #444
- Update setup.py and pyproject.toml by @Warvito in #401
- 445 fix controlnet tutorial by @virginiafdez in #448
- Issue was coming from the definition of SpatialPad (self.ldm_resizer)… by @virginiafdez in #449
- 446 Fix the way reversed_step in DDIM treats the first alpha in alphas_cumprod by @matanat in #452
- Add warning when initialising the patchgan discriminator with batchnorm in a distributed environment by @marksgraham in #454
New Contributors
Full Changelog: 0.2.2...0.2.3
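Since #439 adds dedicated inferers for ControlNet, a short sampling sketch may help orient readers. The inferer class name (ControlNetDiffusionInferer), the conditioning_embedding_num_channels argument, and the cn_cond keyword are assumptions inferred from the release notes rather than verified API, so check the package docstrings before relying on them.

```python
import torch

from generative.inferers import ControlNetDiffusionInferer  # class name assumed
from generative.networks.nets import ControlNet, DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

# Small 2D diffusion UNet; the channel and attention settings are illustrative only.
unet = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=1,
    num_channels=(64, 128, 256),
    attention_levels=(False, True, True),
    num_res_blocks=1,
    num_head_channels=(0, 128, 256),
)

# ControlNet mirroring the UNet encoder; the conditioning embedding argument is assumed.
controlnet = ControlNet(
    spatial_dims=2,
    in_channels=1,
    num_channels=(64, 128, 256),
    attention_levels=(False, True, True),
    num_res_blocks=1,
    num_head_channels=(0, 128, 256),
    conditioning_embedding_num_channels=(16,),
)

scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = ControlNetDiffusionInferer(scheduler)

noise = torch.randn(1, 1, 64, 64)
mask = torch.zeros(1, 1, 64, 64)  # conditioning image, e.g. a segmentation mask

scheduler.set_timesteps(num_inference_steps=1000)
with torch.no_grad():
    samples = inferer.sample(
        input_noise=noise,
        diffusion_model=unet,
        controlnet=controlnet,
        cn_cond=mask,  # keyword assumed
        scheduler=scheduler,
    )
```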
Version 0.2.2
What's Changed
- Adding tutorial on 2D super resolution using lightning by @OeslleLucena in #365
- Adding components and refactoring of schedulers by @ericspod in #285 (usage sketch below)
- Create python-publish.yml by @ericspod in #391
- Use monai instead of monai-weekly in the requirements by @mingxin-zheng in #392
New Contributors
- @mingxin-zheng made their first contribution in #392
Full Changelog: 0.2.1...0.2.2
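Because #285 reworks the scheduler components, a minimal training-and-sampling sketch of the refactored pieces follows. It mirrors the project tutorials; the network sizes and timestep counts are illustrative assumptions, not recommendations.

```python
import torch
import torch.nn.functional as F

from generative.inferers import DiffusionInferer
from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

# Small 2D UNet; channel and attention settings are illustrative only.
model = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=1,
    num_channels=(64, 128, 256),
    attention_levels=(False, True, True),
    num_res_blocks=1,
    num_head_channels=(0, 128, 256),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(scheduler)

# One training step: predict the noise added at a randomly drawn timestep.
images = torch.randn(2, 1, 64, 64)  # stand-in for a real batch
noise = torch.randn_like(images)
timesteps = torch.randint(0, scheduler.num_train_timesteps, (images.shape[0],)).long()
noise_pred = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
loss = F.mse_loss(noise_pred, noise)

# Sampling: start from pure noise and denoise over the full schedule.
scheduler.set_timesteps(num_inference_steps=1000)
with torch.no_grad():
    samples = inferer.sample(
        input_noise=torch.randn(1, 1, 64, 64), diffusion_model=model, scheduler=scheduler
    )
```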
Version 0.2.1
What's Changed
- 2d controlnet tutorial by @marksgraham in #385
- 389 add torchvision resnet50 support by @yiheng-wang-nv in #390 (usage sketch below)
New Contributors
- @yiheng-wang-nv made their first contribution in #390
Full Changelog: 0.2.0...0.2.1
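#390 adds a torchvision ResNet-50 backbone to PerceptualLoss. The sketch below shows the intended usage; the exact network_type string for the torchvision backbone ("resnet50") is an assumption based on the PR title, so confirm the accepted values in the PerceptualLoss docstring.

```python
import torch

from generative.losses import PerceptualLoss

# 2D perceptual loss; "resnet50" as the torchvision backbone name is an assumption.
perceptual_loss = PerceptualLoss(spatial_dims=2, network_type="resnet50")

prediction = torch.rand(2, 1, 64, 64)
target = torch.rand(2, 1, 64, 64)
loss = perceptual_loss(prediction, target)
```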
Version 0.2.0 - Transformers and Anomaly Detection
What's Changed
- Adds likelihood computation by @marksgraham in #122
- Add classifier-free guidance tutorial by @Warvito in #131
- Add missing scale by @Warvito in #147
- Fix 2d-ldm-tutorial. by @JessyD in #151
- Add 3D perceptual loss by @Warvito in #158
- Adds inpainting tutorials by @marksgraham in #161
- Fix set_timesteps by @Warvito in #163
- Add stable diffusion v2.0 x4 upscaler tutorial by @Warvito in #148
- Added the MMD Metric and tests by @danieltudosiu in #152
- Add v_prediction and update docstrings by @Warvito in #165
- Add RadImageNet to Perceptual Loss by @Warvito in #153
- Add option num_head_channels as Sequence by @Warvito in #172
- Fix kernel_size in quant_conv and post_quant_conv layers by @Warvito in #170
- Fix "medicalnet_..." network_type used with spatial_dims==2 by @Warvito in #167
- Adds context error by @marksgraham in #175
- Remove unused predict_epsilon by @Warvito in #184
- Fix TestDiffusionSamplingInferer by @Warvito in #180
- Add option to use residual blocks for updownsampling by @Warvito in #176
- Optimise Attention Mechanisms by @Warvito in #145
- Fix F.interpolate usage with bfloat16 by @Warvito in #157
- Pretrained DDPM by @marksgraham in #177
- Add full precision attention by @Warvito in #189
- Add FID by @Warvito in #40 (metrics sketch at the end of this list)
- Update tests, CI and pre-commit by @Warvito in #193
- Fix missing __init__.py by @Warvito in #200
- Replace FeedForward with MLPBlock by @Warvito in #201
- Remove python3.8 as default_language_version by @Warvito in #209
- Refactor code with new pre commit configuration by @Warvito in #207
- Fixes by @marksgraham in #211
- Modify sample function to divide by scale factor before passing to th… by @virginiafdez in #214
- Addition of is_fake_3d setting condition to error in PerceptualLoss by @virginiafdez in #215
- Add verification code by @Warvito in #221
- Suspend CI by @Warvito in #224
- Sequence Ordering class by @danieltudosiu in #168
- Add AutoregressiveTransformer by @Warvito in #225
- Remove ch_mult from AutoencoderKL by @Warvito in #220
- Fix print messages for MS-SSIM by @JessyD in #230
- 228 update pretrained ddpm by @marksgraham in #233
- WIP by @OeslleLucena in #202
- Add annotations from future by @Warvito in #239
- Change num_res_blocks to Sequence[int] | int by @Warvito in #238
- Change num_res_channels and num_channels to Sequence[int] | int by @Warvito in #237
- Initialise inference_timesteps to train_timesteps by @marksgraham in #240
- Fix eval mode for stage_2 by @Warvito in #246
- Fix format by @Warvito in #250
- Fix format by @Warvito in #251
- Use no_grad decorator for sample method by @Warvito in #244
- Add VQVAE + Transformer inferer by @Warvito in #242
- Fix TypeError by @Warvito in #254
- Harmonise VQVAE with AutoencoderKL by @Warvito in #248
- Adds transformer get_likelihood by @marksgraham in #257
- Changed PatchAdversarialLoss to allow for least-squares criterion to … by @virginiafdez in #262
- 258 fix diffusion inferer by @marksgraham in #265
- Fixes type by @marksgraham in #264
- Change default values by @Warvito in #267
- 106 vqvae transformer tutorial by @Ashayp31 in #236
- 203 add scale factor to the ldm training tutorials by @virginiafdez in #271
- Mednist Bundle by @ericspod in #263
- Add cache_dir by @Warvito in #278
- Add option to use flash attention by @Warvito in #222
- Set param.requires_grad = False by @Warvito in #273
- Update loss imports by @marksgraham in #279
- Add Brain LDM to model-zoo by @Warvito in #188
- Fix 3D DDPM tutorial by @Warvito in #277
- Add use_flash_attention argument by @Warvito in #284
- Fixes transformer max_sequence_length training/inference by @marksgraham in #282
- Add MIMIC pretrained model by @Warvito in #286
- Fix validation data in tutorials by @Warvito in #291
- Fix prediction_type="sample" by @Warvito in #300
- Moving DiffusionPrepareBatch by @ericspod in #305
- Change transformer number of tokens by @Warvito in #303
- Add tutorial performing anomaly detection based on likelihoods from generative models by @Warvito in #298
- Add causal self-attention block by @Warvito in #218
- Remove TODOs by @Warvito in #310
- Fix num_head_channels by @Warvito in #316
- Update AutoencoderKL and add more content by @Warvito in #315
- fixed typo in anomaly detection tutorial by @vacmar01 in #321
- Fix generate function by @Warvito in #322
- Clip image_data values before casting to uint8 by @Warvito in #324
- 314 fix transformer training by @marksgraham in #318
- Fix xtransformer error by @Warvito in #327
- Update README.md by @Warvito in #328
- 150 - Diff-SCM by @SANCHES-Pedro in #306
- Update tutorial by @Warvito in #329
- Add README file to tutorial dir by @Warvito in #330
- Add more content to tutorial by @Warvito in #331
- Fix dependencies and license by @Warvito in #332
- Update README.md by @marksgraham in #333
- Update anomaly detection notebook by @marksgraham in #334
- initial commit anomaly detection with gradient guidance by @JuliaWolleb in #190
- Fix formatting by @Warvito in https://github.com/Project-MONAI/GenerativeMo...
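Alongside the transformer and anomaly detection work, this release adds generative metrics (MMD in #152, FID in #40). A minimal sketch is below; note that FIDMetric compares feature vectors extracted by a pretrained backbone rather than raw images, and the random tensors are placeholders for real data.

```python
import torch

from generative.metrics import FIDMetric, MMDMetric

real_images = torch.rand(8, 1, 64, 64)
synthetic_images = torch.rand(8, 1, 64, 64)

# FID operates on feature vectors of shape (num_samples, num_features), e.g.
# RadImageNet features; random features stand in for real extractions here.
real_features = torch.randn(8, 512)
synthetic_features = torch.randn(8, 512)
fid_score = FIDMetric()(synthetic_features, real_features)

# MMD is computed directly on the images.
mmd_score = MMDMetric()(real_images, synthetic_images)
```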