# PyG 2.0.4
A new minor PyG release that brings PyTorch 1.11 support, along with a variety of new features and bug fixes:
## Features

- Added Quiver examples for multi-GPU training using `GraphSAGE` (#4103), thanks to @eedalong and @luomai
- `nn.model.to_captum`: Full integration of explainability methods provided by the Captum library (#3990, #4076), thanks to @RBendias
- `nn.conv.RGATConv`: The relational graph attentional operator (#4031, #4110), thanks to @fork123aniket
- `nn.pool.DMoNPooling`: The spectral modularity pooling operator (#4166, #4242), thanks to @fork123aniket
- `nn.*`: Support for shape information in the documentation (#3739, #3889, #3893, #3946, #3981, #4009, #4120, #4158), thanks to @saiden89 and @arunppsg and @konstantinosKokos
- `loader.TemporalDataLoader`: A dataloader to load a `TemporalData` object in mini-batches (#3985, #3988), thanks to @otaviocx
- `loader.ImbalancedSampler`: A weighted random sampler that randomly samples elements according to class distribution (#4198)
- `transforms.VirtualNode`: A transform that adds a virtual node to a graph (#4163)
- `transforms.LargestConnectedComponents`: Selects the subgraph that corresponds to the largest connected components in the graph (#3949), thanks to @abojchevski
- `utils.homophily`: Support for class-insensitive edge homophily (#3977, #4152), thanks to @hash-ir and @jinjh0123
- `utils.get_mesh_laplacian`: Mesh Laplacian computation (#4187), thanks to @daniel-unyi-42
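The plain edge homophily underlying the `utils.homophily` metric is the fraction of edges that connect same-label nodes; the class-insensitive variant added here further corrects that score for class-size imbalance. A minimal plain-Python sketch of the base quantity (illustrative only, not PyG's tensor-based implementation):

```python
# Edge homophily: the fraction of edges whose two endpoints share a label.
def edge_homophily(edge_index, y):
    src, dst = edge_index
    same = sum(1 for u, v in zip(src, dst) if y[u] == y[v])
    return same / len(src)

# Cycle 0-1-2-3-0 with labels [0, 0, 1, 1]: edges (0,1) and (2,3)
# are intra-class, so homophily is 2/4.
edge_homophily(([0, 1, 2, 3], [1, 2, 3, 0]), [0, 0, 1, 1])  # 0.5
```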
## Datasets

- Added a dataset cheatsheet to the documentation that collects important graph statistics across a variety of datasets supported in PyG (#3807, #3817) (please consider helping us fill in its remaining content)
- `datasets.EllipticBitcoinDataset`: A dataset of Bitcoin transactions (#3815), thanks to @shravankumar147
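The `loader.ImbalancedSampler` added under Features draws elements according to class distribution; the usual scheme behind such samplers weights each sample inversely to its class frequency, so every class contributes equal probability mass. A hypothetical plain-Python sketch of that weighting (not PyG's implementation):

```python
from collections import Counter

# Weight each sample by 1 / (size of its class); rare classes get
# proportionally larger weights. Illustrative only.
def class_balanced_weights(labels):
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

class_balanced_weights([0, 0, 0, 0, 1])  # [0.25, 0.25, 0.25, 0.25, 1.0]
```

Such per-sample weights are exactly what a weighted random sampler (e.g. `torch.utils.data.WeightedRandomSampler`) consumes.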
## Minor Changes

- `nn.models.MLP`: MLPs can now either be initialized via a list of `channels` or by specifying `hidden_channels` and `num_layers` (#3957)
- `nn.models.BasicGNN`: Final `Linear` transformations are now always applied (except for `jk=None`) (#4042)
- `nn.conv.MessagePassing`: Message passing modules that make use of `edge_updater` are now jittable (#3765), thanks to @Padarn
- `nn.conv.MessagePassing`: (Official) support for `min` and `mul` aggregations (#4219)
- `nn.LightGCN`: Initialize embeddings via `xavier_uniform` for better model performance (#4083), thanks to @nishithshowri006
- `nn.conv.ChebConv`: Automatic eigenvalue approximation (#4106), thanks to @daniel-unyi-42
- `nn.conv.APPNP`: Added support for optional `edge_weight` (690a01d), thanks to @YueeXiang
- `nn.conv.GravNetConv`: Support for `torch.jit.script` (#3885), thanks to @RobMcH
- `nn.pool.global_*_pool`: The `batch` vector is now optional (#4161)
- `nn.to_hetero`: Added a warning in case `to_hetero` is used on `HeteroData` metadata with unused destination node types (#3775)
- `nn.to_hetero`: Support for nested modules (ea135bf)
- `nn.Sequential`: Support for indexing (#3790)
- `nn.Sequential`: Support for `OrderedDict` as input (#4075)
- `datasets.ZINC`: Added an in-depth description of the task (#3832), thanks to @gasteigerjo
- `datasets.FakeDataset`: Support for different feature distributions across different labels (#4065), thanks to @arunppsg
- `datasets.FakeDataset`: Support for custom global attributes (#4074), thanks to @arunppsg
- `transforms.NormalizeFeatures`: Features will no longer be transformed in-place (ada5b9a)
- `transforms.NormalizeFeatures`: Support for negative feature values (6008e30)
- `utils.is_undirected`: Improved efficiency (#3789)
- `utils.dropout_adj`: Improved efficiency (#4059)
- `utils.contains_isolated_nodes`: Improved efficiency (970de13)
- `utils.to_networkx`: Support for `to_undirected` options (upper triangle vs. lower triangle) (#3901, #3948), thanks to @RemyLau
- `graphgym`: Support for custom metrics and loggers (#3494), thanks to @RemyLau
- `graphgym.register`: Register operations can now be used as class decorators (#3779, #3782)
- Documentation: Added a few exercises at the end of documentation tutorials (#3780), thanks to @PabloAMC
- Documentation: Added better installation instructions to `CONTRIBUTING.md` (#3803, #3991, #3995), thanks to @Cho-Geonwoo and @RBendias and @RodrigoVillatoro
- Refactor: Clean-up of dependencies (#3908, #4133, #4172), thanks to @adelizer
- CI: Improved test runtimes (#4241)
- CI: Additional linting check via `yamllint` (#3886)
- CI: Additional linting check via `isort` (66b1780), thanks to @mananshah99
- `torch.package`: Model packaging via `torch.package` (#3997)
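The two `nn.models.MLP` initialization modes listed above describe the same stack of layer sizes: `hidden_channels` and `num_layers` can be expanded into a `channels`-style list. A hypothetical sketch of that expansion (the function name is illustrative, not PyG's internals):

```python
# Expand (in, hidden, out, num_layers) into a flat list of layer sizes,
# as a channels-list MLP specification would expect. Illustrative only.
def resolve_channels(in_channels, hidden_channels, out_channels, num_layers):
    # num_layers linear layers require num_layers + 1 sizes.
    return [in_channels] + [hidden_channels] * (num_layers - 1) + [out_channels]

resolve_channels(16, 32, 8, num_layers=3)  # [16, 32, 32, 8]
```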
## Bugfixes

- `data.HeteroData`: Fixed a bug in `data.{attr_name}_dict` in case `data.{attr_name}` does not exist (#3897)
- `data.Data`: Fixed `data.is_edge_attr` in case `data.num_edges == 1` (#3880)
- `data.Batch`: Fixed a device mismatch bug in case a `batch` object was indexed that was created from GPU tensors (e6aa4c9, c549b3b)
- `data.InMemoryDataset`: Fixed a bug in which `copy` did not respect the underlying slice (d478dcb, #4223)
- `nn.conv.MessagePassing`: Fixed message passing with zero nodes/edges (#4222)
- `nn.conv.MessagePassing`: Fixed bipartite message passing with `flow="target_to_source"` (#3907)
- `nn.conv.GeneralConv`: Fixed an issue in case `skip_linear=False` and `in_channels=out_channels` (#3751), thanks to @danielegrattarola
- `nn.to_hetero`: Fixed model transformation in case node type names or edge type names contain whitespaces or dashes (#3882, b63a660)
- `nn.dense.Linear`: Fixed a bug in lazy initialization for PyTorch < 1.8.0 (973d17d, #4086)
- `nn.norm.LayerNorm`: Fixed a bug in the shape of weights and biases (#4030), thanks to @marshka
- `nn.pool`: Fixed `torch.jit.script` support for `torch-cluster` functions (#4047)
- `datasets.TOSCA`: Fixed a bug in which indices of faces started at `1` rather than `0` (8c282a0), thanks to @JRowbottomGit
- `datasets.WikiCS`: Fixed `WikiCS` to be undirected by default (#3796), thanks to @pmernyei
- Resolved an inconsistency between `utils.contains_isolated_nodes` and `data.has_isolated_nodes` (#4138)
- `graphgym`: Fixed the loss function regarding multi-label classification (#4206), thanks to @RemyLau
- Documentation: Fixed typos, grammar and bugs (#3840, #3874, #3875, #4149), thanks to @itamblyn and @chrisyeh96 and @finquick
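The check behind the `contains_isolated_nodes`/`has_isolated_nodes` consistency fix is conceptually simple: a node is isolated if it appears in no edge at all. A plain-Python sketch of that check (not PyG's tensor-based implementation):

```python
# A node is isolated if it occurs in neither row of the edge index.
def contains_isolated_nodes(edge_index, num_nodes):
    src, dst = edge_index
    touched = set(src) | set(dst)
    return len(touched) < num_nodes

contains_isolated_nodes(([0, 1], [1, 0]), num_nodes=3)  # True: node 2 is isolated
```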