diff --git a/README.md b/README.md index 7f0806b..fd55741 100755 --- a/README.md +++ b/README.md @@ -1,11 +1,25 @@ # Benchmarking Spectral Graph Neural Networks -[![Docs](https://github.com/gdmnl/Spectral-GNN-Benchmark/actions/workflows/docs.yaml/badge.svg)](https://gdmnl.github.io/Spectral-GNN-Benchmark/) -[![LICENSE](https://img.shields.io/github/license/gdmnl/Spectral-GNN-Benchmark)](LICENSE) -[![Release](https://img.shields.io/github/v/release/gdmnl/Spectral-GNN-Benchmark?include_prereleases)](https://github.com/gdmnl/Spectral-GNN-Benchmark/releases/latest) -[![Python](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fgdmnl%2FSpectral-GNN-Benchmark%2Fmain%2Fpyproject.toml)](https://gdmnl.github.io/Spectral-GNN-Benchmark/installation.html#) - -`pyg_spectral` is a [PyTorch Geometric](https://pyg.org)-based framework for analyzing, implementing, and benchmarking spectral GNNs with effectiveness and efficiency evaluations. + +
+ + Docs + + + License + + + Contrib + + + Python + + + PyTorch + +
+ +`pyg_spectral` is a [PyTorch Geometric](https://pyg.org)-based framework for analyzing, implementing, and benchmarking spectral GNNs with effectiveness and efficiency evaluations. Our paper is available on [arXiv](https://arxiv.org/abs/2406.09675). > [!IMPORTANT] > ***Why this project?*** @@ -16,7 +30,12 @@ --- -[:mag: **Documentation**](https://gdmnl.github.io/Spectral-GNN-Benchmark/) | [:octocat: **GitHub**](https://github.com/gdmnl/Spectral-GNN-Benchmark/) | [:page_facing_up: **Paper**](https://arxiv.org/abs/2406.09675) | [:paperclip: **Cite**](CITATION.cff) +
+ 🔍 Documentation | + 👾 GitHub | + 📄 Paper | + 📎 Cite +
- [Installation](#installation) - [Reproduce Experiments](#reproduce-experiments) @@ -67,14 +86,14 @@ bash scripts/runmb.sh ``` ### Additional Experiments -#### Effect of graph normalization vs degree-specific accuracy (*Figure 3, 9*): +#### Effect of graph normalization (*Figure 3, 9*): ```bash bash scripts/eval_degree.sh ``` Figures can be plotted by: [`benchmark/notebook/fig_degng.ipynb`](benchmark/notebook/fig_degng.ipynb). -#### Effect of the number of propagation hops vs accuracy (*Figure 7, 8*): +#### Effect of propagation hops (*Figure 7, 8*): ```bash bash scripts/eval_hop.sh ``` @@ -162,8 +181,8 @@ The propagation matrix is specified by the `propagate_mat` argument as a string. #### Step 2: Prepare representation matrix Similar to PyG modules, our spectral filter class takes the graph attribute `x` and edge index `edge_index` as input. The `_get_convolute_mat()` method prepares the representation matrices used in recurrent computation as a dictionary: ```python -def _get_convolute_mat(self, x, edge_index): - return {'x': x, 'x_1': x} + def _get_convolute_mat(self, x, edge_index): + return {'x': x, 'x_1': x} ``` The above example overwrites the method for `SkipConv`, returning the input feature `x` and a placeholder `x_1` for the representation in the previous hop. @@ -171,14 +190,14 @@ The above example overwrites the method for `SkipConv`, returning the input feat #### Step 3: Derive recurrent forward The `_forward()` method implements recurrent computation of the filter. Its input/output is a dictionary combining the propagation matrices defined by `propagate_mat` and the representation matrices prepared by `_get_convolute_mat()`. ```python -def _forward(self, x, x_1, prop): - if self.hop == 0: - # No propagation for k=0 - return {'x': x, 'x_1': x, 'prop': prop} - - h = self.propagate(prop, x=x) - h = h + x_1 - return {'x': h, 'x_1': x, 'prop': prop} + def _forward(self, x, x_1, prop): + if self.hop == 0: + # No propagation for k=0 + return {'x': x, 'x_1': x, 'prop': prop} + + h = self.propagate(prop, x=x) + h = h + x_1 + return {'x': h, 'x_1': x, 'prop': prop} ``` Similar to PyG modules, the `propagate()` method conducts graph propagation by the given matrices. The above example corresponds to the graph propagation with a skip connection to the previous representation: $H^{(k)} = (A-I)H^{(k-1)} + H^{(k-2)}$. 
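For orientation, the three steps above assemble into one short class. The following is a minimal consolidated sketch (the body simply restates the snippets from Steps 1-3, so it is not part of the diff itself); the resulting filter plugs into a model such as `DecoupledVar` exactly as shown in the next section:

```python
# Minimal consolidated sketch of Steps 1-3 above.
from torch import Tensor
from pyg_spectral.nn.conv.base_mp import BaseMP

class SkipConv(BaseMP):
    def __init__(self, num_hops, hop, cached, **kwargs):
        # Step 1: propagation matrix A - I
        kwargs['propagate_mat'] = 'A-I'
        super(SkipConv, self).__init__(num_hops, hop, cached, **kwargs)

    def _get_convolute_mat(self, x, edge_index):
        # Step 2: input feature and a placeholder for the previous hop
        return {'x': x, 'x_1': x}

    def _forward(self, x, x_1, prop):
        # Step 3: H(k) = (A - I) H(k-1) + H(k-2)
        if self.hop == 0:
            # No propagation for k=0
            return {'x': x, 'x_1': x, 'prop': prop}
        h = self.propagate(prop, x=x) + x_1
        return {'x': h, 'x_1': x, 'prop': prop}
```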
@@ -198,7 +217,7 @@ out = model(x, edge_index) | **Category** | **Model** | |:------------:|:----------| -| Fixed Filter | [GCN](https://arxiv.org/abs/1609.02907), [SGC](https://arxiv.org/pdf/1902.07153), [gfNN](https://arxiv.org/pdf/1905.09550), [GZoom](https://arxiv.org/pdf/1910.02370), [S²GC](https://openreview.net/pdf?id=CYO5T-YjWZV),[GLP](https://arxiv.org/pdf/1901.09993), [APPNP](https://arxiv.org/pdf/1810.05997), [GCNII](https://arxiv.org/pdf/2007.02133), [GDC](https://proceedings.neurips.cc/paper_files/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf), [DGC](https://arxiv.org/pdf/2102.10739), [AGP](https://arxiv.org/pdf/2106.03058), [GRAND+](https://arxiv.org/pdf/2203.06389)| +| Fixed Filter | [GCN](https://arxiv.org/abs/1609.02907), [SGC](https://arxiv.org/pdf/1902.07153), [gfNN](https://arxiv.org/pdf/1905.09550), [GZoom](https://arxiv.org/pdf/1910.02370), [S²GC](https://openreview.net/pdf?id=CYO5T-YjWZV), [GLP](https://arxiv.org/pdf/1901.09993), [APPNP](https://arxiv.org/pdf/1810.05997), [GCNII](https://arxiv.org/pdf/2007.02133), [GDC](https://proceedings.neurips.cc/paper_files/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf), [DGC](https://arxiv.org/pdf/2102.10739), [AGP](https://arxiv.org/pdf/2106.03058), [GRAND+](https://arxiv.org/pdf/2203.06389)| |Variable Filter|[GIN](https://arxiv.org/pdf/1810.00826), [AKGNN](https://arxiv.org/pdf/2112.04575), [DAGNN](https://dl.acm.org/doi/pdf/10.1145/3394486.3403076), [GPRGNN](https://arxiv.org/pdf/2006.07988), [ARMAGNN](https://arxiv.org/pdf/1901.01343), [ChebNet](https://papers.nips.cc/paper_files/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf), [ChebNetII](https://arxiv.org/pdf/2202.03580), [HornerGCN / ClenshawGCN](https://arxiv.org/pdf/2210.16508), [BernNet](https://arxiv.org/pdf/2106.10994), [LegendreNet](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10160025), [JacobiConv](https://arxiv.org/pdf/2205.11172), [FavardGNN / OptBasisGNN](https://arxiv.org/pdf/2302.12432)| |Filter Bank|[AdaGNN](https://arxiv.org/pdf/2104.12840), [FBGNN](https://arxiv.org/pdf/2008.08844), [ACMGNN](https://arxiv.org/pdf/2210.07606), [FAGCN](https://arxiv.org/pdf/2101.00797), [G²CN](https://proceedings.mlr.press/v162/li22h/li22h.pdf), [GNN-LF/HF](https://arxiv.org/pdf/2101.11859), [FiGURe](https://arxiv.org/pdf/2310.01892)| diff --git a/docs/source/_include/pyg_spectral.nn.conv.rst b/docs/source/_include/pyg_spectral.nn.conv.rst index e3971ee..6ca2293 100644 --- a/docs/source/_include/pyg_spectral.nn.conv.rst +++ b/docs/source/_include/pyg_spectral.nn.conv.rst @@ -1,4 +1,4 @@ -pyg\_spectral.nn.conv +pyg\_spectral.nn.conv ============================= .. automodule:: pyg_spectral.nn.conv diff --git a/docs/source/_include/pyg_spectral.nn.rst b/docs/source/_include/pyg_spectral.nn.rst index e31c43a..26f8d7c 100644 --- a/docs/source/_include/pyg_spectral.nn.rst +++ b/docs/source/_include/pyg_spectral.nn.rst @@ -5,12 +5,64 @@ pyg\_spectral.nn :maxdepth: 1 pyg_spectral.nn.conv - pyg_spectral.nn.models - pyg_spectral.nn.norm .. 
autosummary:: + :nosignatures: :recursive: - pyg_spectral.nn.conv + pyg_spectral.nn.conv.ACMConv + pyg_spectral.nn.conv.AdjConv + pyg_spectral.nn.conv.AdjDiffConv + pyg_spectral.nn.conv.AdjResConv + pyg_spectral.nn.conv.AdjSkip2Conv + pyg_spectral.nn.conv.AdjSkipConv + pyg_spectral.nn.conv.Adji2Conv + pyg_spectral.nn.conv.AdjiConv + pyg_spectral.nn.conv.BaseMP + pyg_spectral.nn.conv.BernConv + pyg_spectral.nn.conv.ChebConv + pyg_spectral.nn.conv.ChebIIConv + pyg_spectral.nn.conv.ClenshawConv + pyg_spectral.nn.conv.FavardConv + pyg_spectral.nn.conv.HornerConv + pyg_spectral.nn.conv.JacobiConv + pyg_spectral.nn.conv.LapiConv + pyg_spectral.nn.conv.LegendreConv + pyg_spectral.nn.conv.OptBasisConv + +.. toctree:: + :maxdepth: 1 + pyg_spectral.nn.models + +.. autosummary:: + :nosignatures: + :recursive: + + pyg_spectral.nn.models.ACMGNN + pyg_spectral.nn.models.ACMGNNDec + pyg_spectral.nn.models.AdaGNN + pyg_spectral.nn.models.BaseNN + pyg_spectral.nn.models.BaseNNCompose + pyg_spectral.nn.models.CppCompFixed + pyg_spectral.nn.models.DecoupledFixed + pyg_spectral.nn.models.DecoupledFixedCompose + pyg_spectral.nn.models.DecoupledVar + pyg_spectral.nn.models.DecoupledVarCompose + pyg_spectral.nn.models.Iterative + pyg_spectral.nn.models.IterativeCompose + pyg_spectral.nn.models.PrecomputedFixed + pyg_spectral.nn.models.PrecomputedFixedCompose + pyg_spectral.nn.models.PrecomputedVar + pyg_spectral.nn.models.PrecomputedVarCompose + +.. toctree:: + :maxdepth: 1 + pyg_spectral.nn.norm + +.. autosummary:: + :nosignatures: + :recursive: + + pyg_spectral.nn.norm.TensorStandardScaler diff --git a/docs/source/_templates/autosummary/class.rst b/docs/source/_templates/autosummary/class.rst index b5c1ba8..5e252d6 100644 --- a/docs/source/_templates/autosummary/class.rst +++ b/docs/source/_templates/autosummary/class.rst @@ -3,6 +3,9 @@ .. currentmodule:: {{ module }} .. autoclass:: {{ objname }} + :members: + :show-inheritance: + :inherited-members: {% block methods %} .. automethod:: __init__ @@ -26,4 +29,4 @@ ~{{ name }}.{{ item }} {%- endfor %} {% endif %} - {% endblock %} \ No newline at end of file + {% endblock %} diff --git a/docs/source/_templates/autosummary/module.rst b/docs/source/_templates/autosummary/module.rst index 6ec89e0..6a3aa0d 100644 --- a/docs/source/_templates/autosummary/module.rst +++ b/docs/source/_templates/autosummary/module.rst @@ -1,6 +1,18 @@ {{ fullname | escape | underline}} .. automodule:: {{ fullname }} + :members: + + {% block attributes %} + {% if attributes %} + .. rubric:: Attributes + + .. autosummary:: + {% for item in attributes %} + {{ item }} + {%- endfor %} + {% endif %} + {% endblock %} {% block functions %} {% if functions %} @@ -34,3 +46,15 @@ {%- endfor %} {% endif %} {% endblock %} + +{% block modules %} +{% if modules %} +.. rubric:: Modules + +.. autosummary:: + :recursive: +{% for item in modules %} + {{ item }} +{%- endfor %} +{% endif %} +{% endblock %} diff --git a/docs/source/_tutorial/arrangement.md b/docs/source/_tutorial/arrangement.md index d248b9c..c8f6814 100644 --- a/docs/source/_tutorial/arrangement.md +++ b/docs/source/_tutorial/arrangement.md @@ -6,9 +6,9 @@ Refer to {py:class}`benchmark.trainer.ModelLoader`. 
| **Category** | **Model** | |:------------:|:----------| -| Fixed Filter | [GCN](https://arxiv.org/abs/1609.02907), [SGC](https://arxiv.org/pdf/1902.07153), [gfNN](https://arxiv.org/pdf/1905.09550), [GZoom](https://arxiv.org/pdf/1910.02370), [S$^2$GC](https://openreview.net/pdf?id=CYO5T-YjWZV),[GLP](https://arxiv.org/pdf/1901.09993), [APPNP](https://arxiv.org/pdf/1810.05997), [GCNII](https://arxiv.org/pdf/2007.02133), [GDC](https://proceedings.neurips.cc/paper_files/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf), [DGC](https://arxiv.org/pdf/2102.10739), [AGP](https://arxiv.org/pdf/2106.03058), [GRAND+](https://arxiv.org/pdf/2203.06389)| +| Fixed Filter | [GCN](https://arxiv.org/abs/1609.02907), [SGC](https://arxiv.org/pdf/1902.07153), [gfNN](https://arxiv.org/pdf/1905.09550), [GZoom](https://arxiv.org/pdf/1910.02370), [S²GC](https://openreview.net/pdf?id=CYO5T-YjWZV), [GLP](https://arxiv.org/pdf/1901.09993), [APPNP](https://arxiv.org/pdf/1810.05997), [GCNII](https://arxiv.org/pdf/2007.02133), [GDC](https://proceedings.neurips.cc/paper_files/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf), [DGC](https://arxiv.org/pdf/2102.10739), [AGP](https://arxiv.org/pdf/2106.03058), [GRAND+](https://arxiv.org/pdf/2203.06389)| |Variable Filter|[GIN](https://arxiv.org/pdf/1810.00826), [AKGNN](https://arxiv.org/pdf/2112.04575), [DAGNN](https://dl.acm.org/doi/pdf/10.1145/3394486.3403076), [GPRGNN](https://arxiv.org/pdf/2006.07988), [ARMAGNN](https://arxiv.org/pdf/1901.01343), [ChebNet](https://papers.nips.cc/paper_files/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf), [ChebNetII](https://arxiv.org/pdf/2202.03580), [HornerGCN/ClenshawGCN](https://arxiv.org/pdf/2210.16508), [BernNet](https://arxiv.org/pdf/2106.10994), [LegendreNet](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10160025), [JacobiConv](https://arxiv.org/pdf/2205.11172), [FavardGNN/OptBasisGNN](https://arxiv.org/pdf/2302.12432)| -|Filter Bank|[AdaGNN](https://arxiv.org/pdf/2104.12840), [FBGNN](https://arxiv.org/pdf/2008.08844), [ACMGNN](https://arxiv.org/pdf/2210.07606), [FAGCN](https://arxiv.org/pdf/2101.00797), [G$^2$CN](https://proceedings.mlr.press/v162/li22h/li22h.pdf), [GNN-LF/HF](https://arxiv.org/pdf/2101.11859), [FiGURe](https://arxiv.org/pdf/2310.01892)| +|Filter Bank|[AdaGNN](https://arxiv.org/pdf/2104.12840), [FBGNN](https://arxiv.org/pdf/2008.08844), [ACMGNN](https://arxiv.org/pdf/2210.07606), [FAGCN](https://arxiv.org/pdf/2101.00797), [G²CN](https://proceedings.mlr.press/v162/li22h/li22h.pdf), [GNN-LF/HF](https://arxiv.org/pdf/2101.11859), [FiGURe](https://arxiv.org/pdf/2310.01892)| ## Covered Datasets diff --git a/docs/source/_tutorial/configure.rst b/docs/source/_tutorial/configure.rst new file mode 100644 index 0000000..a84e046 --- /dev/null +++ b/docs/source/_tutorial/configure.rst @@ -0,0 +1,75 @@ +Configure Benchmark +=============================== + +Experiment Parameters +------------------------------- + +Refer to the help text by: + +.. code-block:: bash + + python benchmark/run_single.py --help + +--help show this help message and exit + +.. rubric:: Logging configuration + +--seed SEED random seed +--dev DEV GPU id +--suffix SUFFIX Save name suffix +-quiet Dry run without saving logs +--storage STORAGE + Storage scheme for saving the checkpoints. + Options: ``state_file``, ``state_ram``, ``state_gpu`` +--loglevel LOGLEVEL ``10``:progress, ``15``:train, ``20``:info, ``25``:result + +.. 
rubric:: Data configuration
+
+--data DATA  Dataset name
+--data_split DATA_SPLIT  Index or percentage of dataset split
+--normg NORMG  Generalized graph norm
+--normf NORMF  Embedding norm dimension. ``0``: feat-wise, ``1``: node-wise, ``None``: disable
+
+.. rubric:: Model configuration
+
+--model MODEL  Model class name
+--conv CONV  Conv class name
+--num_hops NUM_HOPS  Number of conv hops
+--in_layers IN_LAYERS  Number of MLP layers before conv
+--out_layers OUT_LAYERS  Number of MLP layers after conv
+--hidden HIDDEN  Hidden layer width
+--dp_lin DP_LIN  Dropout rate for linear
+--dp_conv DP_CONV  Dropout rate for conv
+
+.. rubric:: Training configuration
+
+--epoch EPOCH  Number of epochs
+--patience PATIENCE  Patience epoch for early stopping
+--period PERIOD  Periodic saving epoch interval
+--batch BATCH  Batch size
+--lr_lin LR_LIN  Learning rate for linear
+--lr_conv LR_CONV  Learning rate for conv
+--wd_lin WD_LIN  Weight decay for linear
+--wd_conv WD_CONV  Weight decay for conv
+
+.. rubric:: Model-specific
+
+--theta_scheme THETA_SCHEME  Filter name
+--theta_param THETA_PARAM  Hyperparameter for filter
+--combine COMBINE
+    How to combine different channels of convs.
+    Options: ``sum``, ``sum_weighted``, ``cat``
+
+.. rubric:: Conv-specific
+
+--alpha ALPHA  Decay factor
+--beta BETA  Scaling factor
+
+.. rubric:: Test flags
+
+--test_deg  Call :meth:`test_deg() <benchmark.trainer.TrnFullbatch.test_deg>`
+
+Add New Dataset
+--------------------------
+
+Extend the :meth:`SingleGraphLoader._resolve_import() <benchmark.trainer.SingleGraphLoader._resolve_import>` method to import new datasets under the respective protocols.
diff --git a/docs/source/_tutorial/customization.rst b/docs/source/_tutorial/customization.rst
deleted file mode 100644
index f9fa95d..0000000
--- a/docs/source/_tutorial/customization.rst
+++ /dev/null
@@ -1,127 +0,0 @@
-Customization
-=============
-
-Add New Spectral Filter
------------------------
-
-New spectral filters to :mod:`pyg_spectral.nn.conv` can be easily implemented by **only three steps**, then enjoys a range of model architectures, analysis utilities, and training schemes.
-
-Step 1: Define propagation matrix
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The base class :class:`pyg_spectral.nn.conv.BaseMP` provides essential methods for building spectral filters. We can define a new filter class :class:`pyg_spectral.nn.conv.SkipConv` by inheriting from it:
-
-.. code-block:: python
-
-    from torch import Tensor
-    from pyg_spectral.nn.conv.base_mp import BaseMP
-
-    class SkipConv(BaseMP):
-        def __init__(self, num_hops, hop, cached, **kwargs):
-            kwargs['propagate_mat'] = 'A-I'
-            super(SkipConv, self).__init__(num_hops, hop, cached, **kwargs)
-
-The propagation matrix is specified by the :obj:`propagate_mat` argument as a string. Each matrix can be the normalized adjacency matrix (:obj:`A`) or the normalized Laplacian matrix (:obj:`L`), with optional diagonal scaling, where the scaling factor can either be a number or an attribute name of the class. Multiple propagation matrices can be combined by `,`. Valid examples: :obj:`A`, :obj:`L-2*I`, :obj:`L,A+I,L-alpha*I`.
-
-Step 2: Prepare representation matrix
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Similar to ``PyG`` modules, our spectral filter class takes the graph attribute :obj:`x` and edge index :obj:`edge_index` as input. The :meth:`pyg_spectral.nn.conv.base_mp.BaseMP._get_convolute_mat` method prepares the representation matrices used in recurrent computation as a dictionary:
-
-.. 
code-block:: python - - def _get_convolute_mat(self, x, edge_index): - return {'x': x, 'x_1': x} - -The above example overwrites the method for :class:`SkipConv`, returning the input feature :obj:`x` and a placeholder :obj:`x_1` for the representation in the previous hop. - -Step 3: Derive recurrent forward -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The :meth:`pyg_spectral.nn.conv.base_mp.BaseMP._forward` method implements recurrent computation of the filter. Its input/output is a dictionary combining the propagation matrices defined by :obj:`propagate_mat` and the representation matrices prepared by :meth:`pyg_spectral.nn.conv.base_mp.BaseMP._get_convolute_mat`. - -.. code-block:: python - - def _forward(self, x, x_1, prop): - if self.hop == 0: - # No propagation for k=0 - return {'x': x, 'x_1': x, 'prop': prop} - - h = self.propagate(prop, x=x) - h = h + x_1 - return {'x': h, 'x_1': x, 'prop': prop} - -Similar to ``PyG`` modules, the :func:`propagate` method conducts graph propagation by the given matrices. The above example corresponds to the graph propagation with a skip connection to the previous representation: :math:`H^{(k)} = (A-I)H^{(k-1)} + H^{(k-2)}`. - -Build the model! -~~~~~~~~~~~~~~~~ - -Now the :class:`SkipConv` filter is properly defined. The following snippet use the :class:`pyg_spectral.nn.models.DecoupledVar` model composing 10 hops of :class:`SkipConv` filters, which can be used as a normal PyTorch model: - -.. code-block:: python - - from pyg_spectral.nn.models import DecoupledVar - - model = DecoupledVar(conv='SkipConv', num_hops=10, in_channels=x.size(1), hidden_channels=x.size(1), out_channels=x.size(1)) - out = model(x, edge_index) - - -Configure Experiment Parameters -------------------------------- - -Refer to the help text by: - -.. code-block:: bash - - python benchmark/run_single.py --help - -.. code-block:: - - usage: python run_single.py - options: - --help show this help message and exit - # Logging configuration - --seed SEED random seed - --dev DEV GPU id - --suffix SUFFIX Save name suffix. - -quiet Dry run without saving logs. - --storage {state_file,state_ram,state_gpu} - Storage scheme for saving the checkpoints. - --loglevel LOGLEVEL 10:progress, 15:train, 20:info, 25:result - # Data configuration - --data DATA Dataset name - --data_split DATA_SPLIT Index or percentage of dataset split - --normg NORMG Generalized graph norm - --normf [NORMF] Embedding norm dimension. 
0: feat-wise, 1: node-wise, None: disable
-    # Model configuration
-    --model MODEL        Model class name
-    --conv CONV          Conv class name
-    --num_hops NUM_HOPS  Number of conv hops
-    --in_layers IN_LAYERS   Number of MLP layers before conv
-    --out_layers OUT_LAYERS  Number of MLP layers after conv
-    --hidden HIDDEN      Number of hidden width
-    --dp_lin DP_LIN      Dropout rate for linear
-    --dp_conv DP_CONV    Dropout rate for conv
-    # Training configuration
-    --epoch EPOCH        Number of epochs
-    --patience PATIENCE  Patience epoch for early stopping
-    --period PERIOD      Periodic saving epoch interval
-    --batch BATCH        Batch size
-    --lr_lin LR_LIN      Learning rate for linear
-    --lr_conv LR_CONV    Learning rate for conv
-    --wd_lin WD_LIN      Weight decay for linear
-    --wd_conv WD_CONV    Weight decay for conv
-    # Model-specific
-    --theta_scheme THETA_SCHEME  Filter name
-    --theta_param THETA_PARAM    Hyperparameter for filter
-    --combine {sum,sum_weighted,cat}
-                         How to combine different channels of convs
-    # Conv-specific
-    --alpha ALPHA        Decay factor
-    --beta BETA          Scaling factor
-    # Test flags
-    --test_deg           Call TrnFullbatch.test_deg()
-
-Add New Experiment Dataset
---------------------------
-
-Append the :meth:`benchmark.trainer.SingleGraphLoader._resolve_import` method to include new datasets under respective protocols.
diff --git a/docs/source/_tutorial/customize.rst b/docs/source/_tutorial/customize.rst
new file mode 100644
index 0000000..f756333
--- /dev/null
+++ b/docs/source/_tutorial/customize.rst
@@ -0,0 +1,65 @@
+Customize Spectral Modules
+==================================
+
+Add New Filter
+-----------------------
+
+New spectral filters to :mod:`pyg_spectral.nn.conv` can be easily implemented in **only three steps**, and then enjoy a range of model architectures, analysis utilities, and training schemes.
+
+Step 1: Define propagation matrix
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The base class :class:`nn.conv.BaseMP <pyg_spectral.nn.conv.BaseMP>` provides essential methods for building spectral filters. We can define a new filter class :class:`nn.conv.SkipConv` by inheriting from it:
+
+.. code-block:: python
+
+    from torch import Tensor
+    from pyg_spectral.nn.conv.base_mp import BaseMP
+
+    class SkipConv(BaseMP):
+        def __init__(self, num_hops, hop, cached, **kwargs):
+            kwargs['propagate_mat'] = 'A-I'
+            super(SkipConv, self).__init__(num_hops, hop, cached, **kwargs)
+
+The propagation matrix is specified by the :obj:`propagate_mat` argument as a string. Each matrix can be the normalized adjacency matrix (``A``) or the normalized Laplacian matrix (``L``), with optional diagonal scaling, where the scaling factor can either be a number or an attribute name of the class. Multiple propagation matrices can be combined by ``,``. Valid examples: ``A``, ``L-2*I``, ``L,A+I,L-alpha*I``.
+
+Step 2: Prepare representation matrix
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Similar to PyG modules, our spectral filter class takes the graph attribute :obj:`x` and edge index :obj:`edge_index` as input. The :meth:`_get_convolute_mat() <pyg_spectral.nn.conv.base_mp.BaseMP._get_convolute_mat>` method prepares the representation matrices used in recurrent computation as a dictionary:
+
+.. code-block:: python
+
+    def _get_convolute_mat(self, x, edge_index):
+        return {'x': x, 'x_1': x}
+
+The above example overwrites the method for :class:`SkipConv`, returning the input feature :obj:`x` and a placeholder :obj:`x_1` for the representation in the previous hop.
+
+Step 3: Derive recurrent forward
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :meth:`_forward() <pyg_spectral.nn.conv.base_mp.BaseMP._forward>` method implements recurrent computation of the filter. Its input/output is a dictionary combining the propagation matrices defined by :obj:`propagate_mat` and the representation matrices prepared by :meth:`_get_convolute_mat() <pyg_spectral.nn.conv.base_mp.BaseMP._get_convolute_mat>`.
+
+.. code-block:: python
+
+    def _forward(self, x, x_1, prop):
+        if self.hop == 0:
+            # No propagation for k=0
+            return {'x': x, 'x_1': x, 'prop': prop}
+
+        h = self.propagate(prop, x=x)
+        h = h + x_1
+        return {'x': h, 'x_1': x, 'prop': prop}
+
+Similar to PyG modules, the :meth:`propagate() <pyg_spectral.nn.conv.base_mp.BaseMP.propagate>` method conducts graph propagation with the given matrices. The above example corresponds to graph propagation with a skip connection to the previous representation: :math:`H^{(k)} = (A-I)H^{(k-1)} + H^{(k-2)}`.
+
+Build the model!
+~~~~~~~~~~~~~~~~
+
+Now the :class:`SkipConv` filter is properly defined. The following snippet uses the :class:`nn.models.DecoupledVar <pyg_spectral.nn.models.DecoupledVar>` model to compose 10 hops of :class:`SkipConv` filters, which can be used as a normal PyTorch model:
+
+.. code-block:: python
+
+    from pyg_spectral.nn.models import DecoupledVar
+
+    model = DecoupledVar(conv='SkipConv', num_hops=10, in_channels=x.size(1), hidden_channels=x.size(1), out_channels=x.size(1))
+    out = model(x, edge_index)
diff --git a/docs/source/_tutorial/installation.rst b/docs/source/_tutorial/installation.rst
index 17ac687..35abd57 100644
--- a/docs/source/_tutorial/installation.rst
+++ b/docs/source/_tutorial/installation.rst
@@ -19,7 +19,7 @@ The installation script already covers the following core dependencies:
 Advanced Options
 ++++++++++++++++++++++++
 
-Installations can be specified by pip options. The following options can also be combined.
+Installation variants can be selected via pip options. The following options can also be combined on demand.
 
 Only ``pyg_spectral`` Package
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -28,7 +28,6 @@ Install without any options:
 
 .. code-block:: bash
 
-    pip install -r requirements.txt
     pip install -e .
 
 Benchmark Experiments
@@ -38,7 +37,6 @@ Install with ``[benchmark]`` option:
 
 .. code-block:: bash
 
-    pip install -r requirements.txt
     pip install -e .[benchmark]
 
 Docs Development
@@ -63,7 +61,6 @@ C++ Backend
 
 .. code-block:: bash
 
-    pip install -r requirements.txt
     export PSFLAG_CPP=1; pip install -e .[cpp]
 
 .. [1] Please refer to the `official guide <https://pytorch.org/get-started/locally/>`_ if a specific CUDA version is required for PyTorch.
diff --git a/docs/source/_tutorial/reproduce.rst b/docs/source/_tutorial/reproduce.rst
index 504f687..b498114 100644
--- a/docs/source/_tutorial/reproduce.rst
+++ b/docs/source/_tutorial/reproduce.rst
@@ -22,7 +22,7 @@ Datasets will be automatically downloaded and processed by the code.
 Additional Experiments
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-**Effect of graph normalization vs degree-specific accuracy** (*Figure 3, 9*):
+**Effect of graph normalization** (*Figure 3, 9*):
 
 .. code-block:: bash
 
     bash scripts/eval_degree.sh
 
 Figures can be plotted by: `benchmark/notebook/fig_degng.ipynb <https://github.com/gdmnl/Spectral-GNN-Benchmark/blob/main/benchmark/notebook/fig_degng.ipynb>`_.
 
-**Effect of the number of propagation hops vs accuracy** (*Figure 7, 8*):
+**Effect of propagation hops** (*Figure 7, 8*):
 
 .. code-block:: bash
diff --git a/docs/source/conf.py b/docs/source/conf.py
index b2e5755..2700c64 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -36,6 +36,7 @@
 ]
 
 autosummary_generate = True
+autosummary_imported_members = True
 templates_path = ['_templates']
 exclude_patterns = []
 
@@ -48,6 +49,7 @@
 html_theme_options = {
     "logo_only": False,
     "display_version": True,
+    "navigation_depth": 2,
 }
 
 html_static_path = ['_static']
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 20b6b58..4ae534a 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -5,21 +5,27 @@
 pyg_spectral
 ========================================
 
+.. sidebar:: Useful links
+
+   | 🔍 `Documentation <https://gdmnl.github.io/Spectral-GNN-Benchmark/>`_
+   | 👾 `GitHub <https://github.com/gdmnl/Spectral-GNN-Benchmark/>`_ |gh-bn|
+   | 📄 `Paper <https://arxiv.org/abs/2406.09675>`_
+   | 📎 `Citation <https://github.com/gdmnl/Spectral-GNN-Benchmark/blob/main/CITATION.cff>`_
+
 ``pyg_spectral`` is a `PyG <https://pyg.org>`_-based framework for analyzing, implementing, and benchmarking spectral GNNs with effectiveness and efficiency evaluations.
 
-*Why this project?*
+.. admonition:: *Why this project?*
+
+   We list the following highlights of our framework compared to PyG and similar works:
 
    * **Unified Framework**: We offer a plug-and-play collection for spectral models and filters in unified and efficient implementations, rather than a model-specific design. Our rich collection greatly extends the PyG model zoo.
-
   * **Spectral-oriented Design**: We decouple non-spectral designs and feature the pivotal spectral kernel being consistent throughout different settings. Most filters are thus easily adaptable to a wide range of model-level options, including those provided by PyG and PyG-based frameworks.
-
   * **High scalability**: As spectral GNNs are inherently suitable for large-scale learning, our framework is feasible to common scalable learning schemes and acceleration techniques. Several spectral-oriented approximation algorithms are also supported.
 
 .. include:: _tutorial/installation.rst
    :end-before: Advanced Options
 
-For advanced options, please refer to `Installation Options `_.
+For advanced options, please refer to `Installation Options <_tutorial/installation.html#advanced-options>`_.
 
 .. include:: _tutorial/reproduce.rst
 
@@ -29,7 +35,8 @@ For advanced options, please refer to `Installation Options `_
 
 .. [1] Please refer to the `official guide <https://pytorch.org/get-started/locally/>`_ if a specific CUDA version is required for PyTorch.
+
+.. |gh-bn| raw:: html
+
+   Star
diff --git a/pyg_spectral/nn/norm/standard_scale.py b/pyg_spectral/nn/norm/standard_scale.py
index ef81795..3083763 100755
--- a/pyg_spectral/nn/norm/standard_scale.py
+++ b/pyg_spectral/nn/norm/standard_scale.py
@@ -5,6 +5,12 @@
 
 
 class TensorStandardScaler(nn.Module):
+    r"""
+    Applies standard Gaussian normalization to :math:`\mathcal{N}(0, 1)`.
+
+    Args:
+        dim (int): Dimension to calculate mean and std. Default is 0.
+    """
     def __init__(self, dim: int = 0):
         super(TensorStandardScaler, self).__init__()
         self.dim = dim
diff --git a/pyproject.toml b/pyproject.toml
index 92f61c0..1a85561 100755
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -34,6 +34,7 @@ requires-python=">=3.10"
 dependencies=[
     "torch_geometric>=2.5.3",
     "pandas>=2.0",
+    "numpy>=1.23,<2.0",
 ]
 
 [project.optional-dependencies]
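As a closing usage note, the experiment parameters documented in `configure.rst` above combine into a single benchmark invocation. The command below is an illustrative sketch only: the flag names come from the help text in this diff, while the dataset name and all hyperparameter values are placeholders, not recommended settings.

```bash
# Illustrative only: flag names follow the documented help text;
# the dataset name and values are placeholders, not recommendations.
python benchmark/run_single.py \
    --data cora --model DecoupledVar --conv SkipConv \
    --num_hops 10 --in_layers 1 --out_layers 1 --hidden 64 \
    --lr_lin 0.01 --lr_conv 0.01 --epoch 500 --patience 50 \
    --seed 42 --dev 0
```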