diff --git a/CHANGELOG.md b/CHANGELOG.md
index 15033ccb..feca97e3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,5 +2,9 @@
 ## Unreleased
 
-- Set up minimal project structure.
-- Create first notebook.
+- Set up project structure.
+- Implement basic functionality.
+- Build documentation.
+- Create first notebooks.
+- Introduce neural operators.
+- Add CI/CD.
diff --git a/README.md b/README.md
index 943ce90e..49d0ea4b 100644
--- a/README.md
+++ b/README.md
@@ -17,11 +17,10 @@ Learning function operators with neural networks.
 
-**Continuity** is a Python package with a general interface for machine
-learning on functions operators. It implements various neural network
-architectures, including DeepONets or neural operators, physics-informed loss
-functions to train the networks based on PDEs, and a variety of benchmarks.
-
+**Continuity** is a Python package for machine learning on function operators.
+It implements various neural operator architectures (e.g., DeepONets),
+physics-informed loss functions for training with PDEs, and a collection of
+examples and benchmarks.
 
 ## Installation
 Clone the repository and install the package using pip.
@@ -32,8 +31,14 @@ pip install -e .
 ```
 
 ## Usage
-Some examples can be found in the `notebooks` and `tests` directories. See the
-[documentation](https://aai-institute.github.io/Continuity/) for more details.
+Our [documentation](https://aai-institute.github.io/Continuity/) contains an in-depth introduction to operator learning, a collection of examples using Continuity, and the API documentation.
+
+In general, the operator syntax in Continuity is
+```python
+v = operator(x, u(x), y)
+```
+mapping a function `u` (evaluated at `x`) to a function `v` (evaluated at `y`).
+For more details, see [Learning Operators](https://aai-institute.github.io/Continuity/operators/index.html).
 
 ## Contributing
 If you find a bug or have a feature request, please open an issue on GitHub. If
diff --git a/docs/examples/index.md b/docs/examples/index.md
new file mode 100644
index 00000000..fafb6860
--- /dev/null
+++ b/docs/examples/index.md
@@ -0,0 +1,28 @@
+---
+title: Examples
+---
+
+This is a collection of notebooks that showcase various applications of
+Continuity.
+
+::cards:: cols=2
+
+- title: The Basics
+  content: Learning function operators with Continuity
+  url: basics
+
+- title: Physics-informed
+  content: >
+    Training physics-informed neural operators
+  url: physicsinformed
+
+- title: Self-supervised
+  content: >
+    Self-supervised training of operators
+  url: selfsupervised
+
+- title: Super-resolution
+  content: Neural operators for super-resolution
+  url: superresolution
+
+::/cards::
diff --git a/docs/index.md b/docs/index.md
index bd241532..54ae0b1e 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -13,10 +13,10 @@ title: Home
 
-**Continuity** is a Python package for machine
-learning on functions operators. It implements various neural network
-architectures, including DeepONets or neural operators, physics-informed loss
-functions to train the networks based on PDEs, and a variety of benchmarks.
+**Continuity** is a Python package for machine learning on function operators.
+It implements various neural operator architectures (e.g., DeepONets),
+physics-informed loss functions for training with PDEs, and a collection of
+examples and benchmarks.
 
 ::cards:: cols=2
 
@@ -26,11 +26,16 @@ functions to train the networks based on PDEs, and a variety of benchmarks.
 - title: Learning Operators
   content: >
-    Basics of learning function operators with neural networks
+    Basics of learning function operators
   url: operators/index.md
 
+- title: Examples
+  content: >
+    Example notebooks using Continuity
+  url: examples/index.md
+
 - title: Browse the API
-  content: Full documentation of the API
+  content: Full class documentation
   url: api/continuity/index.md
 
 ::/cards::
diff --git a/docs/operators/index.md b/docs/operators/index.md
index 70180628..30be6036 100644
--- a/docs/operators/index.md
+++ b/docs/operators/index.md
@@ -14,7 +14,8 @@ transfer the concept of function mapping into machine learning.
 
 ## Operators
 
-In mathematics, _operators_ are function mappings – they map functions to functions.
+In mathematics, _operators_ are function mappings: they map functions to
+functions.
 Let $u: X \subset \mathbb{R}^d \to \mathbb{R}^c$ be a function that maps a
 $d$-dimensional input to $c$ output *channels*.
@@ -31,10 +32,10 @@ maps $u$ to a function $v: Y \subset \mathbb{R}^{p} \to \mathbb{R}^{q}$.
 
 ## Learning Operators
 
-Learning operators is the task of learning the mapping $G$ from data.
-In the context of neural networks, we want to learn a neural network $G_\theta$
+Operator learning is the task of learning the mapping $G$ from data.
+In the context of neural networks, we want to train a neural network $G_\theta$
 with parameters $\theta$ that, given a set of input-output pairs $(u_k, v_k)$,
-maps $u_k$ to $v_k$. We refer to such a neural network as **neural operator**.
+maps $u_k$ to $v_k$. We refer to such an architecture as a **neural operator**.
 
 In **Continuity**, we use the general approach of mapping function evaluations
 to represent both input and output functions $u$ and $v$.
@@ -49,14 +50,14 @@ evaluations to represent both input and output functions $u$ and $v$.
 neural operator architectures.
 
 Let $x_i \in X,\ 1 \leq i \leq n,$ be a finite set of *collocation points*
-(or *sensor positions*) in the domain $X$ of $u$.
+(or *sensor positions*) in the input domain $X$ of $u$.
 We represent the function $u$ by its evaluations at these collocation points
 and write $\mathbf{x} = (x_i)_i$ and $\mathbf{u} = (u(x_i))_i$.
 This finite dimensional representation is fed into the neural operator.
 
 The mapped function $v = G(u)$, on the other hand, is also represented by
 function evaluations only. Let $y_j \in Y,\ 1 \leq j \leq m,$ be a set of
-*evaluation points* (or *query points*) in the domain $Y$ of $v$ and
+*evaluation points* (or *query points*) in the input domain $Y$ of $v$ and
 $\mathbf{y} = (y_j)_j$. Then, the output values $\mathbf{v} = (v(y_j))_j$
 are approximated by the neural operator
@@ -81,8 +82,8 @@ G = lambda u: lambda y: operator(x, u, y)
 v = G(u)(y)
 ```
 
-Operators extend the concept of neural networks to function mappings, which
+Neural operators extend the concept of neural networks to function mappings, which
 enables discretization-invariant and mesh-free mappings of data with
 applications to physics-informed training, super-resolution, and more.
 
-See our examples in [[operators]] for more details and further reading.
+See our Examples for more details and further reading.
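To make the calling convention above concrete: a minimal, runnable sketch, assuming a toy stand-in for a trained neural operator (not Continuity's actual implementation). It only illustrates the `(x, u, y)` signature and how the curried wrapper composes.

```python
import torch

# Toy stand-in for a trained neural operator: it ignores the input function
# values u and simply evaluates sin at the query points y.
def operator(x: torch.Tensor, u: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return torch.sin(y)

x = torch.linspace(0, 1, 32)  # collocation (sensor) points
u = torch.cos(x)              # input function u, evaluated at x
y = torch.linspace(0, 1, 64)  # evaluation (query) points

# Curried wrapper: G maps the discretized input function to a callable.
G = lambda u: lambda y: operator(x, u, y)
v = G(u)(y)                   # values of v at y, shape (64,)
```

The argument order `lambda u: lambda y:` is what makes `G(u)(y)` read like the mathematical notation, $v = G(u)$ evaluated at $y$.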
diff --git a/notebooks/basics.ipynb b/notebooks/basics.ipynb
index d55d5952..660f327a 100644
--- a/notebooks/basics.ipynb
+++ b/notebooks/basics.ipynb
@@ -102,7 +102,7 @@
     "A neural operator takes an input function $u$, evaluated at collocation points $x$,\n",
     "and maps it to a function $v$ evaluated at (different) evaluation positions $y$:\n",
     "$$\n",
-    "v(y) = G(u)(y) \\approx \\operatorname{NeuralOperator}\\left(x, u(x), y\\right).\n",
+    "v(y) = G(u)(y) \\approx G_\\theta\\left(x, u(x), y\\right).\n",
     "$$\n",
     "In this example, we choose the DeepONet architecture with 32 sensors as neural\n",
     "operator."
@@ -290,21 +290,7 @@
    "metadata": {},
    "source": [
     "As you can see, the operator output (approximately) matches $v$, as desired.\n",
-    "That's the basics!\n",
-    "Note that we can also wrap a neural operator to match the mathematical notation."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 9,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Wrap neural operator (with fixed x)\n",
-    "G = lambda u: lambda y: operator(x, u(x), y)\n",
-    "\n",
-    "# Call operator with (callable) u and obtain callable\n",
-    "v_y = G(u)(y)"
+    "That's the basics!"
    ]
   },
   {
@@ -312,7 +298,7 @@
    "metadata": {},
    "source": [
     "In the other examples, we explore advanced features such as \n",
-    "physics-informed training, self-supervised training, super-resolution etc."
+    "physics-informed training, self-supervised training, or super-resolution."
    ]
   }
 ],
diff --git a/notebooks/selfsupervised.ipynb b/notebooks/selfsupervised.ipynb
index 42738fd1..23ac371c 100644
--- a/notebooks/selfsupervised.ipynb
+++ b/notebooks/selfsupervised.ipynb
@@ -58,15 +58,18 @@
     "\n",
     "Create a data set of sine waves: The `Sine` dataset generates $N$ sine waves\n",
     "$$\n",
-    "f(x) = \\sin(w_k x), \\quad w_k = 1 + \\frac{k}{N-1}, \\quad k = 0, \\dots, N-1.\n",
+    "f(x) = \\sin(w_k x), \\quad w_k = 1 + \\frac{k}{N-1},\n",
     "$$\n",
-    "As a `SelfSupervisedDataset` it exports batches of samples for self-supervised\n",
-    "training, namely\n",
+    "$$\n",
+    "k = 0, \\dots, N-1.\n",
+    "$$\n",
+    "We wrap the `Sine` dataset in a `SelfSupervisedDataset` that exports batches\n",
+    "of samples for self-supervised training, namely\n",
     "$$\n",
     "\\left(\\mathbf{x}, f(\\mathbf{x}), x_j, f(x_j)\\right), \\quad \\text{for } j = 1, \\dots, M,\n",
     "$$\n",
     "where $\\mathbf{x} = (x_i)_{i=1 \\dots M}$ are the $M$ equidistantly\n",
-    "distributed sensor positions. "
+    "distributed sensor positions."
    ]
  },
 {
diff --git a/src/continuity/__init__.py b/src/continuity/__init__.py
index 7586d288..00225032 100644
--- a/src/continuity/__init__.py
+++ b/src/continuity/__init__.py
@@ -1 +1,26 @@
-"""The Continuity package."""
+"""
+**Continuity** is a Python package for machine learning on function operators.
+
+The package is structured into the following modules:
+
+::cards:: cols=2
+
+- title: Operators
+  content: Neural operator implementations.
+  url: operators/index.md
+
+- title: Data
+  content: Data sets for training.
+  url: data/index.md
+
+- title: PDE
+  content: Loss functions for physics-informed training.
+  url: pde/index.md
+
+- title: Plotting
+  content: Plotting utilities.
+  url: plotting/index.md
+
+::/cards::
+
+"""
diff --git a/src/continuity/data/__init__.py b/src/continuity/data/__init__.py
index c87cc2f8..df774674 100644
--- a/src/continuity/data/__init__.py
+++ b/src/continuity/data/__init__.py
@@ -1,6 +1,8 @@
 """
-This defines DataSets in Continuity.
-Every data set is a list of (x, u, y, v) tuples.
+`continuity.data`
+
+Data sets in Continuity.
+Every data set is a list of `(x, u, y, v)` tuples.
 """
 
 import math
diff --git a/src/continuity/data/datasets.py b/src/continuity/data/datasets.py
index 5d062db4..6015a72f 100644
--- a/src/continuity/data/datasets.py
+++ b/src/continuity/data/datasets.py
@@ -18,10 +18,9 @@ class Sine(DataSet):
     $$
     f(x) = \sin(w_k x), \quad w_k = 1 + \frac{k}{N-1}, \quad k = 0, \dots, N-1.
     $$
-    As a `SelfSupervisedDataset` it exports batches of samples for self-supervised
-    training, namely
+    It exports batches of samples
     $$
-    \left(\mathbf{x}, f(\mathbf{x}), x_j, f(x_j)\right), \quad \text{for } j = 1, \dots, M,
+    \left(\mathbf{x}, f(\mathbf{x}), \mathbf{x}, f(\mathbf{x})\right),
     $$
     where $\mathbf{x} = (x_i)_{i=1 \dots M}$ are the $M$ equidistantly
     distributed sensor positions.
diff --git a/src/continuity/operators/__init__.py b/src/continuity/operators/__init__.py
index ca692b41..1b99723d 100644
--- a/src/continuity/operators/__init__.py
+++ b/src/continuity/operators/__init__.py
@@ -1,4 +1,15 @@
-"""Operators in Continuity."""
+"""
+`continuity.operators`
+
+Operators in Continuity.
+
+Every operator maps collocation points `x`, function values `u`,
+and evaluation points `y` to evaluations of `v`:
+
+```
+v = operator(x, u, y)
+```
+"""
 
 from .operator import Operator
 from .deeponet import DeepONet
diff --git a/src/continuity/operators/losses.py b/src/continuity/operators/losses.py
index 9c683876..171e4185 100644
--- a/src/continuity/operators/losses.py
+++ b/src/continuity/operators/losses.py
@@ -3,15 +3,19 @@
 import torch
 from torch import Tensor
 from abc import abstractmethod
+from typing import TYPE_CHECKING
 
-# from continuity.operators.operator import Operator  # TODO: Circular import
+if TYPE_CHECKING:
+    from continuity.operators.operator import Operator
 
 
 class Loss:
     """Loss function for training operators in Continuity."""
 
     @abstractmethod
-    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
+    def __call__(
+        self, op: "Operator", x: Tensor, u: Tensor, y: Tensor, v: Tensor
+    ) -> Tensor:
         """Evaluate loss.
 
         Args:
@@ -29,7 +33,9 @@ class MSELoss(Loss):
     def __init__(self):
         self.mse = torch.nn.MSELoss()
 
-    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
+    def __call__(
+        self, op: "Operator", x: Tensor, u: Tensor, y: Tensor, v: Tensor
+    ) -> Tensor:
         """Evaluate MSE loss.
 
         Args:
diff --git a/src/continuity/data/losses.py b/src/continuity/pde/__init__.py
similarity index 70%
rename from src/continuity/data/losses.py
rename to src/continuity/pde/__init__.py
index dab016c1..00651ca0 100644
--- a/src/continuity/data/losses.py
+++ b/src/continuity/pde/__init__.py
@@ -1,16 +1,24 @@
-"""Loss functions."""
+"""
+`continuity.pde`
+
+PDEs in Continuity.
+
+Every PDE is implemented as a physics-informed loss function.
+"""
 
 from torch import Tensor
 from abc import abstractmethod
 
-# from continuity.operators.operator import Operator  # TODO: Circular import
+from continuity.operators.operator import Operator
 
 
 class PDE:
     """PDE base class."""
 
     @abstractmethod
-    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
+    def __call__(
+        self, op: Operator, x: Tensor, u: Tensor, y: Tensor, v: Tensor
+    ) -> Tensor:
         """Computes PDE loss."""
 
 
@@ -24,7 +32,9 @@ class PhysicsInformedLoss:
     def __init__(self, pde: PDE):
         self.pde = pde
 
-    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
+    def __call__(
+        self, op: Operator, x: Tensor, u: Tensor, y: Tensor, v: Tensor
+    ) -> Tensor:
         """Evaluate loss.
         Args:
diff --git a/src/continuity/plotting/__init__.py b/src/continuity/plotting/__init__.py
index b70533be..87bdb421 100644
--- a/src/continuity/plotting/__init__.py
+++ b/src/continuity/plotting/__init__.py
@@ -1,4 +1,8 @@
-"""Plotting utilities for Continuity."""
+"""
+`continuity.plotting`
+
+Plotting utilities for Continuity.
+"""
 
 import torch
 import numpy as np
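The self-supervised sample format used by `Sine` above can be sketched in a few lines. This is a toy reconstruction, assuming an equidistant grid on $[-1, 1]$; the variable names are illustrative, not Continuity's API.

```python
import torch

N, M = 4, 32                       # number of waves and sensor positions
x = torch.linspace(-1, 1, M)       # assumed: equidistant sensors on [-1, 1]
w = 1 + torch.arange(N) / (N - 1)  # frequencies w_k = 1 + k/(N-1)

# Self-supervised samples (x, f(x), x, f(x)): input and output functions
# coincide, and the operator is queried at the sensor positions themselves.
samples = [(x, torch.sin(w_k * x), x, torch.sin(w_k * x)) for w_k in w]

print(len(samples), samples[0][1].shape)  # 4 samples, 32 function values each
```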
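Similarly, the `PDE`-as-loss pattern from `continuity.pde` can be sketched with a toy residual. The visible code suggests that `PhysicsInformedLoss` simply delegates to the wrapped PDE; the `ConstantPDE` class and the `op` stand-in below are hypothetical illustrations, not the package's implementation.

```python
import torch
from torch import Tensor

class ConstantPDE:
    """Toy PDE enforcing v'(y) = 0 through an autograd residual."""

    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
        y = y.detach().requires_grad_(True)
        v_pred = op(x, u, y)
        # Squared residual of v'(y) = 0, averaged over the query points.
        dv_dy = torch.autograd.grad(v_pred.sum(), y, create_graph=True)[0]
        return (dv_dy**2).mean()

class PhysicsInformedLoss:
    """Delegates the loss computation to the wrapped PDE."""

    def __init__(self, pde):
        self.pde = pde

    def __call__(self, op, x: Tensor, u: Tensor, y: Tensor, v: Tensor) -> Tensor:
        return self.pde(op, x, u, y, v)

# A single loss evaluation with a toy operator stand-in:
op = lambda x, u, y: torch.sin(y)
x = torch.linspace(0, 1, 8)
loss = PhysicsInformedLoss(ConstantPDE())
print(loss(op, x, torch.cos(x), torch.linspace(0, 1, 8), v=torch.zeros(8)))
```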