% DiscreteFOSLSFunctional-AnisotropicDiffusion.tex
\section{The Discrete Least-Squares Functional: Anisotropic Diffusion}
In this part, the least-squares formulation for the first-order weighted formulation
(2.5) is given; then the discrete version based on the deep neural network approximation
is introduced.
The first-order weighted least-squares formulation is to find $(u, \tau, \phi) \in\left(H^{1}(\Omega)\right)^{3}$ such that
\begin{equation*}
\Psi(u, \tau, \phi ; \mathbf{f})=\min _{(\eta, \nu, \chi) \in\left(H^{1}(\Omega)\right)^{3}} \Psi(\eta, \nu, \chi ; \mathbf{f}),
\end{equation*}
where $\mathbf{f}=(f, g, \zeta)$ and
\begin{equation*}
\begin{aligned}
\Psi(\eta, \nu, \chi ; \mathbf{f})=&\left\|\zeta\left(f+\nabla \cdot\left(\nu q_{1}+\chi q_{2}\right)\right)\right\|_{0, \Omega}^{2}+\left\|\zeta\left(\nu-\left(\Lambda \nabla \eta, q_{1}\right)\right)\right\|_{0, \Omega}^{2} \\
&+\left\|\zeta\left(\chi-\left(\Lambda \nabla \eta, q_{2}\right)\right)\right\|_{0, \Omega}^{2}+\|\eta-g\|_{1 / 2, \partial \Omega}^{2}.
\end{aligned}
\end{equation*}
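Each of the four residuals in $\Psi$ vanishes exactly when the corresponding equation of the underlying first-order system holds. Written out (a reconstruction from the residual terms above; the sign of the divergence term follows the first residual), that system is
\begin{equation*}
\begin{aligned}
\tau &= \left(\Lambda \nabla u, q_{1}\right), \qquad \phi = \left(\Lambda \nabla u, q_{2}\right) && \text{in } \Omega, \\
-\nabla \cdot\left(\tau q_{1}+\phi q_{2}\right) &= f && \text{in } \Omega, \\
u &= g && \text{on } \partial \Omega.
\end{aligned}
\end{equation*}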
If three unknown functions $u, \tau, \phi$ are approximated by one deep neural network and its three outputs are denoted by $\hat{u}(x, \theta), \hat{\tau}(x, \theta), \hat{\phi}(x, \theta)$ (see Fig. 2), then the discrete formulation based on all sampling points reads
\begin{equation*}
\hat{\Psi}(\hat{u}, \hat{\tau}, \hat{\phi} ; \mathbf{f})(\theta)=\min _{\tilde{\theta} \in \mathbb{R}^{N}} \hat{\Psi}(\hat{\eta}, \hat{\nu}, \hat{\chi} ; \mathbf{f})(\tilde{\theta}),
\end{equation*}
where the discrete functional reads
\begin{equation*}
\begin{aligned}
\hat{\Psi}(\hat{\eta}, \hat{\nu}, \hat{\chi} ; \mathbf{f})(\tilde{\theta})=& \frac{1}{N_{f}} \sum_{i=1}^{N_{f}}\left(\zeta\left(\mathbf{x}_{i}\right)\left(f\left(\mathbf{x}_{i}\right)+\nabla \cdot\left(\hat{\nu}\left(\mathbf{x}_{i}, \tilde{\theta}\right) q_{1}\left(\mathbf{x}_{i}\right)+\hat{\chi}\left(\mathbf{x}_{i}, \tilde{\theta}\right) q_{2}\left(\mathbf{x}_{i}\right)\right)\right)\right)^{2} \\
&+\frac{1}{N_{f}} \sum_{i=1}^{N_{f}}\left(\zeta\left(\mathbf{x}_{i}\right)\left(\hat{\nu}\left(\mathbf{x}_{i}, \tilde{\theta}\right)-\left(\Lambda \nabla \hat{\eta}\left(\mathbf{x}_{i}, \tilde{\theta}\right), q_{1}\left(\mathbf{x}_{i}\right)\right)\right)\right)^{2} \\
&+\frac{1}{N_{f}} \sum_{i=1}^{N_{f}}\left(\zeta\left(\mathbf{x}_{i}\right)\left(\hat{\chi}\left(\mathbf{x}_{i}, \tilde{\theta}\right)-\left(\Lambda \nabla \hat{\eta}\left(\mathbf{x}_{i}, \tilde{\theta}\right), q_{2}\left(\mathbf{x}_{i}\right)\right)\right)\right)^{2} \\
&+\frac{\omega_{D}}{N_{D}} \sum_{i=1}^{N_{D}}\left(\hat{\eta}\left(\mathbf{x}_{i}, \tilde{\theta}\right)-g\left(\mathbf{x}_{i}\right)\right)^{2},
\end{aligned}
\end{equation*}
where $N_{f}$ denotes the number of collocation points in $\Omega$ and $N_{D}$ the number of boundary points on $\partial \Omega$ at which the Dirichlet boundary conditions are imposed weakly (as ``soft constraints'').
The parameter $\omega_{D}$ is a boundary weight that penalizes violations of the Dirichlet boundary conditions by the neural network approximation.
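To make the assembly of $\hat{\Psi}$ concrete, the sketch below evaluates the four sums for a small multilayer network with three outputs. Everything here is an illustrative assumption, not the paper's implementation: the architecture (one tanh hidden layer), constant direction vectors $q_1, q_2$, and central finite differences in place of the automatic differentiation one would use in practice.

```python
import numpy as np

def init_params(layers, seed=0):
    """Random weights/biases for an MLP; the layer sizes are an assumption."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(layers[:-1], layers[1:])]

def mlp(params, x):
    """One network mapping x in R^2 to three outputs (eta_hat, nu_hat, chi_hat)."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b                     # shape (N, 3)

def fosls_loss(params, x_int, x_bnd, f, g_bnd, zeta, Lam, q1, q2,
               omega_D, h=1e-5):
    """Assemble hat{Psi} from interior points x_int and boundary points x_bnd.

    Network derivatives are taken by central finite differences; q1, q2 are
    taken as constant vectors for simplicity (the text allows them to vary
    with x).
    """
    out = mlp(params, x_int)
    eta, nu, chi = out[:, 0], out[:, 1], out[:, 2]
    # d(out_j)/dx_k for all three outputs, by central differences
    grads = []
    for k in range(x_int.shape[1]):
        e = np.zeros(x_int.shape[1]); e[k] = h
        grads.append((mlp(params, x_int + e) - mlp(params, x_int - e)) / (2 * h))
    grad_eta = np.stack([g[:, 0] for g in grads], axis=1)      # (N_f, 2)
    # div(nu_hat q1 + chi_hat q2) with constant q1, q2
    div = sum(grads[k][:, 1] * q1[k] + grads[k][:, 2] * q2[k]
              for k in range(x_int.shape[1]))
    Lg = grad_eta @ Lam.T                                      # Lambda grad eta
    r1 = zeta * (f + div)                # first-order equation residual
    r2 = zeta * (nu - Lg @ q1)           # flux residual along q1
    r3 = zeta * (chi - Lg @ q2)          # flux residual along q2
    r4 = mlp(params, x_bnd)[:, 0] - g_bnd  # weak Dirichlet penalty
    return (np.mean(r1**2) + np.mean(r2**2) + np.mean(r3**2)
            + omega_D * np.mean(r4**2))

params = init_params([2, 16, 3])
x_int = np.random.default_rng(1).random((20, 2))
x_bnd = np.stack([np.zeros(8), np.linspace(0.0, 1.0, 8)], axis=1)
val = fosls_loss(params, x_int, x_bnd,
                 f=np.zeros(20), g_bnd=np.zeros(8), zeta=np.ones(20),
                 Lam=np.eye(2), q1=np.array([1.0, 0.0]),
                 q2=np.array([0.0, 1.0]), omega_D=500.0)
```

Minimizing `fosls_loss` over `params` with any gradient-based optimizer is then the discrete problem stated above; the sums over $N_f$ and $N_D$ appear as the `np.mean` calls, and $\omega_D$ enters only through the boundary term.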