for the test cases considered.
\section{Numerical experiments}
The purpose of this section is two-fold. First, we verify the performance and accuracy of the VPVnet method on some synthetic test cases. Second, we demonstrate the capability of the VPVnet method to approximate non-smooth/singular solutions of the Stokes equations; test cases with singular solutions on L-shaped domains and a lid-driven cavity problem are considered.
All experiments are replicated three times to reduce the variability due to random initialization of the gradient-descent-based training, and the medians of the three training results are reported. The optimization algorithm is composed of two steps: we first use the Adam optimizer [36] for a number of iterations (e.g. 5,000) with the self-adaptive learning rates provided in TensorFlow, then apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with bound constraints (L-BFGS-B) [44] to fine-tune the results, so that errors from inexact optimization are suppressed. The L-BFGS-B training process is terminated automatically based on the increment tolerance. Note that although both the loss function for the pressure/normal velocity boundary condition (3.8) and the loss function for the velocity boundary condition (3.9) provide satisfying numerical approximations, only numerical results obtained with the loss function (3.9) (or (3.12) in discrete form) are presented in the following, as the velocity boundary condition is the one most commonly used in practice. The results for the pressure/normal velocity boundary condition are not presented here for simplicity.
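As a purely illustrative sketch of this two-step procedure (not the implementation used in this work), a toy loss below stands in for the discrete VPV loss; Adam runs first with its default adaptive learning rate, and SciPy's L-BFGS-B then fine-tunes the flattened parameter vector.
\begin{verbatim}
# Minimal sketch of the two-step optimization (Adam, then L-BFGS-B).
# The loss below is a placeholder standing in for the discrete VPV loss.
import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

params = [tf.Variable(tf.random.normal([4, 4])), tf.Variable(tf.zeros([4]))]

def loss_fn():
    return tf.reduce_sum(tf.square(params[0])) \
         + tf.reduce_sum(tf.square(params[1] - 1.0))

# Step 1: Adam with the default (adaptive) learning rate.
adam = tf.keras.optimizers.Adam()
for _ in range(2000):
    with tf.GradientTape() as tape:
        loss = loss_fn()
    adam.apply_gradients(zip(tape.gradient(loss, params), params))

# Step 2: L-BFGS-B fine-tuning on the flattened parameter vector.
shapes = [p.shape for p in params]
sizes = [int(tf.size(p).numpy()) for p in params]

def assign_flat(x):
    i = 0
    for p, s, n in zip(params, shapes, sizes):
        p.assign(tf.reshape(tf.constant(x[i:i + n], dtype=tf.float32), s))
        i += n

def value_and_grad(x):
    assign_flat(x)
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, params)
    g = np.concatenate([tf.reshape(gi, [-1]).numpy() for gi in grads])
    return float(loss.numpy()), g.astype(np.float64)

x0 = np.concatenate([tf.reshape(p, [-1]).numpy() for p in params]).astype(np.float64)
res = minimize(value_and_grad, x0, jac=True, method="L-BFGS-B",
               options={"maxiter": 5000, "ftol": 1e-12})
print(res.fun)  # fine-tuned loss value
\end{verbatim}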
\subsection{Smooth test case}
Consider the following test cases defined in the domains $\Omega=(0,1)^{d}$, $d=2,3$, such that (5.1a) and (5.1b) below are the exact solutions in 2D and 3D, respectively. The parameter $\nu=1$ in these test cases. The external source $f$ and the Dirichlet boundary data $g$ are chosen such that the functions $\boldsymbol{u}$ and $p$ are solutions to the system (1.1).
$$
\begin{aligned}
&\boldsymbol{u}(x, y)=\left(\begin{array}{c}
\sin ^{2}(x) \cos (y) \sin (y) \\
-\cos (x) \sin (x) \sin ^{2}(y)
\end{array}\right), \quad p(x, y)=\cos (x) \cos (y), &&\text{(5.1a)}\\
&\boldsymbol{u}(x, y, z)=\left(\begin{array}{c}
x+x^{2}+x y+x^{3} y \\
y+x y+y^{2}+x^{2} y^{2} \\
-2 z-3 x z-3 y z-5 x^{2} y z
\end{array}\right), \quad p(x, y, z)=x y z+x^{3} y^{3} z-\frac{5}{32} . &&\text{(5.1b)}
\end{aligned}
$$
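As a quick illustration (not part of the original experiments), the 2D manufactured solution (5.1a) can be checked symbolically: the velocity is divergence-free, and the corresponding source term $f=-\nu\Delta\boldsymbol{u}+\nabla p$ can be generated automatically from the exact solution.
\begin{verbatim}
# Symbolic sanity check of the 2D manufactured solution (5.1a) with SymPy:
# div u = 0, and f = -nu*Lap(u) + grad(p) is the data for the Stokes system.
import sympy as sp

x, y = sp.symbols('x y')
u1 = sp.sin(x)**2 * sp.cos(y) * sp.sin(y)
u2 = -sp.cos(x) * sp.sin(x) * sp.sin(y)**2
p  = sp.cos(x) * sp.cos(y)

print(sp.simplify(sp.diff(u1, x) + sp.diff(u2, y)))   # 0: divergence-free

nu = 1
f1 = sp.simplify(-nu*(sp.diff(u1, x, 2) + sp.diff(u1, y, 2)) + sp.diff(p, x))
f2 = sp.simplify(-nu*(sp.diff(u2, x, 2) + sp.diff(u2, y, 2)) + sp.diff(p, y))
print(f1, f2)   # explicit source term f
\end{verbatim}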
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16}
\end{center}
(a) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16(1)}
\end{center}
(b) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16(2)}
\end{center}
(c) Pressure
Figure 3: Numerical results of test case (5.1a) obtained with the VPVnet method using $20 \times 20$ uniformly distributed quadrature points in $\Omega=(0,1)^{2}$.
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16(3)}
\end{center}
(a) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16(4)}
\end{center}
(b) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-16(5)}
\end{center}
(c) Pressure
Figure 4: Numerical results of test case (5.1b) in 3D obtained with the VPVnet method using $20 \times 20 \times 20$ uniformly distributed quadrature points in $\Omega=(0,1)^{3}$.
We first examine the accuracy of the VPVnet method by varying the number of quadrature points and the number of layers and neurons. We use neural networks of different structures to approximate the solution of the Stokes problem (5.1a). For example, one of the networks considered is comprised of 2 ResNet blocks, i.e. 4 hidden layers, where each hidden layer has 8 neurons, for a total of 267 trainable parameters. For simplicity, we denote a neural network by its number of hidden layers $N_{L}$ and the number of neurons $N_{N}$ within each hidden layer; hence the above network is denoted $4 \times 8$. We also use networks with $8 \times 8$, $4 \times 16$ and $8 \times 16$ hidden layers and neurons, with 555, 915 and 2003 trainable parameters, respectively, to approximate the solution of the model problem (5.1a). A sketch of this ResNet-block structure is given below.
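The following is a minimal sketch of the network structure just described: dense layers with a sinusoidal activation and an identity skip connection every two layers. The class names, the linear lifting layer and the output dimension are illustrative assumptions, not necessarily the implementation used in this work.
\begin{verbatim}
# Sketch of a fully connected network with ResNet blocks and sin activation.
import tensorflow as tf

class ResBlock(tf.keras.layers.Layer):
    """Two dense layers with sin activation and an identity skip connection."""
    def __init__(self, width):
        super().__init__()
        self.d1 = tf.keras.layers.Dense(width, activation=tf.sin)
        self.d2 = tf.keras.layers.Dense(width, activation=tf.sin)

    def call(self, x):
        return x + self.d2(self.d1(x))      # residual connection every two layers

class ResNetMLP(tf.keras.Model):
    """n_blocks ResNet blocks = 2*n_blocks hidden layers (e.g. 2 blocks ~ 4 x 8)."""
    def __init__(self, n_blocks=2, width=8, out_dim=3):
        super().__init__()
        self.lift = tf.keras.layers.Dense(width)    # linear lifting to hidden width
        self.blocks = [ResBlock(width) for _ in range(n_blocks)]
        self.head = tf.keras.layers.Dense(out_dim)  # e.g. (u, v, p) in 2D

    def call(self, x):
        x = self.lift(x)
        for block in self.blocks:
            x = block(x)
        return self.head(x)

net = ResNetMLP()
print(net(tf.random.uniform((5, 2))).shape)         # (5, 3)
\end{verbatim}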
The discrete loss function (3.12) is used for the VPVnet method. The quadrature points (QPs) for the evaluation of the loss function form a uniform grid on the domain; a one-point quadrature rule is used, with $50 \times 50$ (or $20 \times 20$) points in total. Hence we have $h=1 / 50$ (or $h=1 / 20$), and we choose $\alpha=1$. The QPs for the boundary condition on $u, v$ are uniformly distributed points on the boundary of $\Omega$, with 50 (or 20) points on each edge, i.e. 200 (or 80) points in total. An illustration of these QPs is shown in Fig. 2. As mentioned previously, the optimization problem is first solved with 2000 Adam iterations, followed by at most 5000 L-BFGS-B iterations; the learning rates are determined by the existing self-adaptive algorithms in TensorFlow.
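A possible way to generate such a point layout is sketched below; whether the interior points are cell midpoints or grid nodes is an assumption made here for illustration only.
\begin{verbatim}
# Sketch of the quadrature-point layout on (0,1)^2: an interior uniform grid
# (one-point rule per cell, taken here as cell midpoints) and uniformly
# spaced points on each of the four edges.
import numpy as np

def interior_qps(n):
    h = 1.0 / n
    c = (np.arange(n) + 0.5) * h                     # one point per cell
    xx, yy = np.meshgrid(c, c)
    return np.column_stack([xx.ravel(), yy.ravel()]), h

def boundary_qps(n):
    t = (np.arange(n) + 0.5) / n                     # n points per edge
    z, o = np.zeros(n), np.ones(n)
    return np.vstack([np.column_stack([t, z]),       # bottom
                      np.column_stack([o, t]),       # right
                      np.column_stack([t, o]),       # top
                      np.column_stack([z, t])])      # left

pts, h = interior_qps(50)    # 2500 interior points, h = 1/50
bpts = boundary_qps(50)      # 200 boundary points (50 per edge)
\end{verbatim}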
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-17}
\end{center}
(a) Convergence of loss for test case $5.1(\mathrm{a})$
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-17(1)}
\end{center}
(b) Convergence of loss for test case $5.1(\mathrm{~b})$
Figure 5: Convergence of the loss function over optimization iterations with different optimization processes for test cases 5.1(a) and 5.1(b). The curves are: (1) two-step optimization: Adam part; (2) two-step optimization: L-BFGS-B part; (3) Adam optimizer only; (4) Adam optimizer with a learning rate of $5e-4$ and exponential decay rate 0.98; (5) L-BFGS-B optimizer only.
Figure 5 shows the convergence of the loss function over the different optimization iterations. As we can observe in both Fig. 5(a) and Fig. 5(b), the two-step approach allows a much faster convergence and even better precision than using Adam alone. This is because BFGS uses (an approximation of) the Hessian matrix, i.e. the curvature in the high-dimensional parameter space, to compute the optimization direction and thus provides more accurate results. However, if BFGS is used directly without the Adam optimizer, the optimization can rapidly converge to a local minimum [47]. For this reason, the Adam optimizer is used first to avoid local minima, and the solution is then refined by BFGS. In this work we use the L-BFGS-B method instead of the BFGS method.
Figure 6 shows the convergence of the loss function over optimization iterations obtained with two different neural network structures for test case 5.1(a): the first is a neural network with ResNet (Res) blocks, the second is a feedforward (FF) neural network. Both networks have 8 hidden layers with 20 neurons each, except that the network with ResNet blocks has a residual connection every two layers. We can see that the neural network with ResNet blocks converges faster and gives better precision than the classical feedforward network. Hence the neural network with ResNet blocks is adopted for all subsequent test cases in this work.
The numerical approximations obtained with the VPVnet method (using the sinusoidal activation function) for the speed, velocity and pressure are plotted in Figure 3. The $L_{2}$
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-18}
\end{center}
Figure 6: Convergence of the loss function over optimization iterations obtained with different neural network structures for test case 5.1(a): (1) neural network with ResNet blocks, 2000 Adam iterations plus the L-BFGS-B optimization process; (2) feedforward neural network, 2000 Adam iterations plus the L-BFGS-B optimization process; (3) feedforward neural network, 5000 Adam iterations plus L-BFGS-B iterations.
Table 1: Numerical errors of the VPVnet method for test case (5.1a).
\begin{center}
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{3}{*}{Network $N_{L} \times N_{N}$} & \multicolumn{6}{c|}{VPVnet method} \\
\cline{2-7}
 & \multicolumn{3}{c|}{$20 \times 20$ quadrature points} & \multicolumn{3}{c|}{$50 \times 50$ quadrature points} \\
\cline{2-7}
 & $e_{u}$ & $e_{v}$ & $e_{p}$ & $e_{u}$ & $e_{v}$ & $e_{p}$ \\
\hline
$4 \times 8$ & $7.88 \mathrm{e}-4$ & $8.08 \mathrm{e}-4$ & $1.36 \mathrm{e}-2$ & $7.27 \mathrm{e}-4$ & $7.58 \mathrm{e}-4$ & $5.16 \mathrm{e}-3$ \\
$8 \times 8$ & $8.35 \mathrm{e}-4$ & $8.26 \mathrm{e}-4$ & $6.62 \mathrm{e}-3$ & $5.09 \mathrm{e}-4$ & $5.45 \mathrm{e}-4$ & $3.31 \mathrm{e}-3$ \\
$4 \times 16$ & $6.58 \mathrm{e}-4$ & $5.50 \mathrm{e}-4$ & $1.02 \mathrm{e}-2$ & $3.46 \mathrm{e}-4$ & $4.47 \mathrm{e}-4$ & $5.71 \mathrm{e}-3$ \\
$8 \times 16$ & $5.24 \mathrm{e}-4$ & $5.49 \mathrm{e}-4$ & $4.56 \mathrm{e}-3$ & $4.34 \mathrm{e}-4$ & $4.99 \mathrm{e}-4$ & $5.51 \mathrm{e}-3$ \\
\hline
\end{tabular}
\end{center}
numerical errors of each variable $u, v, p$ are presented in Table 1. We observe that for this smooth test case, a rather good approximation can be obtained with a small number of hidden layers and neurons. Although the numerical error varies due to the random initialization of the parameters, a tendency toward better precision can be observed, within certain limits, when using more hidden layers, more neurons or more quadrature points.
Table 2: Numerical errors of the VPVnet method for test case (5.1a) using different activation functions, a neural network of $8 \times 16$ hidden layers and neurons, and $50 \times 50$ quadrature points.
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
\multirow{2}{*}{Activation function} & \multicolumn{3}{|c|}{VPVnet method} \\
\cline { 2 - 4 }
& $e_{u}$ & $e_{v}$ & $e_{p}$ \\
\hline
Sin & $4.34 \mathrm{e}-4$ & $4.99 \mathrm{e}-4$ & $5.51 \mathrm{e}-3$ \\
Tanh & $5.11 \mathrm{e}-4$ & $5.20 \mathrm{e}-4$ & $9.98 \mathrm{e}-3$ \\
Sigmoid & $3.01 \mathrm{e}-3$ & $3.36 \mathrm{e}-3$ & $1.59 \mathrm{e}-2$ \\
\hline
\end{tabular}
\end{center}
Table 2 shows the $L_{2}$ numerical errors using different activation functions. The errors are obtained using a neural network with $8 \times 16$ hidden layers and neurons, and $50 \times 50$ quadrature points. The results obtained with the activation functions $\sin(x)$, $\tanh(x)$ and $\operatorname{Sigmoid}(x)$ are all of satisfying precision. We remark that ReLU, one of the most commonly used activation functions, is not considered here, since it is not differentiable everywhere, while first-order derivatives appear in the loss functions and we use automatic differentiation (i.e. tf.gradients()) to evaluate them, which would cause problems. A possible remedy is to use $\max \left(0, x^{3}\right)$ instead of $\operatorname{ReLU}:=\max (0, x)$, as proposed in [19]. Alternatively, the derivative can be approximated by finite difference quotients, as done by Cai et al. in [10]. The choice of finite difference quotients for the approximation of the derivatives of the different variables in 2D or 3D requires careful consideration; for the sake of clarity, we avoid such investigations here. In the rest of this paper, unless otherwise specified, we use $\sin(x)$ as the activation function.
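For illustration, the activations compared above, together with the $\max(0,x^3)$ replacement for ReLU, can be defined as follows; this small check is illustrative and not from the original experiments, its point being that each of these functions has a first derivative defined everywhere, as the loss function requires.
\begin{verbatim}
# Activation candidates and a check that their first derivatives exist
# everywhere (evaluated by automatic differentiation).
import tensorflow as tf

activations = {
    "sin":     tf.sin,
    "tanh":    tf.tanh,
    "sigmoid": tf.sigmoid,
    "relu3":   lambda x: tf.maximum(0.0, x) ** 3,   # = max(0, x^3), C^2 smooth
}

x = tf.linspace(-2.0, 2.0, 5)
for name, act in activations.items():
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = act(x)
    print(name, tape.gradient(y, x).numpy())        # derivative at each point
\end{verbatim}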
We use a neural network of $8 \times 16$ hidden layers and neurons to approximate the solution of the 3D test case (5.1b). The quadrature points for the evaluation of the loss function form a uniform grid in the cube, consisting of $20 \times 20 \times 20$ points in total. Note that BFGS instead of the L-BFGS-B algorithm is used to fine-tune the results obtained with the Adam optimizer in this 3D test case, as a better precision can be achieved by using the former. The $L_{2}$ numerical errors obtained with the VPVnet method (using the sinusoidal activation function) are $1.24e-5$, $1.37e-5$, $1.37e-5$, $4.85e-5$, $4.95e-3$ for the variables $(\boldsymbol{u}, p)$, where $\boldsymbol{u}=\left(u^{(1)}, u^{(2)}, u^{(3)}\right)^{T}$ in 3D. Similar precision is reached for the tanh and sigmoid activation functions, which is not listed here for simplicity. The numerical approximations obtained with the VPVnet method for the speed, velocity and pressure are plotted in Figure 4. The numerical results show that the VPVnet method is accurate and efficient in 3D.
The test cases (5.1a) and (5.1b) have also been solved on more complex geometries using random sampling points. In this case, the loss function (3.12) is employed. The test case (5.1a) is performed on a butterfly-like domain $\Omega$. The boundary $\Gamma$ of the domain $\Omega$ is defined by the following equation [56]:
$$
\Gamma(x, y)=\left\{(x, y)^{T} \in R^{2}: x(\theta)=r(\theta) \cos (\theta), y(\theta)=1.4 r(\theta) \sin (\theta), \theta \in[0,2 \pi]\right\},
$$
where $r(\theta)=2 \sqrt{\cos ^{2}(5 \theta)+\cos ^{2}(2 \theta)+\cos ^{2}(\theta)}$. We used 200 and 2500 random sampling points (cf. Fig. 7(a)), respectively, on the boundary and inside of the computational domain. As for the 3D test case (5.1b), the domain boundary is described by the zero iso-value of the function $\phi$ [24]:
$$
\Gamma(x, y, z)=\left\{(x, y, z)^{T} \in R^{3}: \phi(x, y, z)=\left(x^{2}+\frac{9}{4} y^{2}+z^{2}-1\right)^{3}-x^{2} z^{3}-\frac{9}{80} y^{2} z^{3}=0\right\} .
$$
2500 and 8000 random sampling points are employed (cf. Fig. 7(c)), respectively, on the boundary and inside of the computational domain. Convergence is easily obtained for both test cases using the same optimization processes as before. The numerical results (cf. Fig. 7) show that the method is very flexible on complex geometries. The $L_{2}$ numerical errors obtained with the VPVnet method (using the sinusoidal activation function) are $3.20e-03$, $3.16e-03$ and $3.34e-02$ for the variables $(\boldsymbol{u}, p)$, where $\boldsymbol{u}=\left(u^{(1)}, u^{(2)}\right)^{T}$ in 2D, respectively\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-20}
(a) Sampling points, speed approximation and pressure approximation for $2 \mathrm{D}$ test case 5.1(a)
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-20(1)}
\end{center}
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-20(2)}\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-20(3)}
(b) Sampling points on the boundary (left) and inside the domain (right) for 3D test case 5.1(b)
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-20(4)}\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-20(5)}
(c) Speed approximation and pressure approximation for 3D test case 5.1(b)
Figure 7: Numerical results of test cases (5.1a) and (5.1b) obtained with the VPVnet method using random sampling points in irregular domains.
(and $5.88e-05$, $1.30e-04$, $2.58e-05$, $7.77e-3$ for $(\boldsymbol{u}, p)$ in 3D). The precision can be further improved by adaptive sampling [62], which is the subject of future study.
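One simple way to generate such sampling points is sketched below; sampling the 2D boundary uniformly in $\theta$ and using rejection sampling in a bounding box for the 3D interior are illustrative choices made here, not necessarily those of the original experiments.
\begin{verbatim}
# Sketch of random sampling on the butterfly boundary (2D) and inside the
# implicit 3D domain {phi < 0} via rejection sampling in a bounding box.
import numpy as np

rng = np.random.default_rng(0)

def butterfly_boundary(n):
    theta = rng.uniform(0.0, 2.0*np.pi, n)
    r = 2.0*np.sqrt(np.cos(5*theta)**2 + np.cos(2*theta)**2 + np.cos(theta)**2)
    return np.column_stack([r*np.cos(theta), 1.4*r*np.sin(theta)])

def phi(x, y, z):
    return (x**2 + 9.0/4.0*y**2 + z**2 - 1.0)**3 - x**2*z**3 - 9.0/80.0*y**2*z**3

def interior_3d(n):
    pts = []
    while len(pts) < n:
        cand = rng.uniform(-1.5, 1.5, (4*n, 3))              # candidate box
        keep = cand[phi(cand[:, 0], cand[:, 1], cand[:, 2]) < 0.0]
        pts.extend(keep.tolist())
    return np.array(pts[:n])

bnd2d = butterfly_boundary(200)   # 200 boundary points for the 2D test case
int3d = interior_3d(8000)         # 8000 interior points for the 3D test case
\end{verbatim}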
\subsection{Pressure-robust test case}
In this example, let $\Omega=(0,1)^{2}$, and choose the right-hand side function such that the following functions satisfy the system (1.1).
$$
\begin{aligned}
&\boldsymbol{u}(x, y)=\left(\begin{array}{c}
-e^{x}(y \cos (y)+\sin (y)) \\
e^{x} y \sin (y)
\end{array}\right), \quad p(x, y)=2 e^{x} \sin (y) . &&\text{(5.2)}
\end{aligned}
$$
Since $-\Delta \boldsymbol{u}=\left(-2 e^{x} \sin (y),-2 e^{x} \cos (y)\right)^{T}$, it is easy to check that when $\nu=1$ we have a homogeneous equation with $f=0$, while for $\nu \neq 1$, $f$ is a nonzero function. It is well known that classical finite element methods, such as the mini element, the Taylor-Hood element, etc., suffer from a common lack of robustness when $\nu \ll 1$: the velocity error can become extremely large. This is also sometimes called poor mass conservation, since for $H^{1}$-conforming mixed methods such large velocity errors are accompanied by large divergence errors. In this test case, we report the robustness results of the VPVnet method.
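For completeness, writing the momentum equation of (1.1) as $-\nu \Delta \boldsymbol{u}+\nabla p=\boldsymbol{f}$, the source term corresponding to the solution (5.2) is
$$
\boldsymbol{f}=-\nu \Delta \boldsymbol{u}+\nabla p=2(1-\nu)\, e^{x}\left(\begin{array}{c}
\sin (y) \\
\cos (y)
\end{array}\right),
$$
which indeed vanishes for $\nu=1$ and is nonzero otherwise.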
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-21}
\end{center}
(a) $\nu=1e-2$
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-21(1)}
\end{center}
(b) $\nu=1e-4$
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-21(2)}
\end{center}
(c) $\nu=1e-6$
Figure 8: Numerical results for the divergence of $\boldsymbol{u}$ for test case 5.2 obtained with the VPVnet method using $50 \times 50$ uniformly distributed quadrature points in $\Omega$.
Table 3: Numerical errors of the VPVnet method with different values of $\nu$ for test case 5.2, using a neural network with $8 \times 16$ hidden layers and neurons, and $20 \times 20$ and $50 \times 50$ quadrature points.
\begin{center}
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{3}{*}{$\nu$} & \multicolumn{6}{c|}{VPVnet method} \\
\cline{2-7}
 & \multicolumn{3}{c|}{$20 \times 20$ quadrature points} & \multicolumn{3}{c|}{$50 \times 50$ quadrature points} \\
\cline{2-7}
 & $e_{u}$ & $e_{v}$ & $e_{p}$ & $e_{u}$ & $e_{v}$ & $e_{p}$ \\
\hline
$1 e-2$ & $4.18 \mathrm{e}-4$ & $1.35 \mathrm{e}-3$ & $1.46 \mathrm{e}-3$ & $1.60 \mathrm{e}-4$ & $5.86 \mathrm{e}-4$ & $2.62 \mathrm{e}-3$ \\
$1 e-4$ & $2.06 \mathrm{e}-4$ & $5.94 \mathrm{e}-4$ & $1.89 \mathrm{e}-3$ & $2.26 \mathrm{e}-4$ & $7.78 \mathrm{e}-4$ & $2.85 \mathrm{e}-3$ \\
$1 e-6$ & $3.22 \mathrm{e}-4$ & $9.23 \mathrm{e}-04$ & $8.68 \mathrm{e}-4$ & $3.98 \mathrm{e}-4$ & $1.18 \mathrm{e}-3$ & $2.09 \mathrm{e}-3$ \\
\hline
\end{tabular}
\end{center}
The numerical experiments are carried out for $\nu=1e-2, 1e-4, 1e-6$. The numerical results are obtained using a neural network with $8 \times 16$ hidden layers and neurons, and $20 \times 20$ and $50 \times 50$ quadrature points. The optimization problem is solved with 2000 Adam iterations and at most 5000 L-BFGS-B iterations. The $L_{2}$ numerical errors obtained with the VPVnet method for the variables $u, v, p$ are presented in Table 3. We obtain very good approximations of the solutions for any value of $\nu$, including very small values of $\nu$. The divergence of the velocity, $\operatorname{div} \boldsymbol{u}=u_{x}+v_{y}$, for different values of $\nu$ is plotted in Figure 8. Hence the VPVnet method is divergence-free and pressure-robust.
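A possible way to evaluate the plotted divergence from the trained network is sketched below; the assumed interface is a network \texttt{net} mapping $(x, y)$ to $(u, v, p)$, as in the architecture sketch of Subsection 5.1, and is not necessarily the implementation used in this work.
\begin{verbatim}
# Sketch: evaluating div u = u_x + v_y at the quadrature points by automatic
# differentiation; `net` is assumed to map points (x, y) to outputs (u, v, p).
import tensorflow as tf

def divergence(net, pts):
    pts = tf.convert_to_tensor(pts, dtype=tf.float32)
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(pts)
        out = net(pts)
        u, v = out[:, 0], out[:, 1]
    du = tape.gradient(u, pts)      # rows: (u_x, u_y) at each point
    dv = tape.gradient(v, pts)      # rows: (v_x, v_y) at each point
    del tape
    return du[:, 0] + dv[:, 1]
\end{verbatim}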
\subsection{Smooth test case on L-shape domain}
For further examination, a more complicated domain, the L-shaped domain, is considered. The domain is defined by $\Omega=(-1,1)^{2} \backslash([0,1] \times[-1,0])$; it is also known as the backward-facing step domain in flow problems. The parameter $\nu=1$ on the whole domain. The test case has the exact solution shown in Eq. (5.2). The same test case has been considered in [59] and many other references.
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-22}
\end{center}
(a) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-22(1)}
\end{center}
(b) Pressure
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-22(2)}
\end{center}
(c) Pressure plot in 3D
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-22(3)}
\end{center}
(d) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-22(4)}
\end{center}
(e) Streamline
Figure 9: Numerical results of test case 5.2 on the L-shaped domain obtained with the VPVnet method using nearly $50 \times 50 \times 3$ uniformly distributed quadrature points in $\Omega$.
We obtain good approximations for the test case on the L-shaped domain with the VPVnet method. Note that the test case has the same exact solution (5.2) as in Subsection 5.2; hence a similar computational configuration has been used and the numerical errors are not shown here again. The numerical approximations obtained with the VPVnet method for the speed, velocity, pressure and the streamlines are plotted in Figure 9.
\subsection{Singular test case on L-shape domain}
Hereafter, we consider a singular solution on the L-shaped domain $\Omega:=(-1,1)^{2} \backslash[0,1) \times(-1,0]$. The test case was proposed by Verf\"urth in [58] and has been widely used in the literature [14, 41]. Let $\omega=3 \pi / 2$ and $\delta=0.5444837$. Given the polar coordinates $(r, \theta)$ centered at the re-entrant corner $(0,0)$, introduce the auxiliary function
$$
\psi(\theta)=\frac{\sin ((1+\delta) \theta) \cos (\delta \omega)}{1+\delta}-\cos ((1+\delta) \theta)-\frac{\sin ((1-\delta) \theta) \cos (\delta \omega)}{1-\delta}+\cos ((1-\delta) \theta) \text {. }
$$
The singular solution we approximate is
$$
\left\{\begin{array}{l}
u(x, y)=r^{\delta}\left((1+\delta) \sin (\theta) \psi(\theta)+\cos (\theta) \psi^{\prime}(\theta)\right), \\
v(x, y)=r^{\delta}\left(\sin (\theta) \psi^{\prime}(\theta)-(1+\delta) \cos (\theta) \psi(\theta)\right), \\
p(x, y)=r^{\delta-1}\left((1+\delta)^{2} \psi^{\prime}(\theta)+\psi^{(3)}(\theta)\right) /(1-\delta) .
\end{array}\right.
$$
The Stokes equation is homogeneous for this solution, i.e. $f=0$. Moreover, the Dirichlet conditions are homogeneous along the edges adjacent to the re-entrant corner. We emphasize that the pair $(\boldsymbol{u}, p)$ is analytic in $\Omega \backslash\{(0,0)\}$, but $\nabla \boldsymbol{u}$ and $p$ are singular at the origin. In particular, $\boldsymbol{u} \notin\left[H^{2}(\Omega)\right]^{2}$ and $p \notin H^{1}(\Omega)$. This solution reflects the typical behavior of the solution of the Stokes problem near a re-entrant corner. In general, for a solution $(\boldsymbol{u}, p)$ of the Stokes problem in a non-convex domain $\Omega$, we can expect $\boldsymbol{u} \in\left[H^{s}(\Omega)\right]^{2}$ and $p \in H^{s-1}(\Omega)$ for $s<1+\delta$. Hence the velocity $\boldsymbol{u}$ is $C^{0}$, and in theory neural networks are capable of approximating a $C^{0}$ function to any accuracy. It is thus expected by construction that the VPVnet method can capture such singular solutions, which is confirmed by the following numerical results.
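For reference, the singular solution above can be evaluated by symbolic differentiation of $\psi$; the snippet below is an illustrative sketch (e.g. for tabulating the exact solution at quadrature points when computing errors), and the mapping of $(x,y)$ to polar coordinates at the re-entrant corner is a simple choice made here, not taken from the original implementation.
\begin{verbatim}
# Sketch: evaluating the Verfurth singular solution via SymPy, with psi' and
# psi''' obtained by symbolic differentiation.
import numpy as np
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
omega = 3*sp.pi/2
delta = sp.Rational(5444837, 10**7)          # delta = 0.5444837

psi = (sp.sin((1 + delta)*th)*sp.cos(delta*omega)/(1 + delta)
       - sp.cos((1 + delta)*th)
       - sp.sin((1 - delta)*th)*sp.cos(delta*omega)/(1 - delta)
       + sp.cos((1 - delta)*th))
dpsi, d3psi = sp.diff(psi, th), sp.diff(psi, th, 3)

u = r**delta*((1 + delta)*sp.sin(th)*psi + sp.cos(th)*dpsi)
v = r**delta*(sp.sin(th)*dpsi - (1 + delta)*sp.cos(th)*psi)
p = r**(delta - 1)*((1 + delta)**2*dpsi + d3psi)/(1 - delta)

u_fn, v_fn, p_fn = (sp.lambdify((r, th), e, 'numpy') for e in (u, v, p))
x, y = 0.25, 0.5                              # a sample point in the L-shape
rr, tt = np.hypot(x, y), np.arctan2(y, x) % (2*np.pi)
print(u_fn(rr, tt), v_fn(rr, tt), p_fn(rr, tt))
\end{verbatim}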
Table 4: Error of the VPVnet method for test case $5.4$ using $8 \times 16$ hidden layers and neurons.
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
Quadrature & \multicolumn{3}{|c|}{VPVnet method} \\
\cline { 2 - 4 }
points & $e_{u}$ & $e_{v}$ & $e_{p}$ \\
\hline
$20 \times 20 \times 3$ & $2.75 \mathrm{e}-2$ & $2.22 \mathrm{e}-2$ & $5.21 \mathrm{e}-1$ \\
$50 \times 50 \times 3$ & $6.45 \mathrm{e}-3$ & $5.92 \mathrm{e}-3$ & $1.96 \mathrm{e}-1$ \\
$100 \times 100 \times 3$ & $8.60 \mathrm{e}-3$ & $8.63 \mathrm{e}-3$ & $3.17 \mathrm{e}-1$ \\
\hline
\end{tabular}
\end{center}
The computational configuration is as follows. The parameter $\alpha=1$, and a neural network with $12 \times 16$ hidden layers and neurons is employed for all experiments in this test case. Uniform partitions of size $20 \times 20 \times 3$, $50 \times 50 \times 3$ and $100 \times 100 \times 3$ are considered. (Note that we use $* \times * \times 3$ QPs because the L-shaped domain is composed of 3 square subdomains.) A one-point quadrature rule is employed in all cases. Hence we have $h=1 / 20$ (or $h=1 / 50$, $h=1 / 100$) for the uniform partitions. As for the boundary condition on $u, v$, the partition is in accordance with the partition of the interior domain. For the optimization, 2000 Adam iterations are used, followed by more L-BFGS-B iterations (fewer than 50,000).\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-24}
(a) Exact(left) \& approximate(right) pressure for domain $(-1.0,1.0)^{2}$
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-24(1)}\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-24(2)}
(b) Exact(left) \& approximate(right) pressure(zoom) domain $(-0.15,0.15)^{2}$\\
\includegraphics[max width=\textwidth, center]{2022_10_11_170390f60ec66414c624g-24(3)}
(c) Exact \& approximate pressure plot in 3D
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-24(4)}
\end{center}
(d) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-24(5)}
\end{center}
(e) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-24(6)}
\end{center}
(f) Streamline
Figure 10: Numerical results of test case 5.4 obtained with the VPVnet method using $50 \times 50 \times 3$ quadrature points in $\Omega$.
The numerical solutions obtained with the VPVnet method for the speed, velocity, pressure and the streamlines, together with the exact pressure, are plotted in Figure 10. The numerical results in Table 4 clearly show that the VPVnet method can provide good approximations for $\boldsymbol{u} \in\left[H^{s}(\Omega)\right]^{2}$, $1<s<2$. The errors obtained with the VPVnet method for $p \in H^{s-1}$, $1<s<2$, are much larger, due to the strong pressure singularities near the re-entrant corner $(0,0)$, which can also be observed in Figures 10(a), 10(b) and 10(c). The numerical errors can be reduced by refinement within a certain limit, but then they do not improve further, indicating that more effort is needed to reach a satisfying precision. Recall that only smooth activation functions are considered in this paper; it has been shown in [10] that piecewise activation functions, e.g. Leaky ReLU, give a better performance than smooth activation functions for an interface problem, hence it would be interesting to try continuous piecewise activation functions, as remarked in Subsection 5.1, which is subject to further investigation.
\subsection{Lid-driven cavity test cases}
The lid-driven cavity test case has been used as a benchmark problem for many numerical methods due to its simple geometry and complicated flow behavior [8]. The computational domain is $\Omega=(0,1)^{2}$ in 2D, and the Dirichlet boundary condition is given by $\boldsymbol{u}_{\Gamma}=g=(1,0)^{T}$ for $y=1$ and $g=(0,0)^{T}$ on the rest of the boundary.
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-25}
\end{center}
(a) Quadrature points in 2D
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-25(1)}
\end{center}
(b) Quadrature points in 3D
Figure 11: Locally refined quadrature points for the lid-driven cavity test case used in the VPVnet method.
The main difficulty of this problem comes from the presence of singularities at the corners of the domain. The pressure and the vorticity are not finite at the two top corners of the domain. Singularities are also present at the corners $(0,0)$ and $(1,0)$, where the second derivative of the velocity is unbounded, although these are weaker than the previous ones. By calculating the norm, one can see that $g \in H^{1 / 2-\delta}\left(\partial \Omega, R^{2}\right)$ for all $0<\delta<1 / 2$ but $g \notin H^{1 / 2}$, hence the velocity is not in $H^{1}$. Thus the usual variational formulation of the Stokes problem does not apply to the lid-driven cavity problem. However, this test case is used in many papers to test finite element methods, and the discontinuous boundary condition is usually applied directly to the finite element discretization in practice. In fact, on the one hand, one can approximate the discontinuous boundary data by smooth data and then apply a finite element method to the regularized problem; on the other hand, it has been observed that for a properly defined continuous boundary condition, the discrete system does not see the difference between the discontinuous and the continuous boundary conditions [10, 18]. In this paper, we explore the capacity of the VPVnet method to solve this problem.
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-26}
\end{center}
(a) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-26(1)}
\end{center}
(b) Pressure
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-26(2)}
\end{center}
(c) Pressure plot in 3D
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-26(3)}
\end{center}
(d) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-26(4)}
\end{center}
(e) Streamline
Figure 12: Numerical results of the lid-driven cavity test case obtained with the VPVnet method using $50 \times 50$ locally refined quadrature points in $\Omega$.
We use the following computational configuration for the 2D case: the parameter $\alpha=1$, and a neural network with $12 \times 16$ hidden layers and neurons is employed. A locally refined partition of size $50 \times 50$, geometrically refined towards all the corners, i.e. $(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$, is considered. The locally refined partition has elements whose sizes vary from $4.0e-3$ to $4.8e-2$; an illustration is shown in Fig. 11(a). A one-point quadrature rule is employed, and we pick $h=1 / 50$. As for the boundary condition on $u, v$, the partition is in accordance with the partition of the interior domain, i.e. locally refined points on the boundary of $\Omega$ are used. The discontinuous boundary condition is applied directly.
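A geometrically graded point distribution of this kind can be generated, for instance, as sketched below; the grading ratio and the use of cell midpoints are illustrative assumptions, so the resulting element sizes only roughly match those quoted above.
\begin{verbatim}
# Sketch: quadrature points on (0,1)^2 geometrically refined toward all four
# corners, obtained by grading a 1D point set toward both endpoints and
# taking the tensor product.
import numpy as np

def graded_1d(n, ratio=0.85):
    m = n // 2
    w = ratio ** np.arange(m - 1, -1, -1)        # cell widths grow away from 0
    w = w / (2.0 * w.sum())                      # left half covers (0, 1/2)
    edges = np.concatenate([[0.0], np.cumsum(w)])
    left = edges[:-1] + 0.5*np.diff(edges)       # cell midpoints, finest near 0
    return np.sort(np.concatenate([left, 1.0 - left]))

t = graded_1d(50)
xx, yy = np.meshgrid(t, t)
qps = np.column_stack([xx.ravel(), yy.ravel()])  # 50 x 50 corner-refined points
\end{verbatim}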
The numerical approximations obtained with the VPVnet method for the speed, velocity, pressure and the streamlines of the lid-driven cavity test case are plotted in Figure 12. The exact solution of the lid-driven cavity problem is not known for the Stokes problem. The shape of the streamlines is similar to the result given in [41], strong pressure singularities are present at the two top corners, and the results look quite reasonable. Note that some perturbations can be observed near the domain boundaries, particularly in the streamline plot of Figure 12(e). When we refine the partition of the domain to $200 \times 200$ rectangular elements with sizes from $1.0e-3$ to $8.0e-3$, the perturbations tend to diminish (see Fig. 13).
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-27}
\end{center}
(a) Speed
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-27(1)}
\end{center}
(b) Pressure
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-27(2)}
\end{center}
(c) Pressure plot in 3D
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-27(3)}
\end{center}
(d) Velocity
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-27(4)}
\end{center}
(e) Streamline
Figure 13: Numerical results of the lid-driven cavity test case obtained with the VPVnet method using $200 \times 200$ locally refined quadrature points in $\Omega$.
We also consider the 3D lid-driven cavity test case in a unit cube domain. A tangential velocity $\boldsymbol{u}_{\Gamma}=(1,0,0)^{T}$ is imposed on the top surface of the domain, while $\boldsymbol{u}=(0,0,0)^{T}$ is imposed on the rest of the boundary. The parameter $\alpha=1$, and a neural network with $16 \times 16$ hidden layers and neurons is employed for the computation. Locally refined partitions of size $10 \times 10 \times 10$ and $20 \times 20 \times 20$, geometrically refined towards the domain corners, are considered. The locally refined partitions have elements whose sizes vary
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-28}
\end{center}
(a) velocity with 1000 QPs
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-28(1)}
\end{center}
(b) velocity with 8000 QPs
Figure 14: Numerical results of the 3D lid-driven cavity test case obtained with the VPVnet method using 1000 and 8000 locally refined quadrature points in $\Omega$.
from $2.0e-2$ to $2.0e-1$ for the $10 \times 10 \times 10$ partition, and from $1.0e-2$ to $1.0e-1$ for the $20 \times 20 \times 20$ partition; an illustration is shown in Fig. 11(b). Note that for the 3D computation, the BFGS algorithm is used to fine-tune the results obtained with 2000 Adam iterations. The numerical results are shown in Figures 14 and 15, where Figure 15 shows the views along the $x$, $y$ and $z$ axes, respectively. The numerical results indicate that the VPVnet method is efficient and accurate.
\section{Discussion and conclusion}
A deep neural network method is proposed and verified to approximate the solution of the Stokes equations. The method makes use of least-squares functionals based on the first-order VPV formulation (2.1) of the Stokes equations as loss functions. It has lower regularity requirements on the solutions compared with other methods, such as those based on the original form of the PDEs [40, 51]. Convergence and error estimates have been established for this method. We observed that smooth solutions of the Stokes equations can be well approximated by the VPVnet method using ResNet neural networks with a rather small number of neurons and hidden layers. Within a certain limit, better precision can be obtained by using more hidden layers, more neurons and more quadrature points. Smooth activation functions, such as $\sin(x)$, tanh, sigmoid, etc., can be employed, and all of them provide satisfying approximations. The method is divergence-free and pressure-robust.
We focused on the ability of the method to approximate Stokes equations with non-smooth/singular solutions, i.e. $\boldsymbol{u} \notin H^{2}$ and $p \notin H^{1}$. For the velocity variable $\boldsymbol{u} \in H^{s}$, $1<s<2$ (hence $\in C^{0}$), the VPVnet method approximates it very well, which is consistent with our theoretical analysis. As for the pressure variable $p \in H^{s-1}$, $1<s<2$ (hence $\notin C^{0}$), the approximation turns out to be more difficult. Refinement can help improve the precision,
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29}
\end{center}
(a) view from $x$-axis
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29(1)}
\end{center}
(d) view from $x$-axis
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29(2)}
\end{center}
(b) view from $y$-axis
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29(3)}
\end{center}
(e) view from $y$-axis
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29(4)}
\end{center}
(c) view from z-axis
\begin{center}
\includegraphics[max width=\textwidth]{2022_10_11_170390f60ec66414c624g-29(5)}
\end{center}
(f) view from z-axis
Figure 15: Numerical results of the 3D lid-driven cavity test case obtained with the VPVnet method using 1000 and 8000 locally refined quadrature points in $\Omega$.
while more effort is needed to reach a satisfying precision. Future work will be aimed at further investigation of this aspect. In the meantime, a variant of the proposed VPVnet method is under investigation, namely the VVGPnet method, which is based on the first-order velocity/velocity-gradient/pressure (VVGP) formulation [45] of the Stokes equations. The VVGP system also provides an $H^{1}$ least-squares functional and can be approximated similarly to the VPVnet method. The VVGPnet method is well suited to the approximation of Stokes interface problems [60], which usually exhibit jumps in the gradient of the velocity. The method will be detailed in a forthcoming paper dedicated to interface problems.
The method presented here has already been extended to the incompressible Navier-Stokes equations and will be discussed in a forthcoming paper.
\section{Acknowledgments}
The authors would like to thank Hailong Sheng for helpful discussions. The research of Liu was partially supported by China National Natural Science Foundation (No. 12001306), Guangdong Provincial Natural Science Foundation (No. 2017A030310285). The research of Yang (corresponding author) was funded in part by Beijing Academy of Artificial Intelligence.
\section{References}
[1] S. Agmon, A. Douglis, L. Nirenberg. Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions II, Communications on pure and applied mathematics, 17(1), 35-92.
[2] T. Arbogast, D. S. Brunson. A computational method for approximating a Darcy-Stokes system governing a vuggy porous medium. Computational Geosciences, 2007, 11(3):207-218.
[3] A. G. Baydin, B. A. Pearlmutter, A. Radul, J. M. Siskind, Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18(1), 5595-5637, 2017.
[4] C. Beck, M. Hutzenthaler, A. Jentzen, B. Kuckuck, An overview on deep learning-based approximation methods for partial differential equations. arXiv preprint :2012.12348, 2020.
[5] A. Biswas, J. Tian, S. Ulusoy. Error Estimates for Deep Learning Methods in Fluid Dynamics. 2020, arXiv preprint arXiv:2008.02844
[6] P. B. Bochev, M. D. Gunzburger, Analysis of least squares finite element methods for the Stokes equations[J]. Mathematics of Computation, 1994, 63(208): 479-506.
[7] P. B. Bochev, D. Gunzburger, Finite element methods of least-squares type. SIAM Review, $1998,40: 789-837$.
[8] O. Botella, R. Peyret. Benchmark spectral results on the lid-driven cavity flow. Computers \& Fluids, 1998, 27(4): 421-433.
[9] S. L. Brunton, B. R. Noack, P. Koumoutsakos, Machine learning for fluid mechanics, Annual Review of Fluid Mechanics, 2020, 52: 477-508.
[10] Z. Cai, J. Chen, M. Liu, and X. Liu. Deep least-squares methods: An unsupervised learning-based numerical method for solving elliptic PDEs. Journal of Computational Physics, 2020, 420: 109707.
[11] W. Cai, X. Li, L. Liu, A Phase Shift Deep Neural Network for High Frequency Approximation and Wave Problems. SIAM Journal on Scientific Computing, 42(5), A3285-A3312, 2020.
[12] Z. Cai, Y. Wang. An error estimate for two-dimensional Stokes driven cavity flow. Mathematics of computation, 2009, 78(266): 771-787.
[13] C. L. Chang, J. Bo-Nan, An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem. Computer Methods in Applied Mechanics and Engineering, 1990, 84(3), 247-255.
[14] A. Chernov, C. Marcati, L. Mascotto, p-and hp-virtual elements for the Stokes equation. SAM Research Report, 2020.
[15] R. Codina, O. Soto. Finite element solution of the Stokes problem with dominating Coriolis force. Computer Methods in Applied Mechanics and Engineering, 1997, 142(3-4):215-234.
[16] J. M. Deang and M. D. Gunzburger. Issues related to least-squares finite element methods for the Stokes equations. SIAM Journal on Scientific Computing, 20(3), 878-906, 1998.
[17] M. W. M. G. Dissanayake, N. Phan-Thien. Neural network based approximations for solving partial differential equations. Communications in Numerical Methods in Engineering, $10(3): 195-201,1994$.
[18] R. Duran, L. Gastaldi, A. Lombardi, Analysis of Finite Element Approximations of Stokes Equations with Nonsmooth Data. SIAM Journal on Numerical Analysis, 2020, 58(6): 3309-3331.
[19] W.-N. E, B. Yu, The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Communications in Mathematics \& Statistics, 2018, 6(1):1-12.
[20] W.-N. E, Machine Learning and Computational Mathematics. Communications in Computational Physics, 2020, 28:1639-1670.
[21] F. Gibou, D. Hyde, R. Fedkiw, Sharp interface approaches and deep learning techniques for multiphase flows. Journal of Computational Physics, 2019, 380: 442-463.
[22] V. Girault and P.-A. Raviart, Finite Element Methods for Navier-Stokes Equations, Springer-Verlag, Berlin, 1986.
[23] M. Gunzburger and J. Peterson, On conforming finite element methods for the inhomogeneous stationary Navier-Stokes equations, Numer. Math., 1983, 42,173-194.
[24] C. He, X. Hu, L. Mu. A Mesh-free Method Using Piecewise Deep Neural Network for Elliptic Interface Problems. arXiv preprint arXiv:2005.04847. 2020.
[25] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, 770-778.
[26] J. He, L. Li, J. Xu, and C Zheng, ReLU deep neural networks and linear finite elements[J]. arXiv preprint arXiv:1807.03973. 2018.
[27] K. Hornik, Approximation capabilities of multilayer feedforward networks. Neural networks, 1991, 4(2): 251-257.
[28] A. Iskhakov, N. Dinh, Physics-integrated machine learning: embedding a neural network in the Navier-Stokes equations. arXiv preprint arXiv:2008.10509, 2020.
[29] A. Iskhakov, N. Dinh, Physics-integrated machine learning: embedding a neural network in the Navier-Stokes equations. Part II. arXiv preprint arXiv:2009.11213. 2020.
[30] B.-N. Jiang, C. L. Chang, Least-squares finite elements for the Stokes problem[J]. Computer Methods in Applied Mechanics and Engineering, 1990, 78(3): 297-311.
[31] B.-N. Jiang, A least-squares finite element method for incompressible Navier-Stokes problems[J]. International Journal for Numerical Methods in Fluids, 1992, 14(7): 843-859.
[32] B.-N. Jiang, T. L. Lin, L. A. Povinelli, Large-scale computation of incompressible viscous flow by least-squares finite element method[J]. Computer Methods in Applied Mechanics and Engineering, 1994, 114(3-4): 213-231.
[33] X. Jin, S. Cai, H. Li, G. E. Karniadakis, NSFnets (Navier-Stokes Flow nets): Physicsinformed neural networks for the incompressible Navier-Stokes equations. arXiv preprint arXiv:2003.06496, 2020
[34] E. Kharazmi, Z. Zhang, G. E. Karniadakis, Variational physics-informed neural networks for solving partial differential equations.
[35] R. Khodayi-Mehr, M. Zavlanos, VarNet: Variational neural networks for the solution of partial differential equations, In Learning for Dynamics and Control, 2020, 298-307.
[36] D. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014.
[37] I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks, 1998, 9(5): 987-1000.
[38] I. E. Lagaris, A. C. Likas, D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Transactions on Neural Networks, 2000, 11, 1041-1049.
[39] H. C. Lee, Efficient computations for linear feedback control problems for target velocity matching of Navier-Stokes flows via POD and LSTM-ROM. Electronic Research Archive, 2021, 29(3): 2533.
[40] J. Li, J. Yue, W. Zhang, W. Duan. The Deep Learning Galerkin Method for the General Stokes Equations. arXiv preprint arXiv:2009.11701, 2020.
[41] R. Li, Z. Sun, F. Yang, Z. Yang, A finite element method by patch reconstruction for the Stokes problem using mixed formulations. Journal of Computational and Applied Mathematics, 2019, 353, 1-20.
[42] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
[43] Y. Liao, P. Ming, Deep Nitsche Method: Deep Ritz Method with Essential Boundary Conditions, Communications in Computational Physics, 2021, 29: 1365-1384.
[44] D. C. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical programming, 1989, 45: 503-528.
[45] K. Liu. Hybrid First-Order System Least-Squares Finite Element Methods With The Application To Stokes And Navier-Stokes Equations, Doctoral dissertation, University of Colorado at Boulder, 2012.
[46] K. Liu, T. A. Manteuffel, S. F McCormick, J. W. Ruge, L. Tang. Hybrid first-order system least squares finite element methods with application to Stokes equations. SIAM Journal on Numerical Analysis, 2013, 51(4), 2214-2237.
[47] S. Markidis. Physics-Informed Deep-Learning for Scientific Computing. arXiv preprint arXiv:2103.09655, 2021.
[48] R. Ranade, C. Hill, J. Pathak, DiscretizationNet: A machine-learning based solver for Navier-Stokes equations using finite volume discretization. Computer Methods in Applied Mechanics and Engineering, 2021, 378: 113722.
[49] K. S. McFall, J. R. Mahan, Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions, IEEE Transactions on Neural Networks, 2009, 20, 1221-1233.
[50] S. Mehrkanoon, J. A. K. Suykens, Learning solutions to partial differential equations using LS-SVM, Neurocomputing, 2015, 159: 105-116.
[51] M. Raissi, P. Perdikaris, G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 2019, 378: 686-707.
[52] M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations, Science, 2020, 367(6481): 1026-1030.
[53] C. Reisinger, Y. Zhang, Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems. arXiv preprint arXiv:1903.06652, 2019.
[54] H. Sheng, C. Yang, PFNN: A Penalty-Free Neural Network Method for Solving a Class of Second-Order Boundary-Value Problems on Complex Geometries, Journal of Computational Physics, 2021, 427: 110085.
[55] J. Sirignano, K. Spiliopoulos. DGM: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 2018, 375, 1339-1364.
[56] Y. Tang, X. Li. A meshless complex variable Galerkin boundary node method for potential and Stokes problems. Engineering Analysis with Boundary Elements, 2017, 83: 204-216.
[57] N. Thuerey, K. Weißenow, L. Prantl, X. Hu, Deep learning methods for Reynolds-averaged Navier-Stokes simulations of airfoil flows, AIAA Journal, 2020, 58(1): 25-36.
[58] R. Verfürth. A Review of a Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques. Chichester-Stuttgart, Wiley-Teubner, 1996.
[59] T. Vu-Huu, C. Le-Thanh, H. Nguyen-Xuan, M. Abdel-Wahab, An equal-order mixed polygonal finite element for two-dimensional incompressible Stokes flows. European Journal of Mechanics-B / Fluids, 2020, 79, 92-108.
[60] B. Wang, B. C. Khoo, Hybridizable discontinuous Galerkin method (HDG) for Stokes interface flow. Journal of Computational Physics, 247, 262-278.
[61] B. Wang, W. Z. Zhang, W. Cai, Multi-Scale Deep Neural Network (MscaleDNN) Methods for Oscillatory Stokes Flows in Complex Domains. Communications in Computational Physics, 2020, 28: 2139-2157.
[62] C. L. Wight, J. Zhao, Solving Allen-Cahn and Cahn-Hilliard equations using the adaptive physics informed neural networks. arXiv preprint arXiv:2007.04542, 2020.
[63] T. Xie, F. Cao, The errors of simultaneous approximation of multivariate functions by neural networks. Computers \& Mathematics with Applications, 2011, 61(10), 3146-3152.
[64] J. Xu, The finite neuron method and convergence analysis. arXiv preprint arXiv:2010.01458, 2020.