<html><head lang="en"><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>DiLiGenRT</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/png" href="./src/iconPS.png">
<link rel="stylesheet" href="./src/bootstrap.min.css">
<link rel="stylesheet" href="./src/font-awesome.min.css">
<link rel="stylesheet" href="./src/codemirror.min.css">
<link rel="stylesheet" href="./src/app.css">
<!-- <script type="text/javascript" async="" src="./src/analytics.js"></script>
<script type="text/javascript" async="" src="./src/analytics(1).js"></script>
<script async="" src="./src/js"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-110862391-3');
</script> -->
<!-- <script src="./src/jquery.min.js"></script>
<script src="./src/bootstrap.min.js"></script>
<script src="./src/codemirror.min.js"></script>
<script src="./src/clipboard.min.js"></script>
<script src="./src/app.js"></script> -->
</head>
<body data-gr-c-s-loaded="true">
<div class="container" id="main">
<div class="row">
<h1 class="col-md-12 text-center">
DiLiGenRT: A Photometric Stereo Dataset with Quantified<br>
Roughness and Translucency
<br /><br />
<small>
CVPR 2024 (<b>Poster Presentation</b>)
</small>
<br /><br />
</h1>
</div>
<div class="row">
<div class="col-md-12 text-center">
<ul class="list-inline">
<li>
<a href="https://gh-home.github.io/">
Heng Guo
</a><sup>1,†</sup>
</li>
<li>
<a href="https://photometricstereo.github.io/diligent102.html">
Jieji Ren
</a><sup>2,†</sup>
</li>
<li>
<a href="https://github.com/Fisher-Wang">
Feishi Wang
</a><sup>3,4,†</sup>
</li>
<li>
<a href="https://ci.idm.pku.edu.cn/team">
Boxin Shi
</a><sup>3,4,‡</sup>
</li>
<li>
<a href="https://me.sjtu.edu.cn/teacher_directory1/renmingjun.html">
Mingjun Ren
</a><sup>2,‡</sup>
</li>
<li>
<a href="http://cvl.ist.osaka-u.ac.jp/en/member/matsushita/">
Yasuyuki Matsushita
</a><sup>5,‡</sup>
</li>
</ul>
</div>
</div>
<div class="row">
<div class="col-md-12 text-center">
<ul class="list-inline">
<li>
<sup>1</sup>School of Artificial Intelligence, Beijing University of Posts and Telecommunications
</li>
<li>
<sup>2</sup>School of Mechanical Engineering, Shanghai Jiao Tong University
</li>
<li>
<sup>3</sup>National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
</li>
<li>
<sup>4</sup>National Engineering Research Center of Visual Technology, School of Computer Science, Peking University
</li>
<li>
<sup>5</sup>Graduate School of Information Science and Technology, Osaka University
</li>
</ul>
<br /><br />
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2 text-center">
<ul class="nav nav-pills nav-justified">
<li>
<a href="https://photometricstereo.github.io/imgs/diligentrt/CameraPaper.pdf">
<img src="./imgs/diligentrt/CameraPaperImg.png" height="120px"><br>
<h4><strong>Paper</strong></h4>
</a>
</li>
<li>
<a href="https://photometricstereo.github.io/imgs/diligentrt/CameraSupp.pdf">
<img src="./imgs/diligentrt/CameraSuppImg.png" height="120px"><br>
<h4><strong>Supplementary</strong></h4>
</a>
</li>
<li>
<a href="https://lab.ybh1998.space:8443/rtbenchmarkwebsite/">
<img src="./imgs/diligentrt/EvalLogo2.png" height="120px"><br>
<h4><strong>Evaluation</strong></h4>
</a>
</li>
<li>
<a href="https://disk.pku.edu.cn/link/AAF72F5C18C0A047489286ECEE2A137406">
<img src="./imgs/diligentrt/DatasetLogo.png" height="120px"><br>
<h4><strong>Dataset</strong></h4>
</a>
</li>
</ul>
<br /><br /><br />
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Overview
</h3>
<img src="./imgs/diligentrt/RTface.png" class="img-responsive" alt="overview"><br>
<p class="text-justify">
Photometric stereo faces challenges from non-Lambertian reflectance in real-world scenarios. Systematically measuring the reliability of photometric stereo methods in handling such complex reflectance necessitates a real-world dataset with quantitatively controlled reflectances. This paper introduces <strong>DiLiGenRT</strong>, the first real-world dataset for evaluating photometric stereo methods under quantified reflectances by manufacturing 54 hemispheres with varying degrees of two reflectance properties: <strong>R</strong>oughness and <strong>T</strong>ranslucency. Unlike qualitative and semantic labels, such as diffuse and specular, that have been used in previous datasets, our quantified dataset allows comprehensive and systematic benchmark evaluations. In addition, it facilitates selecting best-fit photometric stereo methods based on the quantitative reflectance properties.
</p>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Highlights
</h3>
<p class="text-justify">
<ul>
<li>
First public PS dataset with quantified <strong>R</strong>oughness (9 levels) and <strong>T</strong>ranslucency (6 levels);
</li>
<li>
A simple and stable process for fabricating surfaces with controlled roughness and translucency;
</li>
<li>
First quantitative work space of photometric stereo with respect to reflectance.
</li>
</ul>
</p>
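<p class="text-justify">
The 54 hemispheres arise from the full cross product of the 9 roughness levels and 6 translucency levels. A minimal sketch of that grid (the level labels below are illustrative placeholders, not the actual Sa or &sigma;<sub>t</sub> measurements from the dataset):
</p>

```python
# Hypothetical sketch of the 9 x 6 roughness-translucency grid
# behind the 54 hemispheres; level names are made up for illustration.
from itertools import product

ROUGHNESS_LEVELS = [f"R{i}" for i in range(1, 10)]    # 9 roughness levels
TRANSLUCENCY_LEVELS = [f"T{j}" for j in range(1, 7)]  # 6 translucency levels

# One sample per (roughness, translucency) combination.
samples = [f"{r}_{t}" for r, t in product(ROUGHNESS_LEVELS, TRANSLUCENCY_LEVELS)]
print(len(samples))  # 54
```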
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Fabrication, Capture, and RT Measurement
</h3>
<img src="./imgs/diligentrt/RTfabricate.png" class="img-responsive" alt="captureimg"><br>
<img src="./imgs/diligentrt/RTGT.png" class="img-responsive" alt="captureimg"><br>
<p class="text-justify">
<!--(<strong>Left</strong>) Objects in DiLiGenRT are constructed by first manufacturing molds with varying degrees of roughness through sandblasting and then injecting solutions of different concentrations into the molds, followed by solidifying and de-molding. (<strong>Right</strong>) Images in DiLiGenRT are captured by moving a point light source bundled on a robot arm.-->
We manufacture multiple molds of the same size, then sandblast and polish them with different grit numbers (abrasive grain sizes) to obtain diverse surface roughness. For translucency, we mix different concentrations of pigment into silica gel and cast it into the molds to obtain hemispheres. We use a lightweight illumination and imaging setup to capture the DiLiGenRT dataset. We use a <a href="https://www.zygo.com/products/metrology-systems/3d-optical-profilers/nexview-nx2">Zygo NexView<sup>TM</sup> NX2</a> optical profiler to accurately measure the surface roughness of the objects, and build a customized device to measure their translucency.
</p>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Benchmark Results
</h3>
<img src="./imgs/diligentrt/RT_heatmap.png" class="img-responsive" alt="benchmark"><br>
<p class="text-justify">
Roughness-translucency MAE matrices for non-learning-based (top) and learning-based (bottom) photometric stereo methods, showing their performance profiles under different levels of reflectance properties. The mean and median of each MAE matrix are shown next to the method name. The row and column ticks are σ<sub>t</sub> (transparency) and Sa (roughness): a smaller σ<sub>t</sub> corresponds to higher translucency, and a smaller Sa to lower roughness. As expected, rougher and less translucent samples yield smaller reconstruction errors.
</p>
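<p class="text-justify">
The MAE reported in each cell is the mean angular error between estimated and ground-truth normal maps, the standard photometric stereo metric. A minimal sketch (the exact masking convention used in the benchmark is an assumption here):
</p>

```python
import numpy as np

def mean_angular_error(n_est, n_gt, mask=None):
    """Mean angular error in degrees between two H x W x 3 normal maps.

    Standard photometric-stereo MAE; the optional boolean mask selects
    the valid (object) pixels. Masking convention is an assumption.
    """
    # Normalize both maps so the dot product is a cosine.
    n_est = n_est / np.linalg.norm(n_est, axis=-1, keepdims=True)
    n_gt = n_gt / np.linalg.norm(n_gt, axis=-1, keepdims=True)
    # Clip to guard against floating-point values just outside [-1, 1].
    cos = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    err = np.degrees(np.arccos(cos))
    return err[mask].mean() if mask is not None else err.mean()
```

Identical normal maps give an error of 0 degrees, and orthogonal normals give 90 degrees.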
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Performance Analysis
</h3>
<img src="./imgs/diligentrt/RT_corner.png" class="img-responsive" alt="compareups"><br>
<p class="text-justify">
Visualization of estimated surface normals for the hemisphere objects at the four corners of the translucency-roughness space (top-left: most rough and least translucent; top-right: least rough and least translucent; bottom-left: most rough and most translucent; bottom-right: least rough and most translucent), which directly demonstrates the influence of roughness and translucency on surface normal estimation.
</p>
<img src="./imgs/diligentrt/RT_light.png" class="img-responsive" alt="compare"><br>
<p class="text-justify">
Photometric stereo work space based on DiLiGenRT under sparse and dense lighting (10 and 100 lights). Each cell records the best-performing algorithm on each roughness-translucency sample (MAE and method name annotated in each heatmap block).
</p>
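<p class="text-justify">
Each work-space cell is simply the per-cell minimum over the methods' MAE matrices. A hypothetical sketch, assuming one 9&times;6 MAE matrix per method (method names and values below are made up):
</p>

```python
import numpy as np

# Hypothetical per-method MAE matrices over the 9 x 6
# roughness-translucency grid; values are random placeholders.
rng = np.random.default_rng(0)
mae = {
    "MethodA": rng.uniform(0, 20, size=(9, 6)),
    "MethodB": rng.uniform(0, 20, size=(9, 6)),
    "MethodC": rng.uniform(0, 20, size=(9, 6)),
}

names = list(mae)
stack = np.stack([mae[n] for n in names])  # (num_methods, 9, 6)
best_idx = stack.argmin(axis=0)            # winning method index per cell
best_mae = stack.min(axis=0)               # winning MAE per cell
best_name = np.array(names)[best_idx]      # winning method label per cell
```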
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Citation
</h3>
<p style="border-style: groove;border-width: 1px;border-color: lightgrey;color:grey;">
@InProceedings{Guo_Ren_Wang_2024_CVPR,<br>
author = {Guo, Heng and Ren, Jieji and Wang, Feishi and Ren, Mingjun and Shi, Boxin and Matsushita, Yasuyuki},<br>
title = {DiLiGenRT: A Photometric Stereo Dataset with Quantified Roughness and Translucency},<br>
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},<br>
month = {June},<br>
year = {2024},<br>
pages = {xxxxx-xxxxx}<br>
}<br>
</p>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Contact
</h3>
<p class='text-justify'>Any questions and further discussion, please send e-mail to:<br> <a>guoheng_AT_bupt_DOT_edu_DOT_cn</a>.
</p>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Acknowledgments
</h3>
We acknowledge support from the National Natural Science Foundation of China and JSPS KAKENHI, and computational resources from openbayes.com. The website template was borrowed from <a href="https://vilab-ucsd.github.io/ucsd-openrooms/">OpenRooms</a>.
<p></p>
</div>
</div>
</div>
</body></html>