This project provides a SegNeXt model for face parsing (face segmentation), trained on the CelebAMask-HQ dataset. Its metrics are substantially better than those of the previously widely used BiSeNetv2 model. Follow the steps below to use the model and obtain good face-parsing results.

The per-class IoU results are as follows:
Model | skin | nose | eye_g | l_eye | r_eye | l_brow | r_brow | l_ear | r_ear | mouth | u_lip | l_lip | hair | hat | ear_r | neck_l | neck | cloth | background |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SegNeXt | 93.69 | 89.29 | 87.12 | 82.59 | 82.55 | 76.81 | 76.75 | 80.86 | 79.30 | 87.74 | 82.41 | 84.83 | 91.98 | 81.80 | 57.74 | 22.07 | 84.88 | 80.70 | 93.87 |
BiSeNetv2 | 92.79 | 88.40 | 83.51 | 34.72 | 33.13 | 35.91 | 25.45 | 43.11 | 4.26 | 83.26 | 78.30 | 82.06 | 90.58 | 74.23 | 46.40 | 0 | 82.04 | 71.81 | 92.15 |

Model | mIoU | mAcc |
---|---|---|
SegNeXt | 79.83 | 86.63 |
BiSeNetv2 | 60.12 | 69.94 |
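As a sanity check, the reported mIoU is (up to rounding) the unweighted mean of the 19 per-class IoU values in the first table. A minimal sketch:

```python
# Per-class IoU values for SegNeXt, copied from the table above.
segnext_iou = [93.69, 89.29, 87.12, 82.59, 82.55, 76.81, 76.75,
               80.86, 79.30, 87.74, 82.41, 84.83, 91.98, 81.80,
               57.74, 22.07, 84.88, 80.70, 93.87]

# mIoU is the unweighted mean over all 19 classes.
miou = sum(segnext_iou) / len(segnext_iou)
print(round(miou, 2))  # close to the reported 79.83 (small rounding differences)
```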
- Linux
- Python >=3.6
- Anaconda or miniconda
- Clone this repo

  ```shell
  git clone https://github.com/Beyondzjl/segmentation-CelebAMask-HQ-SegNeXt.git
  cd segmentation-CelebAMask-HQ-SegNeXt
  ```
- Download the CelebAMask-HQ dataset from Google Drive. The original dataset has been re-divided into the following structure:

  ```
  CelebAMask-HQ
  |-train
  | |-images
  | |-labels
  |-test
  | |-images
  | |-labels
  |-val
  | |-images
  | |-labels
  ```
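Before training or testing, it can be worth verifying that the dataset actually follows this layout. A small helper for that (hypothetical, not part of the repo):

```python
import os

# Hypothetical helper: check the CelebAMask-HQ directory layout described above.
def check_dataset_layout(root):
    missing = []
    for split in ("train", "test", "val"):
        for sub in ("images", "labels"):
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing  # an empty list means the layout is complete

# Example: missing = check_dataset_layout("/xxx/CelebAMask-HQ")
```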
- Prepare OpenMMLab dependencies

  ```shell
  pip install -U openmim
  mim install mmcv-full==1.6.0
  pip install timm
  ```
- Prepare project dependencies

  ```shell
  pip install -r requirements.txt
  ```
- Download the trained weights from Google Drive.
- Run

  ```shell
  export PYTHONPATH="${PYTHONPATH}:/xxx/mmsegmentation-master"  # make the project modules importable
  python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
  ```

  For example:

  ```shell
  # print the evaluation results
  python /xxx/segmentation-CelebAMask-HQ-SegNeXt/tools/test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/mysegconfig/segnext_CelebAMask_test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/iter_160000.pth --eval mIoU

  # save the results (overlaid on the original images) to the given path
  python /xxx/segmentation-CelebAMask-HQ-SegNeXt/tools/test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/mysegconfig/segnext_CelebAMask_test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/iter_160000.pth --show-dir <results_path> --gpu-id x

  # save the segmentation masks alone (without the original images)
  python /xxx/segmentation-CelebAMask-HQ-SegNeXt/tools/test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/mysegconfig/segnext_CelebAMask_test.py /xxx/segmentation-CelebAMask-HQ-SegNeXt/iter_160000.pth --show-dir <results_path> --opacity 1
  ```
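Under the hood, the saved segmentation result is a per-pixel class map (indices 0-18 for the 19 CelebAMask-HQ classes) rendered through a color palette. A self-contained sketch of that mapping (the palette colors below are placeholders, not the exact ones mmsegmentation uses):

```python
import numpy as np

# Placeholder palette: one RGB color per class index (19 classes in CelebAMask-HQ).
PALETTE = np.array([[i * 13 % 256, i * 29 % 256, i * 37 % 256] for i in range(19)],
                   dtype=np.uint8)

def colorize(label_map):
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    return PALETTE[label_map]

# Example: a tiny 2x2 prediction (class indices).
pred = np.array([[0, 1], [18, 5]], dtype=np.uint8)
rgb = colorize(pred)
print(rgb.shape)  # (2, 2, 3)
```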
- Tips

  If you want to use your own dataset, you need to write a new config that specifies the format and path of your dataset. Example configs can be found in mysegconfig.
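A rough sketch of what the dataset part of such a config might look like in mmsegmentation 0.x style (the paths and split names below are placeholders; see the configs in mysegconfig for the exact structure this project uses):

```python
# Hypothetical mmsegmentation-style dataset config fragment (placeholder paths).
dataset_type = 'CustomDataset'
data_root = '/path/to/your/dataset'

data = dict(
    samples_per_gpu=4,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='train/images',
        ann_dir='train/labels'),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='val/images',
        ann_dir='val/labels'),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='test/images',
        ann_dir='test/labels'))
```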
- Get the pretrained model from Google Drive.
- Run with one GPU

  ```shell
  python tools/train.py ${CONFIG_FILE} [optional arguments]
  ```
This project is based on OpenMMLab. Thanks to OpenMMLab, CelebAMask-HQ, and SegNeXt for their excellent work.