This project provides scripts, inspired by DeepFaceLab_Linux, to set up and run DeepFaceLab on macOS.
You'll need `git`, `ffmpeg`, `python3`, and the Python module `virtualenv` available to execute these scripts. The scripts create a virtual env sandbox and install all necessary dependencies there, so your main `python3` installation is left intact.
There is currently limited support for Apple M1 laptops: model training works, and the XSeg editor also works (the DeepFaceLab codebase is compatible with PyQt5, not PyQt6).
```shell
cd scripts
./0_setup.sh
./0_patch.sh
./2_extract_images_from_video_data_src.sh
./3_extract_images_from_video_data_dst.sh
./4.1_data_src_extract_faces_S3FD.sh
./5.1_data_dst_extract_faces_S3FD.sh
./6_train_Quick96.sh
./7_convert_Quick96.sh
./8_converted_to_avi.sh
```
Tools
Make sure you have installed:

- Git (check with `git --version`)
- FFmpeg (check with `ffmpeg -version`)
- Python 3 (check with `python3 --version`)
- Virtualenv (check with `virtualenv --version`)
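The checks above can be run in one pass with a short shell loop (a sketch; the tool names are those listed above):

```shell
# Report which of the required tools are on PATH.
for tool in git ffmpeg python3 virtualenv; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line reporting `MISSING` points to a tool you still need to install before running the scripts.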
For Apple M1 laptops you also need the hdf5 library installed. Check whether you have it with `brew ls --versions hdf5`; install it with `brew install hdf5`.
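A guarded version of that check (a sketch; it only reports what to do and degrades gracefully when Homebrew itself is missing):

```shell
# Check for hdf5 via Homebrew; print a hint instead of installing anything.
if command -v brew >/dev/null 2>&1; then
  brew ls --versions hdf5 || echo "hdf5 not installed - run: brew install hdf5"
else
  echo "Homebrew not found - install it first (https://brew.sh)"
fi
```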
Clone and setup
- Clone this repository (`git clone https://github.com/Smiril/DeepFaceLab_MacOS.git`)
- Run the script `./scripts/0_setup.sh` to get DeepFaceLab, create the virtual env, and install the necessary Python dependencies. This may take several minutes to run.
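Conceptually, the setup step boils down to something like the following sketch (shown with the stdlib `venv` module for illustration; the project's actual script uses `virtualenv` and installs its own dependency list):

```shell
# Create an isolated environment so the system python3 stays untouched
# (--without-pip keeps this sketch fast; the real setup also installs deps).
python3 -m venv --without-pip .dfl-env
# Activate it; subsequent python calls resolve inside the sandbox.
. .dfl-env/bin/activate
python -c 'import sys; print(sys.prefix)'   # path ends in .dfl-env
```

Because everything is installed inside the sandbox directory, removing it is enough to fully undo the setup.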
Now you can put your `data_src.mp4` and `data_dst.mp4` files into the `workspace/` directory and start running scripts from the `scripts/` directory.
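Before kicking off the pipeline, a quick sanity check that both input videos are in place (filenames as above):

```shell
# Verify both input videos exist under workspace/ before running the scripts.
for f in workspace/data_src.mp4 workspace/data_dst.mp4; do
  if [ -f "$f" ]; then echo "$f: ok"; else echo "$f: missing"; fi
done
```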
See the DeepFaceLab project for links to guides and tutorials.