
FaceRec: An Interactive Framework for Recognising Faces in Videos

FaceRec is a face recognition system for videos that leverages images crawled from web search engines. The system combines MTCNN (face detection) and FaceNet (face embedding), whose vector representations of faces are fed to a classifier. A tracking system is included to increase robustness against recognition errors in individual frames and to obtain more consistent person identifications.
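
As a rough illustration of this pipeline, here is a minimal sketch of the detect → embed → classify flow using the facenet-pytorch package. This is an assumption-laden sketch: the actual models, preprocessing, and classifier used by FaceRec live in the src modules and may differ.

# Minimal sketch of the detect -> embed -> classify flow (an illustration,
# not the repository's actual code). Assumes facenet-pytorch and Pillow.
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

mtcnn = MTCNN(image_size=160)                              # face detection + alignment
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # FaceNet embeddings

img = Image.open('some_face.jpg')                          # hypothetical input image
face = mtcnn(img)                                          # cropped, aligned face tensor
if face is not None:
    embedding = resnet(face.unsqueeze(0))                  # 512-d vector
    # a classifier (e.g. an SVM) trained on such embeddings would go here:
    # label = clf.predict(embedding.detach().numpy())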

The FaceRec ecosystem is composed of:

  • The video processing pipeline (folder src)
  • The API server (server.py)
  • A Web Application for visualizing the results (visualizer)
  • A thorough evaluation on two datasets with a ground truth

Demo:

➡️ More info in our paper.

If you use FaceRec in your work, please cite it as:

@inproceedings{lisena2021facerec,
  title =       {{FaceRec: An Interactive Framework for Face Recognition in Video Archives}},
  author =      {Lisena, Pasquale and Laaksonen, Jorma and Troncy, Rapha\"{e}l},
  booktitle =   {2nd International Workshop on Data-driven Personalisation of Television (DataTV-2021)},
  address =     {New York, USA},
  eventdate =   {2021-06-21/2021-06-23},
  month =       {06},
  year =        {2021},
  url =         {https://doi.org/10.5281/zenodo.4764632}
}

Application schema

Training phase:

(figure: training pipeline)

Recognition phase:

(figure: recognition pipeline)

The system relies on the following main dependencies:

  • MTCNN for face detection
  • FaceNet for face embedding
  • SORT for tracking
  • MongoDB for the server capabilities

Usage

Install dependencies

pip install -r requirements.txt

If you encounter errors, try running the following patches:

sh mtcnn_patch.sh
sh icrawler_patch.sh

If you also want to use the server capabilities, you need to install MongoDB and run it on the default port (27017).
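
To quickly verify that MongoDB is reachable on the default port, here is a small hypothetical check with pymongo (not part of FaceRec itself):

# Hypothetical connectivity check; assumes pymongo is installed.
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017', serverSelectionTimeoutMS=2000)
client.admin.command('ping')   # raises ServerSelectionTimeoutError if Mongo is down
print('MongoDB is up')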

1. Build a training dataset

Automatically download images of celebrities to build the training dataset. The faces are then detected, aligned, and scaled.

python -m src.crawler --keyword "Churchill Winston" --max_num 20 --project proj_name
python -m src.crawler --keyword "Roosevelt Franklin" --max_num 20 --project proj_name
python -m src.crawler --keyword "De Gasperi Alcide" --max_num 20 --project proj_name

The final faces are stored in data/training_img_aligned/<project>. You can exclude wrong images by adding them to the disabled.txt file or simply deleting them.

Please note that for every new person added, you should provide roughly as many images as for the existing people, and then retrain the model.

2. Train a classifier

python -m src.classifier --project proj_name --classifier SVM
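
Conceptually, this step fits a classifier on the FaceNet embeddings of the aligned training faces. A minimal scikit-learn sketch, where the embeddings and labels are hypothetical placeholders for the features that src.classifier computes:

# Illustrative classifier training on precomputed face embeddings.
import numpy as np
from sklearn.svm import SVC

embeddings = np.load('embeddings.npy')   # hypothetical (n_samples, 512) array
labels = np.load('labels.npy')           # hypothetical array of person names

clf = SVC(kernel='linear', probability=True)  # probabilities enable the
clf.fit(embeddings, labels)                   # confidence thresholds used later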

3. Perform face recognition on videos

The command below recognizes people in a video using the classifier trained in the previous step. At the same time, it performs tracking (with SORT) and assigns a track id to every detection.

python -m src.tracker --video video/my_video.mp4 --project proj_name --video_speedup 25

--video_speedup is the sampling period in frames (25 == 1 frame per second for a 25 fps video). --video can be a local path, a URL pointing to a video resource, or a URL in the ANTRACT or MeMAD Knowledge Graph.
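
The sampling behaviour can be pictured as processing one frame out of every --video_speedup frames. An illustrative OpenCV sketch (not the tracker's actual code):

# Illustrative frame sampling: handle one frame every `video_speedup` frames.
import cv2

video_speedup = 25
cap = cv2.VideoCapture('video/my_video.mp4')
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % video_speedup == 0:
        pass  # detect, embed, classify, and track the faces in `frame`
    frame_idx += 1
cap.release()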

4. Generate common per-track predictions

For each track, a single prediction is generated:

python -m src.clusterize --video video/my_video.mp4 --confidence_threshold 0.7 --dominant_ratio 0.8 --merge_cluster
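
An illustrative sketch of how such a per-track prediction can be derived from the two parameters above: keep detections above --confidence_threshold and accept the dominant label only if it covers at least --dominant_ratio of them. This is a simplified assumption of the logic; the actual rules live in src.clusterize.

# Simplified per-track aggregation (an assumption, not src.clusterize itself).
from collections import Counter

def track_prediction(detections, confidence_threshold=0.7, dominant_ratio=0.8):
    # detections: list of (label, confidence) pairs belonging to one track
    confident = [label for label, conf in detections if conf >= confidence_threshold]
    if not confident:
        return None
    label, count = Counter(confident).most_common(1)[0]
    # accept the dominant label only if it covers enough of the track
    return label if count / len(confident) >= dominant_ratio else None

print(track_prediction([('Churchill Winston', 0.9),
                        ('Churchill Winston', 0.85),
                        ('Roosevelt Franklin', 0.75)]))   # -> None (2/3 < 0.8)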

FaceRec as a service

FaceRec can be used as a service by running:

python server.py
cd visualizer
npm run serve

IMPORTANT: a running MongoDB instance is required.

The service is also available as a Docker image.

docker build -t facerec .
docker run -d -p 27027:27017 --name facerec-mongo mongo
docker run -d -p 5050:5000 --restart=unless-stopped \
  -v /home/semantic/Repositories/Face-Celebrity-Recognition/video:/app/video \
  -v /home/semantic/Repositories/Face-Celebrity-Recognition/data:/app/data \
  -v /home/semantic/Repositories/Face-Celebrity-Recognition/config:/app/config \
  --name facerec1 facerec

or

docker-compose up

Academic Publications

Acknowledgements

This software is the result of different contributions.

This work has been partially supported by the French National Research Agency (ANR) within the ANTRACT project (grant number ANR-17-CE38-0010) and by the European Union’s Horizon 2020 research and innovation program within the MeMAD project (grant agreement No. 780069).