This is an unofficial re-implementation of the paper [PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization](https://arxiv.org/abs/2011.08785), available on arXiv.
## Features

The key features of this implementation are:

- Constant memory footprint - training on additional images does not increase the memory required
- Resumable learning - training can be stopped and later resumed, with inference possible in-between (see the sketch after this list)
- Limited dependencies - only PyTorch, Torchvision and Numpy are required
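
As an illustration of resumable learning, here is a minimal sketch that persists a partially trained model and later resumes training. It assumes the `PaDiM` instance is picklable (the repository may provide a dedicated serialization API instead), and `first_dataloader`/`second_dataloader` are placeholder data loaders:

```python
import pickle

from padim import PaDiM

padim = PaDiM(num_embeddings=100, device="cpu", backbone="resnet18")
padim.train(first_dataloader)  # learn from an initial set of normal images

# Persist the partially trained model (assumes the object is picklable)
with open("padim_checkpoint.pkl", "wb") as f:
    pickle.dump(padim, f)

# ...later: restore it, run inference, then keep training on new data
with open("padim_checkpoint.pkl", "rb") as f:
    padim = pickle.load(f)
padim.train(second_dataloader)
```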
## Variants

This repository also contains variants of the original PaDiM model (see the usage sketch below):

- PaDiMSVDD uses a Deep-SVDD model instead of a multi-variate Gaussian distribution for the normal patch representation.
- PaDiMShared shares a single multi-variate Gaussian distribution between all patches instead of learning one per patch location.
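
Both variants are intended as drop-in replacements for the base class. A minimal sketch, assuming they are exported from the `padim` package and accept the same constructor arguments as `PaDiM` (check the source for the exact signatures):

```python
from padim import PaDiMShared, PaDiMSVDD  # assumed exports; verify in the package

# Assumed to mirror the PaDiM constructor
shared = PaDiMShared(num_embeddings=100, device="cpu", backbone="resnet18")
svdd = PaDiMSVDD(num_embeddings=100, device="cpu", backbone="resnet18")

# Training and inference then follow the same API as the base model, e.g.
# shared.train(dataloader); distances = shared.predict(new_imgs)
```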
## Installation

```bash
git clone https://github.com/Pangoraw/PaDiM.git padim
```
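
The snippets below import from the `padim` package. A minimal sketch of one way to make the clone importable, assuming it lives in `./padim` as cloned above (the repository may also support `pip install`, which is not assumed here):

```python
import sys

# Make the cloned repository importable (assumes the clone lives in ./padim)
sys.path.insert(0, "./padim")

from padim import PaDiM  # should now resolve
```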
## Training

```python
from torch.utils.data import DataLoader

from padim import PaDiM

# i) Initialize
padim = PaDiM(num_embeddings=100, device="cpu", backbone="resnet18")

# ii) Create a dataloader producing image tensors
dataloader = DataLoader(...)

# iii) Consume the data to learn the normal distribution
# Use PaDiM.train(...)
padim.train(dataloader)

# Or PaDiM.train_one_batch(...)
for imgs in dataloader:
    padim.train_one_batch(imgs)
```
## Testing

With the same `PaDiM` instance as in the Training section:

```python
for new_imgs in test_dataloader:
    distances = padim.predict(new_imgs)
    # distances is a (n * c) matrix of the Mahalanobis distances
    # Compute metrics...
```
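
To make `# Compute metrics...` concrete, here is a minimal sketch of image-level evaluation. It assumes `distances` is a tensor with one row of patch distances per image, that a hypothetical `labeled_test_dataloader` also yields ground-truth labels (`1` for anomalous), and that scikit-learn is installed (it is not a dependency of this repository):

```python
import torch
from sklearn.metrics import roc_auc_score

scores, labels = [], []
for new_imgs, new_labels in labeled_test_dataloader:  # hypothetical labeled loader
    distances = padim.predict(new_imgs)
    # Use the maximum patch distance as the image-level anomaly score
    scores.append(distances.max(dim=1).values)
    labels.append(new_labels)

scores = torch.cat(scores).cpu().numpy()
labels = torch.cat(labels).cpu().numpy()
print("image-level ROC AUC:", roc_auc_score(labels, scores))
```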
## Acknowledgements

This implementation was built on the work of:

- The original PaDiM paper
- [taikiinoue45/PaDiM](https://github.com/taikiinoue45/PaDiM)'s implementation - see the Features section for the main differences.