perf: transformer batching #14

Merged: 1 commit merged into main on Jul 29, 2024
Conversation

danellecline
Contributor

BREAKING CHANGE: Long-overdue performance improvement on clustering through batching. Also includes some work related to #10 and the removal of unused imports and code. Clustering should run faster on both CPU and GPU. The default model is now google/vit-base-patch16-224, which performed better on UAV images. Requires an update to your config.ini, e.g.:

# google/vit-base-patch16-224 was trained on ImageNet-21k (21k classes) and is good for general detection
# dino models were pretrained on ImageNet, which contains 1.3M images labeled with 1000 classes
# A smaller block_size means more patches and more accurate fine-grained clustering on smaller objects
# A larger block_size means fewer patches and faster processing
model = google/vit-base-patch16-224
;model = facebook/dino-vits8
;model = facebook/dino-vits16
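The PR itself does not show the batching code, but the idea of feeding images to a transformer in batches rather than one at a time can be sketched as below. The batching helper is generic; the commented usage with `AutoImageProcessor`/`ViTModel` and an `image_paths` list is a hypothetical illustration, not the project's actual implementation.

```python
from itertools import islice

def batched(items, batch_size):
    """Yield successive lists of up to batch_size items.

    Running a transformer on a whole batch amortizes the per-call
    overhead and keeps the GPU busy, which is where the speed-up
    over one-image-at-a-time inference comes from.
    """
    it = iter(items)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical usage with the new default model (names assumed):
#
# from PIL import Image
# from transformers import AutoImageProcessor, ViTModel
#
# processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
# model = ViTModel.from_pretrained("google/vit-base-patch16-224")
# for batch in batched(image_paths, 32):
#     images = [Image.open(p).convert("RGB") for p in batch]
#     inputs = processor(images=images, return_tensors="pt")
#     # one [CLS] embedding per image, computed for the whole batch at once
#     embeddings = model(**inputs).last_hidden_state[:, 0]
```

A batch size of 32 is just an example; the right value depends on available CPU/GPU memory.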

…me imports to only where needed for some speed-up, and removed unused activation maps.
@danellecline danellecline self-assigned this Jul 29, 2024
@danellecline danellecline added the enhancement New feature or request label Jul 29, 2024
@danellecline danellecline merged commit 427931e into main Jul 29, 2024
2 checks passed