evaluate: explain/document metrics #57
Yes, that paper lent the idea for the code in ocrd_segment/ocrd_segment/evaluate.py, lines 440 to 444 (at commit 8192349).
So in my implementation these measures are merely raw ratios, i.e. the share of regions in GT and DT which have been oversegmented (or undersegmented, resp.). My notion of a match is somewhat arbitrary, but IMO more adequate than averaging over different IoU thresholds for various confidence thresholds:
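To make the "raw ratios" concrete, here is a minimal sketch of the idea (not the actual evaluate.py code): a GT region counts as oversegmented when it substantially overlaps more than one DT region, and vice versa for undersegmentation. The use of shapely and the `min_share` overlap threshold are illustrative assumptions, not the plugin's actual matching criteria.

```python
# Illustrative sketch only -- not the evaluate.py implementation.
# A region is counted as split/merged when it substantially overlaps
# more than one counterpart; min_share=0.1 is an assumed threshold.
from shapely.geometry import Polygon

def segmentation_shares(gt_polys, dt_polys, min_share=0.1):
    """Return (oversegmentation, undersegmentation) as raw ratios."""
    def counterparts(region, others):
        # count other-side regions covering at least min_share of this region's area
        area = region.area
        return sum(1 for other in others
                   if area and region.intersection(other).area / area >= min_share)

    oversegmented = sum(1 for gt in gt_polys if counterparts(gt, dt_polys) > 1)
    undersegmented = sum(1 for dt in dt_polys if counterparts(dt, gt_polys) > 1)
    return (oversegmented / len(gt_polys) if gt_polys else 0.0,
            undersegmented / len(dt_polys) if dt_polys else 0.0)

# toy example: one GT region detected as two DT halves
gt = [Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])]
dt = [Polygon([(0, 0), (10, 0), (10, 5), (0, 5)]),
      Polygon([(0, 5), (10, 5), (10, 10), (0, 10)])]
print(segmentation_shares(gt, dt))  # -> (1.0, 0.0): the GT region was oversegmented
```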
(All area values under consideration are numbers of pixels in the polygon-masked segments, not just bounding box sizes.) So in all, you get the following metrics here:
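As an illustration of that parenthetical, the following hypothetical snippet rasterises a polygon and counts its foreground pixels, contrasting that with the bounding-box size; the coordinates and canvas size are made up.

```python
# Minimal sketch of "area = number of pixels in the polygon mask",
# as opposed to width*height of the bounding box. Values are illustrative.
import numpy as np
from PIL import Image, ImageDraw

def mask_area(polygon, width, height):
    """Count foreground pixels of the rasterised polygon."""
    img = Image.new('1', (width, height), 0)
    ImageDraw.Draw(img).polygon(polygon, outline=1, fill=1)
    return int(np.asarray(img).sum())

# an L-shaped region: its bounding box overstates the true masked area
poly = [(0, 0), (100, 0), (100, 30), (30, 30), (30, 100), (0, 100)]
print(mask_area(poly, 200, 200))  # pixel count of the polygon interior
xs, ys = zip(*poly)
print((max(xs) - min(xs)) * (max(ys) - min(ys)))  # 10000, the bounding-box area
```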
For each metric, there is a page-wise (or even segment-wise) and an aggregated measure; the latter always uses micro-averaging over all (matching pairs in all) pages.
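A small, self-contained sketch of what micro-averaging over all matching pairs in all pages means, contrasted with a plain mean of page scores; the intersection/union pixel counts are invented for illustration.

```python
# Illustrative sketch: micro-averaging pools the raw counts of all matching
# pairs across pages before dividing, so large pages/segments weigh more
# than in a simple mean of per-page scores. Numbers are made up.
def score(pairs):
    """IoU-like ratio from (intersection, union) pixel counts."""
    inter = sum(i for i, u in pairs)
    union = sum(u for i, u in pairs)
    return inter / union if union else 0.0

# per page: list of (intersection_pixels, union_pixels) per matching pair
pages = [
    [(900, 1000), (450, 500)],  # page 1: two well-matched segments
    [(10, 100)],                # page 2: one poorly matched segment
]

page_wise = [score(pairs) for pairs in pages]
micro = score([pair for pairs in pages for pair in pairs])
macro = sum(page_wise) / len(page_wise)

print(page_wise)  # [0.9, 0.1]  -- page-wise measures
print(micro)      # (900+450+10) / (1000+500+100) = 0.85  -- micro-averaged aggregate
print(macro)      # 0.5  -- simple mean of page scores, shown for contrast
```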
Originally posted by @andreaceruti in cocodataset/cocoapi#564 (comment)