
Accessory Search

This repo contains the accessory annotations and the accessory search leaderboard proposed in "JEDI: Joint accEssory Discovery and re-Identification (JEDI) for Accessory Search", including:

  • the annotations of Accessory-Market;

  • the annotations of Accessory-MSMT17.

Annotation

1. Data Sources and Annotation Format

  • Please download the original images from Market1501 [1] and MSMT17-V1 [2]. Image names and person IDs are consistent with the original ReID data.
  • We provide txt files for annotation in the folder annotation. Each annotation is formatted as [ImageName, [left, top, right, bottom], AccessoryId, IsQuery]. Query images are marked by IsQuery==1 and gallery images by IsQuery==0; a parsing sketch follows this list.
    • The Accessory-Market dataset has 40104 object bounding boxes grouped into 1448 object IDs, and is divided into 4848 queries and 35256 gallery images.
    • The Accessory-MSMT17 dataset has 74528 object bounding boxes grouped into 2713 object IDs, and is divided into 10020 queries and 64508 gallery images. More statistics of Accessory-Market and Accessory-MSMT17 are listed below.
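
For convenience, here is a minimal Python sketch of loading one annotation file. It assumes each line stores the documented fields as a Python-style list literal such as ['0001_c1s1_000451_00.jpg', [23, 45, 67, 110], 12, 1]; the actual delimiter in the txt files may differ, so adjust the parsing accordingly. Record, load_annotations and the file name are illustrative, not names from this repo.

```python
import ast
from collections import namedtuple

# Record mirroring the documented annotation format:
# [ImageName, [left, top, right, bottom], AccessoryId, IsQuery]
Record = namedtuple("Record", ["image_name", "box", "accessory_id", "is_query"])

def load_annotations(path):
    """Parse one annotation txt file into a list of Record tuples.

    Assumes each non-empty line is a list literal in the documented
    field order (an assumption of this sketch, not a spec of the repo).
    """
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            name, box, acc_id, is_query = ast.literal_eval(line)
            records.append(Record(name, tuple(box), int(acc_id), int(is_query)))
    return records

# Split into query and gallery sets using the IsQuery flag.
records = load_annotations("annotation/Accessory-Market.txt")  # hypothetical file name
queries = [r for r in records if r.is_query == 1]
galleries = [r for r in records if r.is_query == 0]
```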

2. Accessory Cases Visualization

  • Sample images of Accessory-Market and Accessory-MSMT17 are shown below. Each image has multiple accessories in red boxes; some accessories labeled with the same ID are enlarged for visualization.
    (figure: cases_visualization)

3. Data Statistics

  • The statistics of Accessory-Market and Accessory-MSMT17 are shown below.

| Name             | IDs  | Images | Bboxes | Queries | Galleries | QuerySize        |
| ---------------- | ---- | ------ | ------ | ------- | --------- | ---------------- |
| Accessory-Market | 1448 | 15321  | 40104  | 4848    | 35256     | [33±18, 34±15]   |
| Accessory-MSMT17 | 2713 | 32033  | 74528  | 10020   | 64508     | [108±104, 77±65] |
  • Statistical analysis of Accessory-Market and Accessory-MSMT17 is shown below. (a) and (b): the distribution of accessory IDs. (c): correspondence between accessories and human parsing categories. (e), (f), (g): the relationship between accessory IDs and person IDs. (d) and (h): the heatmaps of bounding box locations.
    (figure: dataset_distribution)

Evaluation Metrics

We evaluate the accessory search performance of different methods using the metrics listed below:

  • mAP: We use mean average precision (mAP) to evaluate the overall performance. For each query, we calculate the area under the precision-recall curve, known as average precision (AP) [1]; mAP is then obtained as the mean of the APs over all queries (minimal sketches of all three metrics follow this list).

  • CMC: Cumulative Matching Characteristic, which gives the probability that a true match for a query appears in candidate lists of different sizes. The calculation process is described in [1]. We report Rank-1, Rank-5 and Rank-10 of the CMC curve for a brief comparison in the leaderboard below.

  • ReCall(IoU>0.6): We propose ReCall(IoU>0.6) to measure how well the accessories have been exploited, i.e. recall at IoU>0.6: a box is regarded as a true match if its IoU with a ground-truth accessory box is larger than 0.6. This metric is computed across all query and gallery boxes.

    Note: The calculation of mAP and CMC in the proposed accessory search task is very similar to that in ReID. However, a gallery image is counted as a true positive only if it contains the same accessory as the query, regardless of whether the person is the same.
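
Under the matching rule in the note above (same accessory ID, person ID ignored), mAP and CMC Rank-k could be computed roughly as in the NumPy sketch below. The distance matrix, variable names and tie handling are illustrative assumptions of this sketch, not the paper's exact evaluation code.

```python
import numpy as np

def average_precision(matches):
    """AP for one query. `matches` is a boolean array over the gallery
    ranked best-first (True where the gallery box carries the query's
    accessory ID); AP is the area under the precision-recall curve."""
    matches = np.asarray(matches, dtype=bool)
    if not matches.any():
        return 0.0
    hits = np.cumsum(matches)
    ranks = np.flatnonzero(matches) + 1           # 1-based ranks of true matches
    return float(np.mean(hits[matches] / ranks))  # mean precision at each hit

def evaluate(distances, query_ids, gallery_ids, ranks=(1, 5, 10)):
    """mAP and CMC Rank-k over all queries.

    distances  : (num_queries, num_gallery) matching distances
    query_ids  : accessory ID of each query box
    gallery_ids: accessory ID of each gallery box
    """
    gallery_ids = np.asarray(gallery_ids)
    aps, cmc = [], {k: [] for k in ranks}
    for qi, q_id in enumerate(query_ids):
        order = np.argsort(distances[qi])         # best match first
        matches = gallery_ids[order] == q_id      # person ID is ignored
        aps.append(average_precision(matches))
        for k in ranks:
            cmc[k].append(float(matches[:k].any()))  # Rank-k hit for this query
    return float(np.mean(aps)), {k: float(np.mean(v)) for k, v in cmc.items()}
```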
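
ReCall(IoU>0.6) builds on the standard intersection over union between boxes in the [left, top, right, bottom] layout of the annotation. A minimal sketch follows; recall_at_iou is an illustrative helper, and the exact aggregation over query and gallery boxes follows the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two [left, top, right, bottom] boxes."""
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def recall_at_iou(detected_boxes, gt_boxes, threshold=0.6):
    """Fraction of ground-truth accessory boxes matched by at least one
    detected box whose IoU exceeds the threshold."""
    hit = sum(1 for gt in gt_boxes
              if any(iou(det, gt) > threshold for det in detected_boxes))
    return hit / float(len(gt_boxes)) if gt_boxes else 0.0
```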

Leaderboard

We evaluated the following methods on the proposed Accessory-Market and Accessory-MSMT17 datasets; the results are shown below.

  • Accessory-Market

| Method          | Recall | mAP  | R-1  | R-5  | R-10 |
| --------------- | ------ | ---- | ---- | ---- | ---- |
| ISP [3]         | -      | 14.2 | 27.7 | 35.6 | 38.6 |
| DELG [4]        | -      | 4.7  | 10.1 | 27.3 | 34.3 |
| GlobalTrack [5] | -      | 7.3  | 83.7 | 89.8 | 91.8 |
| JEDI            | 17.1   | 47.9 | 87.8 | 97.2 | 98.2 |
  • Accessory-MSMT17

| Method          | Recall | mAP  | R-1  | R-5  | R-10 |
| --------------- | ------ | ---- | ---- | ---- | ---- |
| ISP [3]         | -      | 6.6  | 7.9  | 14.3 | 17.7 |
| DELG [4]        | -      | 2.5  | 15.0 | 22.7 | 28.3 |
| GlobalTrack [5] | -      | 3.9  | 57.1 | 67.3 | 67.3 |
| JEDI            | 10.2   | 30.5 | 79.3 | 90.7 | 93.5 |
  • Speed (inference time of one matching on a V100)

| Method          | Time (ms) |
| --------------- | --------- |
| ISP [3]         | 33        |
| DELG [4]        | 298       |
| GlobalTrack [5] | 113       |
| JEDI            | 40        |

References

[1] L. Zheng et al. "Scalable Person Re-identification: A Benchmark." ICCV 2015. (Market-1501)
[2] L. Wei et al. "Person Transfer GAN to Bridge Domain Gap for Person Re-Identification." CVPR 2018. (MSMT17)
[3] K. Zhu et al. "Identity-Guided Human Semantic Parsing for Person Re-Identification." ECCV 2020. (ISP)
[4] B. Cao, A. Araujo, J. Sim. "Unifying Deep Local and Global Features for Image Search." ECCV 2020. (DELG)
[5] L. Huang, X. Zhao, K. Huang. "GlobalTrack: A Simple and Strong Baseline for Long-term Tracking." AAAI 2020.
