Problem with custom dataset #28
Comments
Which config file did you use? It looks like it could simply be a mismatch between the small and large model.
Thank you for your response.

```python
class shapesConfig(siamese_config.Config):
```
Have you tried using 2 classes (1 + 1)? Because this model is Siamese and uses an example image of the class instead of class labels, there is just one foreground class that covers all the others implicitly.
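For illustration, a minimal sketch of such a config, assuming the Matterport-style `Config` attributes (`NAME`, `NUM_CLASSES`, `GPU_COUNT`, `IMAGES_PER_GPU`) and the `siamese_config` module shown in the snippet above; the import path here is an assumption, not verified against this repository:

```python
# Minimal sketch of a 2-class (background + one foreground) config.
# The import path is an assumption; adjust it to wherever siamese_config lives.
from lib import config as siamese_config

class shapesConfig(siamese_config.Config):
    NAME = "shapes"
    # Siamese Mask R-CNN matches against a reference image instead of
    # predicting class labels, so a single foreground class covers everything.
    NUM_CLASSES = 1 + 1  # background + foreground
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
```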
I have never tried that before. Do you have any solution for class labels? I think someone might also want to use this repository with many class labels.
Yes, that is correct, but it would defeat the idea of the task and model. If you want to use multiple class labels, you should probably use a standard object detection model from a toolbox like mmdetection or detectron2.
Thank you for your suggestions.
In the Siamese model, how can we use just two images and the pretrained model to detect the output? Could you make a page explaining every part of the code? I am having trouble understanding it; I am a beginner in this field.
I am slightly confused by this thread of discussion. I understand that the network can only output binary labels, but should it be trained that way too (only background and instance)? If that's the case, then if you provide a reference image of people, shouldn't it consider all COCO classes it has been trained on as an instance, given that people, apples, and bicycles were all trained as the same class?
Hi everyone,
I am trying to train the Siamese model with a custom dataset (comprising three classes), using the pretrained weight file (mask_rcnn_coco.h5). The dataset_train and dataset_val are saved in JSON format, like in the Mask R-CNN repository.
But I received the following error about the image shapes.
How can I reshape the image size to fit this model?
Thank you!
This is the code for the training part.
```python
# Training
import os  # needed for os.path.join below

if __name__ == '__main__':
    dataset_dir = os.path.join(ROOT_DIR, "shapes")  # ROOT_DIR is defined earlier in the script
```
The error:
```
ValueError: Dimension 2 in both shapes must be equal, but are 384 and 256. Shapes are [3,3,384,512] and [3,3,256,512]. for 'Assign' (op: 'Assign') with input shapes: [3,3,384,512], [3,3,256,512].
```
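Given the maintainer's diagnosis above (a mismatch between the small and large model), the mismatched dimension here is a convolution channel width in the pretrained weights, not the image size, so resizing images will not help; the usual fix is to make the config match the checkpoint being loaded. A minimal sketch under that assumption follows; `FPN_FEATUREMAPS` is an assumed attribute name and the import path is an assumption, so check this repository's own config:

```python
# Sketch: choose a config whose feature-map width matches the checkpoint
# (256 for the small model / standard mask_rcnn_coco.h5, 384 for the large
# model). FPN_FEATUREMAPS is an assumed attribute name; verify in the repo.
from lib import config as siamese_config  # import path is an assumption

class shapesConfig(siamese_config.Config):
    NAME = "shapes"
    NUM_CLASSES = 1 + 1        # background + one foreground class
    FPN_FEATUREMAPS = 256      # match the small-model / COCO checkpoint

config = shapesConfig()
config.display()  # standard Matterport-style config printout
```

Whichever variant is chosen, the pretrained weight file has to come from the same variant; otherwise weight loading fails with exactly this kind of shape mismatch.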